Physica A 312 (2002) 636 – 648
www.elsevier.com/locate/physa
Self-organized critical traffic in parallel computer networks

Sergi Valverde a, Ricard V. Solé a,b,∗

a ICREA-Complex Systems Lab, Universitat Pompeu Fabra-IMIM, Dr. Aiguader 80, 08003 Barcelona, Spain
b Santa Fe Institute, 1399 Hyde Park Road, NM 87501, USA

Received 12 February 2002
Abstract

In a recent paper, we analysed the dynamics of traffic flow in a simple, square lattice architecture. It was shown that a phase transition takes place between a free and a congested phase. The transition point was shown to exhibit optimal information transfer and wide fluctuations in time, with scale-free properties. In this paper, we further extend our analysis by considering a generalization of the previous model in which the rate of packet emission is regulated by the local congestion perceived by each node. As a result of the feedback between traffic congestion and packet release, the system is poised at criticality. Many well-known statistical features displayed by Internet traffic are recovered from our model in a natural way. © 2002 Published by Elsevier Science B.V.

PACS: 87.10.+e; 05.50.+q; 64.60.Cn
∗ Corresponding author. Santa Fe Institute, 1399 Hyde Park Road, NM 87501, USA. E-mail address: [email protected] (R.V. Solé).

1. Introduction

Statistical physics has been shown to be a powerful approach to the analysis of network dynamics. Scaling concepts have provided the framework to understand, for example, the origin of the scale-free properties of the Internet [1,2]. In this context, simple models of network growth reveal that the scale-free nature of the web is an emergent pattern resulting from the mechanisms of growth plus preferential attachment of links. As a result of this process, the topology of the web provides a source of robustness against random removal of nodes and, simultaneously, an intrinsic fragility
against intentional attack. These properties and others (such as the spread of computer viruses [3,4]) are associated with the presence of phase transitions in network topology or dynamics.

Beyond the topological features exhibited by these networks, a different (but obviously related) problem concerns the dynamics of information flow among their units. In this context, the use of nonlinear dynamics techniques has contributed to the development of the field of computational ecologies (CE). These CE are defined in terms of distributed parallel processing in large computer networks [5,6]. As Huberman and co-workers have shown, the interactions arising in CE lead to "self-regulating computational entities very different in nature from their individual components".

Since the discovery of the self-similar patterns displayed by the time fluctuations of packets in computer networks [7–9], many further studies have supported the view that Internet traffic might be related to the presence of near-critical dynamics. In this context, dynamic phase transitions have been observed to occur in the traffic going through a link [10,11]. Heavy-tailed distributions are observable in most characteristic features of computer network dynamics, from queue lengths to latency times [10,12]. Many dedicated studies have been devoted to the analysis of (multi)fractal features of traffic [13], although most of them lack an explanatory framework for the origin of the self-similarity, since no microscopic approach to the dynamics is considered in an explicit form. In this context, complex fluctuations displaying self-similar behaviour are often (though not always) related to the presence of criticality.

In order to test this scenario, appropriate models of traffic dynamics are required. In two recent papers [14,15], such a possibility has been explored, inspired by previous models of vehicular traffic [16,17]. The two models used a very simple square-lattice structure. Although not realistic as a model of Internet topology [1,2], these architectures have already been used in real parallel multiprocessor networks [18–20]. Apart from the torus topology, meshes, hypercubes [21] and hierarchical trees [22] are also common. In these multiprocessor networks, the routing algorithm addresses objectives common to distributed networks (local and wide area networks): the minimization of packet delivery time, or latency, and the maximization of throughput. Besides the similarities observed between these networks, an important difference between parallel computer networks and distributed networks (such as the Internet) arises: multiprocessor networks use blocking routing algorithms, in which messages that cannot be forwarded to the next router are blocked until they can be serviced. On the other hand, in distributed networks it is quite common to discard packets if there is not enough room available at the router [23]. This problem was avoided here by allowing infinite memory at every router of our model. We recognize the importance of realistic memory constraints at routers but, in the context of our present study, we suspect that such memory limitations only introduce finite-size effects. Other studies show that, in some circumstances, congestion will continue to appear even if a huge amount of memory is available [24].

Parallel multiprocessor networks have been shown to display complex dynamics and a phase transition separating a congested from a non-congested phase, both in the real
[25] and the simulated counterparts. However, our previous model lacked an important ingredient present in the Internet: the reaction of nodes to congestion. Specifically, users will increase the number of packets in the system depending on the degree of congestion that they perceive. In a related context, Huberman and Lukose already explored this problem within the context of social dilemmas: the actions of individuals lead to a negative effect on the network performance, which feeds back into users [26]. This notion of load control in the presence of congestion is very important in computer networks, and it is commonly accepted that this mechanism ensures fair access to the shared resources the network offers to its users [23]. That is, misbehaving users should be constrained, because they introduce more load and more congestion. In other words, a feedback between the system's state (number of packets) and the rate of packet release by users must be present. We can easily identify these ingredients as order and control parameters, respectively. Their interaction can lead to a system poised close to a critical state. In this paper we explicitly explore this possibility by means of a simple model of traffic flow on a square lattice with periodic boundary conditions.
2. Traffic dynamics: model

Our goal is to construct a minimal, microscopic model of traffic able to recover some of the basic features exhibited by real computer networks. Additionally, our model should be able to show whether the network can self-organize in such a way that local hosts regulate their throughput depending on local congestion. Following previous approaches [27,15], let us consider a network defined on a two-dimensional lattice L(L) formed by L × L nodes, with four nearest neighbours per node. The model considers two types of nodes: hosts and routers.1 The first can generate and receive messages, while the second can only store and forward messages. All our simulations are performed using periodic boundary conditions. As mentioned in the previous section, although this might seem a limited topological arrangement, it has been successfully tested on real hardware and the presence of a phase transition fully confirmed [25]. As in our previous analysis [15], only a fraction of the nodes are hosts and the rest are routers. The location of each node, r ∈ L(L), is given by r = i c_x + j c_y, where c_x, c_y are Cartesian unit vectors, so the set of nearest neighbours C(r) is given by

C(r) = {r − c_x, r + c_x, r − c_y, r + c_y}.   (1)
Each node maintains a queue of packets, stored as they arrive at it; the local number of packets (the queue length Q) will be indicated as n(r, t).
1 Under the term router we group gateways, switches and routers. The term host is used to name all end-systems.
The total number of packets in the system is thus

N(t) = Σ_{r∈L(L)} n(r, t),   (2)
and the metric used in our system is the Manhattan metric defined for lattices with periodic boundaries,

d_pm(r1, r2) = L − | |i1 − i2| − L/2 | − | |j1 − j2| − L/2 |,

where r_k = (i_k, j_k).

In the previous model [15] the rate of traffic inserted into the network was a fixed, external parameter λ. The model displayed a phase transition at a certain value λc separating two phases: the free (non-jammed) phase and the jammed phase. Since the global activity, as measured in terms of the average number of packets N(t), acts as an order parameter, and the driving is introduced through the (control) parameter λ, we conjectured that an appropriate feedback between the two of them would be able to self-organize the system into a critical state [29,30].

In order to test this conjecture, a new model will be introduced. The main idea is that hosts can modify their rates of packet release depending on the local congestion that they detect. To properly react to congestion, sources of traffic (hosts) must be informed in some way. Two basic rules are used (and summarized in Fig. 1). The first describes the feedback between the rate of emission and the local congestion experienced by the host. The second describes the routing algorithm.
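As an illustration (ours, not code from the paper), the neighbourhood C(r) of Eq. (1) and the periodic Manhattan distance d_pm can be written directly for sites stored as integer pairs (i, j) on the L × L torus:

# Sketch (ours): neighbour set C(r) and periodic Manhattan distance d_pm
# on an L x L lattice with periodic boundary conditions.

def neighbours(r, L):
    """The four nearest neighbours C(r) of the site r = (i, j) on the torus."""
    i, j = r
    return [((i - 1) % L, j), ((i + 1) % L, j),
            (i, (j - 1) % L), (i, (j + 1) % L)]

def d_pm(r1, r2, L):
    """d_pm(r1, r2) = L - | |i1 - i2| - L/2 | - | |j1 - j2| - L/2 |."""
    (i1, j1), (i2, j2) = r1, r2
    return L - abs(abs(i1 - i2) - L / 2) - abs(abs(j1 - j2) - L / 2)

# Example on an 8 x 8 torus: opposite corners are only two hops apart.
print(neighbours((0, 0), 8))    # [(7, 0), (1, 0), (0, 7), (0, 1)]
print(d_pm((0, 0), (7, 7), 8))  # 2.0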
Fig. 1. Network model. Two types of nodes are considered: hosts (squares) and routers (open circles). The nodes are connected through bi-directional links. From top to bottom and left to right, the figure shows a sample sequence of routing steps of a packet that travels from host S to host D. The resulting path is marked with a thick black line. The counter attached to each outgoing link is updated every time a packet travels through it. The routing algorithm only allows minimal paths (bottom sequence) and avoids link overloading when several choices are possible (top sequence).
2.1. Rate control

Let us indicate by σ the number of locally congested neighbours,

σ = Σ_{k∈C(r)} Θ[n(k, t)],   (3)

where Θ[x] = 1 for x > 0 and zero otherwise. The rate of packet release changes with time and, for a given host located at r ∈ L(L), is updated following

λ(r, t + 1) = min{1, λ(r, t) + φ}   (4)

if σ < 4, while it goes down to zero for σ = 4. The host tries to maximize its use of network resources, injecting more and more traffic into the network until congestion is detected. In our setting, at each time step the local creation rate goes up by a fixed amount φ (here we take φ = 0.01) and drops to zero if all neighbouring nodes are congested. This rule is inspired by the "additive increase/multiplicative decrease" scheme [31] widely used in distributed networks.
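A minimal sketch of this rate-control rule (our own illustration; names such as update_rate are not from the paper) reads:

# Sketch (ours) of the host rate-control rule, Eqs. (3)-(4): additive increase
# by phi each time step, reset to zero when all four neighbours are congested.

def congested_neighbours(n, r, L):
    """sigma: number of neighbours k of r with a non-empty queue, n[k] > 0."""
    i, j = r
    neigh = [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]
    return sum(1 for k in neigh if n[k] > 0)

def update_rate(lam, n, r, L, phi=0.01):
    """Return lambda(r, t+1) given the current rate lam = lambda(r, t)."""
    if congested_neighbours(n, r, L) == 4:   # sigma = 4: shut down emission
        return 0.0
    return min(1.0, lam + phi)               # sigma < 4: additive increase, capped at 1

# Example with empty queues stored as a dict {(i, j): queue length}
n = {(i, j): 0 for i in range(4) for j in range(4)}
print(update_rate(0.5, n, (1, 1), 4))        # 0.51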
2.2. Routing

Each node picks up the packet at the head of its queue and decides which outgoing link is best suited to the packet destination. Consider a packet at node r = (r_x, r_y) whose final destination is a host at d = (d_x, d_y), and define v = (v_x, v_y) as v = d − r (v = 0 when the packet is at its destination). Otherwise, there are only two possibilities: going out through a horizontal link (r + c_x or r − c_x) or through a vertical one (r + c_y or r − c_y). Which one is chosen depends on which outgoing link takes the packet closer to its destination. In case v_x ≠ 0 and v_y = 0 (Fig. 1, bottom left), the next hop m is m = r + c_x if v_x > 0 and m = r − c_x if v_x < 0. A similar rule is defined for the case v_x = 0 and v_y ≠ 0. The general situation corresponds to v_x ≠ 0 and v_y ≠ 0, and in this case we look at the counters associated with the outgoing links to decide which one is chosen by the routing algorithm (Fig. 1, top left and top right). Each pair of neighbouring nodes r, r′ is connected through a pair of directed links ω(r, r′) and ω(r′, r). The strength of these links is updated by one unit each time a packet flows through them: if a packet flows from r to r′, then ω(r, r′) → ω(r, r′) + 1. The counter is thus used to remember how many packets have passed through that link. In order to avoid overloading a particular link, the router chooses the link with the minimum counter.
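The routing decision can be sketched in the same spirit; the code below is our own illustration (the torus-aware step function is an assumption on our part, chosen for consistency with the periodic metric d_pm), choosing among the minimal-path moves the outgoing link with the smallest counter ω:

# Sketch (ours) of the routing rule: only minimal-path moves are allowed and,
# when both a horizontal and a vertical move are possible, the outgoing link
# with the smallest counter omega[(r, m)] is chosen.

def signed_step(delta, L):
    """Shortest signed unit step along one axis of the L-torus (-1, 0 or +1)."""
    if delta == 0:
        return 0
    return 1 if (delta % L) <= L // 2 else -1

def next_hop(r, d, L, omega):
    """Next node for a packet stored at r whose destination is d; updates omega."""
    sx = signed_step(d[0] - r[0], L)
    sy = signed_step(d[1] - r[1], L)
    if sx == 0 and sy == 0:
        return r                                    # packet already at destination
    candidates = []
    if sx != 0:
        candidates.append(((r[0] + sx) % L, r[1]))  # horizontal move
    if sy != 0:
        candidates.append((r[0], (r[1] + sy) % L))  # vertical move
    m = min(candidates, key=lambda c: omega.get((r, c), 0))
    omega[(r, m)] = omega.get((r, m), 0) + 1        # packet flows r -> m
    return m

# Example: two packets from (0, 0) to (2, 2) take different first hops
omega = {}
print(next_hop((0, 0), (2, 2), 8, omega))  # (1, 0)
print(next_hop((0, 0), (2, 2), 8, omega))  # the other link, (0, 1)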
3. Mean field model

A simple mean field model can be obtained for the total density of packets, here denoted η ≡ N(t)/L², and the release rate λ. The number of travelling packets increases as a consequence of the constant pumping from the hosts, which occurs at a rate λ per host; since hosts occupy a fraction ρ of the nodes, the average injection per node is λρ. On the other hand, packets are removed from the system if the lattice is not too congested (i.e., if free space for movement is available) but accumulate as a consequence of already jammed nodes. We can roughly state that once the number of packets exceeds the number of lattice sites, congestion will lead to packet accumulation.

In a previous study, Fukś and Lawniczak presented a simple argument to find the critical load rate λc [28]. Following Little's law from queueing theory [32], in the sub-critical (free) phase the number of packets created per unit time (i.e., λL², since in their model every node acts as a host) equals the number of packets delivered per unit time. If τ(L) indicates the average transit time, then

N/τ(L) = λL².   (5)

Criticality is given by the condition N = L² and, for that case, λc = 1/τ(L). It can be shown that the average transit time in this phase is τ = L/2, and thus λc = 2/L, in very good agreement with numerical results [28].

The time evolution of the packet density will follow the mean field equation

dη/dt = λρ − μ q η(1 − η).   (6)

The first term is the average injection per node; the last term indicates the rate of removal, which is proportional to the number of input pathways (number of neighbours q) available to incoming packets. The rate μ is easily computed: it corresponds to the inverse of τ(L). For constant λ (the case already considered in our previous study) we have the fixed points η± = [1 ± (1 − 4λρ/μq)^{1/2}]/2. For λ > λc ≡ μq/4ρ, the fixed points vanish and no finite density exists. In this situation (which we labelled the congested phase) the density of packets grows without bounds. For λ < λc, a finite stable density

η− = (1/2)[1 − (1 − 4λρ/μq)^{1/2}]   (7)

is observable (the other fixed point η+ is unstable). For the particular case ρ = 1 analysed by Fukś and Lawniczak [28], we recover their critical point λc = 2/L assuming that τ = L/2.

Now, assuming that a feedback exists between packet delivery rates and density, some finite equilibrium density η∗ will be achieved in the previous model and thus no divergence will be allowed to occur. In this case, the mean field model indicates that a scaling relation will be observed, i.e.,

λ ∼ ρ^{−1}   (8)

between the (self-organized) packet release rate and the density of hosts (which is here the only relevant external parameter). As shown below, this scaling relation holds in the simulated model, together with the presence of several scaling properties consistent with a critical state.
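To make these relations concrete, the short sketch below (our own illustration, written with the notation used above; it is not part of the original analysis) evaluates the fixed points η± and the critical rate λc = μq/4ρ for parameters of the order of those used in the simulations:

# Sketch (ours) of the mean-field relations above:
#   d(eta)/dt = lambda * rho - mu * q * eta * (1 - eta),  with mu = 1/tau(L) = 2/L
import math

def fixed_points(lam, rho, L, q=4):
    """Return (eta_minus, eta_plus), or None if lambda exceeds the critical rate."""
    mu = 2.0 / L                             # mu = 1 / tau(L), with tau = L / 2
    disc = 1.0 - 4.0 * lam * rho / (mu * q)
    if disc < 0:                             # congested phase: no finite density
        return None
    s = math.sqrt(disc)
    return 0.5 * (1.0 - s), 0.5 * (1.0 + s)

def lambda_c(rho, L, q=4):
    """Critical packet-release rate lambda_c = mu * q / (4 * rho)."""
    return (2.0 / L) * q / (4.0 * rho)

L, rho = 64, 0.0325
print(lambda_c(rho, L))                      # ~0.96, and scales as 1/rho at fixed L
print(fixed_points(0.5 * lambda_c(rho, L), rho, L))  # stable/unstable pair below lambda_c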
4. Spatiotemporal dynamics

Let us first consider the spatiotemporal features exhibited by our model. In Fig. 2(A) and (B) we show examples of the time series obtained. Here the changes in λ at two given hosts are shown. The upper plot (A) is rather characteristic: the host experiences periods of stasis with maximum throughput (λ = 1) and many spikes with a broad range of delivery rates. In (B) a node experiencing frequent congestion is shown. Other nodes in the system display very long periods with λ = 1 (not shown). In (C) the spatial pattern of activity is displayed. Here the topology of L(L) is explicitly shown and each node is indicated as a square. The grey scale provides a measure of the queue length (in log scale); lighter squares indicate higher congestion.

Fig. 2. (A,B) Two examples of the time series displayed by the local creation rate λ at two given (host) lattice points, indicated by arrows on the network L(L), where the periodic boundary conditions are explicitly displayed. Here φ = 0.01, L = 64, and ρ = 0.0325. We can appreciate the wide fluctuations in λ, including periods of stasis at the maximum rate λ = 1. The network in (C) indicates the amount of local congestion at each node by means of a grey scale; lighter nodes indicate larger congestion (logarithmic scale).
[Figure: log–log plot of the frequency of local creation rates for host densities ρ = 0.032, 0.10 and 0.20; x-axis: local creation rate, y-axis: frequency.]
Fig. 3. Distributions of local creation rates for L = 64 at three different host densities. We can see that a wide range of λ values is observable, thus indicating that a high heterogeneity is present.
The system displays a wide variability in space and time, in spite of the homogeneous nature of the rules. The quenched randomness present in the host distribution, together with the emergence and competition of different traffic pathways, creates a dynamic pattern of traffic flow and congestion. In this context, although the network self-organizes towards a given (average) λc, wide fluctuations in the loads are observable in different parts of the lattice. By analysing a given part of the net, we will observe that the load can be high or small, but scaling is always present, with different local values. This observation helps to understand the apparent disagreement [33] between the presence of a given average critical λ and the fact that scaling is observed in real networks with very different loads. Such heterogeneity is also present in our system and is well illustrated in Fig. 3, where the distribution of local creation rates N(λ) is displayed for three different host densities ρ. The peaks at λ = 1 indicate that, as a result of the feedback between congestion and delivery rate, the system is able to maintain a large fraction of hosts in an essentially non-congested state. This seems to be consistent with the high efficiency displayed by the original traffic model close to criticality [15].

5. Scaling laws

The first scaling relation to be checked is the one derived from the mean field approximation, relating λ and ρ. In Fig. 4 we show our results for an L = 64 lattice.
[Figure: log–log plot of λ(ρ) versus the host density ρ; the measured slope is −1.03.]
Fig. 4. Scaling dependence between the average (critical) λ and the host density ρ. Simulations have been performed on an L = 64 lattice. The dashed line displays the predicted functional relation as derived from the mean field theory. The average has been computed over T = 5 × 10^4 steps after 10^4 transient steps have been discarded.
Here λ was averaged (using all hosts) over T = 5 × 10^4 steps after 10^4 transient steps were discarded. A scaling relation is obtained,

λ ∼ ρ^{−1.03±0.03},   (9)

where the slope was estimated for ρ > 0.03. Below this host density, the fluid flow of packets guarantees that no congestion will typically occur, and thus the system does not reach the predicted critical load rate.

In order to confirm that the system is self-organized close to criticality, two relevant quantities can be analysed in relation to system size: the average transit time τ(L) and the packet number N(L). As discussed before, in the free phase (of the original model) the characteristic transit time scales linearly with the system size, while the number of packets scales as N ∼ L² at criticality. Our results are shown in Fig. 5(A) and (B). The transit time scales linearly, with an exponent very close to one, and we also obtain the scaling relation

N(L) ∼ L^{2.14±0.07},   (10)

consistent with the theoretical prediction.
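The exponents in Eqs. (9) and (10) follow from straight-line fits in log–log coordinates. A minimal sketch of such an estimate is given below; it is our own illustration and uses synthetic data in place of the simulation output.

# Sketch (ours): least-squares estimate of a power-law exponent in log-log
# coordinates, as used for lambda(rho) ~ rho^-1.03 and N(L) ~ L^2.14.
import numpy as np

def powerlaw_exponent(x, y):
    """Fit y ~ x^a on a log-log scale and return (a, prefactor)."""
    a, b = np.polyfit(np.log10(x), np.log10(y), 1)
    return a, 10.0 ** b

# Synthetic stand-in for the measured averages (NOT the paper's data):
np.random.seed(0)
rho = np.array([0.032, 0.05, 0.10, 0.20, 0.40])
lam = 0.03 / rho * (1.0 + 0.05 * np.random.randn(rho.size))
print(powerlaw_exponent(rho, lam))  # slope close to -1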
These results indicate that, in spite of the high heterogeneity displayed by the system, the average congestion rate is the one expected for a system at criticality.

Finally, we can also estimate the distribution N(D) of congestion duration lengths D (Fig. 6). Here, D is defined as the time between two consecutive (non-contiguous) moments of no congestion, where a node is labelled as non-congested if n(r, t) ≤ 1. In a previous experimental study, Takayasu and co-workers showed that these distributions display a scaling N(D) ∼ D^{−β} with β ≈ 1.5–2.0, measured at different flow densities (close to the critical–supercritical domain).
[Figure: two log–log panels, (a) τ(L) versus L with slope ≈ 1.11 and (b) N(L) versus L with slope ≈ 2.14.]
Fig. 5. (a,b) Scaling behaviour of the average transit time τ(L) and of the packet number N(L) for different lattice sizes (here L = 16, 32, 64, 126, 252, 512) with ρ = 0.10. As predicted by the critical condition, N ∼ L² and the transit time is linear with L.
[Figure: log–log plot of the frequency of congestion duration lengths D for ρ = 0.1 and 0.2; the fitted slope is ≈ −1.5.]
Fig. 6. Congestion duration length frequencies N(D) measured at hosts during T = 5 × 10^4 steps, for L = 64 and under different host densities (ρ = 0.1 and 0.2). Both distributions follow power laws N(D) ∼ D^{−β} with the same exponent β ≈ 1.5, in agreement with analyses of real traffic (see text).
In our model, we obtain a scaling

N(D) ∼ D^{−1.66±0.23},   (11)

where the slope has been estimated for 10^1 < D < 10^3 using different host densities.
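Operationally, measuring D amounts to recording the lengths of the maximal runs of time steps during which a node's queue holds more than one packet. The sketch below is our own illustration of such a measurement on a stored queue-length time series.

# Sketch (ours): congestion duration lengths D extracted from the queue-length
# time series of a single node; the node counts as congested when n(r, t) > 1.

def congestion_durations(queue_series):
    """Return the lengths D of maximal runs with queue length greater than one."""
    durations, run = [], 0
    for n in queue_series:
        if n > 1:
            run += 1                 # still congested: extend the current episode
        elif run > 0:
            durations.append(run)    # congestion episode just ended
            run = 0
    if run > 0:
        durations.append(run)        # series ended while still congested
    return durations

# Example: two congestion episodes, of lengths 3 and 2
print(congestion_durations([0, 2, 3, 2, 1, 0, 4, 5, 0]))  # [3, 2]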
Again, in spite of the quantitative differences arising from local traffic flow, the presence of scaling is widespread and consistent with (globally tuned) criticality.

6. Discussion

In this paper, we have extended our previous approach to traffic dynamics on parallel networks by including a simple feedback mechanism between packet release and network congestion. The main goal of our study was to show that, under this feedback control, the network self-organizes into a critical state characterized by scaling in several relevant quantities, such as congestion duration lengths. Using a simple mean field model, it was shown that in the stationary state the average packet release rate should scale as the inverse of the host density. This scaling behaviour has been confirmed by the simulation model.

Although our results are inspired by a particular spatial arrangement (characteristic of real parallel computer designs), we think that some of these ideas can be easily extended to Internet dynamics. The general validity of our approach to different types of networks has already been suggested by some authors [34]. The model shows considerable spatial heterogeneity: different characteristic patterns of activity are identified at different locations, consistently with observed networks. In this sense, although the average packet release rate reaches a steady (critical) state, this does not mean that the same levels of congestion are reached everywhere. On average, the system is able to maintain the number of circulating packets close to the critical value N ≈ Nc = L², and the average transit time is also shown to scale as τ ∼ L, consistently with previous predictions for systems close to the phase transition point [28].

Our model is oversimplified in a number of ways, especially in the square topology and the simplified nature of the rules. Conventional Internet protocols (TCP/IP) interpret packet losses as an implicit signal of congestion [23]: packets are dropped due to limited storage at intermediate nodes. In our setting, there can be no packet losses since queues are unbounded. Note that memory constraints may no longer be the main bottleneck in the future [24], as router capacity is continuously increasing. In spite of this, real protocols already consider the possibility of sending an explicit congestion signal based on different considerations [35]. Since latency is directly related not only to the number of hops between source and destination, but also to the current load (local number of packets) at traversed routers, it is reasonable to keep queue lengths below some threshold [36]. For simplicity, we consider in our model that a router is congested if its queue length is greater than one packet. There is also the question of whether congestion state information must be exchanged between distant pairs of source and destination hosts (as in classic TCP/IP) or in a more local fashion. In general, it is believed that no single scheme will be able to address all
congestion patterns observed in a real computer network. In order to avoid introducing 'artificial' long-range correlations between distant hosts of the system, we considered a local exchange of congestion information. We only allow hosts to gather and interpret the congestion state of their neighbouring nodes, because only this kind of node can regulate its packet injection rate. Surprisingly, we still observe synchronization effects in the congestion state of distant nodes.

Some authors have repeatedly argued that computer networks cannot be understood in terms of this type of analysis, since these networks involve a high degree of complication and fine structure that has to be taken into account [33]. This claim is true but obvious, and could be made in any other context. However, simple approaches based on statistical physics have been successful in gaining real understanding of how complex systems behave from microscopic rules. The current wave of new quantitative results on Internet topology and dynamics indicates that a new area of research has emerged at the interplay of physics, graph theory and technology.

Acknowledgements

This work has been supported by grant PB97-0693 and by the Santa Fe Institute (RVS).

References

[1] R. Albert, H. Jeong, A.-L. Barabási, Nature 401 (1999) 130; R. Albert, A.-L. Barabási, Science 286 (1999) 510.
[2] G. Caldarelli, R. Marchetti, L. Pietronero, Europhys. Lett. 52 (2000) 386.
[3] R. Pastor-Satorras, A. Vespignani, Phys. Rev. Lett. 86 (2001) 3200.
[4] A. Lloyd, R.M. May, Science 292 (2001) 1316–1317.
[5] B. Huberman (Ed.), The Ecology of Computation, North-Holland, Amsterdam, 1989.
[6] J.O. Kephart, T. Hogg, B.A. Huberman, Phys. Rev. A 40 (1989) 404.
[7] W.E. Leland, M.S. Taqqu, W. Willinger, IEEE/ACM Trans. Networking 2 (1994) 1.
[8] I. Csabai, J. Phys. A: Math. Gen. 27 (1994) L417.
[9] M. Takayasu, H. Takayasu, T. Sato, Physica A 233 (1996) 824.
[10] M. Takayasu, K. Fukuda, H. Takayasu, Physica A 274 (1999) 248.
[11] M. Takayasu, K. Fukuda, H. Takayasu, Physica A 277 (2000) 248.
[12] M.E. Crovella, A. Bestavros, IEEE/ACM Trans. Networking 5 (1997) 835.
[13] W. Willinger, M.S. Taqqu, R. Sherman, D.V. Wilson, IEEE/ACM Trans. Networking 5 (1997) 71.
[14] T. Ohira, R. Sawatari, Phys. Rev. E 58 (1998) 193.
[15] R.V. Solé, S. Valverde, Physica A 289 (2001) 595.
[16] K. Nagel, M. Paczuski, Phys. Rev. E 51 (1995) 2909.
[17] K. Nagel, S. Rasmussen, in: R.A. Brooks, P. Maes (Eds.), Artificial Life IV, MIT Press, Cambridge, MA, 1994, p. 222.
[18] H. Li, M. Maresca, IEEE Trans. Comput. 38 (1989) 1345.
[19] C. Germain-Renaud, J.P. Sansonnet, Ordinateurs Massivement Parallèles, Armand Colin, Paris, 1991.
[20] V.M. Milutinovic, Computer Architecture, North-Holland, Elsevier, Amsterdam, 1988.
[21] W.D. Hillis, The Connection Machine, MIT Press, Cambridge, MA, 1985.
[22] A. Arenas, A. Díaz-Guilera, R. Guimerà, Phys. Rev. Lett. 86 (2001) 3196.
[23] V. Jacobson, M.J. Karels, Proceedings of SIGCOMM, 1988, p. 314.
[24] R. Jain, in: Proceedings of the IFIP TC6 Fourth Conference on Information Networks and Data Communication, Finland, 1992.
[25] K. Bolding, M.L. Fulgham, L. Snyder, Technical Report CSE-94-02-04.
[26] B.A. Huberman, R.M. Lukose, Science 277 (1997) 535.
[27] T. Ohira, R. Sawatari, Phys. Rev. E 58 (1998) 193.
[28] H. Fukś, A.T. Lawniczak, preprint adap-org/9909006.
[29] P. Bak, C. Tang, K. Wiesenfeld, Phys. Rev. Lett. 59 (1987) 381.
[30] H.J. Jensen, Self-Organized Criticality, Cambridge University Press, Cambridge, 1998.
[31] D.M. Chiu, R. Jain, Comput. Networks ISDN Systems 17 (1989) 1.
[32] R. Nelson, Probability, Stochastic Processes and Queueing Theory, Springer, New York, 1995.
[33] W. Willinger, R. Govindan, S. Jamin, V. Paxson, S. Shenker, Proc. Natl. Acad. Sci. USA 99 (2002) 2573.
[34] L. Zhang, S. Shenker, D. Clark, ACM Comput. Commun. Rev. (1991).
[35] R. Jain, K.K. Ramakrishnan, Proceedings of the IEEE Computer Networking Symposium, Washington, DC, April 1988, p. 134.
[36] S. Floyd, V. Jacobson, IEEE/ACM Trans. Networking (1993).