Adaptive Non-linear Congestion Controller for a Differentiated-Services Framework Andreas Pitsillides, Member IEEE, Petros Ioannou, Fellow IEEE, Marios Lestas, Loukas Rossides
Abstract— The growing use of computer networks requires efficient ways of managing network traffic, in order to avoid, or at least limit, the level of congestion in cases where increases in bandwidth are not desirable or possible. In this paper we develop and analyze a generic Integrated Dynamic Congestion Control (IDCC) scheme for controlling traffic using information on the status of each queue in the network. The IDCC scheme is designed using non-linear control theory, based on a non-linear model of the network generated using fluid flow considerations. The methodology used is general and independent of technology, such as TCP/IP or ATM. We assume a differentiated-services network framework and formulate our control strategy in the same spirit as IP DiffServ, for three types of services: Premium Service, Ordinary Service, and Best Effort Service. The three differentiated classes of traffic operate at each output port of a router/switch. An IDCC scheme is designed for each output port, and a simple to implement non-linear controller, with proven performance, is designed and analyzed. Using analysis, performance bounds are derived for provable controlled network behavior, as dictated by reference values of the desired or acceptable length of the associated queues. By tightly controlling each output port, the overall network performance is also expected to be tightly controlled. The IDCC methodology has been applied to an ATM network. We use OPNET simulations to demonstrate that the proposed control methodology achieves the desired behavior of the network and possesses important attributes, such as stable and robust behavior, high utilization with bounded delay and loss, together with good steady state and transient behavior.

Index Terms— Congestion control, Non-linear Adaptive Control theory, Differentiated-Services Framework, Internet, ATM.
Manuscript received September 14, 2000; revised October 21, 2003; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor Leandros Tassiulas. This work was supported in part by the University of Cyprus and in part by the National Science Foundation under Grant Number ECS 9877193. A. Pitsillides and L. Rossides are with the Department of Computer Science, University of Cyprus (email: [email protected]). P. Ioannou and M. Lestas are with the Department of Electrical Engineering, University of Southern California, Los Angeles, CA 90007, USA (email: [email protected]; [email protected]).

I. INTRODUCTION

It is generally accepted that the problem of network congestion control remains a critical issue and a high priority, especially given the growing size, demand, and speed (bandwidth) of the increasingly integrated services demanded from fixed and mobile networks. Moreover, congestion may become unmanageable unless effective, robust, and efficient methods for congestion control are developed. One could argue that network congestion is a problem unlikely to disappear in the near future; it is well known that the optimal control of networks of queues is a notoriously difficult problem, even for simple cases [1]. This assertion is also supported by the fact
that despite the vast research efforts, spanning a few decades, and the large number of different control schemes proposed, there are still no universally acceptable congestion control solutions to address these challenges. It is worth noting that a number of popular congestion control designs were developed using intuition, mostly resulting in simple non-linear control schemes. One example is the eternal congestion control solution deployed in the Internet Transmission Control Protocol (TCP) [2], [3] and its subsequent "fixes" [4], [5], [6], [7]. It is worth noting that the Available Bit Rate (ABR) problem [8] in Asynchronous Transfer Mode (ATM) has witnessed a similar approach, with popular congestion control schemes (see e.g. [9], [10], [11]) also developed using intuition, again resulting in simple non-linear control designs. Despite the ad hoc approach and their simplicity, these have shown remarkable performance and have been demonstrated to be robust in a variety of real life and simulated scenarios. But, under certain conditions, empirical and analytical evidence demonstrates the poor performance and cyclic behavior of the controlled TCP/IP Internet ([12], [13], [14]). This is exacerbated as link speeds increase to satisfy demand, and also as the demand on the network for better quality of service increases. Note that for WAN networks a multifractal behavior has been observed [15], and it is suggested that this behavior (a cascade effect) may be related to existing network controls [16]. To understand, and importantly to predict, this demonstrated behavior is no easy task, especially since these schemes are designed with significant non-linearities (e.g. two-phase dynamic windows (slow start and congestion avoidance), binary feedback, additive-increase multiplicative-decrease flow control, etc.). The formal, rigorous analysis of the closed loop behavior is difficult, if at all possible, even for single control loop networks. Furthermore, the interaction of additional non-linear feedback loops can produce unexpected and erratic behavior [17]. Clearly, proven effective congestion control schemes are needed. Despite the successful application of control theory to other complex systems (e.g. power, traffic, chemical plants, space structures, aerospace systems, etc.), the development of network congestion control based on control theoretic concepts is quite unexplored. This is in spite of the significant demands placed on the network system over recent years for the delivery of guaranteed performance in terms of quality of service to the users. One may attribute this to the complexity of the control problem, coupled with the lack of collaboration between teletraffic engineers and control systems theorists (though lately there are signs of increased collaboration). Most of the current congestion control methods are based on intuition and
ad hoc control techniques together with extensive simulations to demonstrate their performance. The problem with this approach is that very little is known about why these methods work, and very little explanation can be given when they fail. Several attempts have been made to develop congestion controllers using optimal [18]; linear [17], [19], [20], [21], [22]; predictive adaptive [23], [24]; fuzzy and neural [25], [26], [27]; and non-linear [28], [29], [30] control. Despite these efforts, the design of congestion network controllers whose performance can be analytically established and demonstrated in practice is still a challenging unresolved problem. Recent advances in non-linear adaptive control theory [31] offer potential for developing effective network congestion controllers whose properties can be analytically established. This paper proposes a generic scheme for congestion control based on non-linear and adaptive control ideas. It uses an integrated dynamic congestion control approach (IDCC). A specific problem formulation for handling multiple differentiated classes of traffic, operating at each output port of a switch, is illustrated. IDCC is derived from non-linear adaptive control theory using a simple fluid flow model. The fluid flow model is developed using packet flow conservation considerations and by matching the queue behavior at equilibrium. While the fluid model may not be accurate all the time, the control technique used takes into account the presence of modelling errors and inaccuracies and minimizes their effect. Recently, there has been pressure on the Internet to transform into a multi-service high-speed network; see e.g. the IntServ and DiffServ architectures [32], [33]. Lately, interest has centered mainly on DiffServ architectures, as scalability problems have been reported for IntServ. Following the same spirit adopted by the IETF DiffServ working group for the Internet [33] we define classes of aggregated behavior. In this paper we define three Services: Premium Traffic Service, Ordinary Traffic Service, and Best Effort Traffic Service. It should be noted that the methodology used is general and independent of technology, as for example TCP/IP or ATM. The proposed IDCC algorithm can be classified as Network-assisted Congestion Control [34] and uses queue length information for feedback. It is becoming clear [35] that the existing end-to-end TCP congestion avoidance mechanisms, while necessary and powerful, are not sufficient to provide good service in all circumstances. Basically, there is a limit as to how much control can be accomplished from the edges of the network. Some mechanisms are needed in the routers to complement the endpoint congestion avoidance mechanisms, as suggested by several researchers ([36], [12], [37], [38], [39]). Note that the need for gateway control was realized early; e.g. see [2], where for future work the gateway side is advocated as necessary. For TCP traffic, the newly developed strategies [36], [37], [35], [38], [39] advocate a more active router participation in the generation of a more responsive feedback signal. In particular, RED [36] has stimulated a plethora of activities, including extensive evaluations and further techniques [40]. IDCC operates locally for the Premium Traffic Service (note the similarity in concept with RED [36]), and for the Ordinary Traffic Service it sends feedback to the sources to regulate their rate. Several approaches for explicit or implicit feedback
to the sender can be adopted, as well as conversions to a TCP-type window; these are briefly discussed in Section II.C. IDCC has a number of important control attributes ([25], [22]), such as:

• It exhibits provable stable and robust behavior at each port. By tightly controlling each output port, the overall network performance is also expected to be tightly controlled.
• It achieves high utilization with bounded delay and loss performance.
• It exhibits good steady state and transient behavior: no observable oscillations, and fast rise and quick settling times.
• It uses minimal information to control the system and avoids additional measurements and noisy estimates: (i) it uses only one primary measure, namely queue length; (ii) it does not require per-connection state information; (iii) it does not require any state information about the set of connections bottlenecked elsewhere in the network (not even a count of these connections). In order to improve the speed of response, an estimate of the number of active sources at the switch (N̂) can be useful; however, in simulations good performance was obtained for a constant value set to one; (iv) it computes the Ordinary Traffic allowable transmission rate only once every Ts msec (the control update period), thereby reducing processing overhead. The controller is fairly insensitive to the choice of value for Ts and N̂.
• It achieves max/min fairness in a natural way, without any additional computation or information about bottleneck rates of individual connections.
• It can guarantee a minimum agreeable service rate without any additional computation.
• It works over a wide range of network conditions, such as round trip (feedback) delays (evaluated from 0 to 250 msec RTT), traffic patterns, and controller control intervals (evaluated from 32 to 353 celltimes), without any change in the control parameters.
• It works in an integrated way with different services (e.g. Premium Traffic, Ordinary Traffic, Best Effort Traffic) without the need for any explicit information about their traffic behavior.
• The proposed control methodology and its performance are independent of the size of the queue reference values as long as they are below the saturation point of the associated queues. As a result the network operator can dynamically steer the network operating region in accordance with global considerations, and has the flexibility to be more or less aggressive, in accordance with the current network and user needs.
• It has simple implementation and low computational overhead. It features a very small set of design constants that can be easily set (tuned) from a simple understanding of the system behavior.
This paper is organized as follows. Section II presents the control problem and objective, and section III illustrates the formal derivation of the integrated dynamic congestion
controller (IDCC). The analytic performance evaluation of the derived algorithm is presented in Appendices I and II. Section IV discusses the implementation of IDCC and evaluates its performance. The attributes discussed above are demonstrated using simulations. Finally, Section V presents our conclusions.

II. THE CONTROL PROBLEM AND OBJECTIVE

We propose a generic scheme for handling multiple differentiated classes of traffic, using an integrated dynamic congestion control approach derived using non-linear control theory. By differentiating each class, the control objective for each class is 'decoupled' from the rest, thus simplifying the overall control design. The control strategy is model-based dynamic feedback linearization, with proportional plus integral action and adaptation. It should be noted that the methodology used is general and independent of technology, as for example TCP/IP or ATM¹. Generically, we use the term packet for both IP packets and ATM cells, and the term switch for both ATM switches and IP routers.

¹ Since the paper was submitted, IDCC was successfully integrated within the RMD (Resource Management in DiffServ) framework, which extends DiffServ principles to provide dynamic resource management and admission control in IP-DiffServ domains. This work was partially funded by the EC research project SEACORN: Simulation of Enhanced UMTS Access and Core Networks, IST-2001-34900, 2002.

A. Proposed Differentiated-Services framework

Recently, the DiffServ working group adopted two broad aggregate behaviour groups: the Expedited Forwarding (EF) Per-Hop Behaviour (PHB) [41] and the Assured Forwarding (AF) PHB [42]. The EF-PHB can be used to build a low loss, low latency, low jitter, assured bandwidth end-to-end service, thus indirectly providing some minimum 'aggregated' quality of service. The AF-PHB group provides delivery of IP packets in four independently forwarded AF classes. Within each AF class, an IP packet can be assigned one of three different levels of drop probability. Each class can be provided with some minimum bandwidth and buffer guarantees. We adopt the same spirit as the IETF DiffServ working group [33] and divide traffic into three basic types of service: Premium Traffic Service, Ordinary Traffic Service, and Best Effort Traffic Service.

The Premium Traffic Service may belong to the EF-PHB in a DiffServ architecture and is designed for applications with stringent delay and loss requirements that can specify upper bounds on their traffic needs and required quality of service. It is envisaged that the user may contract with the network. The only commitment required of the user is not to exceed the peak rate. Note that policing units at the edge of the network may enforce this commitment. The network contract then guarantees that the contracted bandwidth will be available when the traffic is sent. Typical applications include video on demand, audio, and video conferencing.

The Ordinary Traffic Service may belong to the first class of the AF-PHB in a DiffServ architecture. Note that different priorities may be assigned without greatly complicating the design. The Ordinary Traffic Service is intended for applications that have relaxed delay requirements and allow their rate into the network to be controlled. These services use any leftover capacity from the Premium Traffic. Note that, to ensure that bandwidth is left over from the Premium Traffic Service, a minimum bandwidth may be assigned, e.g. by using bandwidth allocation between services or connection admission. Typical applications include web browsing, image retrieval, e-mail, ftp, etc.

Finally, the Best Effort Traffic Service may belong to the last class of the AF-PHB in a DiffServ architecture. It has no delay or loss expectations. It opportunistically uses any instantaneous leftover capacity from both the Premium and Ordinary Traffic Services.

Fig. 1. Generic output buffered K input-output switch

B. Proposed integrated dynamic congestion control approach

Each service transmits packets to destination terminals. The packets from several Origin-Destination (OD) pairs traverse a number of switches en route to the destination. Each OD flow may be classified as Premium Service, Ordinary Service, or Best Effort Service. We assume a generic output buffered switch as a reference model. The switch has K input and K output ports (see Fig. 1). Each output port has a number of physical or logical queues: one for each traffic class. There is a potential bottleneck at each output port of the switch, caused by the rate mismatch between the flow into and out of the queue. Since the cause of the bottleneck is the limited link capacity at the output ports of the switch, the congestion control scheme will be explained with respect to a specific output port (note that there is no coupling between the output ports). A congestion controller is installed at each output port. By tightly controlling each output port, the overall performance is also expected to be tightly controlled.

At each output port of the switch we assume that dedicated buffer space is allocated for each one of the three services and that the server can be shared between the three in a controlled fashion (see Fig. 2). Premium Service requires strict guarantees of delivery, within given delay and loss bounds. It does not allow regulation of its rate (or at least regulation that would affect the given delay bounds). Any regulation of this type of traffic has to be achieved at the connection phase. Once admitted into the network, the network has to offer service in accordance with the given guarantees. This is the task of
the Premium Traffic Controller. Ordinary Traffic, on the other hand, allows the network to regulate its flow (pace it) into the network. It cannot tolerate loss of packets. It can, however, tolerate queueing delays. This is the task of the Ordinary Traffic Controller. Best Effort Service, in turn, offers no guarantees on either loss or delay. It makes use of any instantaneous leftover capacity.

For the Premium Traffic Service, our approach is to tightly control the length of the Premium Traffic queue to be always close to a reference value, chosen by the network operator, so as to indirectly guarantee acceptable bounds for the maximum delay and loss. The capacity for the Premium Traffic is dynamically allocated, up to the physical server limit, or a given maximum. In this way, the Premium Traffic is always given resources, up to the allocated maximum (C_max: maximum available or assigned capacity, and x_max: maximum buffer size), to ensure the provision of Premium Traffic Service with known bounds. Due to the dynamic nature of the allocated capacity, whenever this service has excess capacity beyond that required to maintain its QoS at the prescribed levels (as set by the queue length reference value), it offers it to the Ordinary Traffic Service. This algorithm uses the error between the queue length of the Premium Traffic queue x_p(t) and the reference queue length x_p^{ref} as the feedback information, and calculates the capacity C_p(t) to be allocated to Premium Traffic once every control interval T_s ms, based on the control algorithm discussed in Section III.

The Ordinary Traffic Service Controller regulates the flow of Ordinary Traffic into the network, by monitoring the length of the Ordinary Traffic queue and the available capacity (leftover after the capacity allocated to Premium Traffic). The length of the Ordinary Traffic queue is compared with the reference value (which could be chosen by the network operator) and, using a non-linear control strategy, the controller calculates and informs the sources of the maximum allowed rate at which they can transmit over the next control interval. This algorithm takes into account the leftover capacity C_r(t) = max[0, C_server − C_p(t)], uses the error between the queue length x_r(t) of the Ordinary Traffic queue and the reference queue length x_r^{ref}(t), and calculates the common rate λ_r(t) to be allocated to the Ordinary Traffic users once every control interval T_s ms, based on the control algorithm discussed in Section III. Once the common rate is calculated it is sent (fed back) to all upstream sources. Based on the received common rate, the source does not allow its transmission rate to exceed this value over the next control interval. Note that any excess source demand (above the received common rate) is queued at the source queues, rather than being allowed to enter the network and thus cause congestion.

The Best Effort Traffic Service operates at the packet/cell scale and uses any instantaneous leftover capacity. This is achieved by monitoring the combined server buffer at the server scheduler. In the absence of any packets in the server buffer awaiting transmission, it allows a packet from the Best Effort Service queue to enter the server buffer (the buffer has a maximum of 2 packets: one in service and one in queue). Note that for ATM this function may be trivial, but for variable size packets more care is required so that time sensitive packets are not caught behind very large Best Effort packets.
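To make the connection between the queue reference value and the delay bound concrete, consider a small worked example; the numbers used (a reference of 100 cells and a minimum Premium allocation of 50 Mb/s) are illustrative assumptions, not values taken from the paper.

```latex
% Illustrative only: assumed reference x_p^{ref} = 100 cells and an assumed
% minimum Premium capacity allocation C_p >= 50 Mb/s. With 53-byte (424-bit)
% ATM cells, holding the Premium queue near its reference bounds the average
% queueing delay at this port by roughly
\[
  d_p \approx \frac{x_p^{ref}\cdot 424\ \text{bits}}{C_p}
      = \frac{100 \times 424}{50 \times 10^{6}}\ \text{s} \approx 0.85\ \text{ms},
\]
% while loss is avoided as long as x_p^{ref} is kept well below the physical
% buffer size (128 cells in the simulations of Section IV).
```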
Fig. 2. Implementation of the control strategy at each switch
C. Feedback signalling schemes for Ordinary Traffic

As discussed above, for Ordinary Traffic the common rate (the feedback signal) must be communicated to the sources for action. Several approaches may be adopted. Indicatively, some feedback signalling schemes include: using full feedback by updating special fields in packets or cells (e.g. RM cells in an ATM setting [8], [10], [9], or in TCP by modifying the receiver window field in the TCP header [5] of a packet sent by the receiver to the source); using Explicit Congestion Notification (ECN) as proposed for the Internet [37], [38] and ATM [8]; using implicit feedback, e.g. a timeout due to a lost packet [2], the end-to-end approach in [34], or using round-trip delay values as indicators of the level of congestion [7]; a conversion from rate to window for TCP-like control [43]; or even more sophisticated schemes, such as adaptive binary marking [44], where sources change their rate according to variations in the binary signals present in the feedback stream, using similar principles to the adaptive delta modulation (ADM) used in communication systems. In this paper, the implementation details of the feedback signalling scheme are left for further study. For the simulative evaluation of the proposed control scheme we use explicit feedback, provided by updating special fields in packets (RM cells in an ATM setting).

D. Dynamic Network Models

Most of the current congestion control techniques use intuition and ad hoc control techniques together with extensive simulations to demonstrate their performance. The problem with this approach is that very little is known about why these methods work, and very little explanation can be given when they fail. The use of dynamic models could provide a better understanding of how the network operates and can be used to develop control techniques whose properties can be established analytically, even when such techniques are based on intuition and ad hoc guesses. For control design purposes the model does not need to be accurate. It is because of the inability to model the real world accurately that feedback was invented and control theory is widely used. A good feedback control design (e.g. based on robust, possibly adaptive, control techniques [31]) should be able to deal with considerable uncertainties and inaccuracies that are not accounted for in the model.
Using the above principle, below we present a known simple dynamic model, which we assume captures the essential dynamics, and which is used for designing the proposed congestion controller.

1) Fluid Flow Model: A dynamic model is sought in a form suitable for a distributed control solution. The objective is to find a model which captures the 'essential' dynamic behavior but has low order complexity, as for example relative to detailed probabilistic models such as the Chapman-Kolmogorov equations for determining the time-dependent state probability distribution of a Markovian queue [45]. Using the approximate fluid flow modelling approach proposed by Agnew [46], various dynamic models have been used by a number of researchers [45], [47], [48], [49], [40] to model a wide range of queueing and contention systems. Note that several variants of the fluid flow model have been extensively used for network performance evaluation and control; see for example an early reference that stimulated a lot of interest thereafter [50], and a recent reference of the present interest [51]. Using the flow conservation principle, for a single queue and assuming no losses, the rate of change of the average number of cells queued at the link buffer can be related to the rate of cell arrivals and departures by a differential equation of the form:

  \dot{x}(t) = -f_{out}(t) + f_{in}(t)    (1)
where x(t) is the state of the queue, given by the ensemble average of the number of cells N(t) in the system (i.e. queue + server) at time t, i.e. x(t) = E{N(t)}; f_{out}(t) is the ensemble average of the cell flow out of the queue at time t; and f_{in}(t) is the ensemble average of the cell flow into the queue at time t. The fluid flow equation is quite general and can model a wide range of queueing and contention systems, as shown in the literature [45], [48], [47], [49]. Assuming that the queue storage capacity is unlimited and the customers arrive at the queue with rate λ(t), then f_{in}(t) is just the offered load rate λ(t), since no packets are dropped. The flow out of the system, f_{out}(t), can be related to the ensemble average utilization of the link ρ(t) by f_{out}(t) = C(t)ρ(t), where C(t) is defined as the capacity of the queue server. We assume that ρ(t) can be approximated by a function G(x(t)) which represents the ensemble average utilization of the queue at time t as a function of the state variable. Thus, the dynamics of the single queue can be represented by a non-linear differential equation of the form:

  \dot{x}(t) = -G(x(t))\,C(t) + \lambda(t), \qquad x(0) = x_0    (2)
which is valid for 0 ≤ x(t) ≤ x_{buffer size} and 0 ≤ C(t) ≤ C_{server}, where x_{buffer size} is the maximum possible queue size and C_{server} the maximum possible server rate. Different approaches can be used to determine G(x(t)). A simple, commonly used approach to determine G(x) is to match the steady-state equilibrium point of (2) with that of an equivalent queueing theory model, where the meaning of "equivalent" depends on the queueing discipline assumed. This method has been validated with simulation by a number of researchers, for different queueing models [45], [48], [47]. Other approaches, such as system identification techniques and neural networks, can also be used to identify the parameters of the fluid flow equation.

Fig. 3. Time evolution of the network system queue state obtained using OPNET simulation (broken line) and the solution of the fluid flow model (solid line). The input to both OPNET and the fluid flow model is the same on-off source (see Fig. 5)

We illustrate the derivation of the state equation for an M/M/1 queue following [45]. We assume that the link has a First-In-First-Out (FIFO) service discipline and a common (shared) buffer. The following standard assumptions are made: the packets arrive according to a Poisson process; packet transmission time is proportional to the packet length; and the packets are exponentially distributed with mean length 1. Then, from the M/M/1 queueing formulas, for a constant arrival rate to the queue the average number in the system at steady state is λ/(C−λ). Requiring that x(t) = λ/(C−λ) when \dot{x} = 0, the state model becomes

  \dot{x}(t) = -\frac{x(t)}{1 + x(t)}\,C(t) + \lambda(t), \qquad x(0) = x_0    (3)
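As a quick check on (3), setting \dot{x} = 0 gives C x/(1+x) = λ, i.e. x = λ/(C−λ), so the model reproduces the M/M/1 mean occupancy at equilibrium. The short sketch below (not the authors' OPNET setup; the capacity, on-off pattern and step size are illustrative assumptions) integrates (3) with forward Euler and can be used to generate the kind of queue trajectory shown in Fig. 3.

```python
# Minimal sketch: forward-Euler integration of the fluid flow model (3),
#   x_dot = -C * x / (1 + x) + lambda(t),
# driven by an on-off source. All numerical values are illustrative assumptions.

def simulate_fluid_flow(C=353_000.0,            # server capacity in cells/s (~155 Mb/s link)
                        lam_on=500_000.0,       # arrival rate during an "on" burst (cells/s)
                        t_on=0.01, t_off=0.03,  # on/off durations (s)
                        T=0.2, dt=1e-5):        # horizon and Euler step (s)
    x, t = 0.0, 0.0
    trajectory = []
    while t < T:
        lam = lam_on if (t % (t_on + t_off)) < t_on else 0.0  # on-off arrivals
        x_dot = -C * x / (1.0 + x) + lam                      # equation (3)
        x = max(0.0, x + dt * x_dot)                          # queue state stays non-negative
        trajectory.append((t, x))
        t += dt
    return trajectory

if __name__ == "__main__":
    traj = simulate_fluid_flow()
    print("peak queue state (cells): %.1f" % max(x for _, x in traj))
```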
The validity of this model has been studied by a number of researchers, including [47], [48]. In [52] we present an example of modelling an Origin-Destination path in a packet based network, derived using fluid flow arguments, and also demonstrate the ability of the fluid flow model to model queueing systems by verifying its behavior in comparison with an event based simulation using OPNET. A typical time evolution of the queue state from both the model and the OPNET simulation is presented in Fig. 3. We can observe that there is a reasonable agreement between the proposed model and the observed behavior of the system, as simulated by a discrete event simulator, which gives confidence in the model for use in the design of the control system. Note that similar fluid flow models, in both discrete and continuous time form, have been used by a number of researchers for designing or analyzing the behavior of network systems under control [53], [17], [19], [14], [54], [40]. For example, the authors of [40] use fluid flow arguments to develop a non-linear dynamic model of TCP in order to analyze and design Active Queue Management (AQM) control systems using RED. For ATM, Rohrs [17], using similar fluid flow arguments, derived a discrete fluid flow model of the state of
the buffer at the output port of an ATM switch, and used this model to evaluate the performance of a binary Backward Explicit Congestion Notification (BECN) control algorithm. He demonstrates the undesired cyclic behavior of the controlled system. This (undesired) cyclic behavior is also presented in [14] for TCP/IP, using dynamic models of the behavior of the different phases of the TCP/IP congestion algorithms (slow start and congestion avoidance) for high bandwidth-delay products and random loss. Their results are demonstrated using simulations. In [54] a similar model to (3) is used, together with intuition, to design an ABR flow control strategy (referred to as a queue control function) to keep the queue controlled; analysis and simulation are used to evaluate the proposed strategy. It is worth noting that many other types of models have been proposed, either using queueing theory arguments or otherwise, but in most cases the derived models are too complex for deriving simple to understand and implement controllers. Efforts to simplify these models for control design purposes often lead to ignoring the dynamic aspects of the network system. For example, in [55] the analysis of the performance of simple (binary) reactive congestion control algorithms is carried out using a queueing theory model, which is limited to steady state analysis only, due to the inability to handle the resultant computational complexity for the dynamic case. In this paper we explore the simple fluid flow dynamic model presented above (3) to demonstrate the derivation of a simple to implement, yet powerful, congestion controller.

III. PROPOSED INTEGRATED CONGESTION CONTROL STRATEGY: DESIGN AND ANALYSIS

At each output port of a switch, we implement IDCC, the integrated congestion control strategy (see Fig. 2). IDCC is an integrated strategy developed for Premium Traffic, Ordinary Traffic and Best Effort Traffic. It is based on the fluid flow model (3), used to model the input-output characteristics of the switch (see Fig. 2), as follows:

  \dot{x}_p(t) = -C_p(t)\,\frac{x_p(t)}{1 + x_p(t)} + \lambda_p(t)    (4)

where x_p(t) is the measured (averaged) state of the Premium Traffic buffer, C_p(t) is the capacity allocated to the Premium Traffic, and λ_p(t) is the rate of the incoming Premium Traffic;

  \dot{x}_r(t) = -C_r(t)\,\frac{x_r(t)}{1 + x_r(t)} + \lambda_r(t)    (5)

where x_r(t) is the measured (averaged) state of the Ordinary Traffic buffer, C_r(t) is the capacity allocated to the Ordinary Traffic, and λ_r(t) is the rate of the incoming Ordinary Traffic. Model (4) is used to develop the Premium Traffic control strategy and model (5) the Ordinary Traffic strategy, as follows.

2) Premium Traffic control strategy: The selected control strategy for Premium Service is developed using the model (4) as follows.

Let \bar{x}_p(t) = x_p(t) - x_p^{ref}, then \dot{\bar{x}}_p(t) = \dot{x}_p(t), where x_p^{ref} is the desired average state of the Premium Traffic buffer. Then from (4)

  \dot{\bar{x}}_p(t) = -C_p(t)\,\frac{x_p(t)}{1 + x_p(t)} + \lambda_p(t)    (6)

where λ_p(t) ≤ k̂_p < C_{server}, k̂_p is a constant indicating the maximum rate that could be allocated to incoming Premium Traffic (e.g. through a connection admission policy), and C_{server} is the physical capacity of the server.

The control objective is to choose the capacity C_p(t) to be allocated to the Premium Traffic, under the constraint that the incoming traffic rate λ_p(t) is unknown but bounded by k̂_p, so that the averaged buffer size x_p(t) is as close to the desired value x_p^{ref} (chosen by the operator or designer) as possible. In mathematical terms we need to choose C_p(t) so that \bar{x}_p(t) → 0 under the constraints that C_p(t) ≤ C_{server} and λ_p(t) ≤ k̂_p < C_{server}. Using feedback linearization and robust adaptive control ideas we choose the control input, i.e. the capacity C_p(t), as

  C_p(t) = \max[0, \min\{C_{server}, \mu(t)\}]    (7)

  \mu(t) = \rho_p(t)\,\frac{1 + x_p(t)}{x_p(t)}\,[\alpha_p \bar{x}_p(t) + k_p(t)]    (8)
where

  \rho_p(t) = \begin{cases} 0 & \text{if } x_p(t) \le 0.01 \\ 1.01\,x_p(t) - 0.01 & \text{if } 0.01 < x_p(t) \le 1 \\ 1 & \text{if } x_p(t) > 1 \end{cases}    (9)

and

  \dot{k}_p(t) = Pr[\delta_p \bar{x}_p(t)], \qquad 0 \le k_p(0) \le \hat{k}_p    (10)

where Pr[·] is a projection operator defined as

  Pr[\delta_p \bar{x}_p(t)] = \begin{cases} \delta_p \bar{x}_p(t) & \text{if } 0 \le k_p(t) \le \hat{k}_p \\ \delta_p \bar{x}_p(t) & \text{if } k_p(t) = \hat{k}_p,\ \bar{x}_p(t) \le 0 \\ \delta_p \bar{x}_p(t) & \text{if } k_p(t) = 0,\ \bar{x}_p(t) \ge 0 \\ 0 & \text{otherwise} \end{cases}    (11)

where α_p > 0 and δ_p > 0 are design constants that affect the convergence rate and performance. The stability analysis of the above control strategy is presented in Appendix I.
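The following sketch shows one way the Premium Traffic control law (7)-(11) could be realized in software. It is an illustration under the definitions above, not the authors' code; in particular, the forward-Euler update of k_p over one control interval T_s is an implementation assumption (Section IV.A gives the authors' own discrete-time version).

```python
# Sketch of the Premium Traffic capacity controller, equations (7)-(11).
# Variable names mirror the paper's symbols; the Ts-discretization of the
# adaptive gain k_p is an assumption made for illustration.

def rho_p(x_p):
    """Gain-scheduling term rho_p(t) of equation (9)."""
    if x_p <= 0.01:
        return 0.0
    if x_p <= 1.0:
        return 1.01 * x_p - 0.01
    return 1.0

def project(k_p, update, k_p_hat):
    """Projection operator Pr[.] of equation (11): keep k_p within [0, k_p_hat].
    The interior case uses strict inequalities so the boundary cases take effect."""
    if 0.0 < k_p < k_p_hat:
        return update
    if k_p >= k_p_hat and update <= 0.0:
        return update
    if k_p <= 0.0 and update >= 0.0:
        return update
    return 0.0

def premium_capacity(x_p, x_p_ref, k_p, alpha_p, delta_p, k_p_hat, C_server, Ts):
    """Return (C_p, updated k_p) for one control interval of length Ts seconds."""
    x_bar = x_p - x_p_ref                                             # queue length error
    mu = rho_p(x_p) * (1.0 + x_p) / max(x_p, 1e-9) * (alpha_p * x_bar + k_p)  # (8)
    C_p = max(0.0, min(C_server, mu))                                 # (7)
    k_p = k_p + Ts * project(k_p, delta_p * x_bar, k_p_hat)           # Euler step of (10)
    return C_p, k_p
```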
3) Ordinary Traffic control strategy: The control strategy is developed using the fluid flow model (5) as follows.

Let \bar{x}_r(t) = x_r(t) - x_r^{ref}, then \dot{\bar{x}}_r(t) = \dot{x}_r(t), where x_r^{ref} is the desired average state of the Ordinary Traffic buffer. Then from (5)

  \dot{\bar{x}}_r(t) = -C_r(t)\,\frac{x_r(t)}{1 + x_r(t)} + \lambda_r(t)    (12)

The control objective is to choose C_r(t) and λ_r(t) so that the average buffer size x_r(t) remains close to the desired value x_r^{ref}, chosen by the operator or designer. The value of C_r(t) is given by

  C_r(t) = \max[0,\ C_{server} - C_p(t)].    (13)
In other words the capacity allocated to the outgoing Ordinary Traffic is whatever is left after allocation to the Premium
Traffic. Using feedback linearization we choose the controlled traffic input rate λ_r(t) as

  \lambda_r(t) = \max[0, \min\{C_r(t), g(t)\}]    (14)

  g(t) = C_r(t)\,\frac{x_r(t)}{1 + x_r(t)} - \alpha_r \bar{x}_r(t)    (15)

where α_r > 0 is a design constant. The analysis of the above control strategy is given in Appendix II. Note that, to achieve decoupling of the stability and transient properties of the system from time varying parameters, such as the number of connections N(t), the calculated common rate λ_r(t) is divided by N̂, an estimate of N(t):

  \lambda_r^c(t) = \frac{\lambda_r(t)}{\hat{N}}    (16)
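A corresponding sketch of the Ordinary Traffic rate computation (13)-(16) is given below; again, this is an illustration using the paper's symbols, not the authors' implementation, and the estimate N̂ is simply passed in (the simulations in Section IV use N̂ = 1).

```python
# Sketch of the Ordinary Traffic controller, equations (13)-(16): compute the
# common rate fed back to the sources from the leftover capacity and the
# Ordinary queue error. Illustrative only.

def ordinary_common_rate(x_r, x_r_ref, C_p, alpha_r, C_server, N_hat=1.0):
    C_r = max(0.0, C_server - C_p)                    # leftover capacity, (13)
    x_bar = x_r - x_r_ref                             # queue length error
    g = C_r * x_r / (1.0 + x_r) - alpha_r * x_bar     # feedback linearization term, (15)
    lam_r = max(0.0, min(C_r, g))                     # total allowed input rate, (14)
    return lam_r / N_hat                              # per-source common rate, (16)
```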
The estimation algorithm for N̂ is a separate research topic. We have derived an algorithm, based on on-line parameter identification techniques, which offers guaranteed convergence to the true N(t) exponentially fast. Here we assume that such an estimate exists. This establishes that we can decouple the control algorithm from the number of bottlenecked sessions at each link, and so the analysis presented above is still valid. The simulation examples presented later show good performance even with a constant value of N̂ = 1, which exhibits the robustness of the algorithm with respect to inaccuracies in the estimation of N(t).

4) Best Effort traffic control strategy: The Best Effort traffic controller operates on an instantaneous (packet or cell) time scale. It utilizes any instantaneous leftover capacity to transmit a packet from the Best Effort buffer. This increases the network utilization during periods of insufficient supply of packets from both the Premium and Ordinary Traffic Services.

IV. PERFORMANCE EVALUATION

In this section we use simulations to evaluate the performance of IDCC. We first present the implementation details of the control algorithm, and then the simulation scenarios and simulation results.

A. Implementation of integrated control strategy

At each switch output port (see Fig. 1 and Fig. 2) we implement the integrated control strategy derived in the previous section. The references x_r^{ref}, x_p^{ref} and the design constants α_p, α_r, δ_p, and k̂_p are first selected. At each instant n (n = 0, 1, 2, ...), which corresponds to time t = nT_s, where T_s is the sampling period, we calculate:

  C_p(n) = \max[0, \min\{C_{server}, \mu(n)\}]    (17)

  C_r(n) = \max[0,\ C_{server} - C_p(n)]    (18)

  \lambda_r(n) = \max[0, \min\{C_r(n), g(n)\}]    (19)

  \lambda_r^c(n) = \frac{\lambda_r(n)}{\hat{N}}    (20)

where

  \mu(n) = \rho_p(n)\,\frac{1 + x_p(n)}{x_p(n)}\,[\alpha_p \bar{x}_p(n) + k_p(n)]    (21)

  g(n) = C_r(n)\,\frac{x_r(n)}{1 + x_r(n)} - \alpha_r \bar{x}_r(n)    (22)
For computational reasons the computation of (11) is performed in discrete time as

  \bar{k}_p(n+1) = \beta_p(n)\,k_p(n) + \delta_p(n)\,\frac{\bar{x}_p(n)}{\sqrt{1 + \bar{x}_p^2(n)}}    (23)

Then

  k_p(n+1) = \begin{cases} k_p(0) & \text{if } \bar{k}_p(n+1) > C_{server} \\ \bar{k}_p(n+1) & \text{if } \bar{k}_p(n+1) > k_p^{min} \\ k_p(0) & \text{otherwise} \end{cases}    (24)

where k_p(0) is chosen as \frac{1}{2} C_{server}, 0 < k_p^{min} \le \hat{k}_p, and

  \delta_p(n) = \begin{cases} 0 & \text{if } k_p(n) \le 0 \\ 0.8 & \text{otherwise} \end{cases}    (25)

  \beta_p(n) = \begin{cases} 0.9 & \text{if } k_p(n) > \hat{k}_p \\ 1.1 & \text{if } k_p(n) \le 0 \\ 1 & \text{otherwise} \end{cases}    (26)

Remarks: C_p(n) is used at the switch output port by the scheduler to dynamically allocate capacity to the Premium Traffic queue (see Fig. 2). The allocated capacity is held constant over the period of the control interval T_s ms. The calculated common rate λ_r^c(n) is sent to each of the Ordinary Traffic sources every T_s ms using feedback signalling included in RM cells, as discussed earlier.
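Putting the pieces together, the per-port computation (17)-(26) that runs once every control interval can be sketched as below. This is an illustrative rendering of the equations above, not the authors' implementation; the reset band in the k_p update follows (24) as reconstructed here, and the only state carried between intervals is the pair of measured queue lengths plus the adapted gain k_p.

```python
# Sketch of one IDCC control update at a switch output port, following (17)-(26).
# Not the authors' implementation: parameter handling is illustrative, and the
# k_p reset band mirrors equation (24) as given above.

import math

def idcc_update(x_p, x_r,                      # measured (averaged) queue lengths
                x_p_ref, x_r_ref,              # operator-chosen references
                k_p,                           # adapted gain from the previous interval
                k_p_hat, k_p_min, k_p_init,    # \hat{k}_p, k_p^{min}, k_p(0)
                alpha_p, alpha_r, C_server, N_hat=1.0):
    """Return (Premium capacity C_p, common rate for Ordinary sources, next k_p)."""
    # Premium Traffic capacity, (17) with (21) and the gain schedule (9)
    x_bar_p = x_p - x_p_ref
    rho = 0.0 if x_p <= 0.01 else (1.01 * x_p - 0.01 if x_p <= 1.0 else 1.0)
    mu = rho * (1.0 + x_p) / max(x_p, 1e-9) * (alpha_p * x_bar_p + k_p)
    C_p = max(0.0, min(C_server, mu))

    # Ordinary Traffic common rate, (18)-(20) with (22)
    C_r = max(0.0, C_server - C_p)
    x_bar_r = x_r - x_r_ref
    g = C_r * x_r / (1.0 + x_r) - alpha_r * x_bar_r
    lam_common = max(0.0, min(C_r, g)) / N_hat

    # Discrete-time adaptation of k_p with gating and reset, (23)-(26)
    delta_p = 0.0 if k_p <= 0.0 else 0.8
    beta_p = 0.9 if k_p > k_p_hat else (1.1 if k_p <= 0.0 else 1.0)
    k_p_bar = beta_p * k_p + delta_p * x_bar_p / math.sqrt(1.0 + x_bar_p ** 2)
    k_p_next = k_p_bar if (k_p_min < k_p_bar <= C_server) else k_p_init

    return C_p, lam_common, k_p_next
```

In the simulations of Section IV this update would run once per control interval T_s; C_p(n) is handed to the scheduler and λ_r^c(n) is written into the RM cells fed back to the Ordinary Traffic sources.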
B. Simulations

For the evaluation of the performance using simulation, we use a network comprising a number of ATM switches. As discussed earlier, if the ATM switches are replaced by routers similar performance is expected, provided the calculated common rate is signalled to the sources in a similar fashion.

1) Simulation model: Our ATM network model is shown in Fig. 4. It consists of 3 ATM switches. This reference model has been designed to capture: the interference between traffic travelling a different number of hops; the interference from real-time (Premium) traffic competing with Ordinary Traffic for the finite server resources; the effect of propagation delay on the effectiveness of the control scheme; and the fairness (or lack of it) among traffic travelling a different number of hops. Using the reference model described earlier, we assume all queueing occurs at the output buffers of the switches and that there is no internal blocking in the switch.
Fig. 5. Ordinary Traffic source model. (a) Connection activity. (b) Packet activity. (c) Cell activity. The model parameters selected for the simulations can be seen in Table I.

Fig. 4. Simulation Network Model
In each ATM switch there are three separate logical buffers (per output port) collecting Premium Traffic, Ordinary Traffic and Best Effort Traffic. The Premium Traffic buffers can accommodate 128 cells, and the Ordinary Traffic buffer can accommodate 1024 cells. Best Effort Traffic is selected to have infinite buffer space. The queues are serviced in accordance with the strategy outlined in Section III. We use the same network model for the simulation of the ATM LAN and the ATM WAN, but the distances between the switches are changed to reflect the different geographic spans of the two network types. In the ATM WAN case, the delay between each switch, due to the link distance, is set at 20 msec for each link, and the delay between the last switch and the destination station is also set at 20 msec (thus a round trip delay of 120 msec is present). All of the links are assumed to transmit at 155 Mbits/sec. For the Ordinary Traffic we consider 40 connections at the edge of the network (20 are connected directly to ATM switch 0, and 10 in each of ATM switches 1 and 2), which can have their transmission rate controlled by the network. Three of the Ordinary Traffic flow paths are 1-hop paths, and one is a 3-hop path. Also, four VBR and two CBR sources are directly connected to ATM switch 2 (1-hop paths), representing Premium Traffic. Each Ordinary Traffic terminal generates traffic based on a 3-state model (see Fig. 5 for the model, and Table I for the selected parameters). In the idle state no traffic is generated. The idle period is generated from a geometric distribution with a mean period chosen to adjust the offered load on the link. In the active state the source generates a series of packets or bursts which are interspersed by short pauses. The period of each pause is drawn from a negative exponential distribution. The packet size and the number of packets generated during an active period are also geometrically distributed. We have considered the Ordinary Traffic source terminal buffers as infinite. Each VBR source is simulated by using the autoregressive model proposed by Maglaris et al. [56] (we consider a video source with 480000 pixels/frame). The CBR source generates 25 Mbits/sec and paces the cells into the network uniformly. In case of cell losses, which occur during the periods of congestion, we use a simple retransmission protocol. A packet is presumed to be
lost even if a single cell is lost. Packets that are received by the receiving terminal with missing cells are retransmitted by the source until successful delivery. The "useful network throughput" represents the actual throughput of packets (in Mbits/sec) that are eventually delivered to the destination without packet loss (after retransmission if necessary).

TABLE I
ORDINARY TRAFFIC SOURCE MODEL PARAMETERS

PARAMETER                               DISTRIBUTION   MEAN VALUE
Idle period                             Geometric      Chosen to adjust network load
Generated packets in an active period   Geometric      20
Packet size                             Geometric      8 Kbytes
Pause period                            Exponential    0.002 sec
We have used OPNET simulation tools for our experiments. Using a simple understanding of ATM, we set the controller design constants as follows.

For the Premium Traffic controller: x_p^{ref} = {50 or 100} for a physical buffer size of 128 cells, α_p = 2000, k̂_p = 210000, k_p(0) = 2000, k_p^{min} = 1000.

For the Ordinary Traffic controller: x_r^{ref} = {900 or 300 or 600} for a physical buffer of 1024 cells, α_r = 2000, N̂ = 1.

The control update period T_s msec was set at several values (32 celltimes ≡ 0.085 msec, 75 celltimes, 175 celltimes, and 353 celltimes ≡ 1 msec) in order to investigate the sensitivity of control to the value of the control update period.
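For reference, the celltime-to-milliseconds correspondence quoted above follows directly from the cell size and the line rate; the short calculation below assumes the nominal 155.52 Mb/s line rate and 53-byte (424-bit) ATM cells.

```latex
% One celltime on a 155.52 Mb/s link with 53-byte (424-bit) cells:
\[
  \tau_{cell} = \frac{424\ \text{bits}}{155.52\times 10^{6}\ \text{bits/s}} \approx 2.73\ \mu\text{s},
\]
% hence
\[
  32\,\tau_{cell} \approx 0.087\ \text{ms}\ (\approx 0.085\ \text{ms}), \qquad
  353\,\tau_{cell} \approx 0.96\ \text{ms}\ (\approx 1\ \text{ms}).
\]
```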
C. Simulation results

1) Steady state and transient behavior: Using the simulation model we evaluate the performance of an ATM LAN and an ATM WAN. As noted previously, Premium Traffic (CBR/VBR sources) has priority. We can guarantee it maximum queueing delays not exceeding, in an average sense, the sum of the reference values of each of the buffers in the path, as set by the network administrator. In order to test the responsiveness of our controller (transient behavior) we set a variable reference point for this service. At the beginning we set the reference point to 100 cells. After t = 0.4 sec it is set to 50 cells, and after t = 0.8 sec it is again raised to 100 cells (where t stands
for time in seconds). In this way we show not only that our controller can match the reference values but also that it can cope with dynamic changes that occur in the network (e.g. another connection is set up, more bandwidth is required for real-time services, etc.). To simulate a more realistic scenario, we also change the reference on the Ordinary Traffic. Since we can accept higher delays, the reference values are set at 900 cells for t < 0.5 sec. After that time the reference is set to 300 cells until t = 1 sec, and after t = 1 sec it is raised to 600 cells. It can be noticed that the reference point changes for the Premium and Ordinary Traffic are not synchronized with each other. Note that the Premium and Ordinary Traffic sources generate traffic according to a realistic scenario (they are not saturated sources). The controlled system performance for the case of an ATM network under heavy load (140%) is demonstrated here, for both a LAN and a WAN. Fig. 6 shows the behavior of the Premium Traffic. As expected, the controlled system behavior is the same for both WAN and LAN networks, as the feedback is local. In Fig. 7 the behavior of the Ordinary Traffic queue length is shown with varying control periods for both LAN and WAN configurations. The most heavily congested switch (Switch 2) is selected, where Ordinary Traffic competes with Premium Traffic for the scarce network resources. The figure shows that the controller adapts very quickly to reference point changes (which could be likened to abruptly changing network conditions), as well as showing a reasonable insensitivity to control periods ranging from 0.085 msec to 1 msec (a more than ten-fold increase). It is very important to notice that there are no observable overshoots and undershoots (except for the undershoot at 0.5 seconds for the longest controller period of 1 msec), no oscillations or cyclic behavior, and that the controlled system responds very quickly to the changes introduced in both queues. In other words, the system exhibits good transient behavior. So we can say that we can dynamically control the buffer state and the sources' sending rate, which in turn implies that the network is well controlled and congestion is avoided, or quickly controlled. Note that the case of the WAN exhibits comparable performance with the LAN, even though the propagation delay (and therefore the forward and feedback delay) has substantially increased, due to a round trip time of about 120 msec. Also, the observed deterioration due to the ten-fold increase in control period is acceptable. Observe that for the case of the Premium Traffic queue (Fig. 6) the reference point matches exactly the observed behavior (100 cells and 50 cells). However, in the case of Ordinary Traffic (Fig. 7) a sizeable offset is observed for each reference setting. Note that introducing integrating action in the controller can rectify this offset; however, one can argue whether the extra complexity is justified, as an exact reference value may not be necessary for this service. We have also monitored the queue length behavior for Switch 0 and Switch 1. We observe that both queues are well controlled with no overshoots or undershoots exceeding 2%. For both switches, the reference point is set equal to a constant 600 cells for the Ordinary Traffic, and the Premium Traffic reference is set to zero.
Fig. 6. Switch 2 (last switch) time evolution of Premium Traffic queue length for a LAN and WAN for 140% load demand. Note that as the feedback information is local, there is no deterioration in performance due to the increased WAN propagation delay
Fig. 7. Switch 2 (last switch) time evolution of the Ordinary Traffic queue length. (a) LAN. (b) WAN, for 140% load demand. (The control period varies between 32 celltimes ≡ 0.085 msec and 353 celltimes ≡ 0.94 msec)
Note that even though the 3-hop traffic behavior is dictated by the bottleneck switch downstream (Switch 2), there is no observable performance degradation. Again, an offset from the reference value is observed, which can be rectified by introducing integrating action. The throughput for the bottlenecked switch was also monitored, exhibiting a constant and close to 100% utilization; ∼98% for typical simulation runs. This is very important, since the controller not only avoids congestion but also fully utilizes the available resources, even for demands considerably exceeding the available link capacity (140% in this case).
Fig. 8. Typical behavior of the time evolution of the common calculated allowed cell rate at Switch 2. (a) LAN. (b) WAN.

Fig. 9. Typical behavior of the time evolution of the transmission rate of controlled sources using Switch 2. (a) LAN configuration. (b) WAN configuration.
The time evolution of the calculated common allowed cell rate for the congested switch is shown in Fig. 8 for both LAN and WAN, for 140% load demand. Note that this common allowed cell rate is sent to all sources using this switch. This rate is used by the sources as the maximum rate that they are allowed to transmit over the next control update period. The time evolution of the transmission rate of a number of controlled sources is shown in Fig. 9. In the figure, 3-hop and 1-hop-c sources are shown for both LAN and WAN networks. Note that the sources are not saturated. The source rates quickly and fairly adapt to their steady-state values, even though the 3-hop sources, in the case of the WAN, are located about 12000 km away (equivalently, a 60 msec delay for cells before they arrive at the switch). The issue of fairness is discussed next.

2) Fairness: Fairness is another important attribute of any congestion control system. Of course, fairness in networks is relative. Since we have Premium Traffic Services and Ordinary Traffic Services, the latter must be satisfied with the leftover capacity. All Ordinary Traffic sources should dynamically share the available bandwidth with no discrimination, for example due to their geographic proximity to the switch. Every source transmits according to the same rules. The fairness shown by IDCC can be inferred from Fig. 9 for a number of typical on-off sources. To clearly illustrate the fairness of our scheme we next adopt a similar approach to other published works. We select the network test configuration shown in Fig. 10 and set all sources to be saturated (i.e. they always have cells to transmit). The chosen configuration allows easy interpretation
of the expected behaviour. It is selected to demonstrate the fairness in the presence of large disparity in distance from the switches (local and far sources), as well as the aggressiveness and adaptability to dynamic changes in the network state. In Fig. 11 and Fig. 12 we demonstrate the aggressiveness, fairness and adaptability of the control scheme for both LAN and WAN network topologies. We let the 3-hop traffic start transmitting at t = 0, thus all link bandwidth is totally available to the 3-hop traffic. The 1-hop-a traffic at switch 0 is next started at t = 0.2, thus forcing the 3-hop traffic to share equally the available bandwidth between them. At t = 0.4 the two 1-hop-b sources are started at switch 1, thus forcing the 3-hop traffic to share the available link bandwidth between the 3 sources competing for it at switch 1. Their fair share is 51.6 Mbits/sec (∼117000 cells/sec). Since the 3-hop traffic was forced to reduce its rate by switch 1, it now leaves some unused capacity at switch 0, which the 1-hop-a source quickly takes up, i.e. at t = 0.4 the 1-hop-a source increases its rate from 77.5 Mbits/sec (∼175000 cells/sec) to 90 Mbits/sec (∼204000 cells/sec), taking up the extra capacity. Similarly, at t = 0.6, when the three 1-hop-c sources are started, they force the 3-hop source to reduce its rate to 1/4 of the link bandwidth (i.e. ∼88000 cells/sec). The sources at switch 0 and switch 1 now readapt to claim the leftover bandwidth, sharing it fairly among them. Note that for the case of the WAN, the performance degradation due to the 120 msec RTT is acceptably low, and for the case of the LAN there are no observable undershoots or overshoots in the transient behaviour. Fig. 13 demonstrates the fairness using the allocation of the Ordinary Traffic for the 4 sources using Switch 2. Even though three of the sources are local and one is several thousand km away (120 msec RTT), the rate allocated to each and every one is the same. All sources in this case start transmitting at the same time (t = 0).
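The quoted cells/sec figures can be checked with a simple calculation; the cell payload rate of ≈149.76 Mb/s for an OC-3/STM-1 link (155.52 Mb/s line rate minus SONET/SDH overhead) and the 424-bit cell are standard values assumed here, not stated explicitly in the paper.

```latex
% Link cell rate and fair shares implied by the discussion above:
\[
  C_{server} \approx \frac{149.76\times 10^{6}\ \text{bits/s}}{424\ \text{bits/cell}}
             \approx 353{,}000\ \text{cells/s},
\]
\[
  \tfrac{1}{3} C_{server} \approx 117{,}700\ \text{cells/s}\ (\sim 117000), \qquad
  \tfrac{1}{4} C_{server} \approx 88{,}300\ \text{cells/s}\ (\sim 88000).
\]
```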
Fig. 10. Network test configuration for demonstrating dynamic behavior and fairness
Fig. 11. Allocation of bandwidth to the Ordinary Sources for LAN. All sources are dynamically allocated their fair share at all times

Fig. 12. Allocation of bandwidth to the Ordinary Sources for WAN. All sources are dynamically allocated their fair share
Fig. 13. Allocation of bandwidth to the Ordinary Sources at Switch 2. Observe that the top 3 figures are for local sources and the last one is for a 3-hop source located about 12000 km away from the switch. All sources are allocated their fair share
3) Insensitivity of control to the value of the control update period: The control update period T_s msec was set at several values (32 celltimes ≡ 0.085 msec to 353 celltimes ≡ 1 msec) in order to investigate the sensitivity of control to the value of the control update period. As shown in Fig. 7, the controlled network performance does not degrade considerably, considering the ten-fold increase in the value of the control update period (and thus the ten-fold reduction in control information signalling overhead).

4) Robustness of control design constants to changing network conditions: It is worth pointing out that the behavior of the congestion controller was also observed for diverse traffic demands ranging from 50% to 140%, source locations (feedback signalling delays) up to about 250 msec RTT, and control periods ranging from 0.085 msec to 1 msec. For all simulations the behavior of the network remains very well controlled, without any degradation beyond what one may consider as acceptable. This demonstrates the robustness of the proposed congestion controller. Given also that there was no change in the selected design constants, the proposed scheme has demonstrated its universality and suitability to operate effectively and efficiently under diverse network conditions in both LAN and WAN configurations. Also, it is worth observing that, due to the universality of the fluid flow model, the proposed congestion controller is expected to operate effectively in the case of the Internet¹.

V. CONCLUSIONS

This paper proposes a generic scheme for congestion control. It uses an integrated dynamic congestion control approach (IDCC). A specific problem formulation for handling multiple differentiated classes of traffic, operating at each output port of a switch, is illustrated. IDCC is derived from non-linear control theory using a fluid flow model. The fluid flow model depicts the dynamical system behavior, using packet flow conservation considerations and by matching the queue behavior at equilibrium. Despite the simplicity of the model, the developed control strategy takes care of modelling error effects and other inaccuracies and allows the establishment of performance bounds for provable controlled network behavior. We divide traffic into three basic types of service (in the same spirit as those adopted for the Internet by the IETF DiffServ working group, i.e. Premium, Ordinary, and Best Effort). The proposed control algorithm possesses a number of important attributes, such as provable stable and robust behavior, with high utilization and bounded delay and loss performance (which can be set by reference values), and good steady state and transient behavior. It uses minimal information to control the system and avoids additional measurements. That is, it uses only one primary measure, namely the queue length. The controller for Ordinary Traffic computes and transmits to the sources the common allowable transmission rate only once every T_s msec (the control update period), thereby reducing processing overhead. The controller is reasonably insensitive to the value of T_s and N̂ and achieves max/min fairness. It guarantees a minimum agreeable service rate, and it exhibits robustness in the sense that it works over a wide range of network conditions, such as round trip delays, traffic patterns, and control update intervals, without any change in the control parameters. Furthermore, the controller works in an integrated way with different services and has simple implementation and low computational overhead, as well as featuring a very
small set of design constants that can be easily set (tuned) from a simple understanding of the system behavior. These attributes make the proposed control algorithm appealing for implementation in real, large-scale heterogeneous networks. In this paper full explicit feedback was used in the simulations, signalled using RM cells in an ATM setting¹, to illustrate the properties of the designed strategy and the non-linear control methodology adopted. A challenging task is to investigate other explicit and implicit feedback and signalling schemes and other network settings. Also a matter of further research is the analytical assessment of the global stability of IDCC.

APPENDIX I
PROOF OF STABILITY OF PREMIUM TRAFFIC CONTROL STRATEGY
Theorem 1: The control strategy described by equations (7)-(11) guarantees that $x_p(t)$ is bounded, $C_p(t) \leq C_{server}$, and $x_p(t)$ converges close to $x_p^{ref}$ with time, with an error that depends on the rate of change of $\lambda_p(t)$.

Proof: The closed-loop system is described by equations (6)-(11). From (7), $C_p(t)$ can take the following values over time:

$$C_p = 0 \quad \text{or} \quad C_{server} \quad \text{or} \quad \rho_p\,\frac{1+x_p}{x_p}\left[\alpha_p \bar{x}_p + k_p\right] \tag{27}$$
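To make the structure of (27) concrete, the following minimal Python sketch implements a saturated premium-service capacity allocation together with a projection-type adaptation of $k_p$. It is an illustrative sketch only: the function and variable names, the adaptation gain delta_p, the treatment of an empty queue, and the clipping used in place of the switching logic of (7) are assumptions, not the paper's implementation.

import numpy as np

def premium_capacity(x_p, x_ref, k_p, alpha_p, C_server):
    """Saturated premium-service capacity command in the spirit of (27).

    The switching between 0, C_server and the rational expression is
    approximated here by clipping the command to [0, C_server] and by
    returning 0 for an empty queue (assumptions).
    """
    if x_p <= 0.0:                           # empty queue: nothing to serve
        return 0.0
    x_bar = x_p - x_ref                      # queue-length error
    raw = (1.0 + x_p) / x_p * (alpha_p * x_bar + k_p)
    return float(np.clip(raw, 0.0, C_server))

def adapt_k(k_p, x_bar, delta_p, k_bar, dt):
    """Projection-type adaptation of k_p, kept inside [0, k_bar]."""
    k_next = k_p + delta_p * x_bar * dt      # gradient-like update
    return float(np.clip(k_next, 0.0, k_bar))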
If $C_p(t) = 0$ for some $t \geq t_1$, then $\dot{\bar{x}}_p = \lambda_p$ and $\bar{x}_p(t) = \bar{x}_p(t_1) + \lambda_p (t - t_1)$. Since $\lambda_p(t) > 0$, it follows that after some time $t_2 \geq t_1$ we will have $\bar{x}_p(t) \geq 0$ for $t \geq t_2$, and $\bar{x}_p(t)$ will be growing with $t$. Increasing $\bar{x}_p(t)$ implies increasing $x_p(t)$, which means that there exists a time $t_3$ close to $t_2$, i.e. $t_3 \geq t_2 \geq t_1$, such that $C_p(t)$ takes the value

$$C_p(t) = \rho_p\,\frac{1+x_p}{x_p}\left[\alpha_p \bar{x}_p + k_p\right] \tag{28}$$

and $\rho_p$ in this case is equal to 1, since for $\bar{x}_p(t) \geq 0$ we have $x_p(t) > 1$. Then (6) becomes

$$\dot{\bar{x}}_p = -\alpha_p \bar{x}_p - (k_p - \lambda_p), \qquad t \geq t_3 \tag{29}$$
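For readers who wish to retrace the step from (28) to (29), the following worked substitution assumes that the fluid-flow model (6) has the form implied elsewhere in these appendices, namely $\dot{\bar{x}}_p = -C_p\,x_p/(1+x_p) + \lambda_p$:

$$\dot{\bar{x}}_p = -\left(\frac{1+x_p}{x_p}\left[\alpha_p \bar{x}_p + k_p\right]\right)\frac{x_p}{1+x_p} + \lambda_p = -\alpha_p \bar{x}_p - k_p + \lambda_p = -\alpha_p \bar{x}_p - (k_p - \lambda_p).$$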
Consider the function

$$V = \frac{\bar{x}_p^2}{2} + \frac{(k_p - \lambda_p)^2}{2\delta_p} \tag{30}$$

Then

$$\dot{V} = -\alpha_p \bar{x}_p^2 - (k_p - \lambda_p)\,\bar{x}_p - \frac{(k_p - \lambda_p)\,\dot{\lambda}_p}{\delta_p} + (k_p - \lambda_p)\,\Pr[\bar{x}_p] \tag{31}$$

It can be shown [31] that $(k_p - \lambda_p)\left(-\bar{x}_p + \Pr[\bar{x}_p]\right) \leq 0$. Therefore

$$\dot{V} \leq -\alpha_p \bar{x}_p^2 + k_1 \left|\dot{\lambda}_p\right| \tag{32}$$

for some finite constant $k_1 \geq 0$. Since $k_p$ is bounded by projection and $|\dot{\lambda}_p|$ is bounded from above by a finite constant, it follows that $\bar{x}_p$ cannot grow unbounded. That is, large $\bar{x}_p$ leads to $\dot{V} < 0$, which implies that $V$ is decreasing for large $\bar{x}_p$. This argument implies that $V$, and therefore $\bar{x}_p$, is bounded. From (32) we have that

$$\frac{\alpha_p}{t - t_3} \int_{t_3}^{t} \bar{x}_p^2 \, d\tau \;\leq\; \frac{V(t_3) - V(t)}{t - t_3} + \frac{k_1}{t - t_3} \int_{t_3}^{t} \left|\dot{\lambda}_p\right| d\tau. \tag{33}$$

For large $t$, equation (33) implies that the average value of the deviation of $x_p$ from the desired reference $x_p^{ref}$ is bounded from above by the average value of the variations of $\lambda_p$ [31].

Let us now examine the possibility of $C_p(t)$ switching from the value given by (28) to $C_p(t) = C_{server}$ after some time $t > t_3$. Since for $V(t) > k_2$, where $k_2$ is some finite constant that depends on $|\dot{\lambda}_p|$, $V(t)$ is non-increasing and decreasing in the space of $x_p(t)$, and since $k_p$ is constrained by projection not to exceed the value $\bar{k}_p < C_{server}$, it follows that if $\bar{k}_p$ is chosen to be less than $C_{server}$, say $\bar{k}_p < 0.9\,C_{server}$, then no switching will take place.

If instead of $C_p = 0$ we have $C_p = C_{server}$ for some time $t \geq t_4 \geq 0$, then (7) implies that both $x_p(t)$ and $\bar{x}_p(t)$ are large, so that $\rho_p \frac{1+x_p}{x_p}\left[\alpha_p \bar{x}_p + \bar{k}_p\right] > C_{server}$. In that case $\dot{\bar{x}}_p \approx -C_{server} + \lambda_p < 0$, which means that $\bar{x}_p(t)$ is decreasing, and therefore after a finite time $C_p(t) = \rho_p \frac{1+x_p}{x_p}\left[\alpha_p \bar{x}_p + k_p\right]$. The same analysis as above can be repeated to establish that $x_p(t)$ and $\bar{x}_p(t)$ are bounded and that $x_p(t)$ gets closer to $x_p^{ref}$ with time, depending on the size of $|\dot{\lambda}_p|$.

Therefore, no matter which value $C_p(t)$ takes according to (7), $x_p(t)$ and $\bar{x}_p(t)$ will always be bounded, and $x_p(t)$ will be forced after a finite time into the region where $\bar{x}_p(t)$ decreases and $x_p(t)$ approaches $x_p^{ref}$ within an error of the order of $|\dot{\lambda}_p|$ in the average sense. The number of possible switchings can be reduced considerably by properly selecting the design constants $x_p^{ref}$ and $k_p$.
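As an informal complement to Theorem 1, one can discretize the fluid-flow model and close the loop with a capacity law of the form (27) to observe the predicted behavior numerically. The Euler-integration sketch below is a toy check under assumed values (step size, gains, projection bound, and the sinusoidal arrival rate are illustrative assumptions); it is not part of the paper's OPNET evaluation.

import math

# Toy closed-loop check of the premium-service controller under assumed parameters.
dt, T = 0.001, 20.0                    # Euler step and horizon (seconds)
alpha_p, delta_p = 2.0, 5.0            # control and adaptation gains (illustrative)
C_server = 150.0                       # output-port capacity (illustrative units)
k_bar = 0.9 * C_server                 # projection bound, chosen k_bar < C_server as in the proof
x_ref = 20.0                           # desired premium queue length (packets)

x_p, k_p = 0.0, 0.0
for n in range(int(T / dt)):
    t = n * dt
    lam_p = 80.0 + 30.0 * math.sin(0.5 * t)                    # slowly varying arrival rate (assumed)
    x_bar = x_p - x_ref
    raw = (1.0 + x_p) / x_p * (alpha_p * x_bar + k_p) if x_p > 0 else 0.0
    C_p = min(max(raw, 0.0), C_server)                         # saturated capacity command, cf. (27)
    k_p = min(max(k_p + delta_p * x_bar * dt, 0.0), k_bar)     # projected adaptation of k_p
    # Fluid-flow queue dynamics: xdot_p = -C_p * x_p/(1+x_p) + lambda_p, kept non-negative.
    x_p = max(0.0, x_p + dt * (-C_p * x_p / (1.0 + x_p) + lam_p))

print("final premium queue length:", round(x_p, 2), "(reference:", x_ref, ")")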
APPENDIX II
PROOF OF STABILITY OF THE ORDINARY TRAFFIC CONTROL STRATEGY

Theorem 2: The control strategy given by equation (14) guarantees that $x_r(t)$ is bounded. When bandwidth becomes available, $x_r(t)$ approaches $x_r^{ref}$ with time.

Proof: Since $0 \leq C_p(t) \leq C_{server}$, it follows that $C_{server} \geq C_r(t) \geq 0$. If $C_r(t) > 0$ and $\lambda_r(t) = C_r(t)$ (from (12)), then $\dot{\bar{x}}_r = -C_r \frac{x_r}{1+x_r} + C_r = \frac{C_r}{1+x_r} > 0$, which implies that $\bar{x}_r(t)$ increases. From (12) it follows that there exists a finite time at which $\lambda_r = C_r \frac{x_r}{1+x_r} - \alpha_r \bar{x}_r$ and (12) becomes $\dot{\bar{x}}_r = -\alpha_r \bar{x}_r$, which implies that $\bar{x}_r(t)$ reduces to zero exponentially fast. If $C_r(t) = 0$ then $\lambda_r(t) = 0$, i.e. no bandwidth is allocated and no traffic is admitted; in that case $\bar{x}_r(t)$ remains constant. Therefore, in all cases $x_r(t)$ will be bounded within acceptable bounds and, if $C_r(t) > 0$, the proposed control strategy guarantees that $x_r(t)$ approaches $x_r^{ref}$ with time exponentially fast.
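The proof above suggests the shape of the Ordinary-service rate command (14): the common allowed rate tracks the leftover capacity $C_r$ while a proportional term drives $x_r$ toward $x_r^{ref}$. The sketch below is an inference from that structure, not a transcription of (12)-(14); the clipping bounds, the division by the estimated number of active sources N_hat, and the names used are assumptions.

def ordinary_allowed_rate(x_r, x_ref_r, C_r, alpha_r, N_hat):
    """Common allowed transmission rate for Ordinary traffic, recomputed every Ts.

    Inferred structure: serve the leftover capacity C_r while steering the
    ordinary queue x_r toward its reference; the unsaturated command yields
    xdot_r = -alpha_r * (x_r - x_ref_r) under the fluid model, and the result
    is clipped to [0, C_r] and shared equally among N_hat sources (assumption).
    """
    x_bar = x_r - x_ref_r
    raw = C_r * x_r / (1.0 + x_r) - alpha_r * x_bar
    total = min(max(raw, 0.0), C_r)          # never allocate beyond the leftover capacity
    return total / max(N_hat, 1)             # per-source fair share

# Example: 40 units of leftover capacity, queue above its reference of 50 packets.
rate = ordinary_allowed_rate(x_r=80.0, x_ref_r=50.0, C_r=40.0, alpha_r=0.5, N_hat=10)
print("per-source allowed rate:", round(rate, 3))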
REFERENCES
[1] C. H. Papadimitriou and J. N. Tsitsiklis. The complexity of optimal queueing network control. Mathematics of Operations Research, 24(2):293–305, May 1999.
[2] V. Jacobson. Congestion avoidance and control. In Symposium Proceedings on Communications Architectures and Protocols, pages 314–329. ACM Press, 1988.
[3] W. Stevens. TCP slow start, congestion avoidance, fast retransmit and fast recovery algorithms. RFC 2001, January 1997.
[4] P. Karn and C. Partridge. Improving round-trip time estimates in reliable transport protocols. In Proceedings of the ACM Workshop on Frontiers in Computer Communications Technology, pages 2–7. ACM Press, October 1987.
[5] W. Stevens. TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley, 1994.
[6] V. Jacobson, R. Braden, and D. Borman. TCP extensions for high performance. RFC 1323, May 1992.
[7] L. Brakmo and L. Peterson. TCP Vegas: End to end congestion avoidance on a global Internet. IEEE Journal on Selected Areas in Communications, 13(8):1465–1480, October 1995.
[8] ATM Forum. Traffic management specification version 4.0. Tech. Report AF-TM-0056.000, April 1996.
[9] L. Roberts. Enhanced PRCA (proportional rate control algorithm). Tech. Report AF-TM 94-0735R1, August 1994.
[10] R. Jain, S. Kalyanaraman, R. Goyal, S. Fahmy, and R. Viswanathan. ERICA switch algorithm: a complete description. ATM Forum, AF/96-1172, August 1996.
[11] P. Newman. Backward explicit congestion notification for ATM local area networks. In Proceedings of GLOBECOM'93, pages 719–723, 1993.
[12] S. Shenker, L. Zhang, and D. D. Clark. Some observations on the dynamics of a congestion control algorithm. Computer Communications Review, pages 30–39, October 1990.
[13] J. Martin and A. Nilsson. The evolution of congestion control in TCP/IP: from reactive windows to preventive flow control. TR-97/11, North Carolina State University, August 1997.
[14] T. V. Lakshman and U. Madhow. The performance of TCP/IP for networks with high bandwidth-delay products and random loss. IEEE/ACM Transactions on Networking, 5(3):336–350, June 1997.
[15] A. Feldmann, A. C. Gilbert, and W. Willinger. Data networks as cascades: investigating the multifractal nature of Internet WAN traffic. In Proceedings of the ACM SIGCOMM '98 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, pages 42–55. ACM Press, 1998.
[16] A. Feldmann, A. C. Gilbert, P. Huang, and W. Willinger. Dynamics of IP traffic: a study of the role of variability and the impact of control. In Proceedings of the ACM SIGCOMM '99 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, pages 301–313. ACM Press, 1999.
[17] C. E. Rohrs, R. A. Berry, and S. J. O'Halek. A control engineer's look at ATM congestion avoidance. In Proceedings of GLOBECOM'95, 1995.
[18] A. Segall. The modelling of adaptive routing in data communication networks. IEEE Transactions on Communications, 25(1):85–95, January 1977.
[19] S. Keshav. A control-theoretic approach to flow control. SIGCOMM Comput. Commun. Rev., 25(1):188–201, 1995.
[20] L. Benmohamed and S. M. Meerkov. Feedback control of congestion in packet switching networks: The case of a single congested node. IEEE/ACM Transactions on Networking, 1(6):693–708, 1993.
[21] L. Benmohamed and Y. T. Yang. A control-theoretic ABR explicit rate algorithm for ATM switches with per-VC queueing. In Proceedings of INFOCOM'98, volume 1, pages 183–191, 1998.
[22] A. Kolarov and G. Ramamurthy.
A control-theoretic approach to the design of an explicit rate controller for ABR service. IEEE/ACM Transactions on Networking, 7(5):741–753, October 1999.
[23] A. Pitsillides and J. Lambert. Adaptive connection admission and flow control: quality of service with high utilization. In Proceedings of INFOCOM'94, volume 1, pages 1083–1091, 1994.
[24] A. Pitsillides and J. Lambert. Adaptive congestion control in ATM based networks: quality of service with high utilization. Journal of Computer Communications, 20:1239–1258, 1997.
[25] A. Pitsillides, A. Sekercioglu, and G. Ramamurthy. Effective control of traffic flow in ATM networks using fuzzy explicit rate marking (FERM). IEEE Journal on Selected Areas in Communications (JSAC), 15(2):209–225, February 1997.
[26] Y. C. Liu and C. Douligeris. Rate regulation with feedback controller in ATM networks - a neural network approach. IEEE Journal on Selected Areas in Communications (JSAC), 15(2):200–208, February 1997.
[27] A. Pitsillides and A. Sekercioglu. Congestion control. In Computational Intelligence in Telecommunications Networks. CRC Press, ISBN 0-8493-1075-X, September 2000.
[28] A. Pitsillides and P. Ioannou. Combined nonlinear control of flow rate and bandwidth for virtual paths in ATM based networks. In Proceedings of the 3rd IEEE Mediterranean Symposium on New Directions in Control and Automation, Limassol, Cyprus, July 1995.
[29] A. Pitsillides, P. Ioannou, and D. Tipper. Integrated control of connection admission, flow rate and bandwidth for ATM based networks. In Proceedings of IEEE INFOCOM'96, volume 2, pages 785–793, March 1996.
[30] A. Sekercioglu, A. Pitsillides, and P. Ioannou. A simulation study on the performance of integrated switching strategy for traffic management in ATM networks. In IEEE Symposium on Computers and Communications (ISCC'98), pages 13–18, June 1998.
[31] P. Ioannou and J. Sun. Robust Adaptive Control. Prentice Hall, 1996.
[32] R. Braden, V. Jacobson, and S. Shenker. Integrated services in the Internet architecture: an overview. RFC 1633, July 1994.
[33] D. Black, S. Blake, M. Carlson, E. Davies, Z. Wang, and W. Weiss. An architecture for differentiated services. RFC 2475, December 1998.
[34] J. F. Kurose and K. W. Ross. Computer Networking: A Top-Down Approach Featuring the Internet. Addison-Wesley, 2000.
[35] B. Braden et al. Recommendations on queue management and congestion avoidance in the Internet. RFC 2309, April 1998.
[36] S. Floyd and V. Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on Networking, 1(4):397–413, August 1993.
[37] K. K. Ramakrishnan and S. Floyd. A proposal to add explicit congestion notification (ECN) to IP. draft-kksjf-ecn-03.txt, October 1998 (RFC 2481, January 1999; RFC 3168, September 2001).
[38] K. K. Ramakrishnan, B. Davie, and S. Floyd. A proposal to incorporate ECN in MPLS. draft-mpls-ecn-00.txt, July 1999.
[39] S. Floyd and K. Fall. Promoting the use of end-to-end congestion control in the Internet. IEEE/ACM Transactions on Networking, 7(4):458–472, August 1999.
[40] C. Hollot, V. Misra, D. Towsley, and W. Gong. Control theoretic analysis of RED. University of Massachusetts, CMPSCI Technical Report TR 00-41, July 2000.
[41] V. Jacobson, K. Nichols, and K. Poduri. An expedited forwarding PHB. RFC 2598, June 1999.
[42] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski. Assured forwarding PHB group. RFC 2597, June 1999.
[43] R. Satyavolu, K. Duvedi, and S. Kalyanaraman. Explicit rate control of TCP applications. ATM Forum/98-0152R1, February 1998.
[44] A. Almeida and C. Belo. Explicit rate congestion control with binary notifications. In Proceedings of the 10th IEEE Workshop on Local and Metropolitan Area Networks (LANMAN), Sydney, Australia, November 1999.
[45] D. Tipper and M. K. Sundareshan. Numerical methods for modelling computer networks under nonstationary conditions. IEEE Journal on Selected Areas in Communications, December 1990.
[46] C. Agnew. Dynamic modelling and control of congestion-prone systems. Operations Research, 24(3):400–419, 1976.
[47] J. Filipiak. Modelling and Control of Dynamic Flows in Communication Networks. Springer-Verlag, 1988.
[48] S. Sharma and D. Tipper. Approximate models for the study of nonstationary queues and their application to communication networks.
In Proceedings of IEEE ICC'93, May 1993.
[49] X. Gu, K. Sohraby, and D. R. Vaman. Control and Performance in Packet, Circuit and ATM Networks. Kluwer Academic Publishers, 1995.
[50] D. Anick, D. Mitra, and M. Sondhi. Stochastic theory of a data-handling system with multiple sources. Bell System Technical Journal, 61:1871–1894, 1982.
[51] Y. Wardi and B. Melamed. Continuous flow models: modeling, simulation and continuity properties. In Proceedings of the 38th Conference on Decision and Control, volume 1, pages 34–39, December 1999.
[52] L. Rossides, A. Pitsillides, and P. Ioannou. Non-linear congestion control: Comparison of a fluid-flow based model with OPNET. TR-991, University of Cyprus, 1999.
[53] J. C. Bolot and A. U. Shankar. Analysis of a fluid approximation to flow control dynamics. In Proceedings of IEEE INFOCOM'92, pages 2398–2407, May 1992.
[54] B. Vandalore, R. Jain, R. Goyal, and S. Fahmy. Design and analysis of queue control functions for explicit rate switch schemes. In Proceedings of IC3N'98, pages 780–786, 1998.
[55] K. Kawahara, Y. Oie, M. Murata, and H. Miyahara. Performance analysis of reactive congestion control for ATM networks. IEEE Journal on Selected Areas in Communications, 13(4):651–661, May 1995.
[56] B. Maglaris, D. Anastassiou, P. Sen, G. Karlsson, and J. Robbins. Performance models of statistical multiplexing in packet video communications. IEEE Transactions on Communications, 36(7):834–844, July 1988.
Andreas Pitsillides (M'89) received the B.Sc. (Hons) degree from the University of Manchester Institute of Science and Technology (UMIST) and the Ph.D. degree from Swinburne University of Technology, Melbourne, Australia, in 1980 and 1993, respectively. From 1980 till 1986 he worked in industry (Siemens and Asea Brown Boveri). He then joined Swinburne University of Technology and, in 1994, the University of Cyprus, where he is currently Associate Professor in the Department of Computer Science and Chairman of the Cyprus Academic and Research Network (CYNET). In 1992, he spent a six-month period as an academic visitor at the Telstra (Australia) Telecom Research Labs (TRL). His research interests include fixed and wireless/cellular integrated services networks, congestion control and resource allocation, computational intelligence and non-linear control theory and their application to telecommunication problems, and Internet technologies and their application in mobile e-services, e.g. in tele-healthcare. He is the author of over 90 research papers, participates in European Commission and locally funded research projects, has presented invited lectures at major research organizations, and has given short courses at international conferences and to industry. He is also widely consulted by industry. He regularly serves on international conference technical committees, as a journal guest editor, and as a reviewer for conference and journal submissions. Among others, he has served as chairman of the EuroMedNet'98 conference and on the executive committee of IEEE INFOCOM 2001, 2002, and 2003 (international vice chair). He is also a member of the IFIP working group WG 6.3.
Petros Ioannou (S'80-M'83-SM'89-F'94) received the B.Sc. degree (first class honors) from University College, London, U.K., and the M.S. and Ph.D. degrees from the University of Illinois, Urbana, in 1978, 1980, and 1982, respectively. In 1982, he joined the Department of Electrical Engineering Systems, University of Southern California, Los Angeles, where he is currently a Professor and the Director of the Center of Advanced Transportation Technologies. His research interests are in the areas of adaptive control, neural networks, nonlinear systems, vehicle dynamics and control, intelligent transportation systems, and marine transportation. He was a Visiting Professor at the University of Newcastle, Australia, in the fall of 1988 and at the Technical University of Crete in the summer of 1992, and served as the Dean of the School of Pure and Applied Science at the University of Cyprus in 1995. He is the author/coauthor of five books and over 150 research papers in the areas of controls, neural networks, nonlinear dynamical systems, and intelligent transportation systems. He has been an Associate Editor for the IEEE Transactions on Automatic Control, the International Journal of Control, and Automatica. He is currently an Associate Editor of the IEEE Transactions on Intelligent Transportation Systems, Associate Editor at Large of the IEEE Transactions on Automatic Control, a member of the Control Systems Society on the IEEE ITS Council Committee, and Vice-Chairman of the IFAC Technical Committee on Transportation Systems. Dr. Ioannou was a recipient of the Outstanding IEEE Transactions Paper Award in 1984 and the recipient of a 1985 Presidential Young Investigator Award.
Marios Lestas (S'00) received the B.A. degree in Electrical and Information Engineering and the M.Eng. degree in Control Engineering from the University of Cambridge, U.K., in 2000. He joined the University of Cyprus as a Ph.D. candidate in August 2000 and also worked for the EC-funded SEACORN project. Since September 2001 he has been continuing his Ph.D. studies at the University of Southern California, USA. His research interests include the application of nonlinear control theory and optimization methods to computer networks.
Loukas Rossides (S'97) received the B.Sc. and M.Sc. degrees from the University of Cyprus in 1997 and 2001, respectively. He is a network administrator at Cy.T.A, where he designs and implements the company's network