DCP: A Distributed-Control Polling MAC Protocol: Specifications and Comparison with DQDB

Marco Conti, Enrico Gregori, and Luciano Lenzini
Abstract: This paper describes and analyzes a novel MAC protocol named distributed-control polling (DCP), which has been designed to bring together the most interesting features of distributed-control MAC protocols (e.g., DQDB) and centralized token-passing MAC protocols (e.g., FASNET, FDDI, EXPRESS-NET). From the fully distributed MAC protocols, DCP acquires the capability to guarantee both a complete utilization of the medium capacity and an access delay of only a few slots at light loads. From the centralized token-passing MAC protocols, DCP inherits a more predictable and fair behavior at heavy loads. The basic ideas of our proposal are: a cycle for acquiring transmission rights and a balancing function between reservations and empty slots. The analysis reported in this paper shows that DCP guarantees a complete utilization of the medium capacity and that its behavior at light loads is close to that of DQDB, while at heavy loads it approaches a polling system with limited service and zero reply interval.
I. INTRODUCTION

Extensive analyses of distributed and "centralized" token-passing MAC protocols have already been carried out and, for both classes, the pros and cons have been identified. In particular, the most interesting features of completely distributed MAC protocols (e.g., DQDB [14]) are the capability to guarantee a utilization of the medium capacity which does not depend on bus length, medium capacity, or packet length, and the capability to provide access delays of only a few slots at light loads [22], [2], [6], [8]. Although the centralized MAC protocols (e.g., FASNET, FDDI, EXPRESS-NET) [16], [1], [12] do not exhibit these desirable features, their behavior is both more predictable and more fair than that of the distributed-control MAC protocols [3], [4], [15], [19]. The distributed-control polling (DCP) MAC protocol has been designed to bring together the most interesting features of the above two classes of MAC protocols.

In this paper, the DCP MAC protocol is introduced, and then its performance and fairness figures, obtained by using both analytic and simulative approaches, are compared with those of DQDB, which is being standardized as an IEEE 802.6 MAN [14]. To compare DCP with DQDB, we first analyze the utilization factor of both MAC protocols, and then we examine some of the most widely accepted quality-of-service indexes (access delay, packet loss, service time). The analysis is carried out under various well-defined workload conditions: normal conditions (i.e., when the offered load is lower (underload conditions) or slightly higher (overload conditions) than the medium capacity), and asymptotic conditions (i.e., when each node is trying to seize all the medium capacity). DQDB behavior under the same workload conditions has been extensively analyzed in [6]-[8], [13], [23]-[25]. For this reason, some of these results are reported in this paper.

The paper is organized as follows. In Section II, we present the DCP MAC protocol. Sections III-V contain the performance analysis of DCP and provide the comparison with DQDB. Specifically, Section III contains the study of MAC protocol capacity, Section IV reports the asymptotic analysis, and Section V describes the results obtained in underload and overload conditions. The summary and conclusions are drawn in Section VI.

Manuscript received April 25, 1990; revised September 14, 1990. This paper was presented in part at the 15th Conference on Local Computer Networks, Minneapolis, MN, October 1990. The authors are with CNR, Istituto CNUCE, 56100 Pisa, Italy. IEEE Log Number 9040835.

II. MAC PROTOCOL DESCRIPTIONS
The DCP MAC protocol has been designed for a slotted dual bus environment. Similar to a DQDB MAN, the principal components of a DCP MAN are the nodes responsible for generating empty slots (head and end), two contra-directional buses (forward and reverse), and a multiplicity of intermediate nodes, addressed by an integer number (node index) ranging from 1 to K, which access both buses. In each node, the arriving segments (in this paper, the terms segment and packet are used interchangeably) are put in a local queue, as determined by the destination address (there are two local queues, one for each bus). Without loss of generality, in the following text we focus our attention on segment transmission on the forward bus, the procedure for transmission on the reverse bus being the same. The description of the DQDB MAC protocol state machine (DQSM) can be found in [14], and so it is unnecessary to repeat it here. Instead, throughout this section, we focus on the description of the DCP MAC protocol, borrowing some very well-known terminology (e.g., busy and REQ bits) and concepts (e.g., cycle) from the distributed and centralized token-passing MAC protocols.
A. DCP MAC Protocol Description
To ease the protocol description, throughout this section we will make reference to a simplified version of the protocol finite state machine (FSM) (see Fig. 1). A complete description of the DCP MAC protocol state machine can be found in [5]. Fig. 1 shows that each node may be in one of the following states: idle, master, or slave. The transitions between these states are determined by control bits (carried by the access control field, ACF, of each slot) and by local counters. The ACF for DCP is made up of three bits: busy (B), start (S), and request (REQ). The busy bit is used to avoid collisions, i.e., a slot with B = 0 is empty while a slot with B = 1 is occupied.
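To fix notation, the following Python sketch shows one way to represent the ACF bits of a slot; the class and field names are ours, not part of the DCP specification.

from dataclasses import dataclass

@dataclass
class AccessControlField:
    """DCP access control field (ACF): busy (B), start (S), and request (REQ) bits."""
    busy: int = 0    # B = 1: the slot carries a segment; B = 0: the slot is empty
    start: int = 0   # S is used by the master to mark the slots of the current cycle
    req: int = 0     # REQ = 1 on the reverse bus signals one transmission credit

    def is_empty(self) -> bool:
        # A node may write a segment into a slot only if it is not busy.
        return self.busy == 0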
Fig. 1. DCP finite state machine (FSM). The states are IDLE, MASTER, and SLAVE; the transitions are driven by the (S, B) bits of the incoming slot and by the local counters LQ-C, PTC-C, and RQ-C.
The use of the REQ and S bits will become clear after the description of the following DCP MAC protocol mechanisms.

Acquisition and Notification of Transmission Rights: Packets arriving at a node are queued in a local queue, from which they are removed when their transmission is enabled. The number of packets queued in a local queue is registered in a counter named LQ-C. To determine when a packet transmission is enabled, DCP makes use of the cycle concept. A generic node {i} is allowed to get transmission rights (a quota) of up to a maximum number of packets (Pmax(i)) per cycle. The transmission rights are managed by means of a counter, called the packet transmission credit counter (PTC-C). The start of each new cycle causes an increase in the value of the PTC-C by an amount equal to the lesser of the two values LQ-C and Pmax(i). These new transmission rights are communicated to the upstream nodes by setting the same number of REQ = 1 on the reverse bus. To manage the setting of the REQ bits, a counter, called the pending request counter (PR-C), has been introduced. The value of this counter is decreased by one whenever it is greater than zero and node {i} sets REQ = 1 on a slot traveling on the reverse bus.

Credit Balancing: Node {i}, to keep track of the credits of the downstream nodes, maintains a counter: the request counter (RQ-C). The RQ-C is increased by one every time a REQ = 1 is observed on the reverse bus, and it is decreased by one (down to zero) for each slot left empty by node {i} on the forward bus.

Scheduling Algorithm for Transmission: Node {i} can transmit if its PTC-C > 0 and RQ-C = 0, and any time node {i} transmits a packet it decreases its PTC-C by one unit. This condition implies a priority of the transmission rights of the downstream nodes (i.e., nodes {j} with j > i) over the transmission rights of node {i}. This choice represents an attempt to recover from the disadvantages caused by the time it takes a REQ = 1 to be taken into account by node {i} (due to the physical distance between nodes).

Cycle Management: Every cycle is managed by means of the start bit. An active node in the idle state (where an active node is a node with LQ-C > 0) observing a slot with S = 0 and B = 0 begins a new cycle, transmits a packet in this slot (i.e., sets B = 1), and assumes the master role for this cycle (transition 1). All the active downstream nodes realize that a new cycle has begun by observing a slot with S = 0 and B = 1. From this time on, up until the end of the cycle, these nodes play the slave role (transition 3). The master is in charge of evaluating the length of the cycle. To this end, it sets the start bit to one (S = 1) in all the subsequent slots until the end of the cycle, i.e., until it has satisfied the credits received from the slaves and it has transmitted its quota for this cycle (i.e., RQ-C = 0 and PTC-C = 0) (transition 2). The same event characterizes the end of the cycle for each slave (transition 4).

It may seem that the cyclic behavior of DCP is negatively influenced by the physical distance between nodes. In fact, due to the traveling time of the control information, it may happen that: more than one node may, at any given time, play the master role; the master may underestimate the length of a cycle; or the master may overestimate the length of a cycle. In the following text, we show how these situations are managed by DCP.

More than one master is quite frequent when there is light traffic on the nodes towards the head node. If node {i}, while still playing the master role, observes a slot with S = 0 and B = 1, it realizes that a new cycle has been started by an upstream master and thus, though its cycle has not been completed, it becomes a slave in the new cycle. The new cycle inherits the remaining part of the older (unfinished) cycle, as the RQ-C of the new master also takes into account the unsatisfied REQ's related to the previous cycle (transition 5); for further details, see [10]. (We define a cycle C1 "older" than cycle C2 when the head node has generated the first slot of C1 earlier than the first slot of C2.)

An underestimation of the length of a cycle occurs when the master considers its cycle ended while some reservations are still traveling. The MAC protocol manages an underestimation by shifting the unfinished work to the next cycle, which thus inherits all the unsatisfied credits of the previous one. As a consequence, it may happen that a node in the slave state observes a slot with S = 0 and B = 0 and so moves into the master role for the new cycle (transition 6).

An overestimation of the length of a cycle occurs when the master takes into account reservations (i.e., REQ = 1) which have already been satisfied, and this may obviously cause a significant waste of bandwidth (for further details, see [10]). To avoid such waste, an additional mechanism, called anticipated transmission, is included in the DCP MAC protocol.

Anticipated Transmission: A node in the idle state with LQ-C > 0 and RQ-C = 0 is authorized to transmit whenever it sees a slot with S = 1 and B = 0. Such a transmission is "flagged" as an anticipated one by increasing a counter. Two counters are used to implement this mechanism: the anticipated cycle counter (AC-C) and the anticipated transmission counter (AT-C). The AT-C counts each single anticipated transmission and, every time its value reaches Pmax, the AT-C counter is reset to zero and the AC-C is increased by one to register that a complete cycle has been anticipated. Thus, the total number of packets flagged as "anticipated" is given by AC-C * Pmax + AT-C.
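To make the interplay among these counters concrete, the Python sketch below models a single node's counter updates for one bus. It is our simplified reading of the rules just described: cycle management (the S bit and the master/slave roles) and anticipated transmissions are deliberately omitted, and all names are ours.

class DCPNode:
    """Simplified DCP counters and slot handling for one bus (a sketch, not the full FSM)."""

    def __init__(self, p_max):
        self.p_max = p_max   # per-cycle quota Pmax(i)
        self.lq_c = 0        # LQ-C: segments waiting in the local queue
        self.ptc_c = 0       # PTC-C: transmission credits for the current cycle
        self.pr_c = 0        # PR-C: REQ = 1 bits still to be sent on the reverse bus
        self.rq_c = 0        # RQ-C: credits claimed by downstream nodes

    def start_of_cycle(self):
        # Acquisition of transmission rights: at most Pmax credits per cycle.
        quota = min(self.lq_c, self.p_max)
        self.ptc_c += quota
        self.pr_c += quota   # the same number of REQ = 1 must be sent upstream

    def reverse_bus_slot(self, req_bit):
        # Credit balancing: count downstream requests; piggyback our own REQ's.
        if req_bit == 1:
            self.rq_c += 1
            return 1
        if self.pr_c > 0:
            self.pr_c -= 1
            return 1          # set REQ = 1 in a free request bit
        return 0

    def forward_bus_slot(self, busy_bit):
        # Scheduling rule: transmit only if we hold credits and no downstream
        # credit is pending (downstream nodes have priority).
        if busy_bit == 0:
            if self.ptc_c > 0 and self.rq_c == 0 and self.lq_c > 0:
                self.ptc_c -= 1
                self.lq_c -= 1
                return 1      # slot used: B = 1
            if self.rq_c > 0:
                self.rq_c -= 1  # leave the slot empty for a downstream node
        return busy_bit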
To guarantee fair bandwidth sharing among nodes, all the anticipated transmissions made by a node will be subtracted from its quota in a subsequent cycle. In particular, a node in the idle state with its AC-C greater than 0, upon observing (S = 0, B = 0), starts a new cycle in which it plays the master role. In this case, it does not acquire any new transmission rights, but decreases its AC-C value by one unit and indicates the beginning of the new cycle by setting B = 1. Furthermore, if the node has LQ-C > 0, it uses this slot and this transmission is considered an anticipated transmission. (This anticipated transmission cannot be permitted if the Pmax value is equal to one since, in this case, every anticipated transmission increases the AC-C.) Thus, in this case, even though the node may have nothing to transmit, it still needs to indicate the start of a new cycle by setting the configuration (S = 0, B = 1). Obviously, this may result in a potential wastage of bandwidth. This waste may be avoided by using an additional bit in the control header to indicate the start of the cycle. According to this extension of the FSM, transitions 1 and 3 can be executed only if AC-C = 0. An analysis of the effectiveness of the anticipated transmission mechanism in reducing bandwidth wastage can be found in [10].

III. ANALYSIS OF MAC PROTOCOL CAPACITY

In this section, we analyze the limitations on the aggregate throughput imposed by each of the two MAC protocols, DQDB and DCP. To this end, we look at the utilization factor (aggregate throughput/medium capacity). In the literature, this index is often called the MAC protocol capacity [18]. Since the utilization factor may depend on the number of active nodes and on their contribution to the offered load, we have identified the following indexes:

ρsingle: the fraction of the medium bandwidth used by the node in the least favored position (if such exists) when it is the only active node and it attempts to seize all the medium capacity.
ρmax: the fraction of the medium bandwidth used by the nodes when each node tries to seize all the medium capacity.
ρ: the fraction of the medium bandwidth used by the nodes when at least one node tries to seize all the medium capacity.

While ρsingle and ρmax provide information about the MAC protocol overhead in two extreme situations, ρ describes the intermediate situations. In fact, ρsingle and ρmax represent two specific instances of ρ. In a MAC protocol which is ideal from the utilization standpoint, all the above indexes must be equal to 1.

The value of ρsingle for the DCP MAC protocol is quite obvious: when there is only one active node, this node always plays the master role and it can use all the medium capacity. To compute ρmax, it is enough to consider that when node {1} has a never-empty local queue, it always plays the master role and the other nodes obtain as many empty slots as the number of REQ = 1 they have sent. As a consequence, there is no bandwidth wastage, i.e., ρmax = 1. A lower bound on ρ can be derived analytically by using the following proposition, which has been proved in [10].

Proposition: The maximum bandwidth wastage satisfies

1 - ρ ≤ 1 / (2 + min_{i=1,...,K} {Pmax(i)}).
TABLE I
CAPACITY COMPARISON

            DQDB (BWB Enabled)                DQDB (BWB Disabled)    DCP
ρsingle     α                                 1                      1
ρmax        1 - (1 - α)/(1 + (K - 1)α)        1                      1
ρ           ≥ α                               1                      ≥ 1 - 1/(2 + Pmax)

Legend: α = BWB-MOD/(BWB-MOD + 1).
In DQDB with the bandwidth balancing mechanism (BWB) enabled, ρsingle and ρmax depend only on the value of the parameter BWB-MOD, and both have been analytically determined in [13]. A lower bound for ρ can be analytically determined by considering that ρ ≥ ρsingle. When the BWB is disabled, DQDB guarantees a complete utilization of the medium capacity for each network configuration, i.e., ρsingle = ρmax = ρ = 1. Table I summarizes the MAC protocol capacity analysis. The values reported in the table clearly indicate that the DCP and DQDB MAC protocols have similar capacity values. It is worthwhile noting that in DCP, if the Pmax value is >> 1, we have a high ρ value even if we always adopt the strategy that a node in the idle state with AC-C > 0, upon observing (S = 0, B = 0), decreases its AC-C and sets (S = 0, B = 1) but does not use this slot. Thus, this policy can always be applied if we wish to reduce the MAC protocol complexity.
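For the DQDB column of Table I, the bandwidth balancing mechanism throttles a saturated node to a fraction α = BWB-MOD/(BWB-MOD + 1) of the capacity left unused by the other nodes. The sketch below is our own illustration of the resulting symmetric fixed point; it is not a computation taken from [13].

def dqdb_bwb_saturated_rates(k, bwb_mod):
    """Symmetric fixed point of DQDB bandwidth balancing with k saturated nodes.

    Each saturated node settles at r = alpha * (1 - (k - 1) * r), i.e.
    r = alpha / (1 + (k - 1) * alpha); the aggregate utilization is k * r.
    """
    alpha = bwb_mod / (bwb_mod + 1.0)
    r = alpha / (1.0 + (k - 1) * alpha)
    return r, k * r

# Example with the values used later in this paper's scenarios: K = 50, BWB_MOD = 8.
per_node, rho_max = dqdb_bwb_saturated_rates(50, 8)
print(per_node, rho_max)   # roughly 0.02 per node, aggregate utilization close to 1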
IV. ANALYSIS IN ASYMPTOTIC CONDITIONS

In asymptotic conditions, we focus only on the bandwidth sharing among nodes. An extensive analysis of DQDB behavior in asymptotic conditions is reported in [8], [24], [11], and [23]. These papers have shown that DQDB with the BWB disabled presents severe fairness problems; thus, in the following analysis, we consider only DQDB with the BWB enabled. Since, throughout this paper, we make use of some of the results reported in [8], in which the DQDB performance has been evaluated assuming a MAN defined by the parameter values reported in Table II, we assume the same network parameter values for DCP as well.

DCP Analysis: DCP behavior in asymptotic conditions can be analytically derived by noting that all active nodes observe the same number of cycles (the proof is reported in [10]); this property, together with ρmax = 1, implies that, under these workload conditions, the percentage of bandwidth obtained by a generic node {i} is Pmax(i) / Σ_{j=1}^{K} Pmax(j). Furthermore, in asymptotic conditions, with an additional assumption it is possible to obtain completely deterministic behavior. To this end, we need to introduce the following definition.

Definition: Δ(i, i+1) is the time it takes a bit to travel from node {i} to node {i+1}.

Proposition: If, for each i in {1, ..., K-1}, Pmax(i) · ts ≥ 2 · Δ(i, i+1), then each node transmits its quota before the end of the cycle which has induced the related reservations. In these conditions, DCP behaves like a polling system with G-limited service and zero reply interval [19], [20], [21], but with a different transmission order, i.e., each node transmits its quota within the cycle in which it has obtained the transmission rights, but the scheduling of the transmissions within a cycle is not exactly the same as that of a polling system.
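As a quick illustration of the properties just stated, the following Python sketch computes each node's asymptotic bandwidth share Pmax(i)/Σ_j Pmax(j), the steady-state cycle length Σ_j Pmax(j) slots, and checks the condition Pmax(i) · ts ≥ 2 · Δ(i, i+1) for the deterministic, polling-like behavior. The function name and the example values are ours and purely illustrative.

def dcp_asymptotic_summary(p_max, delta_slots):
    """Asymptotic DCP figures for K nodes, with all times expressed in slot units.

    p_max:       per-node quotas Pmax(i), i = 1..K
    delta_slots: one-way propagation delays Delta(i, i+1) in slots, length K-1
    """
    total = sum(p_max)
    shares = [q / total for q in p_max]      # bandwidth share Pmax(i) / sum_j Pmax(j)
    cycle_slots = total                      # steady-state cycle length in slots
    # Deterministic, polling-like behavior requires Pmax(i) * ts >= 2 * Delta(i, i+1).
    deterministic = all(p_max[i] >= 2 * delta_slots[i] for i in range(len(delta_slots)))
    return shares, cycle_slots, deterministic

# Hypothetical example: four nodes with equal quotas, adjacent nodes two slots apart.
shares, cycle, ok = dcp_asymptotic_summary([5, 5, 5, 5], [2, 2, 2])
print(shares, cycle, ok)    # equal shares, 20-slot cycles, condition satisfied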
TABLE II
NETWORK PARAMETER VALUES

- Nodes were spaced equally along the two buses
- Capacity of each bus = 150 Mb/s
- Slot size = 53 octets
- Slot duration (ts) = 2.82 µs
- Length of each bus = 147 slots ≈ 89 km
- Bus latency (τ) = (2.82 · 147) µs ≈ 414 µs
- Dual bus latency (2τ) ≈ 828 µs
- Total number of nodes (K) = 50
- Buffer size = 10 segments
Fig. 2. DQDB transient time (BWB-MOD = 8): bandwidth sharing versus node index.
To demonstrate this behavior, let us analyze the following temporal evolution: at time T0, node {1} starts a new cycle (we do not make any assumptions about the network state at time T0; in particular, there may or may not be unsatisfied REQ's from previous cycles); at time T0 + Δ(1, 2), node {2} observes the start of the new cycle and begins to send its REQ's (to simplify the demonstration, we assume that a node sees the beginning of a slot on both buses at the same time; otherwise, this phase difference would have to be taken into account in the proof); at time T0 + 2·Δ(1, 2), node {1} is preempted by the REQ's from node {2}; at time T0 + 2·Δ(1, 2) + 2·Δ(2, 3), node {1} starts to receive the REQ's from node {3} and has not yet completely satisfied the node {2} quota; in this way, it is easy to see that at time T0 + 2·Δ(1, 2) + 2·Δ(2, 3) + ... + 2·Δ(K-1, K), node {1} starts receiving the REQ's from node {K}. Only when node {1} has satisfied all the quotas of the downstream nodes can it complete the transmission of its quota for this cycle. Having achieved the complete transmission of the node {1} quota, a regeneration point is reached. In fact, from now on, cycles of [Pmax(1) + Pmax(2) + ... + Pmax(K)] slots will be continuously repeated, and each node will transmit its quota (Pmax) in every cycle. This attractive feature simplifies the computation of the bounds on DCP performance figures.

Comparison Between DCP and DQDB: In this section, we compare the asymptotic behavior of DCP with that of DQDB with the BWB enabled (BWB-MOD = 8). Both MAC protocols eventually reach a state in which the bandwidth is equally shared among the nodes. Obviously, in DCP this condition could not be achieved if the Pmax parameter were not equal for every node. In DCP it is easy to verify that, independent of the initial state, when a node becomes active it gets, from that time on, the same transmission rights as all other active nodes. In DQDB, on the other hand, equal bandwidth sharing is achieved only after a significant interval (the transient time). This transient time depends upon several factors: the initial state, BWB-MOD, and the number of slots traveling on the two buses (2τ/ts). As far as the DQDB results are concerned, we have defined the beginning of the transient time as the time immediately after the activation of all nodes, and the nodes are sequentially activated from node {1} to node {K} every 2τ µs. No measures are taken in the first (50 · 2τ) µs but, since the BWB mechanism has been working from the beginning, the measured transient times are underestimated. Fig. 2 reports, for this scenario, the evolution of the bandwidth sharing between nodes measured in different periods, each period being equal to (50 · 2τ) µs. From
the figure, it is evident that the steady state is entered in the period [200-250] · 2τ µs, which corresponds to a transient time of at least 167 ms. From this comparison, we can draw the following conclusions. DCP is quicker to achieve the planned bandwidth sharing among the active nodes. DCP provides a parameter (Pmax) which can be used to obtain a given bandwidth sharing at will. This characteristic is similar to the target token rotation time (TTRT) facility in FDDI [15], where, by varying the percentage of TTRT, a node can adjust its guaranteed bandwidth.
V. SIMULATIVE ANALYSIS IN NORMAL CONDITIONS
As many factors influence the behavior of both MAC protocols (topology, workload, slot size, etc.), accurate mathematical modeling is necessary to make a nontrivial estimation of the MAC protocol performance. As accurate analytic models are not available at the moment, we decided to solve a sufficiently detailed mathematical model by simulation. The simulation was carried out using the RESQ package [17]. Before showing the results of our investigation, we need to introduce the workload characterization, the fairness index, and the performance indexes adopted in our analysis.

Workload Characterization: The workload has been characterized by the following parameters: offered load (OL), workload type, and interarrival distribution. The offered load designates the aggregate node-generated traffic normalized to the medium capacity. The workload type defines how the nodes contribute to the offered load. The workload type for each node has been split into two components: one which is kept equal for all nodes and one which is dependent upon the node index [9]. The policies selected for the latter component in the simulation experiments were symmetric (S) and uniform (EQ). In both policies, we assume that the average traffic generated by each node is the same. In workload type S, the probability that a node transmits a segment on a given bus is proportional to the number of downstream nodes. In workload type EQ, all nodes generate the same average traffic on a given bus. The interarrival distribution characterizes the segment interarrival times at the local queues. In our experiments, we have selected an exponential distribution.

Fairness Index: We use the MAC delay, i.e., the service time, as the fairness index. The rationale behind this choice is that this index only gives a measure of the interference among nodes and, thus, in a fully fair system it should be the same for each node independent of the load condition. In addition, it is general enough to be applicable to every load condition [7].
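As an illustration of this workload characterization, the sketch below shows one way to generate per-node arrivals in a simulation: exponential interarrival times whose aggregate matches the offered load, with the forward/reverse split chosen according to the S or EQ policy. The function names and the split formulas are our reading of the definitions above, not taken from [9] or from the RESQ model.

import random

def next_arrival(ol, k, slot_time):
    """Exponential interarrival time for one node.

    The aggregate offered load ol is normalized to the medium capacity and is
    split equally among the k nodes, so each node generates ol/k segments per slot.
    """
    rate_per_slot = ol / k
    return random.expovariate(rate_per_slot) * slot_time

def forward_bus_probability(node_index, k, workload_type):
    """Probability that a segment generated by node_index goes on the forward bus.

    Type S:  proportional to the number of downstream nodes on that bus.
    Type EQ: the same average traffic on each bus for every node.
    """
    if workload_type == "S":
        return (k - node_index) / (k - 1)
    return 0.5  # EQ

# Hypothetical usage: OL = 0.8, K = 50, slot duration 2.82 microseconds.
t = next_arrival(0.8, 50, 2.82)
p_fwd = forward_bus_probability(10, 50, "S")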
Performance Indexes: The quality of service experienced by a user has been measured using two widely accepted performance indexes: the average bus access delay, and the percentage of packet loss on a single bus, defined as

[Load(i, A) - Throughput(i, A)] / Load(i, A)

where Load(i, A) indicates the node {i} contribution to the offered load on bus A, and Throughput(i, A) indicates the throughput of node {i} on bus A.

A. Comparison Between DCP and DQDB in Underload
Fig. 3 plots the average MAC delay versus node index for DCP and DQDB for OL = 0.8 and shows that these MAC protocols exhibit very similar behavior in underload. This implies that we have met one of our targets: at light loads, DCP behaves almost like a random access mechanism. Another piece of information emerging from this figure is that DCP and DQDB suffer from the same type of unfairness: in both MAC protocols, the MAC delay depends on the node index. In our analysis, we have also verified that the queueing time in the local queue under these workload conditions is almost negligible and, thus, there is no significant difference between the average MAC and bus access delays. More simulative results on DCP behavior in underload conditions, which are perfectly aligned with our statements, are reported in [10].
B. Comparison Between DCP and DQDB in Overload

In overload conditions, some nodes achieve a throughput lower than their offered load. Thus, for these nodes, the congestion of the local queue increases greatly, and this generates two effects: first, some segments are lost and, second, the average bus access delay depends (almost linearly) on the local queue capacity. To obtain comparable results, we have maintained the same buffer size (Buf = 10), even though it would be preferable to have a bigger buffer in DCP (see below). To analyze the difference in fairness and quality of service provided by both MAC protocols, we focus on an offered load equal to 1.20. Fig. 4, plotting the average MAC delay versus node index for DCP and DQDB, shows that the steady-state behavior of these two protocols in overload is quite similar. Fig. 5 compares the DCP and DQDB average bus access delay. In this case there is a marked difference, which can be justified by considering the different designs of the two MAC protocols. DCP tries to build transmission cycles in which every node can transmit a given quota (Pmax). A node gets its transmission rights only at the beginning of a cycle. Moreover, as Buf = 10 and Pmax = 5, no more than two cycles are necessary for a node to get the transmission rights for all the packets in its local queue. These observations help explain the almost linear shape of the curve. In DQDB, on the other hand, it is easy to distinguish two sets of nodes, congested and noncongested, which experience very different (and rather extreme) bus access delays. This is because the DQDB MAC protocol manages a single packet per node at a time. This difference in average bus access delay does not significantly affect the percentage of packet loss (see Fig. 6). In fact, in both MAC protocols, the packet loss is concentrated only on those nodes with an offered load higher than (1/K) of the maximum available bandwidth.
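Both observations are easy to restate as code: per-node packet loss follows the definition given with the performance indexes, and loss concentrates on the nodes whose offered load exceeds an equal 1/K share of the available bandwidth. A minimal sketch, with hypothetical helper names:

def packet_loss_fraction(load, throughput):
    """Per-node packet loss on one bus: (Load(i,A) - Throughput(i,A)) / Load(i,A)."""
    if load == 0:
        return 0.0
    return (load - throughput) / load

def congested_nodes(offered_loads, available_bandwidth):
    """Indexes of nodes whose offered load exceeds an equal 1/K share of the
    available bandwidth; in overload, packet loss is concentrated on these nodes."""
    k = len(offered_loads)
    fair_share = available_bandwidth / k
    return [i + 1 for i, load in enumerate(offered_loads) if load > fair_share]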
Fig. 5. Bus access delay in overload (OL = 1.2; workload type S; Buf = 10).
Fig. 6. Packet loss (OL = 1.2; workload type S; Buf = 10).
Please note that DQDB seems more desirable because it can markedly differentiate between congested and noncongested nodes. This behavior derives from the choice of managing a single packet at a time. However, this choice makes DQDB performance negatively affected by batch arrivals [6]; on the other hand, DCP performance (when the batch size is at most Pmax) is only slightly degraded by batch arrivals.
The above figures clearly show that DCP favors the first two nodes over those which are congested. This is only due to the buffer size. In fact, a node can get its transmission rights only at the beginning of a cycle and, thus, if at this point in time the local queue contains a number of packets which is less than the node quota, some of the transmission rights are lost. If Buf = 10 and Pmax = 5, the probability of finding fewer than 5 packets in the local queue is not negligible, and it increases as the node's offered load decreases. Furthermore, the preemption within the same cycle (generated by the downstream nodes via the REQ's) takes some time to be effective. This startup time is mostly utilized by those active nodes which are farthest upstream. Thus these nodes, under this workload situation, experience a lower MAC delay. This, due to the chosen buffer size, results in a lower packet loss probability. To verify this hypothesis, some experiments have been carried out with a larger buffer size and with the requirement that Buf be more than three times the Pmax value. The objective of both choices is to absorb the statistical fluctuations of the packet interarrival time. Figs. 7-9, plotting the percent packet loss and the MAC delay versus node index for different offered loads, confirm our hypothesis, as the privileges for the first two nodes disappear. To further analyze the relationship between DCP behavior and the buffer size, we have performed two additional experiments with an EQ workload type, OL = 1.2, and Buf = 10 and 20, respectively. Fig. 10, plotting the average MAC delay obtained in the two experiments, clearly shows that in this workload condition as well, Buf = 20 is required to obtain a fairer behavior in DCP.

Fig. 7. DCP: MAC delay and packet loss (OL = 1.05; workload type S; Buf = 20; Pmax = 5).
Fig. 8. DCP: MAC delay and packet loss (OL = 1.10; workload type S; Buf = 20; Pmax = 5).
Fig. 9. DCP: MAC delay and packet loss (OL = 1.20; workload type S; Buf = 20; Pmax = 5).
Fig. 10. DCP: Influence of buffer size (OL = 1.2; workload type EQ; Pmax = 5).

VI. SUMMARY AND CONCLUSIONS
In this paper, a novel MAC protocol named DCP has been proposed and its performance has been evaluated and compared with that of DQDB. The DCP MAC protocol has been designed to guarantee a complete utilization of the medium capacity, an access delay of only a few slots at light loads, and predictable and fair behavior at heavy loads. The analysis reported in the paper shows that the above objectives have been fully met. It is worthwhile noting that the basic ideas of our proposal are: (i) a reservation cycle managed by a master (cycle management mechanism); every node can get the master role and, in this sense, DCP is a distributed-control MAC protocol; the transmission credit which a node can get in every cycle is controlled by a parameter, Pmax, a property which provides a characteristic similar to the TTRT facility in FDDI from the bandwidth-sharing point of view; and (ii) a balancing function between reservations and empty slots, active on all nodes (credit balancing mechanism). To analyze the performance of these ideas and make a comparison with DQDB, we have also defined two algorithms: one for managing the order of packet transmission (preemption of upstream nodes) and one for managing the transmission rights within a cycle (G-limited). The target features of DCP can obviously be maintained with other pairs of algorithms. Thus, rather than looking at DCP as a single MAC protocol, it would be more appropriate to consider DCP as a family of MAC protocols, i.e., one MAC protocol for each pair of algorithms. Improvements, at least on some performance figures, are possible and expected using other pairs of algorithms. The DCP basic ideas guarantee that the protocol exhibits the following properties: all (active) nodes observe the same number of cycles and thus, at heavy loads, a predictable bandwidth sharing is obtained; and the utilization of the medium capacity is not affected by bus length, packet size, or medium capacity. The protocol required to implement the DCP basic ideas is
more complex than DQDB (the DCP FSM has more states and transitions than the DQSM). However, the DCP FSM complexity is acceptable for a high-speed protocol implementation. Furthermore, the DCP FSM, as it stands now, has been defined only to verify its performance. Reductions in complexity are expected, e.g., by using an additional bit to manage the start of a cycle, by collapsing the IDLE and SLAVE states, etc. The promising results obtained so far encourage us to extend the DCP protocol to include multiple priorities and an asynchronous service able to meet the requirements of real-time applications. In particular, multiple priorities seem to be easily supported by duplicating some bits in the ACF for each priority level.
REFERENCES

[1] "FDDI token ring media access control (MAC)," ANSI X3.139, 1987.
[2] C. Bisdikian, "Waiting time analysis in a single buffer DQDB (802.6) network," in Proc. INFOCOM '90, San Francisco, CA, June 1990, pp. 610-616.
[3] A. Bondavalli, M. Conti, E. Gregori, L. Lenzini, and L. Strigini, "Medium access protocol for high-speed, long-distance MANs: Performance comparisons for a family of FASNET-based protocols," Comput. Networks ISDN Syst., vol. 18, no. 2, pp. 97-113, 1990.
[4] W. Bux and D. Dykeman, "Analysis and tuning of the FDDI media access control protocol," IEEE J. Select. Areas Commun., vol. 6, pp. 997-1010, July 1988.
[5] M. Conti, E. Gregori, and L. Lenzini, "DCP, a distributed-control polling MAC protocol," CNUCE Rep. C90-23, Aug. 1990.
[6] M. Conti, E. Gregori, and L. Lenzini, "DQDB media access control protocol: Performance evaluation and unfairness analysis," in Proc. 3rd IEEE Workshop on MANs, San Diego, CA, Mar. 28-30, 1989.
[7] M. Conti, E. Gregori, and L. Lenzini, "DQDB under heavy load: Performance evaluation and unfairness analysis," in Proc. INFOCOM '90, San Francisco, CA, June 1990, pp. 313-320.
[8] M. Conti, E. Gregori, and L. Lenzini, "A methodological approach to an extensive analysis of DQDB performance and fairness," IEEE J. Select. Areas Commun., pp. 76-87, Jan. 1991.
[9] M. Conti, F. Grandoni, E. Gregori, L. Lenzini, and L. Strigini, "Interconnection of MANs: Architecture and algorithms for bandwidth allocation," Journal of Internetworking. New York: Wiley.
[10] M. Conti, E. Gregori, and L. Lenzini, "DCP: A fully distributed MAC protocol exploiting the capabilities of polling systems," in Proc. 15th LCN Conf., Minneapolis, MN, Oct. 1990, pp. 320-326.
[11] J. Filipiak, "Access protection for fairness in a distributed queue dual bus metropolitan area network," in Proc. ICC '89, Boston, MA, June 1989, pp. 635-639.
[12] L. Fratta, F. Borgonovo, and F. Tobagi, "The EXPRESS-NET: A local area communication network integrating voice and data," in Performance of Data Communication Systems, G. Pujolle, Ed. New York: Elsevier North-Holland, 1981, pp. 77-88.
[13] E. L. Hahne, N. F. Maxemchuk, and A. K. Choudhury, "Improving DQDB fairness," Contribution to IEEE 802.6 Working Group, Doc. 802.6-89, No. 289, Sept. 1989; and in Proc. INFOCOM '90, San Francisco, CA, June 1990, pp. 175-184.
[14] IEEE 802.6 Distributed Queue Dual Bus, Metropolitan Area Network, Draft Standard, Version D.10, Oct. 30, 1989.
[15] M. J. Johnson, "Proof that timing requirements of the FDDI token ring protocol are satisfied," IEEE Trans. Commun., vol. COM-35, pp. 620-625, June 1987.
[16] J. O. Limb and C. Flores, "Description of Fasnet: A unidirectional local area communications network," Bell Syst. Tech. J., vol. 61, no. 7, pp. 1413-1441, Sept. 1982.
[17] C. Sauer and E. MacNair, "The research queueing package version 2: CMS user guide," IBM Res. Rep., Yorktown, 1982.
[18] M. Schwartz, J. F. Kurose, and Y. Yemini, "Multiple-access protocols and time-constrained communication," ACM Comput. Surveys, vol. 16, no. 1, pp. 43-70, Mar. 1984.
[19] H. Takagi, Analysis of Polling Systems. Cambridge, MA: MIT Press, 1986.
[20] H. Takagi, "Queueing analysis of polling models," ACM Comput. Surveys, vol. 20, no. 1, pp. 5-28, Mar. 1988.
[21] H. Takagi, "Queueing analysis of polling models: An update," in Stochastic Analysis of Computer and Communication Systems, H. Takagi, Ed. New York: Elsevier North-Holland, 1990, pp. 267-318.
[22] P. Tran-Gia and I. Stock, "Approximate performance of the DQDB access protocol," in Proc. 7th ITC Seminar, Adelaide, Australia, Sept. 1989, paper 16.1.
[23] H. R. van As, J. W. Wong, and P. Zafiropulo, "Fairness, priority and predictability of the DQDB MAC protocol under heavy load," in Proc. Int. Zurich Seminar, Zurich, Switzerland, Mar. 1990, pp. 410-417.
[24] J. W. Wong, "Throughput of DQDB networks under heavy load," in Proc. EFOC/LAN 89, Amsterdam, The Netherlands, June 1989, pp. 146-151.
[25] M. Zukerman and P. Potter, "The DQDB protocol and its performance under overload traffic conditions," in Proc. 7th ITC Seminar, Adelaide, Australia, Sept. 1989, paper 16.4.
Marco Conti, for a photograph and biography, see p. 87 of the January 1991 issue of this JOURNAL.
Enrico Gregori, for a photograph and biography, see p. 87 of the January 1991 issue of this JOURNAL.
Luciano Lenzini, for a photograph and biography, see p. 87 of the January 1991 issue of this JOURNAL.