JOURNAL OF LIGHTWAVE TECHNOLOGY, VOL. 27, NO. 15, AUGUST 1, 2009

3259

Offset-Based Scheduling With Flexible Intervals for Evolving GPON Networks

Konstantinos Kanonakis, Member, IEEE, and Ioannis Tomkos, Member, IEEE

Abstract—This paper introduces a novel framework for dynamic bandwidth assignment (DBA) in gigabit passive optical networks (GPONs) employing the Offset-Based Scheduling with Flexible Intervals (OSFI) concept, in order to achieve a distribution of bandwidth in such networks that ensures not only high system utilization but also clean-cut quality of service (QoS) differentiation based on the individual demands of user services. In addition, ways to enhance efficiency in the case of next-generation long-reach GPONs are discussed. A thorough description of the proposed mechanisms is provided, while the improved system performance is verified using simulations.

Index Terms—Dynamic bandwidth assignment (DBA), gigabit passive optical network (GPON), medium access control (MAC), quality of service (QoS).

I. INTRODUCTION

THE much-awaited penetration of photonics as close to the user premises as possible, coined under the general acronym FTTx (Fiber to the x, where x can stand for Curb, Building, or Home), has not yet been fully realized. However, all reasons for this delay, namely the lack of sufficiently high user application bandwidth demands, the advances in DSL technology over the omnipresent legacy copper infrastructure, and the high costs for the initial placement of fiber as well as for the operation and management of optical equipment, have been or are being disproved. Already existing or envisaged bandwidth-hungry applications such as High Definition TV (HDTV), 3DTV, and more, set the demands to new orders of magnitude: the simultaneous use of the aforementioned services along with the prerequisite web browsing and IP telephony leads to an estimate of even 100 Mbps per user, which can by no means be achieved by current access solutions, as even DSL offers only 24 Mbps at best. The concept of passive optical networks (PONs) was first proposed more than 20 years ago and constitutes a highly cost-effective solution for realizing FTTx. A typical PON architecture employs a tree-like topology consisting of one optical line termination (OLT) located at the Central Office (CO) and multiple optical network units (ONUs) at the user premises. PONs have already been successfully deployed

Manuscript received November 11, 2008; revised January 14, 2009. First published April 21, 2009; current version published July 20, 2009. This work was supported in part by the EU FP7 Project SARDANA. The authors are with the High-Speed Networks and Optical Communications Group, Athens Information Technology (AIT), 19002, Peania, Athens, Greece (e-mail: [email protected], [email protected]). Digital Object Identifier 10.1109/JLT.2009.2021412

in many countries, with the lead belonging to Japan, and it is expected that in the following years the rate of deployments will rise dramatically. The predominant protocols are GPON (Gigabit PON) and EPON (Ethernet PON), standardized by the ITU-T [1] and the IEEE [2], respectively, providing symmetrical (downstream and upstream) data rates of up to 2.488 Gbps (GPON) and 1 Gbps (EPON), while supporting tens of ONUs per OLT and providing a reach of up to 20 km (as opposed to the few kilometers of DSL and, most importantly, without significant signal degradation). Updated standards specifying PON networks with capacities of 10 Gbps and spanning distances of up to 100 km are expected to emerge in the next couple of years from both standardization bodies [3], [4]. At the same time, emerging services and applications impose strict quality of service (QoS) requirements. The medium access control (MAC) layer of the GPON network is responsible for offering those QoS guarantees; however, the task of offering QoS in PONs has always been a crucial and challenging one, especially since services and users with diverse behaviors and requirements must coexist within the same network. The critical path in any PON network is the upstream direction, since time-division multiple access (TDMA) requires mechanisms to avoid collisions and distribute upstream bandwidth in the most efficient way. The use of dynamic bandwidth assignment (DBA) improves efficiency by dynamically adjusting the bandwidth among the ONUs in response to their bursty traffic requirements. The practical benefits of DBA are twofold [5]. First, network operators can add more subscribers to the PON due to the more efficient use of bandwidth. Second, subscribers can enjoy, at high quality, services requiring bandwidth peaks which under the traditional fixed allocation approach would imply gross over-provisioning.
There has been a plethora of works addressing PON DBA issues, both for EPONs (most notably the IPACT protocol [6] and studies that enhance this concept, e.g., [7]–[9]) and GPONs (e.g., [10]–[14]). All IPACT-based EPON schemes involve an “adaptive cycle time” concept, whereby the frequency of serving ONUs is decided on the fly. On the contrary, most GPON solutions focus on how the OLT distributes slots of upstream frames among the queues of ONUs on a periodic basis, while the selection of these periods (scheduling intervals) is either assumed to be done offline (thus being fixed for the lifetime of each queue [10], [11], [14]) or is not given much attention [12], [13]. However, this selection procedure is of the utmost importance for the provision of the (time-related) QoS guarantees of packet delay and jitter. In addition, almost all solutions provide QoS differentiation at a class of service (CoS) granularity (usually three to four classes are supported). In reality, QoS needs may differ significantly

0733-8724/$25.00 © 2009 IEEE


even among flows that belong to the same class, hence the PON MAC protocol should possess mechanisms to tackle these individualities. Moreover, extending the reach of PONs raises additional issues regarding network delay and the bandwidth assignment process, which have not been extensively addressed (a significant decrease in average packet delay has been presented in [15], however without satisfactory QoS differentiation). This paper proposes a feasible DBA framework for GPONs that makes use of flexible scheduling intervals together with an elaborate offset-based scheduling mechanism in order to achieve QoS differentiation at the granularity of individual queues combined with high network utilization. Although the proposed mechanism is completely novel, it is still fully compliant with the GPON standard as described in [1] and makes use of the basic tools provided within it. An outline of the proposed framework was presented in [16]. The paper is structured as follows. In Section II, the network architecture and assumptions are described. Section III explains the categorization of ONU queues into a number of types, while Section IV focuses on the concept of Offset-based Scheduling with Flexible Intervals (OSFI) and explains the way in which the proposed system works to achieve efficient operation with the aforementioned QoS differentiation and improved utilization. The exact bandwidth allocation algorithm is detailed, while possible improvements to enhance performance under long-reach operation are discussed. System performance is evaluated in Section V and, finally, conclusions are drawn.

II. NETWORK ARCHITECTURE AND ASSUMPTIONS

The network under study is a common tree-shaped PON employing the standardized GPON protocol. However, since it is expected that few changes will be required regarding the MAC layer of the upcoming 10G GPON, almost everything discussed here could be applied directly to any future protocol update.
Information is organized in frames of large size (the frame duration in [1] is 125 μs, leading to approximately 39-kB frames at 2.5 Gbps). The frame duration in our model is FD (expressed in μs), while the byte sizes of downstream and upstream frames are not always the same, since transmission rates are not necessarily symmetrical. Downstream frames are broadcast to all N ONUs (N is also called the split ratio) by means of a passive splitter (security is achieved by means of encryption) and each ONU collects only the data destined to it; hence the way of operation resembles a broadcast-and-select network. In the upstream direction, frames are formed by data sent from the various ONUs in a TDMA fashion, guided by information sent by the OLT. Collision-free operation in the upstream, as well as a common timing among ONUs, are achieved by using a ranging procedure during activation and registration and, if necessary, by imposing additional delay at the ONU side, so that the round-trip propagation delay can be regarded as fixed and common for all OLT-ONU pairs. We denote the fixed one-way propagation delay as τ. Frames in both directions include both user data and control information. Regarding the bandwidth allocation process, control information can be either slot allocations for specific queues residing at the ONUs, which are called Traffic Containers (T-CONTs) and are characterized using the so-called Alloc-ID identifiers,

TABLE I BASIC NOTATION

or queue status report requests. DBA can be achieved either with status reporting or without it. In the latter case, the OLT must observe idle slots in the upstream direction in order to determine the real bandwidth usage by each ONU and then make the appropriate allocation decisions. In this work, however, we will consider the status reporting case, since it offers the opportunity for more precise bandwidth allocation handling. There are three modes described in [1] that can be used for queue status reporting. The piggy-back mode includes the relevant queue status report in the DBRu field of the upstream frame of the T-CONT that triggered the specific transmission. In other words, the report information is transmitted along with the upstream user data. The proposed mechanism, which will be explained in the following paragraphs, handles Alloc-IDs individually, hence the piggy-back reporting mode is regarded as the most suitable. Following the reception of queue status report information from various T-CONTs, the OLT prepares the US BW Map field, which determines which of them will be granted access to the upcoming upstream frame and specifies pointers (SStart, SStop) to denote the window within that frame during which each specific T-CONT (characterized by its Alloc-ID) is allowed to send its data. Table I summarizes the notation used throughout the paper.

III. ALLOC-ID CATEGORIZATION

In order to provide QoS differentiation employing the OSFI mechanism, it is necessary to categorize the ONU T-CONTs into several types, treating each of them in a different manner according to their individual demands. For example, in [14] five different T-CONT types are described, based on the definitions in [5]. Herein, we will follow a different categorization, hence we will refer to Alloc-ID types rather than T-CONT types


in order to avoid confusion with the types defined in [5], although the two terms are interchangeable. The proposed framework makes use of three general Alloc-ID types. The frequency of upstream slot allocations for an Alloc-ID i is defined by its scheduling interval SI_i (a unitless integer value expressing its polling cycle in multiples of the frame duration), with an amount of B_i bytes being allocated during each interval. The main difference among the three types lies in the way values are assigned to these parameters, as it is clear that the temporal bandwidth (in bytes per second) allocated to each Alloc-ID i is given by

R_i = B_i / (SI_i · FD)  (1)

while the exact choice of SI_i directly affects important metrics like packet delay and jitter. Below, we provide a description of each type and the way it is handled in terms of bandwidth allocation. The Fixed Type Alloc-IDs correspond to the highest priority traffic and have their guaranteed rate allocated in the form of fixed periodic allocations, i.e., the OLT assigns to them a fixed amount of data once every SI_i downstream frames (SI_i is fixed for Alloc-ID i and could even equal 1), without waiting for any queue reporting from the ONU side; the amount of bytes allocated during each interval for a fixed-type Alloc-ID i is therefore B_i, with the resulting rate given by (1). Apart from guaranteed throughput, this Alloc-ID type also enjoys minimal packet delay and jitter, assuming that proper traffic dimensioning has taken place when choosing the B_i and SI_i parameters. The fixed 125 μs framing makes this type suitable for constant bit rate (CBR) and extremely demanding services (e.g., leased line emulation). The Flexible Type covers the middle ground between the absolute QoS of the fixed type and the non-existent QoS of best-effort traffic and is conceptually the most interesting among the Alloc-ID types. All Alloc-IDs belonging to this type are again characterized by an SLA-contracted guaranteed rate R_i^G and an additional maximum surplus rate R_i^S.
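The arithmetic behind (1) can be checked with a short script. This is an illustrative sketch only: the helper names are ours, and the 3906-B/SI = 10 figures are taken from the worked example in Section IV.

```python
# Sketch of (1): temporal bandwidth of Alloc-ID i given its per-interval
# allocation B_i (bytes) and scheduling interval SI_i (in frame durations).
# Helper names are illustrative, not part of the GPON standard.
FD = 125e-6  # GPON frame duration in seconds

def frame_bytes(line_rate_bps: float) -> int:
    """Bytes carried by one 125 us frame at a given line rate."""
    return round(line_rate_bps * FD / 8)

def alloc_rate_mbps(B_i: int, SI_i: int) -> float:
    """Equation (1), converted to Mbps: R_i = B_i / (SI_i * FD)."""
    return B_i * 8 / (SI_i * FD) / 1e6

print(frame_bytes(2.48832e9))             # ~39-kB downstream frame at 2.488 Gbps
print(round(alloc_rate_mbps(3906, 10), 2))  # 3906 B every 10 frames -> ~25 Mbps
```

The same helper also confirms the frame sizes quoted in Section II (about 39 kB per 125 μs frame at 2.5 Gbps).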
The latter is employed in order to offer statistical multiplexing or exploit the free upstream bandwidth when available (the exact use of this surplus bandwidth is up to the provider). Moreover, each Alloc-ID i is associated with an integer-valued relative weight w_i which, as will be shown later, is used to distribute surplus bandwidth fairly among the flexible-type queues. Bandwidth is assigned to the flexible Alloc-IDs in a DBA manner using status reporting. The shortest possible scheduling interval for a flexible-type Alloc-ID, which we define as the basic scheduling interval and denote as SI_min, is bounded by the OLT-ONU round-trip propagation delay 2τ and the processing delays at the OLT and the ONUs, P_OLT and P_ONU, respectively (for the OLT this also includes the scheduling process delay); hence SI_min ≥ 2τ + P_OLT + P_ONU. This basic scheduling interval is rounded up to become an integer multiple of the frame duration FD. The SI value for a flexible-type Alloc-ID has to be equal to or larger than the basic scheduling interval. It must be stressed at this point that we do not regard the scheduling intervals for flexible-type Alloc-IDs as fixed during network operation but we instead allow the OLT to dynamically

Fig. 1. Control message exchanges between the OLT and two flexible-type Alloc-IDs.

choose the optimal value for each SI_i, based on criteria that will be detailed below; this is one of the strongest features of the proposed scheme. The flexible nature of the scheduling interval duration should of course not affect the contracted rates offered to each Alloc-ID i. Hence, the number of bytes B_i served during each scheduling interval must obey the equation B_i = R_i · SI_i · FD (as long as there are always bytes to be served at the queue). The process of updating the SI values takes place once every FD, and the scheduling interval for a specific Alloc-ID will only be updated if a new report for it has arrived. It is obvious that the OLT must easily keep track of the time left until the next allocation/report request for each Alloc-ID. This is achieved using a vector with its elements containing integer values that are interpreted as countdown timers (the time unit being FD) and which is updated once every FD. The reset of these timers happens when the corresponding scheduling interval has been updated, and their starting value equals the updated SI minus SI_min. Fig. 1 depicts the control message exchanges between the OLT and an ONU, with the latter possessing two flexible-type Alloc-IDs i and j. The symbols G_x(t) and R_x(t) denote a grant to and a report from Alloc-ID x, sent at time t. It can be seen that the scheduling intervals are different for the two Alloc-IDs, while SI_j changes during network operation. The update function actually refers to the execution of the algorithm that will be described in Section IV, part of which is the calculation of the new SI_i for the Alloc-IDs which have sent a new report. In addition, the OLT must keep a scheduling log containing all the necessary information (Alloc-ID identifiers and corresponding amounts of allocated bytes) needed to prepare the US BW Map for allocations that the OLT has scheduled in advance.
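The countdown-timer bookkeeping can be sketched as follows. This is a simplified model of our own: the class, the SI_MIN value, and the method names are illustrative, not from the paper.

```python
# Toy model of the per-Alloc-ID countdown-timer vector: all timers are
# decremented once every frame duration; a timer is reset when its Alloc-ID's
# scheduling interval is updated, to the new SI minus the basic scheduling
# interval SI_min (so that it expires when the next report is due).
SI_MIN = 5  # basic scheduling interval in frame durations (illustrative value)

class SchedulerTimers:
    def __init__(self):
        self.timers = {}  # Alloc-ID -> frame durations left until next update

    def tick(self):
        """Called once every frame duration (FD)."""
        for alloc_id in self.timers:
            if self.timers[alloc_id] > 0:
                self.timers[alloc_id] -= 1

    def reset(self, alloc_id, new_si):
        """Reset after an SI update; starting value is new SI minus SI_min."""
        self.timers[alloc_id] = new_si - SI_MIN

t = SchedulerTimers()
t.reset(1, 12)      # Alloc-ID 1 updated to SI = 12 -> timer starts at 7
t.tick(); t.tick()  # two frame durations elapse
print(t.timers[1])  # -> 5
```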


Finally, the Best-Effort Type does not claim any guaranteed bandwidth and can only allocate the bandwidth that is left unused by the previous types, up to a provisioned surplus rate R_i^S. However, it is necessary to guarantee the upstream transmission of the DBRu field to ensure that the OLT is aware of the Alloc-ID queue status. The SI for this type of Alloc-IDs is fixed.

IV. OFFSET-BASED SCHEDULING WITH FLEXIBLE INTERVALS

A. Overview

Fig. 2. Snapshot of the scheduling log at the OLT for the fixed and three different flexible Alloc-ID subtypes (i, j, k).

The column indices denote future downstream frames (column 0 corresponds to the allocations that must be done within the downstream frame that is currently being prepared) and the log must be shifted by one column to the left once every FD. Each time a new SI_i is computed, a new entry is added in column SI_i − SI_min, denoting the time window during which Alloc-ID i will be allowed to send data in an upcoming upstream frame. Hence, the allocation will be sent downstream in SI_i − SI_min frame times, and the new scheduling interval update will happen when the updated status report (along with the data) of Alloc-ID i is received, i.e., in SI_i frame times, as was explained with the help of Fig. 1. An illustration of the scheduling log during network operation is given in Fig. 2. In order to simplify the example, we assume that there are three groups of flexible Alloc-ID queues (i, j, k) and that within each group all queues employ the same offset value, with offset_i < offset_j < offset_k. Allocations that have been made for each group (or for fixed-type Alloc-IDs) are indicated using different shades of grey, while blank areas in each column indicate unallocated bytes. We refer to the number of columns of the log as the scheduling depth (SD), since it determines how deep in time allocations can be arranged. As will become evident later, a higher SD value can provide finer differentiation among Alloc-IDs, at the cost of increased memory and computational requirements. Moreover, a very important observation is that the use of advance reservations results in further multiplexing gain, i.e., not only vertically (using each column of the scheduling log) but also horizontally (by choosing how to space the allocations in neighboring columns of the log). The proposed scheme thus constitutes a well-structured framework for the resolution of scheduling issues in the upstream direction of the PON, offering at the same time (as will also be shown more clearly in Sections IV and V) the flexibility to adjust QoS performance and utilization at individual T-CONT queue granularity.
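The shifting scheduling log described above can be sketched as a toy model. This is our own simplification: the depth is an example value, and the 19 440-B frame size is the one used in the Section IV example.

```python
# Toy model of the OLT scheduling log: one counter of unallocated bytes per
# future frame; shift left once per frame duration, opening a fresh column.
SD = 12             # scheduling depth (number of columns), illustrative
FRAME_BYTES = 19440  # upstream frame size from the Section IV example

class SchedulingLog:
    def __init__(self):
        self.free = [FRAME_BYTES] * SD  # unallocated bytes per column

    def shift(self):
        """Advance one frame: column 0 is sent; a new farthest column opens."""
        self.free.pop(0)
        self.free.append(FRAME_BYTES)

    def reserve(self, column: int, nbytes: int) -> bool:
        """Try to book nbytes in a future frame; fail if it does not fit."""
        if self.free[column] >= nbytes:
            self.free[column] -= nbytes
            return True
        return False

log = SchedulingLog()
log.reserve(8, 5040)  # book an allocation eight frames ahead
log.shift()           # one frame duration later...
print(log.free[7])    # -> 14400 (the booking moved one column closer)
```

A real implementation would store per-column lists of (Alloc-ID, bytes) entries rather than a single free-byte counter, so that the US BW Map can be built from column 0.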

As mentioned in the previous paragraph, the OLT must determine the optimal SI_i value for each flexible-type Alloc-ID i, taking into account several contradicting criteria in terms of QoS performance, system requirements, and utilization. First of all, the piggybacked upstream queue status report DBRu for each Alloc-ID covers a fixed amount of data per frame. For small scheduling intervals and low data rates, the bandwidth wasted for status reporting could be significant. The situation is further aggravated by the fact that the flexibility of handling the allocations for each Alloc-ID individually naturally comes at the cost of additional physical layer overhead (PLOu) per Alloc-ID transmission. The total upstream overhead per Alloc-ID, assuming that no Alloc-ID upstream transmissions from the same ONU are contiguous and that no PLOAM and PLS fields exist, can be calculated as the sum of the Guard time, the PLOu (including the Preamble, Delimiter, BIP, ONU-ID, and Ind fields), and the DBRu. The BIP, ONU-ID, and Ind fields occupy 8 bits each. Regarding the Preamble and Delimiter, suggested values can be found in [17]. Based on the values contained therein, we have made the assumption of 432 and 20 bits, respectively, for an upstream rate of 10 Gbps. Moreover, according to [17], the minimum Guard times are 6, 16, 32, and 64 bits for upstream data rates of 155.52, 622.08, 1244.16, and 2488.32 Mbps, respectively. Therefore, a reasonable projection for rates close to 10 Gbps would be no less than 256 bits. In fact, acceptable performance at 10 Gbps has been experimentally demonstrated [18] using guard times of 30 ns (i.e., 300 bits); however, to achieve longer reach these figures have to be increased by more than one order of magnitude. Finally, if Mode 0 is used for reporting, the length of the DBRu is 2 B. Based on the aforementioned calculations, the total waste per Alloc-ID versus the selected SI for different aggregate upstream data rates is depicted in Fig. 3.
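The overhead arithmetic can be sketched as follows; the bit counts are the ones assumed in the text for 10 Gbps, and the helper name is ours.

```python
# Rough per-Alloc-ID overhead bandwidth at 10 Gbps, using the figures assumed
# in the text: guard 256 b, preamble 432 b, delimiter 20 b, BIP/ONU-ID/Ind
# 8 b each, DBRu 16 b. One transmission occurs every SI frame durations.
FD = 125e-6
OVERHEAD_BITS_10G = 256 + 432 + 20 + 8 + 8 + 8 + 16  # = 748 bits

def overhead_bps(si: int) -> float:
    """Overhead bandwidth consumed by one Alloc-ID polled every si frames."""
    return OVERHEAD_BITS_10G / (si * FD)

print(round(overhead_bps(2) / 1e6, 2))  # -> 2.99, i.e., almost 3 Mbps
```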
For example, if SI_i = 2, the waste would amount to almost 1 and 3 Mbps for upstream rates of 2.5 and 10 Gbps, respectively. Thus, we already meet the first constraint, which dictates that the ratio of report bandwidth per Alloc-ID to its guaranteed rate should be kept below some predefined threshold. In addition, frequent polling and bandwidth assignment for many queues should be avoided, since it obviously implies heavier processing and thus increased complexity and cost for the OLT, which is already expected to reach the limits of electronics if a 10-Gbps rate is used. In fact, many queues (corresponding to elastic services) would not benefit much from very frequent polling and bandwidth assignment; hence, it should be avoided to improve system efficiency and utilization. For the reasons listed above, we follow the approach of setting a lower limit to the SI_i of each Alloc-ID i, equal to SI_min + offset_i. The offset_i parameter actually specifies the lower bound to the queueing delay of each Alloc-ID queue. Each Alloc-ID i is associated with a set of QoS requirements, which in most cases are: a maximum packet delay, a maximum packet delay variation, and a maximum packet loss probability. Loss can only occur due to buffer overflow at the Alloc-ID queues. As it has


already been mentioned, both packet delay and delay variation are affected by the choice of SI_i and obviously by offset_i. Under low network loading conditions, the queueing delay is expected to be roughly distributed between offset_i and offset_i + SI_i frame durations (depending on the exact instant a packet arrived between two consecutive allocations and assuming a simplistic gated grant mechanism). Thus, higher offset values imply both higher delay and delay variation, while at the same time the corresponding Alloc-IDs can use a narrower range of SI values (being limited by the maximum scheduling depth). These effects give the opportunity to fine-tune the performance of the various flexible-type Alloc-IDs (apart from the choice of the R_i^G and R_i^S values) using fitting offset values according to the QoS needs of each individual Alloc-ID. For example, if we denote the maximum queueing delay requirement for Alloc-ID i as D_i^max, then offset_i could be computed as follows:

offset_i = ⌊(D_i^max / FD − SI_min) / 2⌋.  (2)

Fig. 3. Estimated overhead per Alloc-ID upstream transmission.

Under normal network loading conditions, the different offset values are able to provide clear performance differentiation among the various Alloc-IDs. However, one may notice that when load increases, the Alloc-IDs possessing higher offsets are expected to exert a backpressure on the lower-offset ones (as a result of the advance allocations and the corresponding shifting of the scheduling log). That will cause the latter to start choosing higher scheduling intervals (even reaching the higher offset areas), thus sacrificing delay performance in order to ensure their throughput. The end result is that, due to the prioritized execution of the scheduling algorithm which will be detailed in Section IV-B, the higher-offset Alloc-ID queues will collapse first. Hence, the whole process leads to an elegant degradation of performance for the various queues depending on their QoS requirements, something which will become even more evident from the results presented in the performance evaluation section. Regarding the selection of the offset and SD parameters, special care should be taken over the following issue: a high number of supported Alloc-IDs may cause undesirable system behavior at high loads if the difference SD − offset_i, which we call the scheduling range, is not correctly chosen. The reason is that, as explained above, most Alloc-IDs will tend to move towards higher SI values, and then the higher amount of allocated bytes per frame, in addition to short scheduling ranges, will possibly cause even the low-offset Alloc-IDs to “suffocate,” cancelling the desirable way of operation detailed in the previous paragraph.

B. Dynamic Bandwidth Assignment Algorithm

In any case, at the time of the scheduling interval update (i.e., if a report has arrived), the OLT logic must determine the optimal combination of SI_i and B_i values by executing the DBA algorithm that will be detailed in this section. As is the case for any DBA algorithm, the main objectives are two: first, each bandwidth-requiring entity (ONU, traffic class, or individual queue, depending on the implementation) should receive its pre-negotiated guaranteed rate; second, the unallocated bandwidth, if any, should be distributed fairly among the active entities. Consequently, the proposed algorithm consists of two phases. During the first phase, the guaranteed bandwidth component of each flexible-type Alloc-ID, as well as a minor amount of bytes for the best-effort ones (enough for sending the relevant upstream DBRu, i.e., just for polling purposes), is allocated. This is done by first inspecting the relevant columns of the scheduling log, choosing among all eligible SI values (beginning with the one dictated by the offset parameter) the lowest one that can accommodate the full amount of bytes reported by the specific Alloc-ID. If no such interval exists, the one corresponding to the largest unallocated bandwidth is chosen. Another option would be to always directly choose the highest-unallocated-bandwidth interval, but this would inevitably lead to larger delay values and increased computational complexity (a large number of columns would have to be examined, whereas in the former case and under normal conditions only a few checks suffice for the algorithm to break). Then the B_i value is computed as the minimum of the bytes reported by the Alloc-ID and the amount dictated by the combination of the selected interval and the R_i^G of the Alloc-ID. The order in which Alloc-IDs are examined in order to update their scheduling intervals naturally affects their performance, since the ones that are updated first may block the targeted scheduling intervals of others.
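The phase-one interval choice can be sketched as follows. This is our own simplification, not the Fig. 4 pseudocode: the function name and the free-byte values are illustrative.

```python
# Sketch of the phase-1 interval choice: starting from the lowest SI allowed
# by the Alloc-ID's offset, pick the first scheduling-log column that can fit
# the reported bytes; otherwise fall back to the column with the most
# unallocated bytes.
def choose_interval(free_bytes, si_lower, reported):
    """free_bytes[c] = unallocated bytes in scheduling-log column c."""
    candidates = range(si_lower, len(free_bytes))
    for c in candidates:
        if free_bytes[c] >= reported:
            return c  # lowest eligible interval that fits the full report
    # no interval fits: take the one with the largest unallocated bandwidth
    return max(candidates, key=lambda c: free_bytes[c])

free = [0, 0, 0, 3000, 6000, 400, 2500]   # illustrative log occupancy
print(choose_interval(free, 3, 2500))      # -> 3 (first column that fits)
print(choose_interval(free, 3, 9000))      # -> 4 (largest free space fallback)
```

Note how the common case terminates after only a few checks, matching the complexity argument in the text.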
This issue is solved by applying a simplistic prioritized discipline in the examination of the various types of Alloc-IDs, while among the ones belonging to the same type a shifted round-robin examination is applied in order to provide long-term fairness. Note that the proposed mechanism will inherently spread allocations across consecutive frames in case of overload (even if the various SI values are of equal duration), thus achieving higher utilization of the system, something which would not be possible if a rigid, offline assignment of scheduling intervals had been employed. The second phase involves again a round of examinations of the Alloc-ID queues; however, this time only column zero of the scheduling log is considered, and only Alloc-IDs that have already been allocated bandwidth within this column and whose timer has expired take part in the allocation process. Any remaining space in the upcoming frame can now be allocated to those Alloc-IDs in the same prioritized fashion as before, while the weights w_i associated with each Alloc-ID are used to ensure a fair distribution of the extra bandwidth. However, no queue will finally receive more than the sum of its guaranteed and


Fig. 5. Evolution of a scheduling log column after several algorithm executions.

Fig. 4. Pseudocode of the DBA algorithm for the flexible type.

surplus rates. Thus, the allocation of surplus bandwidth happens just before the OLT sends the allocations, because otherwise higher-offset Alloc-IDs would always consume more surplus bandwidth (by making their reservations earlier). A C-style pseudocode of the algorithm for the flexible-type Alloc-IDs is given in Fig. 4. Some clarifications are needed regarding the notation used therein: first of all, we assume there exist N flexible-type Alloc-ID queues and, for the sake of simplicity, that 1) their identifiers take values from 1 to N and 2) the order of Alloc-ID identifiers corresponds to increasing offset values, so that all queue examinations are done in a prioritized way as needed (in reality, each time a new Alloc-ID is registered, a sorting algorithm should be executed to keep the round-robin examination list in order). One symbol denotes the total number of bytes allocated to Alloc-ID i within column c of the scheduling log, while a second represents the amount of unallocated bytes within column c at any time. In order to provide more insight into the operation of the proposed mechanism, let us consider, with the help of Fig. 5, an example showing the evolution of a specific scheduling log column after several algorithm executions. The actual values used are taken from the simulation scenarios (viz. Section V). For the purposes of the example, it suffices to mention that one group of Alloc-IDs belongs to the flexible type with a given set of guaranteed rate, surplus rate, and offset parameters, while a second group is also of the flexible type but with different parameters; finally, there are the best-effort Alloc-IDs. The overall frame size is 19 440 B. At time t_0, the execution of the first phase (guaranteed bandwidth) of the algorithm for the first group of Alloc-IDs

results in their getting allocated the amounts of bytes shown in the figure (the notation “Alloc-ID: Alloc. Bytes” is used) in column 8 of the log. Note that all flexible-type Alloc-IDs received the amount of reported bytes plus 2 B for sending the DBRu field (if Mode 0 is used for reporting [1]), with the exception of Alloc-ID 58, which reported 5040 B but was allocated only 3906. This happened because otherwise the 25-Mbps guaranteed rate would be violated according to (1), given that the selected SI is 10. It can also be seen that best-effort Alloc-IDs receive only sufficient bytes for the DBRu. After a time period equal to five frame durations, the second group of Alloc-IDs also receives its guaranteed bandwidth in the same column (which, however, has shifted to become column 3). It is interesting to remark that the 3564 B of Alloc-ID 1 actually correspond to a much higher bandwidth (about 45.6 Mbps) than the 3906 B of Alloc-ID 58 (25 Mbps), due to the different SI values of each allocation (5 and 10, respectively). Finally, three frame durations later, the Alloc-IDs which possess a surplus bandwidth component (i.e., 54, 58, 26, 51, 11) share the 7664 unallocated bytes in the column (in this example all weights were chosen to be equal), as a result of the execution of the second phase of the algorithm and as long as this does not violate any Alloc-ID's surplus rate; the 1532 B with an SI equal to 10 correspond to less than 10 Mbps. Now the next downstream frame can be prepared, using the values in column 0 of the scheduling log to create the US BW Map for each Alloc-ID contained therein.

C. Long-Reach Considerations

In a 100-km-reach GPON, the signal round-trip delay will be significantly increased, from the 200 μs of a 20-km-reach PON to 1000 μs, consequently causing excessive packet delay (as SI_min becomes much larger).
Moreover, the increased duration between the transmission of reports and the reception of grants causes the undesirable effect of outdated queue status information and a consequent inability to effectively handle fluctuations of bursty traffic. The situation can be rectified by applying a prediction method, which is best explained with the help of Fig. 6. Since only one Alloc-ID is referred to, the indices are dropped for simplicity. When the OLT sends a grant, it must predict the additional amount of bytes that will have been accumulated at the queue from the transmission of the corresponding report until the arrival of the grant at the ONU. The prediction is based on the known amount of bytes accumulated during the preceding report-to-grant interval. As depicted in Fig. 6, this amount was either (a) smaller or (b) larger than the actual queue size at the time of the grant arrival. However, in both cases it can be computed from the previous grant and report together with the amount of idle slots (expressed in bytes) sent by the ONU in the second case (the idle slots should be monitored by the OLT upon reception of the upstream frame). Hence, for the first phase of the bandwidth assignment algorithm, a predicted value is used instead of the actual report, which is computed as follows:

(3)

Fig. 6. Relationship between grants, reports, and Alloc-ID queue size, with reference to Fig. 1.

In order to further improve system utilization, the bytes granted on top of the report are allocated in the form of colorless grants (i.e., destined to T-CONT type 5 as defined in [5]), so that they can be used by other queues of the same ONU if the specified one cannot consume all of them. This way of operation guarantees that no queue will get less bandwidth than requested and at the same time ensures that low-priority queues will not starve due to excessive predictive allocations to the higher-priority ones. We hereafter discriminate between simple Gated operation of OSFI (G-OSFI) and operation using Predictive Colorless Grants (PCG-OSFI).

V. PERFORMANCE EVALUATION

In order to evaluate the performance of the proposed scheme, a simulation model was developed using the OPNET Modeler simulator. For the purposes of this work, two main scenarios were executed: In the first one, the algorithm and network parameters were chosen so as to facilitate comparison with other works in the literature, while the second helps prove that the proposed scheme can also perform well in the case of next generation long-reach GPON networks.

A. First Simulation Scenario

More specifically, in the first scenario the total upstream rate was 1.244 Gbps, the maximum round trip time was 200 µs (which corresponds to a maximum OLT-ONU distance of 20 km), and the number of ONUs was 16. Each ONU was connected via an interface of 100 Mbps. In [14], results were presented based on the categorization of traffic in four T-CONT types. T-CONT type 1 enjoys unsolicited allocations (much like the fixed Alloc-ID type in OSFI) and, since its behavior is deterministic, it is omitted from the discussion. In order to make our results comparable, we devise two subtypes of our general flexible type to emulate the behavior of T-CONT types 2 and 3 of [14]. The best-effort type corresponds to T-CONT type 4. The OSFI service parameters were selected such that the guaranteed and surplus bandwidth, as well as the minimum service interval for each type, follow the values assumed in [14] for the corresponding T-CONT types: For flexible type 1 (FT 1), a guaranteed rate of 50 Mbps with no surplus component, an SI of 5, and a lower offset; for flexible type 2 (FT 2), guaranteed and surplus rates of 25 Mbps each, an SI of 10, and a higher offset. Finally, the best-effort (BE) type has no guaranteed component and a maximum rate of 20 Mbps. The minimum possible basic scheduling interval was selected, while the scheduling depth was 50. Each ONU hence possessed three Alloc-ID queues, one corresponding to each type, each with a maximum capacity of 1 MB. Packets arrived at each queue following a Poisson process, while packet sizes followed the well-known trimodal distribution (packet sizes of 64, 500, and 1500 B appear with probabilities 0.6, 0.2, and 0.2, respectively, according to [19]), which has been shown to be a realistic assumption for characterizing traffic generated by IP-based applications. Traffic is balanced among ONUs and among the individual queues of each ONU. Simulations took place for average ONU offered load ranging up to the full interface capacity (i.e., up to a ratio of 1.3 of the total upstream capacity). The rationale behind the selection of the specific service parameters mentioned above is the following.
The guaranteed bandwidth components of the flexible types should in any case be available, at least within the capacity region of the considered PON. Indeed, in the simulated network it holds that 16 × (50 Mbps + 25 Mbps) = 1200 Mbps < 1244 Mbps, i.e., the total guaranteed bandwidth does not exceed the overall PON capacity. Apart from that, FT 1 queues should enjoy better QoS than FT 2, and this required differentiation is achieved in two ways: First, although both types can potentially receive the same total throughput (50 Mbps), in the case of FT 2 this is split into two equal parts, a guaranteed and a surplus one, so that in case of heavy congestion only FT 2 will be affected (the surplus part may not be available). Second, by assigning a lower offset value to FT 1, lower average delay is achieved and blocking from other queues in the scheduling log is avoided. Note that the selected offset and SI parameters for FT 2 and BE queues will cause their average packet delay to remain, under normal conditions, around the 1.5-ms goal set in [20] for one-way transmission in the access system. Differentiation between FT 2 and BE queues is achieved due to 1) the absence of a guaranteed bandwidth part for BE, 2) the prioritized examination of queues for the surplus bandwidth allocation, and 3) the lack of flexibility for BE Alloc-IDs regarding the choice of their SI.

Fig. 7. Average end-to-end packet delay comparison between G-OSFI (first simulation scenario) and GIANT (from [14]).

Fig. 8. Average G-OSFI throughput per T-CONT queue (first simulation scenario).

In Fig. 7, the average G-OSFI end-to-end packet delay (from the arrival of a packet at the ONU UNI port until its reception at the OLT NNI port) of each traffic type is compared with the results presented in [14] for the GIANT scheme (the improvements to the GIANT scheme proposed in [14] are not considered, since they could be applied to our mechanism as well and would offer similar performance benefits). It is evident from Fig. 7 that G-OSFI manages to offer a very significant delay reduction for BE traffic (compared to the GIANT T-CONT 4), which actually reflects the much improved degree of system utilization provided by OSFI. Regarding FT 1 and FT 2, at low loading conditions the delay reduction with G-OSFI is not as large as in the case of BE (although for FT 1/T-CONT 2 it amounts to a factor of more than 3); however, the main reason for that is that the


delay in this case is governed by the choice of offset parameters. Further reduction of the delay in those load regions can be achieved by adopting PCG-OSFI, as will be shown below. On the other hand, at higher loads G-OSFI manages to offer a delay reduction of almost one order of magnitude for FT 1 and by a factor of more than 4 for FT 2. However, the most significant feature of OSFI that can be observed in Fig. 7 is its ability to enforce clear QoS differentiation based on the service parameters of each type: In contrast to GIANT, whereby the delays of both T-CONT 2 and T-CONT 3 increase in an intertwined way, OSFI keeps the most demanding FT 1 Alloc-ID queues at low delay levels under any loading, degrading, if necessary, the performance of the lower-priority queues. Of course, the same happens for FT 2 and BE, respectively: As load approaches and exceeds the PON upstream capacity, FT 2 queues begin to consume part of their provisioned surplus rate, leaving even less bandwidth available for the BE type, and only begin to collapse when the interface capacity is reached. BE queues actually collapse earlier, at an offered load of 0.6, since from that point on their average produced load exceeds the best-effort 20 Mbps. In general, the system operates as expected, i.e., OSFI results in an overall increase in system utilization while respecting at the same time the differentiated QoS requirements of each traffic type.

In Fig. 8, the average throughput per individual T-CONT queue is shown. FT 1 and FT 2 queues display almost the same throughput, since the sum of their guaranteed and surplus rates is identical and there is always enough upstream capacity to accommodate both, at the expense of reduced throughput for BE. Moreover, as mentioned above, the throughput of BE queues can never exceed 20 Mbps, which is their chosen maximum rate. Note though that even at the highest possible loads BE queues still receive an average of more than 10 Mbps, which is deemed adequate for the kind of services (web browsing, file downloading, e-mail) supported by this type.

Fig. 9. Average G-OSFI end-to-end packet delay increase due to longer reach (first scenario versus second scenario).

B. Second Simulation Scenario

In the second scenario, a long-reach GPON network was modeled, whereby the maximum OLT-ONU distance was 100 km, while the number of ONUs was again set to 16. All the service and traffic parameters were the same as in scenario 1, with the exception of the fixed scheduling interval for the BE type, which was modified due to the increased reach. From the results shown in Fig. 9, it is evident that the longer network reach causes an excessive delay increase when G-OSFI is employed. What is even worse is that the relative increase appears to be more significant for the higher priority queues. Hence, in order to improve overall performance and fix the aforementioned imbalance, we have applied the PCG-OSFI scheme in the simulated network; the results are compared with G-OSFI in Figs. 10 and 12.

Fig. 10. Average end-to-end packet delay comparison between the G-OSFI and PCG-OSFI schemes (second simulation scenario).

The effectiveness of PCG-OSFI can easily be observed in Fig. 10: although the absolute average end-to-end delay values for the FT queues remain at low levels in both schemes under almost any ONU offered load, the PCG-OSFI scheme displays a significant decrease of delay compared to G-OSFI for both FT 1 and FT 2. In particular, FT 1 in the PCG-OSFI scheme manages to always remain below the 1.5-ms limit. Regarding the BE queues, although they collapse at similar load levels (around 0.6 with PCG-OSFI) compared to the scheme in [15], superior performance is achieved for the rest of the queues, an effect which again signifies both improved QoS differentiation and resource utilization. Note that the use of PCG-OSFI instead of G-OSFI results in slightly increased delay for FT 2 and BE at high loads. This happens because PCG-OSFI dynamically assigns most of the available bandwidth to guarantee the QoS of FT 1 traffic at extremely high load regions. The probability density function (pdf) of end-to-end delay under PCG-OSFI for the three Alloc-ID types at a load of 0.55 is depicted in Fig. 11.
It is evident that, apart from offering performance differentiation in terms of average delay, the proposed scheme also manages to differentiate the experienced delay variation depending on the priority of traffic: FT 1 exhibits the lowest average delay, with individual values distributed only within a very limited range, while at the opposite end the delay values for BE queues are not only located around a much higher average but are also significantly more diffused. Finally, as depicted in Fig. 12, the flexible-type queues display a zero packet loss ratio, while for the BE type the use of PCG-OSFI can provide loss ratio improvements of up to two orders of magnitude.
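As a compact recap of the mechanism whose behavior was evaluated above, the second (surplus) phase of the allocation algorithm of Section IV can be sketched in C as follows. The sketch is illustrative only: the structure and function names are hypothetical, and the first (guaranteed) phase, together with the per-SI byte caps implied by (1), is assumed to have run already.

```c
#include <stddef.h>

typedef struct {
    double weight;         /* surplus-sharing weight (all equal in the example of Fig. 5) */
    double max_bytes_si;   /* byte cap per SI implied by the maximum rate */
    double alloc;          /* bytes already allocated in this column by phase one */
    int    has_surplus;    /* nonzero if a surplus component is provisioned */
} AllocID;

/* Surplus phase for one scheduling-log column: distribute the unallocated
 * bytes proportionally to the weights, never pushing any Alloc-ID past its
 * cap. Returns the bytes still unallocated after the pass. */
double share_surplus(AllocID q[], size_t n, double unallocated)
{
    double wsum = 0.0;
    for (size_t i = 0; i < n; i++)
        if (q[i].has_surplus && q[i].alloc < q[i].max_bytes_si)
            wsum += q[i].weight;
    if (wsum == 0.0)
        return unallocated;          /* nobody can absorb surplus bytes */

    double left = unallocated;
    for (size_t i = 0; i < n; i++) {
        if (!q[i].has_surplus)
            continue;
        double share = unallocated * q[i].weight / wsum;  /* weighted share */
        double room  = q[i].max_bytes_si - q[i].alloc;    /* distance to the cap */
        double give  = share < room ? share : room;
        if (give > 0.0) {
            q[i].alloc += give;
            left -= give;
        }
    }
    return left;
}
```

With five equally weighted, uncapped queues sharing 7664 unallocated bytes, each receives roughly 1532 B, matching the equal split of the Fig. 5 example.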


Fig. 11. Probability density function (pdf) of end-to-end packet delay for G-OSFI at 0.55 offered load (second simulation scenario).

Fig. 12. Average packet loss ratio comparison between the G-OSFI and PCGOSFI schemes (second simulation scenario).

VI. CONCLUSION

We have proposed a novel bandwidth allocation framework built on the concept of Offset-based Scheduling with Flexible Intervals (OSFI). OSFI can be applied to both current and next generation GPON networks, while issues related to long-reach operation are addressed by the enhanced PCG-OSFI scheme also proposed in this paper. The main focus of the work has been the improvement of performance in terms of system utilization and QoS differentiation, the latter including the most important QoS performance measures, i.e., delay, delay variation, and packet loss. The framework was extensively tested using computer simulations, which verified its ability to fulfil the aforementioned goals.

REFERENCES

[1] "Gigabit-capable passive optical networks (G-PON): Transmission convergence layer specification," ITU-T Rec. G.984.3, 2004.
[2] IEEE Draft P802.3ah, 2004.
[3] R. Davey, J. Kani, F. Bourgart, and K. McCammon, "Options for future optical access networks," IEEE Commun. Mag., vol. 44, no. 10, pp. 50–56, Oct. 2006.
[4] M. Hajduczenia, P. R. M. Inacio, H. J. A. D. Silva, M. M. Freire, and P. P. Monteiro, "10G EPON standardization in IEEE 802.3av project," in Proc. Opt. Fiber Commun./Nat. Fiber Opt. Eng. Conf. (OFC/NFOEC'08), Feb. 24–28, 2008, pp. 1–9.


[5] "A broadband optical access system with increased service capability using dynamic bandwidth assignment," ITU-T Rec. G.983.4.
[6] G. Kramer, B. Mukherjee, and G. Pesavento, "IPACT: A dynamic protocol for an Ethernet PON (EPON)," IEEE Commun. Mag., vol. 40, no. 2, pp. 74–80, Feb. 2002.
[7] J. Zheng and H. T. Mouftah, "An adaptive MAC polling protocol for Ethernet passive optical networks," in Proc. IEEE Int. Conf. Commun. (ICC 2005), May 16–20, 2005, vol. 3, pp. 1874–1878.
[8] M. Ma, L. Liu, and T. H. Cheng, "Adaptive scheduling for differentiated services in an Ethernet passive optical network," OSA J. Opt. Netw., vol. 4, no. 10, pp. 661–670, 2005.
[9] T. Berisa, A. Bazant, and V. Mikac, "Bandwidth and delay guaranteed polling with adaptive cycle time (BDGPACT): A scheme for providing bandwidth and delay guarantees in passive optical networks," OSA J. Opt. Netw., vol. 8, no. 4, pp. 337–345, Apr. 2009.
[10] H. C. Leligou, C. Linardakis, K. Kanonakis, J. D. Angelopoulos, and T. Orphanoudakis, "Efficient medium arbitration of FSAN compliant GPONs," Int. J. Commun. Syst., vol. 19, pp. 603–617, 2006.
[11] J. D. Angelopoulos, T. Argyriou, S. Zontos, and T. V. Caenegem, "Efficient transport of packets with QoS in an FSAN-aligned GPON," IEEE Commun. Mag., vol. 42, no. 2, pp. 92–98, Feb. 2004.
[12] C. H. Chang, P. Kourtessis, and J. M. Senior, "GPON service level agreement based dynamic bandwidth assignment protocol," IET Electron. Lett., vol. 42, no. 20, pp. 1173–1174, Sep. 2006.
[13] J. Jiang, M. R. Handley, and J. M. Senior, "Dynamic bandwidth assignment MAC protocol for differentiated services over GPON," IET Electron. Lett., vol. 42, pp. 653–655, 2006.
[14] M.-S. Han, H. Yoo, B.-Y. Yoon, B. Kim, and J.-S. Koh, "Efficient dynamic bandwidth allocation for FSAN-compliant GPON," OSA J. Opt. Netw., vol. 7, no. 8, pp. 783–795, Jul. 2008.
[15] C.-H. Chang, P. Kourtessis, J. M. Senior, and N. M. Alvarez, "Dynamic bandwidth assignment for multi-service access in long-reach GPON," in Proc. ECOC'07, VDE Verlag GmbH, 2008, pp. 277–278.


[16] K. Kanonakis and I. Tomkos, "Efficient scheduling disciplines for next generation QoS-aware GPON networks," in Proc. 10th Anniv. Int. Conf. Transparent Opt. Netw. (ICTON 2008), Jun. 22–26, 2008, vol. 4, pp. 135–138.
[17] "Gigabit-capable passive optical networks (GPON): Physical media dependent (PMD) layer specification," ITU-T Rec. G.984.2, 2003.
[18] B. Belfqih et al., "10 Gbit/s TDM passive optical network in burst mode configuration using a continuous block receiver," in Proc. Nat. Fiber Opt. Eng. Conf., OSA Tech. Dig. (CD), 2008, paper JWA112.
[19] K. Claffy, G. Miller, and K. Thompson, "The nature of the beast: Recent traffic measurements from an internet backbone," presented at the Internet Soc. (ISOC) INET'98, Washington, DC, Jul. 21–24, 1998. [Online]. Available: http://www.caida.org/publications/papers/1998/Inet98/Inet98.ps.gz
[20] "One-way transmission time," ITU-T Rec. G.114, 2003.

Konstantinos Kanonakis (M'09) received the Dipl.-Ing. and Ph.D. degrees from the School of Electrical and Computer Engineering, National Technical University of Athens (NTUA), Athens, Greece, in 2004 and 2007, respectively. His main research interests are in the area of traffic engineering and of architectures and control protocols for optical core and broadband access networks. He has coauthored more than 20 papers that have appeared in international journals and conferences and has participated in various EU-funded projects.

Ioannis Tomkos (M’02) has coauthored about 65 peer-reviewed articles published in international scientific journals, magazines, and books and over 175 presentations at conferences, workshops, and other events. His work focuses on optical networking and techno-economic studies of broadband networks.
