Computer Communications 33 (2010) 965–975


User-oriented hierarchical bandwidth scheduling for Ethernet passive optical networks
Yongning Yin, Gee-Swee Poo
NTRC, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore

Article history: Received 7 January 2008; Received in revised form 3 November 2009; Accepted 23 January 2010; Available online 1 February 2010.
Keywords: EPON; OLT; ONU; User-oriented hierarchical bandwidth scheduling algorithm; QoS

Abstract

Ethernet passive optical networks (EPONs) are being designed to deliver different quality of service (QoS) to carry the heterogeneous traffic of end users. For this purpose, hierarchical scheduling is needed for upstream bandwidth allocation, with high-level scheduling for inter-optical network unit (ONU) allocation and low-level scheduling for intra-ONU distribution. In this paper, we propose new User-oriented Hierarchical bandwidth Scheduling Algorithms (UHSAs) that support differentiated services and guarantee fairness among end users. For inter-ONU scheduling, we adopt an improved hybrid cycle approach that separates a frame into a static part for high priority traffic and an adaptive dynamic part for low priority traffic. For intra-ONU scheduling, we propose a credit-based scheduling approach to guarantee fairness among end users. To improve scheduling efficiency and lower queue management complexity, we design a novel credit-based common queue (CCQ) for each traffic class to enhance the scheduling architecture and minimize the average number of queues in the ONU. We also propose a transmission priority scheme for different queue groups, which, together with the CCQ mechanism, serves the objectives of improving delay and delay variation performance of high priority traffic, guaranteeing throughput for bandwidth-sensitive medium priority traffic, and providing fairness and throughput protection among different users. The UHSAs exhibit a feature of multiple transmission opportunities (M-opportunities) per-cycle for high priority traffic that significantly improves delay and delay variation performance for high priority traffic as compared with previous solutions offering a single transmission opportunity (S-opportunity) per-cycle. Detailed simulation experiments are conducted to study the performance and validate the effectiveness of the proposed protocols.

© 2010 Elsevier B.V. All rights reserved.

1. Introduction

Ethernet passive optical network (EPON) has been developed as one of the most promising solutions for next generation broadband access networks. This is attributed to the convergence of low-cost Ethernet equipment and low-cost fiber infrastructure. Typically, an EPON consists of a centralized optical line terminal (OLT) that connects to a number of optical network units (ONUs) through a point-to-multipoint architecture. The OLT resides in a central office (CO) that links the optical access network to the metropolitan area network (MAN) and wide area network (WAN), while the ONUs are placed either at end users' premises or at the curb. In the downstream direction, the OLT broadcasts Ethernet frames through a 1:N passive splitter to all ONUs on one wavelength. Each ONU selectively receives frames destined to it based on its medium access control (MAC) address. In the upstream direction, all ONUs transmit data to the OLT through an N:1 passive

combiner using another wavelength. Since all ONUs share the common uplink channel, a MAC arbitration mechanism is needed at the OLT to avoid data collisions and, at the same time, to distribute the upstream bandwidth efficiently and fairly among all ONUs. Moreover, an ONU may host one or more network subscribers (end users) and carry heterogeneous traffic with diverse quality of service (QoS) requirements from the subscribers. Each user has a service level agreement (SLA) with the service provider defining the desired services for its traffic. To further distribute bandwidth fairly among end users under an ONU and offer differentiated QoS, a decentralized scheduling and queuing algorithm is required at the ONU to implement local bandwidth distribution and buffer management. Therefore, a hierarchical scheduling system is needed to handle the EPON upstream bandwidth allocation, with high-level scheduling for inter-ONU allocation and low-level scheduling for intra-ONU distribution. An illustration of the scheme is given in Fig. 1. Supporting differentiated services is a crucial issue for the successful deployment of an EPON system carrying heterogeneous traffic with diverse QoS requirements. For example, voice communications have stringent requirements on delay and jitter but tend to be of narrow-band nature. Conversely,


Fig. 1. Architecture of the hierarchical scheduling.

standard and high-definition video (STV and HDTV) are more delay tolerant but require wide-band support with a throughput guarantee on a relatively large time scale. The principal mechanism for providing enhanced QoS is to classify the packets of application flows into a limited number of service classes according to their QoS demands and offer differentiated services to each class. Service classes are developed by enumerating the types of applications supported in the network, and each service class is characterized by a set of QoS parameters such as latency, jitter, loss, and throughput assurances. In this paper, we adopt the service class recommendations in [1] to categorize services into three priorities (classes), namely expedited forwarding (EF), assured forwarding (AF), and best effort (BE). While EF services require stringent delay and delay variation specifications, AF is designed for services that are not delay sensitive but require statistical bandwidth guarantees. BE is proposed for traffic with no strict delay and throughput requirements. The reporting and granting mechanism of EPON allows the OLT to explicitly allocate transmission opportunities (timeslots) to specific traffic classes of an ONU. Furthermore, a local scheduling algorithm with an appropriate queuing mechanism is also essential to deliver effective QoS, especially for an ONU hosting multiple end users. In this paper, we propose a collection of innovative user-oriented hierarchical bandwidth scheduling algorithms (UHSAs) to support differentiated services and guarantee fairness among end users. For inter-ONU scheduling, we adopt an improved hybrid cycle approach [19] that separates a frame (cycle) into a static part for high priority (EF) traffic, to ensure guaranteed service and better performance, and an adaptive dynamic part for low priority (AF and BE) traffic, to cater to the bursty nature of AF/BE traffic. The EF static part comprises a whole set of fixed EF timeslots, one for each ONU, in each cycle. For the AF/BE part, however, we adopt two different strategies. In the first strategy, the AF/BE part comprises a whole set of dynamic AF/BE timeslots, one for each ONU, like the EF part. We call it UHSA with single cycle (UHSA-S for short), in which a whole set of AF/BE timeslots spans a single cycle. In the second strategy, the AF/BE part comprises a variable number of AF/BE timeslots in each cycle, determined dynamically depending on the network condition. This implies that a whole set of AF/BE timeslots may span multiple cycles, i.e., an ONU gets an AF/BE timeslot over multiple cycles. We name this approach UHSA-M.

For intra-ONU scheduling, we propose a credit-based scheduling algorithm to guarantee fairness among end users. To improve scheduling efficiency and lower queue management complexity, we design a novel credit-based common queue (CCQ) for each traffic class to enhance the scheduling architecture and minimize the average number of queues in the ONU. In addition, we propose a transmission priority scheme for different queue groups in both the EF and AF/BE timeslots. Together with the CCQ mechanism, it serves the objectives of improving delay and delay variation performance of EF traffic, guaranteeing throughput for AF traffic, and providing fairness and throughput protection among different users. Both UHSA-S and UHSA-M possess a feature of multiple transmission opportunities (M-opportunities) for EF traffic. This is different from most previous approaches that make use of only a single transmission opportunity (S-opportunity) per-cycle. We investigate how the UHSA algorithms work with the gated transmission mechanism, particularly the multipoint control protocol (MPCP), to implement an effective and efficient EPON system. Extensive simulation experiments are conducted to study the performance characteristics and validate the effectiveness of the proposed protocols. The rest of the paper is organized as follows. Section 2 reviews the MPCP protocol and existing bandwidth allocation algorithms. Section 3 presents the proposed UHSA-S and UHSA-M algorithms. Section 4 presents the simulation results. Section 5 concludes the paper.

2. Review of MPCP and bandwidth allocation algorithms

The upstream transmission timeline is divided into a stream of timeslots by the OLT MAC sublayer using the upstream bandwidth allocation algorithm. Each timeslot represents an upstream transmission opportunity that can deliver a number of packets of the ONU to which the timeslot is assigned. The OLT determines the size of each timeslot and generates the time reference for identifying the timeslot. Finally, the OLT controls the access of the ONUs to these timeslots using a specific scheduling algorithm and the corresponding MAC control messages. In this section, we briefly review the bandwidth allocation mechanism, i.e., the MPCP protocol, and some notable bandwidth allocation algorithms proposed for EPONs.


2.1. Multipoint control protocol (MPCP)

MPCP has been developed by the IEEE 802.3ah Ethernet in the First Mile Task Force [2] to facilitate dynamic bandwidth allocation and arbitrate the transmission of multiple ONUs in EPON architectures. MPCP is a frame-based signaling protocol and defines a set of 64-byte MAC control messages for real-time information exchange between the OLT and ONUs for optimal transmission of 802.3 Ethernet packets. MPCP is not concerned with any particular bandwidth allocation algorithm; rather, it is a supporting mechanism that facilitates the implementation of various allocation algorithms. MPCP relies on two control messages, namely GATE and REPORT, to manage upstream bandwidth request and assignment. An ONU sends a REPORT message to the OLT containing a timestamp and the desired size of the next timeslot based on the ONU's instantaneous queue size. The message can be sent either at the beginning or at the end of the timeslot. The OLT calculates the round-trip time (RTT) using the reported timestamp and determines the bandwidth allocation using the reported queue information. The OLT sends a GATE message to an ONU with the beginning and length of the transmission timeslot for that ONU. A REPORT message can carry up to eight queue reports, and a GATE message can grant at most four timeslots. Therefore, this mechanism is efficient for implementing differentiated services for multiple class queues in an ONU.
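To make the REPORT/GATE exchange concrete, the sketch below models the two message types and the OLT-side RTT computation in Python. The simplified field set, the dataclass layout, and the 16 ns time-quantum constant are assumptions of this illustration, not the exact 802.3ah MPCPDU format.

```python
from dataclasses import dataclass, field
from typing import List

TQ_NS = 16  # MPCP expresses time in 16 ns time quanta (assumed here for the sketch)

@dataclass
class QueueReport:
    queue_id: int          # one of up to eight priority queues in a REPORT
    length_tq: int         # requested timeslot size, in time quanta

@dataclass
class Report:
    onu_id: int
    timestamp_tq: int                                   # ONU local clock when the REPORT was sent
    queue_reports: List[QueueReport] = field(default_factory=list)

@dataclass
class Grant:
    start_tq: int          # absolute start time of the granted timeslot
    length_tq: int

@dataclass
class Gate:
    onu_id: int
    timestamp_tq: int
    grants: List[Grant] = field(default_factory=list)   # a GATE carries at most four grants

def handle_report(report: Report, olt_clock_tq: int) -> int:
    """Return the ONU round-trip time in time quanta.

    Because the ONU slaves its counter to the OLT timestamps carried in
    downstream MPCPDUs, the difference between the OLT arrival time and the
    REPORT timestamp equals the round-trip propagation (plus processing) delay.
    """
    return olt_clock_tq - report.timestamp_tq
```

For instance, a REPORT arriving when the OLT counter reads 10,500 TQ with a timestamp field of 10,100 TQ would yield an RTT estimate of 400 TQ, i.e., 6.4 microseconds under the assumed 16 ns quantum.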

2.2. Review of existing bandwidth allocation algorithms

To date, many bandwidth scheduling algorithms have been proposed for the distribution of OLT capacity amongst ONUs, i.e., inter-ONU scheduling. These algorithms can be grouped into two categories. The first is the single transmission opportunity (S-opportunity) per-ONU-per-cycle solution. It covers algorithms that distribute a single timeslot to each ONU in a frame cycle, including the algorithms in [3–6]. In [3], the authors proposed a polling scheme called interleaved polling with adaptive cycle time (IPACT). IPACT uses an interleaved polling approach, where the next ONU is polled before the transmission from the previous one has arrived, thus yielding efficient bandwidth utilization. However, IPACT consumes a lot of downstream bandwidth by transmitting GATE messages in each polling cycle. This is especially severe at light load because the minimum cycle time is not bounded. For example, 43% of the total downstream link capacity is consumed by GATE messages for a network of 32 ONUs, each 5 km from the OLT. This phenomenon is known as light load bandwidth consumption [14]. In [4], the authors introduced a bandwidth guaranteed polling (BGP) scheme that provides guaranteed bandwidth for premium subscribers. It is not compatible with MPCP and has poor network utilization. In [5], the authors presented a dynamic bandwidth allocation (DBA) protocol where ONUs are partitioned into under-loaded and over-loaded groups. The total excess bandwidth from under-loaded ONUs is fairly distributed among the over-loaded ONUs. A local priority scheduling algorithm was proposed to alleviate the light load penalty. This approach assumes that the total excess bandwidth saved from under-loaded ONUs is always fully occupied by over-loaded ONUs, which is not necessarily true. In [6], an advanced DBA algorithm is proposed with a measure to address the above deficiency. Furthermore, the algorithms in [5,6] invoke the grant program only after receiving REPORT messages from all ONUs. This results in a large idle time (round-trip time (RTT) plus processing time) in each cycle during which the EPON channel is not utilized. We call this phenomenon the RTT idling problem. Since an ONU has only a single transmission opportunity in a cycle and the cycle time fluctuates greatly due to bursty AF and BE traffic, this solution yields poor queuing delay and delay variation performance for EF traffic, owing to the large push-pull of the timeslot starting point from cycle to cycle.
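As a rough illustration of the interleaved polling idea in IPACT mentioned above, the sketch below computes GATE send times so that the grant to the next ONU is issued before the previous burst has finished arriving. The gated service discipline (grant exactly the reported backlog), the guard-time value, and the line rate are assumptions of this sketch, not parameters taken from [3].

```python
LINE_RATE_BPS = 1_000_000_000   # 1 Gbps EPON upstream (assumed)
GUARD_S = 1e-6                  # guard time between ONU bursts (assumed)

def ipact_schedule(requests, rtts, now=0.0):
    """Very simplified gated-service IPACT round.

    requests[i]: queue occupancy reported by ONU i (bytes)
    rtts[i]:     measured round-trip time to ONU i (seconds)
    Returns a list of (gate_send_time, burst_arrival_time, burst_duration_s).
    """
    schedule = []
    channel_free_at = now
    for req_bytes, rtt in zip(requests, rtts):
        burst_len = req_bytes * 8 / LINE_RATE_BPS
        # Send the GATE early enough that the burst arrives just after the
        # previous ONU's burst plus a guard interval -- this is the
        # "interleaving": the GATE goes out before the previous burst ends.
        gate_send = max(now, channel_free_at + GUARD_S - rtt)
        arrival = gate_send + rtt
        schedule.append((gate_send, arrival, burst_len))
        channel_free_at = arrival + burst_len
    return schedule
```

With such a schedule the upstream channel carries back-to-back bursts separated only by guard intervals, which is what gives interleaved polling its high utilization.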


The other category is the S-opportunity per-class-per-cycle solution. This includes algorithms that divide a frame into two or more parts, each carrying one or more traffic classes. This solution aims to alleviate the impact of bursty low priority traffic on high priority traffic by separating the transmission of high and low priority traffic into different intervals of a frame. In [7], a hybrid slot size/rate (HSSR) protocol was introduced which separates a frame into two parts, the initial one including N fixed high priority timeslots, one for each ONU (where N is the number of ONUs), and the latter one consisting of a variable number of low priority timeslots. This approach improves delay and jitter for the high priority class. However, it offers only best effort service for low priority traffic, ignoring applications with statistical throughput demands, and it uses a fixed cycle, yielding inefficient bandwidth utilization, especially at light load. A two-layer bandwidth allocation (TLBA) scheme was designed in [8], which first proportionally allocates bandwidth to the different classes and then distributes the bandwidth allocated to one class among all requesting ONUs. This scheme provides fairness among different classes and adopts an adaptive cycle time. However, it suffers from the RTT idling problem and requires 3N guard time intervals (note that timeslots of ONUs are separated by guard time Tg) for upstream transmission in each cycle. This results in large bandwidth wastage. A hybrid granting protocol (HGP), which divides a frame into two parts, namely an EF sub-cycle and an AF sub-cycle, was proposed in [9]. HGP shortens EF traffic delay by accurately predicting EF traffic up to the next EF timeslot. However, this is only true for constant rate EF traffic, not for dynamic rates. Besides, it has the RTT idling problem. By separating the transmission of high priority traffic from low priority traffic in different parts of a cycle, the above algorithms improve the delay and jitter performance of high priority traffic. Nonetheless, they still offer just one transmission opportunity per cycle for high priority traffic, just like for low priority traffic. The above proposals have focused on inter-ONU bandwidth scheduling, while intra-ONU scheduling is mainly implemented by simple priority scheduling or its varieties to arbitrate the transmission of different classes. These protocols are applicable to an EPON whose ONUs each connect to a single user. Since each ONU device may host many end users and carry heterogeneous traffic with diverse QoS requirements, a simple priority-based scheduling certainly cannot provide fairness and protection among end users with different traffic classes. A better scheduling algorithm is required. For this purpose, the idea of hierarchical fair scheduling is attractive. In [10], a modified start-time fair queuing (M-SFQ) approach was proposed for intra-ONU scheduling, which transmits packets in the order of the virtual time of the head-of-line (HOL) packet of each queue. M-SFQ aims to overcome the drawback of strict priority scheduling and to provide protection among different service types. A hierarchical intra-ONU scheduling scheme combining a modified token bucket (M-TB) algorithm [11] for inter-class scheduling and M-SFQ for intra-class scheduling was designed in [12] to provide fine-granularity scheduling for supporting QoS to individual users.
In [13], a hierarchical scheduler of fair queuing with service envelopes (FQSE) was developed, which provides a degree of fairness among end users. In [18], a modified deficit weighted round robin (M-DWRR) algorithm is proposed to enforce fairness among various traffic classes with low complexity. These algorithms can provide fairness and protection among end users and different traffic classes. Nonetheless, they buffer packets in a per-class-per-user manner, i.e., creating a separate queue for each class of each user. This requires a large number of queues in the system, yielding potentially high scheduling overhead and high queue management complexity. Moreover, the inter-ONU
scheduling of these schemes falls into the S-opportunity per-ONU-per-cycle solution, which limits the performance of high priority traffic. In summary, most of the above algorithms have attempted to provide prioritized service to high priority traffic. Nonetheless, all of these scheduling algorithms distribute timeslots to ONUs in a manner of either S-opportunity per-ONU-per-cycle or S-opportunity per-class-per-cycle. High priority traffic has only one transmission opportunity in a cycle, just like low priority traffic. This does not match well the demands of the traffic classes. High priority traffic is generally smooth and demands short delay and delay variation, so it should be scheduled more frequently (i.e., on a short-cycle scale). On the other hand, medium and low priority traffic is generally bursty in nature and requires statistical throughput guarantees on a large time scale, so relatively infrequent scheduling is adequate. In [19], we proposed a hybrid cycle scheduling (HCS) algorithm which matches this requirement through the design of a small EF cycle and an adaptive dynamic AF cycle. However, HCS only addresses inter-ONU scheduling and is not concerned with QoS delivery to end users. In particular, HCS uses only three class-based queues in each ONU, while in this paper an advanced intra-ONU scheduling is proposed which addresses the issues of fairness to end users, scheduling efficiency, and queue number. Moreover, to provide differentiated services and fairness to end users, a per-class-per-user queuing mechanism was adopted for ONU local queuing in previous approaches. This poses high queuing and scheduling complexity on the ONU. These issues have motivated our study. In this paper, we propose two new user-oriented hierarchical bandwidth scheduling algorithms (UHSAs). We adopt the strategy in [7–9,19] to partition a scheduling frame into two parts, i.e., an EF static part and an AF dynamic part. To improve the performance of high priority traffic and lower queuing and scheduling complexity, we design a credit-based common queue (CCQ) mechanism and propose a transmission priority scheme for different queue groups. The UHSAs exhibit a feature of multiple transmission opportunities (M-opportunities) per-cycle for EF traffic [19]. This significantly improves delay and jitter performance for EF traffic. In addition, the CCQ design greatly improves scheduling efficiency, reduces the average number of queues in the ONU system, and lowers queue management complexity.

3. User-oriented hierarchical bandwidth scheduling algorithms (UHSAs)

In this section, we present the user-oriented hierarchical bandwidth scheduling algorithms (UHSAs) that support differentiated services and provide fairness among end users. We adopt the traffic classification recommendations in [1] to classify network traffic into three classes: EF, AF, and BE. We use a two-part frame partition strategy to enhance EF performance. Moreover, we design a new two-level hierarchical ONU queuing architecture and define different queue groups, each with a different transmission priority. In particular, we introduce a novel credit-based common queue (CCQ) mechanism for the high-level class-based queuing.

3.1. EF cycle and AF cycle

In our design, we adopt a strategy similar to that in [19] to partition a frame (cycle) into two parts. The initial part has N fixed and orderly distributed timeslots, termed EF timeslots, corresponding to the N ONUs (ONU1–ONUN). We call this part the EF part or EF interval.
This part is intended for steady EF traffic transmission. The size of each timeslot is determined according to the maximum frame size and the Service Level Agreement (SLA) between the subscriber and the service provider. Normally, EF traffic requires a small percentage of the total bandwidth, for example 20%, and is not bursty in nature (e.g., Poisson distributed). Thus, the cyclic fixed timeslot mechanism ensures that EF frames are transmitted with short delay and little variation. The second part is designed for AF and BE traffic, termed the AF part or AF interval. We adopt two different distribution approaches in this part. The first approach divides this part into N orderly distributed timeslots, termed AF timeslots, one for each ONU. A whole set of AF timeslots is thus covered in a single frame, like the EF timeslots. We call this approach UHSA with single cycle (UHSA-S for short). In the other approach, the AF part is divided into a variable number of AF timeslots dependent on the instantaneous queuing status of all ONUs. In this case, a whole set of AF timeslots may span multiple frames, i.e., an ONU gets an AF timeslot over multiple cycles. We name this approach UHSA with multiple cycles (UHSA-M). UHSA-S and UHSA-M have different advantages in providing QoS, which will be discussed subsequently. Now, we introduce the concept of multiple cycles. The EF cycle, termed $T_{EF}$, is defined as the time period between two consecutive beginnings of ONU1's EF timeslot. An EF cycle is equal to one frame and its size is limited to the range $\left[T_{EF}^{Min}, T_{EF}^{Max}\right]$. $T_{EF}^{Min}$ is introduced to prevent the light load bandwidth consumption problem when the steady EF part is small. $T_{EF}^{Max}$ is the upper limit of the EF cycle, which is critical to the delay performance of EF traffic. A large value of $T_{EF}^{Max}$ yields large EF delay at heavy load, while a very small value leads to a large percentage of the bandwidth being spent on guard time. Thus, reasonable values of $T_{EF}^{Min}$ and $T_{EF}^{Max}$ are needed in the EPON configuration. Fig. 2 illustrates the architecture of the EF cycle for both UHSA-S and UHSA-M. An EF cycle comprises one EF timeslot for each ONU; it includes one AF timeslot for each ONU in UHSA-S, while in UHSA-M it includes one or more AF timeslots for some of the ONUs. The other cycle is the AF cycle, termed $T_{AF}$, which is defined as the time period covering a complete set of AF timeslots. As shown in Fig. 2, a $T_{AF}$ spans the interval between two consecutive beginnings of ONU1's AF timeslot. For UHSA-S, the nth AF cycle has the same length as the nth EF cycle, although the time coverage differs in the EF part. For UHSA-M, an AF cycle may span several consecutive EF cycles. Fig. 2(b) shows an example in which an AF cycle spans three EF cycles. Therefore, for UHSA-M it is possible that not all ONUs are allocated an AF timeslot in a given EF cycle. This design of the AF cycle aims to improve network utilization at heavy load for a large network of ONUs by reducing guard time wastage. It improves bandwidth utilization but trades off the delay performance of EF traffic as compared with the UHSA-S approach. Guaranteed bandwidth is provided to the EF class as a fixed EF timeslot reserved for each ONU in each EF cycle ($\le T_{EF}^{Max}$). The EF traffic of ONUi in each EF cycle receives a fixed service in the timeslot size of:

$$W_i^{EF} = (T_{fixed} - N T_g)\cdot \frac{\phi_i^H}{\sum_k \phi_k^H}\cdot R_E = (T_{fixed} - N T_g)\cdot \phi_i^H \cdot R_E \qquad (1)$$

Accordingly, each user j in ONUi also receives a guaranteed EF service in each EF cycle in the timeslot size of:

$$W_{i,j}^{EF} = W_i^{EF}\cdot \frac{\phi_{i,j}^H}{\sum_k \phi_{i,k}^H} = W_i^{EF}\cdot \phi_{i,j}^H \qquad (2)$$

The parameters used are listed in Table 1. Since AF traffic requires a statistical throughput guarantee and subscribers generally sign up for network service with a bandwidth requirement, fairness is also an important issue in AF timeslot distribution. The minimum guaranteed AF timeslot size of ONUi in each AF cycle is defined as:

$$W_i^{AF} = \begin{cases} \left(T_{dynamic}^{Max} - N T_g\right)\cdot \dfrac{\phi_i^{AF}+\phi_i^{BE}}{\sum_k\left(\phi_k^{AF}+\phi_k^{BE}\right)}\cdot R_E = \left(T_{dynamic}^{Max} - N T_g\right)\cdot\left(\phi_i^{AF}+\phi_i^{BE}\right)\cdot R_E & \text{for UHSA-S} \\[2mm] M\cdot\left(T_{dynamic}^{Max} - N T_g\right)\cdot \dfrac{\phi_i^{AF}+\phi_i^{BE}}{\sum_k\left(\phi_k^{AF}+\phi_k^{BE}\right)}\cdot R_E = M\cdot\left(T_{dynamic}^{Max} - N T_g\right)\cdot\left(\phi_i^{AF}+\phi_i^{BE}\right)\cdot R_E & \text{for UHSA-M} \end{cases} \qquad (3)$$

where M is an integer. UHSA-M augments the minimum guaranteed AF timeslot of an ONU to M times that of UHSA-S. In other words, the total size of the AF timeslots in an AF cycle is enlarged by a factor of M. Accordingly, the AF cycle fluctuates over a larger range and may span several EF cycles at heavy network load. This feature makes UHSA-M more adaptive to bursty AF and BE traffic. It also improves bandwidth utilization since the guard time waste in an EF cycle is reduced. Accordingly, we define a minimum guaranteed AF timeslot size for each user j of ONUi as follows:

$$W_{i,j}^{AF} = W_i^{AF}\cdot \frac{\phi_{i,j}^{M}+\phi_{i,j}^{L}}{\sum_k\left(\phi_{i,k}^{M}+\phi_{i,k}^{L}\right)} = W_i^{AF}\cdot\left(\phi_{i,j}^{M}+\phi_{i,j}^{L}\right) \qquad (4)$$

Fig. 2. Illustration of EF cycle and AF cycle: (a) UHSA-S; (b) UHSA-M.

Eqs. (1)–(4) indicate that minimum guaranteed bandwidth is provided to an ONU and its users in terms of EF timeslots and AF timeslots. Nonetheless, to provide better QoS and fairness to ONUs and end users, it is essential to design effective inter-ONU bandwidth scheduling as well as intra-ONU queuing and scheduling to deal with the unpredictable network traffic.
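As a concrete reading of (1)–(4), the helper functions below compute the guaranteed EF and AF timeslot sizes (expressed in bits, i.e., a duration multiplied by the line rate) from the preassigned weights. The numeric values in the example are arbitrary placeholders, not the configuration used in the paper.

```python
def ef_slot_bits(i, phi_H, T_fixed, N, Tg, R_E):
    """W_i^EF per Eq. (1): ONU i's share of the steady EF part, weighted by phi_i^H."""
    return (T_fixed - N * Tg) * phi_H[i] / sum(phi_H) * R_E

def af_slot_bits(i, phi_AF, phi_BE, T_dyn_max, N, Tg, R_E, M=1):
    """W_i^AF per Eq. (3): M = 1 gives UHSA-S; UHSA-M scales the share by M."""
    share = (phi_AF[i] + phi_BE[i]) / sum(a + b for a, b in zip(phi_AF, phi_BE))
    return M * (T_dyn_max - N * Tg) * share * R_E

def user_share_bits(W_onu, weight_j, weights):
    """Eqs. (2) and (4): a user's slice of its ONU's EF or AF guarantee."""
    return W_onu * weight_j / sum(weights)

# Illustrative numbers only (not the paper's simulation setup):
R_E = 1e9                             # 1 Gbps EPON line rate
N, Tg = 16, 1e-6                      # 16 ONUs, 1 us guard time
T_fixed, T_dyn_max = 0.4e-3, 1.6e-3   # seconds
phi_H = [1.0 / N] * N                 # equal EF weights, summing to 1
phi_AF = [0.5 / N] * N                # AF and BE weights, jointly summing to 1
phi_BE = [0.5 / N] * N

W1_EF = ef_slot_bits(0, phi_H, T_fixed, N, Tg, R_E)            # ~24,000 bits per EF cycle
W1_AF = af_slot_bits(0, phi_AF, phi_BE, T_dyn_max, N, Tg, R_E)
W1_EF_user = user_share_bits(W1_EF, 0.25, [0.25, 0.25, 0.5])   # hypothetical user weights in ONU 1
```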

Table 1. Parameters used for the description of the UHSAs.

N: Total number of ONUs.
$R_U$, $R_E$: Line rate of the user-to-ONU link and the EPON link rate, respectively.
$T_{EF}^{Max}$, $T_{EF}^{Min}$: Maximum and minimum EF cycle, respectively.
$T_{fixed}$: Steady EF part in an EF cycle, i.e., the total EF timeslots plus $N T_g$.
$T_{dynamic}^{Max}$, $T_{dynamic}^{Min}$: Maximum and minimum AF dynamic part in an EF cycle ($T_{dynamic}^{Min} = T_{EF}^{Min} - T_{fixed}$, $T_{dynamic}^{Max} = T_{EF}^{Max} - T_{fixed}$).
$\phi_i^H$, $\phi_i^M$, $\phi_i^L$: Preassigned weights of EF, AF, and BE of ONUi among all ONUs ($\sum_k \phi_k^H = 1$ and $\sum_k (\phi_k^M + \phi_k^L) = 1$).
$\phi_{i,j}^H$, $\phi_{i,j}^M$, $\phi_{i,j}^L$: Preassigned weights of EF, AF, and BE of user j in ONUi ($\sum_k \phi_{i,k}^H = 1$ and $\sum_k (\phi_{i,k}^M + \phi_{i,k}^L) = 1$).
$C_{i,j}^H$, $C_{i,j}^M$, $C_{i,j}^L$: Credit values of the EF, AF, and BE class of user j in ONUi.
$Q_i^H$, $Q_i^M$, $Q_i^L$: CCQ of EF, AF, and BE of ONUi, respectively; also denotes the queue size.
$Q_{i,j}^H$, $Q_{i,j}^M$, $Q_{i,j}^L$: Private queue of EF, AF, and BE of user j in ONUi, respectively; also denotes the queue size.
$CoQ_{i,j}^H$, $CoQ_{i,j}^M$, $CoQ_{i,j}^L$: Number of bits in the CCQ of EF, AF, and BE belonging to user j in ONUi, respectively.
$W_i^{EF}$, $W_{i,j}^{EF}$: EF timeslot size of ONUi and minimum EF timeslot guarantee of user j in ONUi.
$W_i^{AF}$, $W_{i,j}^{AF}$: Minimum guaranteed AF timeslot size of ONUi and minimum AF timeslot of user j in ONUi.
$T_g$: Guard time between timeslots.

3.2. Credit-based common queue and transmission priority scheme

The UHSA architecture is illustrated in Fig. 3. The inter-ONU scheduling is executed in the OLT to manage bandwidth (timeslot) distribution amongst ONUs, and the intra-ONU scheduling is implemented in each ONU to further distribute the bandwidth allocated to an ONU amongst its end users. Queuing and scheduling at the low level play a crucial role in the ultimate QoS fulfillment and fairness guarantees to end users. For ONU local queuing, a new two-level hierarchical queuing architecture is designed. The high-level queuing consists of three credit-based common queues (CCQs), one for each traffic class, while the low-level queuing comprises a set of private queues (PQs) created in a per-class-per-user manner. These queues are put into six groups: EF CCQ group, AF CCQ group, BE CCQ group, EF PQ group, AF PQ group, and BE PQ group. Each CCQ group includes a single high-level CCQ, while each PQ group includes a set of user PQs. Principally, the CCQ is introduced to serve two objectives. First, it enhances the scheduling architecture and substantially improves scheduling efficiency. When the appropriate timeslot arrives at an ONU, packets are first served from the CCQ in FIFO manner before packets are picked from the corresponding PQs using a credit-based fair scheduling method. As FIFO scheduling is much simpler, the CCQ mechanism saves much time for the ONU. In particular, a large percentage of packets is served from the common queues, which makes the scheduling efficient. Packets from PQs are also scheduled, but the number of active PQs is small. Second, the queue management complexity is lowered by using a smaller number of queues. The CCQ uses a credit-based method to queue packets. The number of a user's packets queued in a common queue is determined by the user's credit value for the corresponding class, not by the traditional FIFO approach. This prevents the common queue from being monopolized by a single user and effectively guarantees fairness to end users. Besides, this approach of common queue sharing helps to minimize the number of PQs in the ONU and thus lowers the queue management complexity.
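A minimal sketch of the two-level queuing structure just described: one class-level CCQ per traffic class plus lazily created per-class-per-user private queues. The container choices and method names are illustrative assumptions, not the authors' implementation.

```python
from collections import deque, defaultdict

CLASSES = ("EF", "AF", "BE")

class OnuQueues:
    """Two-level ONU buffering: one shared CCQ per class, plus one private
    queue (PQ) per class per user, created lazily only when needed."""

    def __init__(self, ccq_limits):
        # ccq_limits: maximum CCQ occupancy in bits per class, e.g. W_i^EF for EF
        self.ccq = {c: deque() for c in CLASSES}             # FIFO common queues
        self.ccq_bits = {c: 0 for c in CLASSES}
        self.ccq_limit = dict(ccq_limits)
        self.pq = {c: defaultdict(deque) for c in CLASSES}   # pq[class][user]
        self.credit = {c: defaultdict(float) for c in CLASSES}
        self.coq_bits = {c: defaultdict(int) for c in CLASSES}  # per-user bits inside the CCQ

    def active_pq_count(self):
        # Only non-empty PQs contribute to scheduling overhead.
        return sum(1 for c in CLASSES for q in self.pq[c].values() if q)
```

Only the small set of non-empty PQs ever needs to be scanned by the credit-based scheduler, which is where the reduction in the average number of queues and in scheduling overhead comes from.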


Fig. 3. Architecture of the user-oriented hierarchical scheduling.

To maximize system efficiency, the maximum CCQ length needs to be carefully specified for each traffic class. For the EF CCQ $Q_i^H$, it is limited by:

$$Q_i^{H,Max} = W_i^{EF} = (T_{fixed} - N T_g)\cdot \phi_i^H \cdot R_E \qquad (5)$$

Accordingly, each user j in ONUi could theoretically queue as many as $W_{i,j}^{EF}$ ($= W_i^{EF}\cdot \phi_{i,j}^H$) bits in the EF CCQ. The maximum EF common queue size $Q_i^{H,Max}$ is equal to the fixed EF timeslot size of the ONU; thus, all packets in the EF CCQ are guaranteed to be served by the next EF cycle. Each user can theoretically queue up to $W_{i,j}^{EF}$ bits in the common queue to obtain its share of the EF bandwidth. As frame fragmentation is not allowed in EPON, a user may not always get its exact share of bandwidth in each cycle. To let end users share the common queues fairly, we develop a credit-based queuing mechanism for common queue sharing by all users, as shown in Fig. 4. Specifically, for the EF CCQ, the operations are summarized as follows:

- Initialization. Set the credit value $C_{i,j}^H = 0$ for each user j in every ONUi.
- EF packet arrival. When an EF packet P of user j of ONUi arrives, if j's EF private queue (PQ) $Q_{i,j}^H$ is empty, its EF bits in the common queue $CoQ_{i,j}^H$ are less than $W_{i,j}^{EF}$, and space is available in the EF CCQ, then P is queued in the CCQ; otherwise, it is queued in $Q_{i,j}^H$. If P is queued in the CCQ and causes $CoQ_{i,j}^H$ to exceed $W_{i,j}^{EF}$, then $C_{i,j}^H$ is reduced by the extra part $\left(CoQ_{i,j}^H - W_{i,j}^{EF}\right)$.
- EF packet departure. (1) First, packets in the EF CCQ are served. (2) When the EF CCQ is empty, an EF packet P is selected from the PQ with the largest $C_{i,j}^H$; $C_{i,j}^H$ is reduced by the packet length L(P) after the packet departure. This is called credit-based fair scheduling for PQ packet scheduling.
- Packet shifting (a). After each EF/AF timeslot transmission, packet shifting from PQ to CCQ is conducted if possible. A HOL packet P of $Q_{i,j}^H$ is shifted from the PQ to the EF CCQ if two conditions are satisfied: (1) the CCQ has available space; and (2) $CoQ_{i,j}^H + L(P)$ is not larger than $W_{i,j}^{EF}$, or $CoQ_{i,j}^H$ is less than $W_{i,j}^{EF}$ and $C_{i,j}^H = 0$. If the conditions are satisfied and $CoQ_{i,j}^H + L(P) > W_{i,j}^{EF}$, then $C_{i,j}^H$ is reduced by $\left(CoQ_{i,j}^H + L(P) - W_{i,j}^{EF}\right)$.
- Packet shifting (b). If the EF CCQ still has enough space, an EF packet P is selected from the PQ with the largest $C_{i,j}^H$; $C_{i,j}^H$ is reduced by L(P) after the packet shifting.
- Finally, $C_{i,j}^H$ is updated. For a user with $CoQ_{i,j}^H \le W_{i,j}^{EF}$ or an empty PQ, set $C_{i,j}^H = 0$. For all other users, the largest credit counter among all non-empty EF PQs is deducted from the credit value.
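The arrival and departure rules for the EF CCQ listed above can be written compactly as below. The dictionary-based state and the helper names are assumptions of this sketch; it only illustrates the admission test, the overshoot charge, and the largest-credit PQ selection, not the full packet-shifting logic.

```python
def ef_enqueue(pkt_bits, user, q, W_ef_user):
    """Arrival rule for an EF packet of `user` (length pkt_bits).

    q is a dict holding the ONU's EF state:
      q["ccq"]       list of (user, bits) in FIFO order
      q["ccq_bits"]  total bits in the EF CCQ
      q["ccq_limit"] Q_i^{H,Max} = W_i^{EF}
      q["pq"]        per-user private queues, dict user -> list of packet sizes
      q["coq"]       per-user bits currently inside the CCQ (CoQ_{i,j}^H)
      q["credit"]    per-user credit counters (C_{i,j}^H)
    W_ef_user is the user's guaranteed EF share W_{i,j}^{EF} in bits.
    """
    pq_empty = not q["pq"].setdefault(user, [])
    fits_ccq = q["ccq_bits"] + pkt_bits <= q["ccq_limit"]
    if pq_empty and q["coq"].get(user, 0) < W_ef_user and fits_ccq:
        q["ccq"].append((user, pkt_bits))
        q["ccq_bits"] += pkt_bits
        q["coq"][user] = q["coq"].get(user, 0) + pkt_bits
        overshoot = q["coq"][user] - W_ef_user
        if overshoot > 0:
            # Charge the credit counter for taking more than the fair share.
            q["credit"][user] = q["credit"].get(user, 0) - overshoot
    else:
        q["pq"][user].append(pkt_bits)

def ef_pick_from_pq(q):
    """Departure rule when the EF CCQ is empty: serve the private queue whose
    user has the largest credit, then charge that credit by the packet length."""
    candidates = [u for u, pkts in q["pq"].items() if pkts]
    if not candidates:
        return None
    user = max(candidates, key=lambda u: q["credit"].get(u, 0))
    pkt_bits = q["pq"][user].pop(0)
    q["credit"][user] = q["credit"].get(user, 0) - pkt_bits
    return user, pkt_bits
```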

For the AF and BE CCQs ($Q_i^M$ and $Q_i^L$), the maximum length is limited by:

$$Q_i^{ML,Max} = W_i^{AF} \qquad (6)$$

That is, $Q_i^{ML,Max}$ is as large as the minimum guaranteed AF timeslot size of the ONU. Similarly, each user j of ONUi can also queue as many as $W_{i,j}^{AF}$ ($= W_i^{AF}\cdot\left(\phi_{i,j}^M + \phi_{i,j}^L\right)$) bits in the AF and BE CCQs. To enable fair sharing of the CCQ, we also develop a credit-based queuing mechanism for AF and BE CCQ sharing, as shown in Fig. 4. AF and BE use the same CCQ sharing approach. The AF CCQ operations are described as follows.

- Initialization. Set $C_{i,j}^M = Q_i^{ML,Max}$.
- AF packet arrival. When an AF packet P of user j of ONUi arrives, it is queued in the AF CCQ if j's AF PQ is empty ($Q_{i,j}^M = 0$), $C_{i,j}^M > 0$, and the AF CCQ can accommodate the packet. Otherwise, it is queued in j's AF PQ $Q_{i,j}^M$. In the former case, $C_{i,j}^M$ is reduced by $L(P)/\left(\phi_{i,j}^M + \phi_{i,j}^L\right)$ to account for the credit expense of the packet, since $C_{i,j}^M$ is initialized to $Q_i^{ML,Max}$ at the beginning. This guarantees fairness among end users.
- AF packet departure. (1) The AF CCQ is served first. (2) When the AF CCQ is empty, an AF packet P is selected for service from the PQ with the largest $C_{i,j}^M$; $C_{i,j}^M$ is reduced by $L(P)/\left(\phi_{i,j}^M + \phi_{i,j}^L\right)$ accordingly.
- Packet shifting. After each EF/AF timeslot transmission, packet shifting from PQ to CCQ is executed if possible. (1) After an EF timeslot transmission, a HOL AF packet P from j's PQ is shifted to the AF CCQ if $C_{i,j}^M > 0$ and the CCQ can accommodate it. (2) After an AF timeslot transmission, an AF packet P is selected to shift to the CCQ from the PQ with the largest $C_{i,j}^M$. In both cases, $C_{i,j}^M$ is reduced by $L(P)/\left(\phi_{i,j}^M + \phi_{i,j}^L\right)$.
- Finally, $C_{i,j}^M$ is updated as $C_{i,j}^M = Q_i^{ML,Max} - CoQ_{i,j}^M/\left(\phi_{i,j}^M + \phi_{i,j}^L\right)$.

Fig. 4. Pseudo code for UHSA.

To improve bandwidth utilization at light load, when EF traffic cannot consume the entire EF timeslot, AF and BE frames will be transmitted if the left-over EF timeslot is large enough to transmit the head-of-line frame of the AF/BE queues. The transmission priority of the different queue groups in the EF timeslot is defined in descending order as: EF CCQ, EF PQ, AF CCQ, BE CCQ, AF PQ, BE PQ. The two EF groups always have higher priority than the other class queue groups in the EF timeslot. This is in line with the design objective of the EF timeslot, namely to provide guaranteed performance to EF traffic. Thus, EF traffic should not be delayed by other class traffic in the EF timeslot.
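The timeslot-filling rule just described, serving queue groups in a fixed priority order and skipping a group whose head-of-line frame no longer fits (fragmentation is not allowed), can be sketched as follows. The same helper applies to the AF timeslot order discussed next; representing the PQ groups as plain FIFO lists here glosses over the credit-based selection within them and is an assumption of the sketch.

```python
EF_SLOT_ORDER = ("EF_CCQ", "EF_PQ", "AF_CCQ", "BE_CCQ", "AF_PQ", "BE_PQ")
AF_SLOT_ORDER = ("AF_CCQ", "EF_CCQ", "EF_PQ", "BE_CCQ", "AF_PQ", "BE_PQ")

def fill_timeslot(groups, order, slot_bits):
    """Drain queue groups in strict priority order until the slot is exhausted.

    groups: dict mapping a group name to a FIFO list of frame sizes in bits.
    Since Ethernet frames cannot be fragmented, a head-of-line frame that does
    not fit is skipped and the next group is tried with the remaining bits.
    Returns the list of (group, frame_bits) actually transmitted.
    """
    sent = []
    remaining = slot_bits
    for name in order:
        queue = groups.get(name, [])
        while queue and queue[0] <= remaining:
            frame = queue.pop(0)
            remaining -= frame
            sent.append((name, frame))
    return sent

# Example: an EF timeslot of 12,000 bits with a small EF backlog lets one
# AF CCQ frame ride along in the left-over space.
groups = {"EF_CCQ": [4000, 4000], "AF_CCQ": [3000, 9000], "BE_CCQ": [2000]}
print(fill_timeslot(groups, EF_SLOT_ORDER, 12_000))
```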

Similarly, EF frames can be transmitted in the AF timeslot. Besides fully utilizing the available bandwidth, it is also crucial to improve the delay and delay variation performance of EF traffic. To ensure both objectives, we propose the following transmission priority for queue groups in the AF timeslot, in descending order: AF CCQ, EF CCQ, EF PQ, BE CCQ, AF PQ, BE PQ. This proposal is based on several considerations. First, AF traffic demands guaranteed throughput and cannot be starved by EF traffic, so the AF CCQ should have the highest priority in the AF timeslot. By the AF CCQ design, the maximum size of the AF CCQ is exactly as large as the minimum guaranteed AF timeslot size, so packets in the AF CCQ will be delivered within limited time. This ensures guaranteed throughput for AF traffic to meet its statistical bandwidth requirement. Second, EF traffic has already secured its lion's share of bandwidth in the EF timeslot. There is no need for it to be assigned the highest priority in the AF timeslot. Nonetheless, the two EF traffic groups still occupy the second and third priorities, respectively, in the AF timeslot. Unless the AF CCQ is full and there is no extra bandwidth allocated for the ONU, EF traffic has a high chance of being transmitted in the AF timeslot. This means EF traffic statistically has more than one transmission opportunity in each EF cycle, which substantially shortens the statistical transmission cycle of EF traffic, yielding much improved delay and delay variation performance. We call this new style of timeslot scheduling for EF traffic the M-opportunities per-cycle solution. This idea is inspired by the recently developed round robin scheduling algorithms, namely smoothed round robin [15] and stratified round robin [16], where premium flows are scheduled more than once in each cycle according to their reserved rates. For an EPON system, since the cycle time cannot be made too small without incurring a large bandwidth cost from guard times and GATE messages, it is desirable that EF traffic has more than one transmission opportunity in one cycle to improve its performance. The UHSA algorithms provide M-opportunities per-cycle for EF traffic to meet this requirement. This is different from previous approaches which make use of S-opportunity per-cycle for EF traffic and thus encounter large delay, due to the long waiting time for each EF service, and large delay variation, due to the long and fluctuating cycle time. Although the work in [9] can provide an accurate prediction for the EF timeslot to improve EF delay, this is true only for constant rate EF traffic, not for dynamic rates; besides, it has the RTT idling problem. Finally, the BE CCQ has higher priority than the AF PQ but lower than the EF PQ. Since BE traffic has no QoS requirement, it is reasonable to put EF traffic ahead of it for urgent EF transmission. Putting the AF PQ behind the BE CCQ is intended to alleviate the problem of AF traffic monopolizing the whole AF timeslot. From the design of the AF CCQ, it is possible that AF traffic dominates the AF timeslot, as the maximum size of the AF CCQ is equal to the minimum guaranteed AF timeslot size of the ONU. In addition, traffic policing can be used to control the amount of traffic of each user to ensure it conforms to its SLA.

3.3. Report and grant mechanisms

The OLT maintains a table of ONUs, including the fixed timeslot size, the minimum guaranteed AF timeslot size, and the latest buffer occupancy information of the AF and BE classes of each ONU. An ONU sends a REPORT message to the OLT at the end of each EF and AF timeslot, reporting the queue size information of the AF and BE traffic. The OLT determines the AF timeslot size granted for AF and BE traffic according to the latest buffer occupancy information and sends GATE messages to the ONUs the maximum RTT earlier than the start time of the first AF timeslot. This prevents RTT idle time while still using the most up-to-date buffer occupancy information. AF timeslots are granted to ONUs in order within an AF cycle. For UHSA-M, this requires the OLT to remember the latest ONU that has been granted an AF timeslot, since a whole set of AF timeslots may span several EF cycles. For UHSA-S, a whole set of AF timeslots is always included in the same EF cycle, so there is no need to remember the latest served ONU. As the EF timeslot and its relative position for an ONU are fixed in each EF cycle, the OLT sends EF grant messages to each ONU ahead of the beginning of the next EF cycle to keep the ONU active. To save GATE message cost, the EF grant for the next EF cycle and the AF/BE grant for the current cycle can be delivered in a single GATE message to the destined ONU if the ONU has an AF timeslot in the current EF cycle. As discussed earlier, ONUi is reserved a fixed timeslot of size $W_i^{EF}$ for EF traffic transmission. So, a fixed timeslot of size $W_i^{EF}$ is granted to ONUi in each EF cycle.
The ONU further distributes this bandwidth fairly to its end users, which is implemented using the proposed EF CCQ mechanism and credit-based fair scheduling. Next, we discuss the REPORT and GRANT mechanisms for the AF and BE classes. At the end of each EF and AF timeslot, a REPORT message is generated containing three reports, i.e., the AF report $R_i^{AF}$, the BE report $R_i^{BE}$, and the extra report $R_i^{Extra}$, respectively. They are determined using the following rule:

$$R_i^{AF} = \min\left\{ Q_i^{ML,Max},\; Q_i^{AF} + \sum_k Q_{i,k}^{AF} \right\}$$
$$R_i^{BE} = \min\left\{ Q_i^{ML,Max},\; Q_i^{BE} + \sum_k Q_{i,k}^{BE} \right\}$$
$$R_i^{Extra} = \left( Q_i^{AF} + Q_i^{BE} + \sum_k Q_{i,k}^{AF} + \sum_k Q_{i,k}^{BE} \right) - R_i^{AF} - R_i^{BE} \qquad (7)$$

$R_i^{AF}$ represents the AF traffic request but is capped by $Q_i^{ML,Max}$, the maximum size of the AF CCQ (which is also the minimum guaranteed AF timeslot size). Thus, it represents the maximum queue size of the AF CCQ at that moment. This part of the request is always satisfied in order to guarantee AF bandwidth if the ONU is allocated an AF timeslot in the next EF cycle. Similarly, $R_i^{BE}$ represents the maximum BE CCQ size. $R_i^{Extra}$ represents the left-over request. The OLT determines the granted AF timeslot size according to the latest buffer occupancy information, i.e., $R_i^{AF}$, $R_i^{BE}$, and $R_i^{Extra}$ of all ONUs. For UHSA-S, the grant algorithm works in the following steps:

Step 1. Initialize the left-over available bandwidth $B_{avail} = \left(T_{dynamic}^{Max} - N T_g\right)\cdot R_E$, and set the grant size $G_i = 0$ for all ONUs.
Step 2. All $R_i^{AF}$ are fully satisfied, i.e., $G_i = G_i + R_i^{AF}$ and $B_{avail} = B_{avail} - \sum_k R_k^{AF}$.
Step 3. If $B_{avail} \ge \sum_k R_k^{BE}$, all $R_i^{BE}$ are fully satisfied, i.e., $G_i = G_i + R_i^{BE}$ and $B_{avail} = B_{avail} - \sum_k R_k^{BE}$; otherwise, let $G_i' = \min\left\{ R_i^{BE},\; W_i^{AF} - R_i^{AF} \right\}$, $G_i = G_i + G_i'$, $B_{avail} = B_{avail} - \sum_k G_k'$, $R_i^{BE} = R_i^{BE} - G_i'$, distribute the left-over $B_{avail}$ among the ONUs using the max-min allocation method according to $R_i^{BE}$, and break.
Step 4. If $B_{avail} \ge \sum_k R_k^{Extra}$, then $G_i = G_i + R_i^{Extra}$ and $B_{avail} = B_{avail} - \sum_k R_k^{Extra}$; otherwise, perform max-min allocation among the ONUs according to $R_i^{Extra}$, and break.
Step 5. All requests are satisfied; then check whether the minimum EF cycle is satisfied, i.e., $B_{avail}$
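Steps 1–4 above map naturally onto the short sketch below. The generic max-min helper, the tie-breaking behavior, and the omission of the truncated Step 5 check are assumptions of this illustration rather than the authors' exact procedure.

```python
def max_min_share(capacity, demands):
    """Classic max-min fair allocation of `capacity` among `demands`."""
    alloc = [0.0] * len(demands)
    active = [i for i, d in enumerate(demands) if d > 0]
    while active and capacity > 1e-12:
        share = capacity / len(active)
        for i in list(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            capacity -= give
            if alloc[i] >= demands[i] - 1e-12:
                active.remove(i)
    return alloc

def uhsa_s_grant(R_af, R_be, R_extra, W_af, T_dyn_max, N, Tg, R_E):
    """Steps 1-4 of the UHSA-S grant computation, operating on the per-ONU
    reports of Eq. (7).  Step 5 (minimum EF cycle check) is omitted because
    its details are truncated in the text."""
    B = (T_dyn_max - N * Tg) * R_E                  # Step 1: available AF-part bits
    G = list(R_af)                                  # Step 2: AF CCQ requests always granted
    B -= sum(R_af)

    if B >= sum(R_be):                              # Step 3
        G = [g + r for g, r in zip(G, R_be)]
        B -= sum(R_be)
    else:
        g_prime = [min(rb, max(0.0, w - ra)) for rb, w, ra in zip(R_be, W_af, R_af)]
        G = [g + gp for g, gp in zip(G, g_prime)]
        B -= sum(g_prime)
        # Remaining BE demand shares whatever is left, max-min fashion.
        rest = max_min_share(max(B, 0.0), [rb - gp for rb, gp in zip(R_be, g_prime)])
        return [g + r for g, r in zip(G, rest)]

    if B >= sum(R_extra):                           # Step 4
        return [g + r for g, r in zip(G, R_extra)]
    return [g + r for g, r in zip(G, max_min_share(B, list(R_extra)))]
```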

among different ONUs. An ONU is assured of getting its guaranteed bandwidth without being affected by other congested ONUs. At loads of 0.7–0.8, both the EF and AF throughputs of U1,3 and U1,8 are nearly the same as their base generating rates, as the network is not fully loaded and ONU1 can utilize the excess bandwidth saved from other under-loaded ONUs. At a load of 0.9, the network is congested due to the high demand of ONU1. In this case, all other under-loaded ONUs fully obtain their demands, while ONU1 gets less bandwidth than its demand. From the design of the AF CCQ and the transmission priority scheme in the AF timeslot, AF traffic in the CCQ has higher priority for bandwidth up to $Q_i^{ML,Max}$ in an AF timeslot (guaranteed bandwidth), while EF traffic has higher priority for the left-over bandwidth. This is confirmed by the simulation result that the EF throughput of U1,3 and U1,8 remains around 10 Mbps at load 0.9. On the other hand, since the weight of U1,8 is 5 while it is 2 for U1,3, U1,8 can get up to 5/7 of the left-over excess bandwidth. As shown in Fig. 6(b), U1,8 fully obtains its demand with nearly 40 Mbps throughput, while U1,3 partially obtains its demand at about 21.80 Mbps. This shows AF fairness among different users. At loads of 1.0 and 1.1, the network is fully loaded and ONU1 cannot obtain excess bandwidth from other ONUs, so the EF and AF throughput of each user is only its guaranteed bandwidth.

Fig. 6. (a) Throughput of different classes; (b) throughput of test users.

4.3. Scheduling efficiency and queue number

Fig. 7. (a) Percent of packets served from CCQs; (b) average number of queues (AQ).

One of the objectives of the CCQ mechanism is to improve scheduling efficiency. The scheduling complexity of picking a packet from a CCQ is O(1), while it is O(n) from a private queue, where n is the number of active queues of the class in service. Thus, we examine the percentage of packets served from CCQs among all packets (from both CCQs and PQs) to evaluate CCQ behavior. The higher the percentage of packets served from CCQs, the more efficient the CCQ mechanism is. Fig. 7(a) shows that over 95.1% of EF packets are served from the EF CCQ for both approaches at all network loads. For AF traffic, the percentage is over 76.0%, and for BE traffic it is above 92.1% for UHSA-M and above 82.6% for UHSA-S, respectively. This shows that the scheduling efficiency is improved significantly for all of the EF, AF, and BE traffic. In conclusion, the CCQ mechanism substantially improves scheduling efficiency in the ONU. This mechanism enables the ONU to transmit packets at high speed, matching the 1 Gbps line rate. In contrast, DBA2 as well as most other intra-ONU algorithms [10,13, etc.] have no common queue mechanism and thus face a scheduling scalability problem as the number of users under an ONU increases. The other objective of the CCQ is to lower queue management complexity by reducing the average number of queues (AQ) in the ONU system. Fig. 7(b) presents the AQ performance. UHSA-S shows a reduction of 21.4–71.9% while UHSA-M shows a reduction of
30.1–74.0% at all network loads. This shows that the CCQ mechanism substantially reduces the average number of queues in an ONU, thus effectively lowering queue management complexity.

5. Conclusions

This paper studies dynamic bandwidth allocation with differentiated services support for end users in EPON networks. We propose a collection of innovative credit-based hierarchical scheduling algorithms (UHSA-S and UHSA-M) for this purpose. A two-part frame partition approach is adopted for inter-ONU scheduling, while for intra-ONU scheduling a credit-based scheduling approach is proposed to guarantee fairness among end users. Furthermore, to improve scheduling efficiency and lower queue management complexity, we design a novel credit-based common queue (CCQ) for each class to enhance the scheduling architecture and reduce the average number of queues. We also propose new transmission priority schemes for the different queue groups in the EF and AF timeslots, in order to improve EF delay and delay variation performance by offering an M-opportunities per-cycle solution for the EF class, while guaranteeing bandwidth for the AF class. Extensive simulation experiments have been conducted which show that: (1) the UHSA algorithms with the M-opportunities per-cycle solution significantly improve delay and delay variation performance, reducing the average delay (AD) by over 27.2% and the delay variation (DV) by over 19.6% as compared with traditional S-opportunity per-cycle solutions at loads of 0.1–0.9; and (2) the CCQ mechanism substantially enhances scheduling efficiency, with a large percentage of packets served from the CCQ groups, and lowers queue management complexity, with a reduction of over 22.4% in AQ at all network loads. These improvements are significant.

References

[1] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, W. Weiss, An architecture for differentiated services, IETF RFC 2475, December 1998.
[2] IEEE 802.3ah draft, Ethernet in the First Mile Task Force.
[3] G. Kramer, B. Mukherjee, Interleaved polling with adaptive cycle time (IPACT): a dynamic bandwidth distribution scheme in an optical access network, Photonic Network Commun. 4 (1) (2002) 89–107.

[4] M. Ma, Y. Zhu, T. Cheng, A bandwidth guaranteed polling MAC protocol for Ethernet passive optical networks, in: Proceedings of IEEE INFOCOM, April 2003.
[5] C.M. Assi, Y. Ye, S. Dixit, M.A. Ali, Dynamic bandwidth allocation for quality-of-service over Ethernet PONs, IEEE J. Sel. Areas Commun. 21 (9) (2003) 1467–1477, November.
[6] A. Shami, X. Bai, N. Ghani, C.M. Assi, H.T. Mouftah, QoS control schemes for two-stage Ethernet passive optical access networks, IEEE J. Sel. Areas Commun. 23 (8) (2005), August.
[7] F.T. An, Y.L. Hsueh, K.S. Kim, I.M. White, L.G. Kazovsky, A new dynamic bandwidth allocation protocol with quality of service in Ethernet-based passive optical networks, Wireless Opt. Commun. (2003).
[8] J. Xie, S. Jiang, Y. Jiang, A dynamic bandwidth allocation scheme for differentiated services in EPONs, IEEE Opt. Commun. (2004) 532–539, August.
[9] X. Bai, A. Shami, N. Ghani, C. Assi, A hybrid granting algorithm for QoS support in Ethernet passive optical networks, in: Proceedings of ICC'05, 2005, pp. 1869–1873.
[10] N. Ghani, A. Shami, C. Assi, M.Y.A. Raja, Intra-ONU bandwidth scheduling in Ethernet passive optical networks, IEEE Commun. Lett. (2004) 683–685, November.
[11] J. Chen, B. Chen, S. He, A novel algorithm for intra-ONU bandwidth scheduling in Ethernet passive optical networks, IEEE Commun. Lett. 9 (9) (2005) 850–852, September.
[12] B. Chen, J. Chen, S. He, Efficient and fine scheduling algorithms for bandwidth allocation in Ethernet passive optical networks, IEEE J. Sel. Topics Quantum Electron. 12 (4) (2006), July–August.
[13] G. Kramer, A. Banerjee, N. Singhal, B. Mukherjee, S. Dixit, Y. Ye, Fair queuing with service envelopes (FQSE): a cousin-fair hierarchical scheduler for subscriber access networks, IEEE J. Sel. Areas Commun. 22 (8) (2004), October.
[14] S.I. Choi, Cyclic polling-based dynamic bandwidth allocation for differentiated classes of service in Ethernet passive optical networks, Photonic Network Commun. 7 (1) (2004) 87–96.
[15] G. Chuanxiong, SRR: an O(1) time complexity packet scheduler for flows in multi-service packet networks, in: Proceedings of SIGCOMM'01, 2001, pp. 211–222.
[16] S. Ramabhadran, J. Pasquale, Stratified round robin: a low complexity packet scheduler with bandwidth fairness and bounded delay, in: Proceedings of SIGCOMM'03, 2003, pp. 239–250.
[17] M.S. Taqqu, W. Willinger, R. Sherman, Proof of a fundamental result in self-similar traffic modeling, ACM SIGCOMM Comput. Commun. Rev. 27 (1997) 5–23.
[18] A.R. Dhaini, C.M. Assi, M. Maier, A. Shami, Per-stream QoS and admission control in Ethernet passive optical networks (EPONs), J. Lightw. Technol. 25 (7) (2007) 1659–1668, July.
[19] Y. Yin, G.S. Poo, A hybrid cycle bandwidth allocation scheme with differentiated services support in Ethernet passive optical networks, Photonic Network Commun. 15 (3) (2008) 263–274, June.
