JOURNAL OF COMPUTER SCIENCE AND ENGINEERING, VOLUME 9, ISSUE 2, OCTOBER 2011
A Survey of Various Scheduling Techniques and Comparison of the Best Two Schemes in Differentiated Services

D. Rosy Salomi Victoria and S. Senthil Kumar

Abstract - This paper presents a new scheme for packet transmission under various traffic patterns by introducing a Quantum based Round Robin scheduler. This scheduler gives all classes a chance to access the bandwidth. A stream is defined as the same-class packets from a source router destined to an output port of the core router. Each stream can send one packet in each small round within a class, so we can reduce the inter-transmission time within a stream and achieve a smaller jitter and startup latency. The scheduling algorithm shares load among different traffic patterns such as text, audio and video. The quantum is adjusted in a way that prevents the starvation of lower-priority classes. The scheme supports multi-class traffic, and an extensive performance analysis is provided.

Index Terms— Backlog, Fair queuing, Quantum Round Robin.
1 INTRODUCTION
The Quality of Service (QoS) requirement of applications has become one of the most important considerations in implementing the next-generation Internet. DiffServ is an architecture for classifying and managing network traffic and providing QoS guarantees on modern IP networks. DiffServ [3] operates on the principle of traffic classification, where each data packet is placed into one of a limited number of traffic classes. Each traffic class can be managed differently, ensuring preferential treatment for higher-priority traffic on the network. The DiffServ architecture contains core routers and edge routers, as shown in Fig. 1. Most networks use the following Per-Hop Behaviors (PHBs) [17]: the Default PHB (best effort), the Expedited Forwarding (EF) PHB and the Assured Forwarding (AF) PHB. Best-effort traffic has no guarantee that data is delivered. EF is used to provide a low loss rate, low delay, low jitter and an assured throughput. AF traffic requires guaranteed forwarding across classes of traffic. In this paper we focus on a Quantum based Round Robin technique which has the capabilities of using smaller frame lengths and rounds; sending traffic packet by packet in smaller rounds; reducing the inter-transmission time from the same stream; reducing queuing delay, jitter and startup latency; and controlling the starvation of lower-priority classes.
2. LITERATURE REVIEW

DiffServ requires no advance setup, no reservation, and no time-consuming end-to-end negotiation for each flow, as with Integrated Services. One advantage of DiffServ is that all the policing and classification is done at the boundaries between DiffServ clouds. This means that in the core of the Internet, routers can get on with the job of routing and need not be concerned with the complexities of collecting payment or enforcing agreements. This makes DiffServ relatively easy to implement. Network traffic entering a DiffServ domain is subjected to classification and conditioning. Traffic may be classified by many different parameters, such as source address, destination address or traffic type, and assigned to a specific traffic class.

Traffic classifiers may honor any DiffServ marking in received packets or may elect to ignore or override those markings. Because network operators want tight control over the volume and type of traffic in a given class, it is very rare that the network honors markings at the ingress to the DiffServ domain. Traffic in each class may be further conditioned by subjecting it to rate limiters, traffic policers or shapers. The DiffServ architecture contains the core router and edge router, as shown in Fig. 1.

Fig. 1 DiffServ architecture

Rather than differentiating network traffic based on the requirements of an individual flow, each router on the network is configured to differentiate traffic based on its class, so that higher-priority classes receive preferential treatment.
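As a concrete illustration of such marking, the sketch below requests an EF code point on a UDP socket from Java, the language used for the implementation in Section 4. The DSCP values are the standard ones from RFC 2474/2597/3246; the socket itself and the choice of UDP are illustrative assumptions, and, as noted above, edge routers may re-mark or ignore the request.

import java.net.DatagramSocket;
import java.net.SocketException;

// Minimal sketch: requesting a DiffServ class for outgoing traffic from Java.
public class DscpMarkingExample {
    static final int DSCP_EF   = 46; // Expedited Forwarding (RFC 3246)
    static final int DSCP_AF11 = 10; // Assured Forwarding class 1, low drop (RFC 2597)
    static final int DSCP_BE   = 0;  // Default PHB (best effort)

    public static void main(String[] args) throws SocketException {
        DatagramSocket socket = new DatagramSocket();
        // The 6-bit DSCP occupies the upper bits of the 8-bit ToS/Traffic Class
        // field, hence the shift by 2. Edge routers may honor, re-mark or
        // ignore this value at the ingress of the DiffServ domain.
        socket.setTrafficClass(DSCP_EF << 2);
        System.out.println("Requested ToS byte: " + socket.getTrafficClass());
        socket.close();
    }
}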
PHBs are implemented in DiffServ networks using queuing and scheduling mechanisms such as Fair Queuing (FQ), Round Robin (RR), Weighted RR (WRR), Deficit RR (DRR), Surplus RR (SRR), Priority Queuing WRR (PQWRR), DRR+ (for latency-critical [LC] flows), DRR++ (for bursty LC flows) and Output Controlled RR (OCRR).
2.1 Fair Queuing (FQ) [9]
Its goal is to serve sessions in proportion to some assigned shares, independent of the queuing load. FQ is used to determine a fair order of packet transmission, but its computational complexity renders the scheme infeasible for high-speed applications.

2.2 Strict Priority Queueing [14]
This is the simplest method for providing service differentiation in an IP network and consists of a set of buffers served with given priorities. As a side effect of priority queueing, the EF packet arrival rate may temporarily exceed the reserved service rate at core routers, resulting in packet losses. Service priorities may or may not be the same as the buffer occupancy priorities, and the service priority can be fixed or dynamic in a manner consistent with rate allocation, delay bounds, etc. The buffer size is limited, which prevents packets from queueing for a long time.

2.3 Round-Robin Scheduling (RR) [2]
RR can be applied to packet scheduling problems. The CPU scheduler goes around a circular queue, allocating the CPU to each process for a time interval of one quantum (time slice). New processes are added to the tail of the queue. The scheduler picks the first process from the queue, sets a timer to interrupt after one quantum, and dispatches the process. If the process is still running at the end of the quantum, the CPU is preempted and the process is added to the tail of the queue; if the process finishes before the end of the quantum, it releases the CPU voluntarily. In either case, the scheduler assigns the CPU to the next process in the ready queue. Every time a process is granted the CPU, a context switch occurs, which adds overhead to the process execution time. The amount of allocated bandwidth depends on the packet length. Its disadvantages are that flows sending longer packets use a higher fraction of the available bandwidth and that it is difficult to control the exact bandwidth allocation over IP networks.

2.4 Weighted Round Robin (WRR) [13]
WRR is a best-effort connection scheduling discipline and the simplest emulation of the generalized processor sharing (GPS) discipline. While GPS serves an infinitesimal amount of data from each nonempty connection, WRR serves a number of packets for each nonempty connection (number = normalized weight / mean packet size). To obtain a normalized set of weights, the mean packet size must be known; only then does WRR correctly emulate GPS. It is best to know this parameter in advance, but that is uncommon in IP networks, so it has to be estimated, which can in practice be quite hard in terms of a good GPS approximation. Another problem with WRR is that, on the scale of a single round, it does not provide fair link sharing. Its disadvantages are that it does not work efficiently for networks with variable-length packets and that it allocates bandwidth on a packet-by-packet basis.
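To make the per-round behavior of WRR concrete, here is a minimal runnable sketch assuming fixed-size packets; the flow names, weights and queue contents are hypothetical, and with variable-length packets the weights would first have to be normalized by mean packet size, as discussed above.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// A minimal sketch of WRR with fixed-size packets (Section 2.4).
public class WrrSketch {
    public static void main(String[] args) {
        String[] flows = {"voice", "video", "data"};
        int[] weights  = {3, 2, 1};                 // packets served per round
        List<Queue<String>> queues = new ArrayList<Queue<String>>();
        for (String f : flows) {
            Queue<String> q = new ArrayDeque<String>();
            for (int p = 1; p <= 4; p++) q.add(f + "-pkt" + p);
            queues.add(q);
        }
        int round = 0;
        boolean backlogged = true;
        while (backlogged) {
            backlogged = false;
            round++;
            for (int i = 0; i < flows.length; i++) {
                Queue<String> q = queues.get(i);
                // Serve up to weights[i] packets from each nonempty queue.
                for (int w = 0; w < weights[i] && !q.isEmpty(); w++)
                    System.out.println("round " + round + ": send " + q.poll());
                if (!q.isEmpty()) backlogged = true;
            }
        }
    }
}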
2.5 Weighted Fair Queuing (WFQ) [11]
WFQ is a more effective scheduling discipline which handles both of these problems. WFQ can be implemented in hardware and can be applied to high-speed infrastructure in both the core and the edge of the network. WFQ is an alternative for implementing differentiated services among multiple classes. It can guarantee each class a bandwidth share proportional to its assigned weight, but at the cost of greatly increased implementation complexity.

2.6 Deficit Round Robin (DRR) [16]
DRR is a modified version of WRR which is able to properly handle packets of different sizes without knowing the mean packet size of each connection in advance. It guarantees fairness in terms of throughput, but the DRR scheduler requires knowledge of an upper bound on packet sizes. Whereas WRR serves every nonempty queue, DRR serves the packet at the head of every nonempty queue whose deficit counter is greater than the packet's size; if the counter is lower, it is increased by a given value called the quantum, and the packet is held back until the next visit of the scheduler. The deficit counter is decreased by the size of each packet served. WRR and WFQ handle fixed-length packets, while DRR, SRR and DRR++ can handle variable-length packets.

2.7 Surplus Round Robin (SRR) [7]
The SRR scheduler continues serving a flow as long as the flow's deficit counter value is positive. The difference between DRR and SRR is that DRR never allows a flow to overdraw its account, whereas SRR allows it and penalizes the flow accordingly in the next round. DRR keeps an account of each flow's deficit in service, while SRR keeps an account of the surplus service received by each flow. SRR does not require the scheduler to know the packet length before scheduling it.

2.8 DRR+ (Latency-Critical Flows) [12]
DRR+ provides bounded delay for real-time traffic. When DRR+ encounters a burst, it reverts to DRR. There are two classes in DRR+: higher priority (i.e., EF) and lower priority (i.e., BE). Each EF stream guarantees to send at most one quantum's worth of traffic every T seconds. When an EF stream exceeds its contract within T, it is treated as a BE stream. The period T is chosen so as to service one quantum from each of the EF streams as well as one quantum's worth for every other BE stream. When a packet arrives in an empty EF stream, DRR+ adds the stream to its ActiveList, but ahead of the BE streams. When a stream becomes empty, its grant is reset to zero.

2.9 DRR++ (Bursty LC Flows) [15]
DRR++ shares bandwidth between traffic classes and is very effective in isolating the behavior of one class of flows from another. LC traffic delay and jitter are much lower in DRR++ than in DRR+. In DRR++, the contract for an EF stream is an agreement to send less than one quantum during a scheduling round. If a stream has more traffic than its contract, it is backlogged but is still treated as an EF stream. A Priority Transmission Queue (PTQ) is used to save the traffic of all higher-priority streams.
The same quantum as in DRR+ is used for DRR++. As in DRR+, when an EF or BE stream becomes empty, the stream's grant is zeroed.
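The deficit-counter mechanism shared by DRR and its variants can be summarized in a short sketch. This is not the paper's scheduler; it is a minimal illustration of the DRR core from Section 2.6, with hypothetical packet lengths (in bits) and a hypothetical quantum.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Queue;

// A minimal sketch of the DRR core: each visit earns one quantum of credit,
// and a packet is sent only when the credit covers its length.
public class DrrSketch {
    public static void main(String[] args) {
        final int QUANTUM = 500;
        List<Queue<Integer>> flows = new ArrayList<Queue<Integer>>();
        flows.add(new ArrayDeque<Integer>(Arrays.asList(200, 700, 300)));
        flows.add(new ArrayDeque<Integer>(Arrays.asList(1000, 100)));
        int[] deficit = new int[flows.size()];
        boolean backlogged = true;
        while (backlogged) {
            backlogged = false;
            for (int i = 0; i < flows.size(); i++) {
                Queue<Integer> q = flows.get(i);
                if (q.isEmpty()) continue;          // skip idle flows
                deficit[i] += QUANTUM;              // earn one quantum per visit
                while (!q.isEmpty() && q.peek() <= deficit[i]) {
                    deficit[i] -= q.peek();         // pay for the packet sent
                    System.out.println("flow " + i + ": send " + q.poll() + " bits");
                }
                if (q.isEmpty()) deficit[i] = 0;    // no credit is hoarded
                else backlogged = true;
            }
        }
    }
}

Resetting the counter when a queue empties is what prevents an idle flow from hoarding credit, the property that distinguishes DRR from SRR above.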
2.10 Dynamic Deficit Round Robin (DDRR) [8]
The DDRR scheduler is a best-effort service which provides delay differentiation according to packet length. DDRR provides a max-min fair share, high throughput and small delay for short packets. DDRR provides Relative Differentiated Service (RDS), which enables support of delay-sensitive applications over the Internet. The DDRR scheduler dynamically changes the granularity of the deficit counters instead of using a fixed granularity as in DRR. Under the DDRR scheduler, the packet delay decreases linearly as the packet length decreases.

2.11 Hierarchical Dynamic Deficit Round Robin (HDDRR) [8]
The HDDRR scheduler ensures that the short packets of each class experience relatively small delay. Even though the traffic load of each class varies over time, the HDDRR scheduler can provide relative delay differentiation between classes. It can achieve high throughput efficiently while simultaneously providing smaller delay for the short packets of each class. The HDDRR scheduler also allows the network administrator to adjust the RDS between classes based on pricing or policy requirements.

2.12 Output Controlled Round Robin (OCRR) [6]
The OCRR technique can be used in edge switches of an optical network to assemble traffic from aggregated flows into bursts/slots. Traffic flows from different classes are scheduled in logical frames. In each logical frame, a flow obtains a grant quantum with respect to its average rate and bandwidth usage in order to pass its traffic. The round robin technique is extended to include smaller cycles that send packets from aggregate flows one by one, thus eliminating the bursty effect of any single flow on the output bandwidth. The amount of output traffic from each flow is also controlled.

3. QUANTUM ROUND ROBIN SCHEDULER
The existing approach to supporting DiffServ traffic is to save all same-class packets from different sources in a shared First Come First Served buffer. This makes it difficult to control the service order of packets from different sources, because a bursty source in a class may cause higher delay and loss for well-behaved streams within that class. The existing OCRR system mostly supports only higher-priority classes. The other drawbacks of the existing system are unfairness, non-smooth scheduling, and higher service time, startup delay and latency. The data flow diagram for the proposed system is given in Fig. 2. The source starts to send data to the destination, and the server then receives the incoming data from the source on a round-robin basis.
Fig. 2 Data flow diagram for Quantum Round Robin Scheduler

3.1 Pseudo code for scheduling within one frame

Γ = expected frame length
K = total no. of traffic classes
for P = 1 to K do
    for all streams S in class P
        Update grant(S)
        if backlogged(S) = true then
            Append S to ActiveList(P)
        end if
        Get next stream S
    end for    // end of S
    call schedule traffic of class P
    if total transmitted bits < Γ then
        Get next class P
    end if
end for    // end of P
3.2 Pseudo code for the EnQueue process when a new packet arrives

Receive packet(Q)
S = queue no. of Q
P = class no. of Q
Insert Q in queue S of class P
if grant(S) > 0 and S not in ActiveList(P) then
    Append S to ActiveList(P)
end if

3.3 Pseudo code for scheduling class P traffic when a single buffer is used for all streams (in our example, BE traffic)

while total transmitted bits < Γ and backlogged(S) = true do
    SendPacket from S
    Update grant and total transmitted bits
end while

3.4 Pseudo code for scheduling class P traffic when a separate buffer is used for each stream in class P (in our example, EF traffic)

Get roundlength
while roundlength > 0 do
    Listpt = starting stream of the round
    for D = 1 to roundlength do
        S = Get Queue(Listpt)
        if total transmitted bits < Γ then
            SendPacket from S
            Update grant and total transmitted bits
            if backlogged(S) = false then
                roundlength = roundlength - 1
                Delete S from ActiveList(P)
            end if
            Listpt = Get next stream reference
        else
            Save S reference in Lastlstpt of class P
        end if
        Get next round D
    end for
end while

The packets at the head of the streams have the same chance of being transmitted under this multiple round robin. A packet in a newly backlogged stream encounters a lower latency, and jitter among packets of the same stream is reduced.
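For clarity, the pseudo code above can be condensed into the following runnable sketch of one frame. It is a simplified illustration under stated assumptions, not the authors' implementation: the frame length matches the Γ = 4400 bits used in Section 5, but all grants and packet lengths (in bits) are hypothetical.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;

// A condensed sketch of the frame scheduling in Sections 3.1-3.4: classes are
// served in priority order, and within a class each active stream sends one
// packet per small round until its grant goes negative, its buffer empties,
// or the frame budget is reached.
public class QuantumRrSketch {
    static class Stream {
        final String name;
        final Deque<Integer> packets = new ArrayDeque<Integer>();
        int grant;
        Stream(String name, int grant, Integer... sizes) {
            this.name = name;
            this.grant = grant;
            packets.addAll(Arrays.asList(sizes));
        }
    }

    public static void main(String[] args) {
        final int FRAME = 4400;                 // expected frame length Γ in bits
        List<List<Stream>> classes = new ArrayList<List<Stream>>();
        classes.add(Arrays.asList(              // EF class, one buffer per stream
                new Stream("S1", 700, 300, 200, 300),
                new Stream("S2", 900, 600, 400),
                new Stream("S7", 500, 200, 150, 200)));
        classes.add(Arrays.asList(              // BE class
                new Stream("BE", 1500, 400, 400, 500)));

        int sent = 0;
        for (List<Stream> cls : classes) {
            List<Stream> active = new ArrayList<Stream>(cls);   // ActiveList(P)
            while (!active.isEmpty() && sent < FRAME) {
                // One small round: each active stream sends exactly one packet,
                // which keeps same-stream packets spread apart (low jitter).
                Iterator<Stream> it = active.iterator();
                while (it.hasNext() && sent < FRAME) {
                    Stream s = it.next();
                    int size = s.packets.poll();
                    s.grant -= size;
                    sent += size;
                    System.out.println("send " + s.name + " " + size
                            + " bits (total " + sent + ")");
                    if (s.packets.isEmpty() || s.grant < 0) it.remove();
                }
            }
        }
    }
}

Removing a stream from the round once its grant goes negative mirrors the behavior tabulated for the worked example in Section 5.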
4. SCHEDULER IMPLEMENTATION
We used SQL Server 2005 as the back end to implement the above scheduling algorithm, with database tables named maintable, userlist and sentmsg to store the incoming data, the active users and the messages sent to the destination, respectively. Maintable consists of source address, destination address, starting time, message, state and type fields. The userlist table contains only username details. The sentmsg table consists of source address, destination address, time, message, state and type details (a hypothetical sketch of writing to this table follows the walkthrough below). We used JDK 1.6.1 as the front end to implement the scheduling algorithm.

1) We run the login form for the user. After entering the username and password, the process for each stream is started: a higher-priority stream, a medium-priority stream and a low-priority stream.

2) We run the source code, in which the sources of each stream are opened. Similarly, the sources of the other two streams (normal and low) are opened. Here the connected-destination field has a list of system names, so we can select the destination address by clicking it. We then fix the traffic class type, i.e., high, medium or low.
Fig. 3 Sending the message to the destination
3) We run the higher-priority server so that processing of the packets from the higher-priority stream begins. The system name that we selected in the source window is displayed as the currently connected system.

4) We open the source window and select the packets for transmission from the browse field, or we can enter the text as shown in Fig. 3. Since we have already selected the traffic class, we just press the send button to transmit it to the scheduler.

5) We now run the scheduler code and see its processing start. We can view Server High displaying the contents as shown in Fig. 4.

6) Finally, we run the destination code and find its processing started.

7) We can view the acknowledgement that is sent from the destination after receiving the packets, shown in Fig. 5 near the source window. Similarly, each stream can transmit its packets to the destination.

8) The scheduler receives the packets from each server and stores them, as shown in Fig. 6.

9) If the state is blocked in any of the servers, the packets cannot be sent to the destination. If any server (e.g. low) does not want to receive any packets at present due to heavy traffic, it can set 'BackLog Not Allow' as shown in Fig. 7. The message 'server is in blocked stage. So, the message cannot be sent' will be displayed in the source window as shown in Fig. 8. Once the traffic becomes smooth, the server can
change the state to 'Send Blocked Message', so that it can process the packets again.

10) The packets received from the scheduler are displayed in the message details of the lower-priority server, as shown in Fig. 9.

11) Finally, the message details can be received by the destination, as shown in Fig. 10, by clicking the send view button in the scheduler window.
Fig. 4 Server High displays the message

Fig. 5 Source indicates successful transmission of packets

Fig. 6 Scheduler receives the packets from the servers

Fig. 7 Running Server Low and setting 'BackLog Not Allow'

Fig. 8 Server Low displays the blocked-state message

Fig. 9 After the lower-priority stream is set to allow backlog, the message is displayed in this server

Fig. 10 Destination receives the message details
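As an aside on the back end, the following hypothetical sketch shows how a sent packet could be logged to the sentmsg table. The paper specifies only the table and its fields, so the exact column names, the JDBC URL and the credentials here are assumptions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

// Hypothetical sketch of logging a sent message into the sentmsg table
// described in Section 4 (source address, destination address, time,
// message, state, type).
public class SentMsgLogger {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:sqlserver://localhost:1433;databaseName=scheduler";
        Connection con = DriverManager.getConnection(url, "user", "password");
        try {
            PreparedStatement ps = con.prepareStatement(
                "INSERT INTO sentmsg (src_addr, dst_addr, sent_time, message, state, type) "
                + "VALUES (?, ?, ?, ?, ?, ?)");
            ps.setString(1, "sourceHost");
            ps.setString(2, "destHost");
            ps.setTimestamp(3, new Timestamp(System.currentTimeMillis()));
            ps.setString(4, "hello world");
            ps.setString(5, "sent");        // or "blocked"
            ps.setString(6, "high");        // traffic class: high/medium/low
            ps.executeUpdate();
            ps.close();
        } finally {
            con.close();
        }
    }
}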
5. QUANTUM CALCULATION
Let the expected frame length Γ be 4400 bits at the start of the frame. Table 1 shows the buffer status of a two-class system, in which Pi,j denotes packet j of stream i together with its length in bits, and Gi is the available grant of stream i.

TABLE 1
EXAMPLE STATUS OF THE STREAMS

Table 2 shows the grant values, the output sequences and the total transmitted bits at the end of each round for Quantum Round Robin. Stream 3 is removed from the EF ActiveList at the first round, when G3 becomes negative. Streams 2, 1 and 7 are removed from this list at the second, third and fourth rounds, respectively.
TABLE 2
PACKET TRANSMISSION ORDER IN QUANTUM ROUND ROBIN

Round # | G1   | G2   | G3  | G7   | Output sequence        | Total bits
1       | 400  | 800  | 300 | 350  | P11, P21, P31, P71     | 900
2       | 100  | -300 | --  | 150  | P12, P22, P72          | 2500
3       | -100 | --   | --  | 50   | P13, P73               | 2800
4       | --   | --   | --  | -150 | P74                    | 3000
BE      | GBE = -50           | PB1, PB2, PB3, PB4     | 4250

The transmission of the EF packets is finished at the fourth round with a total transmission of 3,000 bits. The transmitted EF sequence is P11, P21, P31, P71, P12, P22, P72, P13, P73, P74. This sequence shows that packets from different streams have fair access to the bandwidth via a smooth schedule. After the EF packets are finished, four packets from the BE buffer are sent until the BE grant becomes negative, and the frame ends before exceeding the frame length Γ.
TABLE 3
PACKET TRANSMISSION ORDER IN DEFICIT ROUND ROBIN

Round # | G1   | G2   | G3   | G7   | Output sequence        | Total bits
1       | 400  | 100  | ---- | ---- | P11, P12               | 400
2       | 800  | ---- | ---- | ---- | P21                    | 600
3       | ---- | ---- | ---- | ---- | ----                   | 600
4       | 350  | 150  | 50   | ---- | P71, P72, P73          | 1000
BE      | GBE = ----                | -----                  | 1000
Table 3 shows the grant values, the output sequences and the total transmitted bits at the end of each round for Deficit Round Robin (DRR). The DRR output sequence with the same grants is P11, P12, P21, P71, P72, P73, which is not a smooth packet transmission. Quantum Round Robin has transmitted 4250 bits, whereas DRR has transmitted only 1000 bits.
6. TESTING
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program input produces valid outputs, as shown in Table 4.

TABLE 4
TEST CASES

Test ID | Unit to Test | Test Data           | Expected Output               | Actual Output          | Assumption             | Status | Comments
1       | Login        | Username & password | Display client page           | Invalid Username       | Data reader            | Fail   | Use Try catch handler
2       | Source       | ----                | Display the Destination Name  | No Destination details | Connection Error       | Fail   | Create socket
3       | ServerHigh   | ----                | Display Connected system Name | No server details      | Server not yet started | Fail   | Start server first
4       | ServerMid    | ----                | Display Connected system Name | No server details      | Server not yet started | Fail   | Start server first
5       | ServerLow    | ----                | Display Connected system Name | No server details      | Server not yet started | Fail   | Start server first
6       | Scheduler    | ----                | Display the Connection details | No Connection details | Connection Error       | Fail   | Create the Connection
7       | Destination  | ----                | Display the Message details   | No Message details     | Connection Error       | Fail   | Create the Connection
CONCLUSIONS

Quantum Round Robin has the features of using smaller frame lengths and rounds, sending traffic packet by packet in smaller rounds, and reducing the inter-transmission time from the same stream. A buffer per stream is used in each class in order to provide fairness among source routers. Providing a desired end-to-end QoS remains a challenging and interesting issue, as it would require the schedulers in routers along the path to cooperate with one another.
REFERENCES

[1] Miaoyan Li and Bo Song, "Design and Implementation of a New Queue Scheduling Scheme in DiffServ Networks", Proc. IEEE, pp. 117-122, October 2010.
[2] Jingnan Yao and Jiani Guo, "Ordered Round-Robin: An Efficient Sequence Preserving Packet Scheduler", IEEE Trans. on Computers, vol. 57, no. 12, December 2008.
[3] John Musacchio and Shuang Wu, "The Price of Anarchy in Competing Differentiated Services Networks", Proc. IEEE Technology and Information Management Program, WeD3.1, pp. 615-622, September 2008.
[4] Lijie Sheng, Haoyu Waoyu and BaoBao Wang, "Throughput Fairness Round Robin Scheduler for Non-continuous Flows", IEEE Fourth International Conference on Networking and Services, pp. 128-133, September 2008.
[5] Akbar Ghaffar Pour Rahbar and Oliver Yang, "OCGRR: A New Scheduling Algorithm for Differentiated Services Networks", IEEE Trans. on Parallel and Distributed Systems, May 2007.
[6] A.G.P. Rahbar and O. Yang, "The Output-Controlled Round Robin Scheduling in Differentiated Services Edge Switches", Proc. IEEE BROADNETS '05, October 2005.
[7] C. Guo, "SRR: An O(1) Time-Complexity Packet Scheduler for Flows in Multiservice Packet Networks", IEEE/ACM Trans. Networking, vol. 12, no. 6, December 2004.
[8] Woei Lin, Chin-Chi Wu and Chiou Moh, "Efficient and Fair Hierarchical Packet Scheduling using Dynamic Deficit Round Robin", IEEE/ACM Trans. Networking, vol. 12, no. 3, pp. 429-442, June 2004.
[9] Ahmed E. Kamal and Hossam S. Hassanein, "Performance evaluation of prioritized scheduling with buffer management for differentiated services architectures", Computer Networks, no. 46, pp. 169-180, May 2004.
[10] H.M. Chaskar and U. Madhow, "Fair Scheduling with Tunable Latency: A Round Robin Approach", IEEE/ACM Trans. Networking, vol. 11, no. 4, pp. 592-601, August 2003.
[11] Y. Zhang and P.G. Harrison, "Performance of a Priority-Weighted Round Robin Mechanism for Differentiated Service Networks", IEE Electronics Letters, vol. 39, no. 3, February 2003.
[12] C. Zhang and M. MacGregor, "Scheduling Latency-Critical Traffic: A Measurement Study of DRR+ and DRR++", Proc. IEEE High Performance Switching and Routing (HPSR), June 2002.
[13] Y. Ito, S. Tasaka and Y. Ishibashi, "Variably Weighted Round Robin Queueing for Core IP Routers", Proc. IEEE Int'l Performance, Computing, and Comm. Conf. (IPCCC '02), April 2002.
[14] Yu Zhang, "Performance of an Integrated Scheduling of Priority Weighted Round Robin and Strict Priority in DiffServ Networks", IEEE Trans. on Parallel and Distributed Systems, vol. 13, no. 3, pp. 324-336, March 2002.
[15] M. MacGregor and W. Shi, "Deficits for Bursty Latency-Critical Flows: DRR++", Proc. IEEE Eighth Int'l Conf. on Networks (ICON '00), September 2000.
[16] M. Shreedhar and G. Varghese, "Efficient Fair Queuing using Deficit Round Robin", IEEE/ACM Trans. Networking, vol. 4, no. 3, June 1996.
[17] J. Heinanen et al., "Assured Forwarding PHB Group", RFC 2597, June 1999.
INFORMATION ABOUT AUTHOR(S):

D. Rosy Salomi Victoria is an Associate Professor in the Department of CSE, St. Joseph's College of Engineering, Chennai. She has 20 years of teaching experience. She obtained her M.S. from BITS, Pilani and her M.E. from Sathyabama University. She has published two books, on Computer Architecture and on Object Oriented Analysis and Design. She is a life member of ISTE and is currently pursuing a Ph.D. at Anna University, Coimbatore.

S. Senthil Kumar is an Assistant Professor in the Department of EEE, Government College of Engineering, Salem, Tamil Nadu. He has obtained a Ph.D. He is the supervisor of the above author and of many research scholars.
© 2011 JCSE www.journalcse.co.uk