National Conference on System Design and Information Processing – 2013
Performance Evaluation of Efficient Assembly of TCP Traffic for Optical Burst Switching Networks

V. Christy Reena Raj, P.G.V. Ramesh and Dr. Prita Nair

Abstract--- Efficient transmission of Transmission Control Protocol (TCP) traffic over optical burst-switched (OBS) networks is a challenging problem, because the TCP congestion control mechanism is highly sensitive to losses. A TCP-specific traffic profiling and traffic prediction scheme is proposed for optimizing TCP transmission over OBS networks. In the proposed scheme, the burst assembly unit inspects TCP packet headers in parallel with the assembly process and keeps flow-level traffic statistics. These are then exploited to derive accurate traffic predictions over a prediction window of at least one round-trip time. A hash table records the arrival times of packets at the ingress edge node. This allows traffic schedulers to be notified of upcoming traffic changes in advance, so that they can optimally reschedule their resource reservations. The traffic prediction mechanism uses a scheduling algorithm that creates a burst from the packets that have arrived by a given time instant, and the burst size can vary with the traffic load. The paper also provides analytical and simulation results to assess its performance.

Keywords--- TCP Traffic Profile, OBS, Burst Sizing
I. INTRODUCTION
The rapid expansion of the Internet and the ever-increasing demand for multimedia information are severely exposing the limits of current computer and telecommunication networks. There is an immediate need to develop new high-capacity networks capable of supporting these growing bandwidth requirements. First-generation optical network architectures consist of point-to-point WDM links. Such networks are composed of several point-to-point links at which all traffic arriving at a node is dropped, converted from optics to electronics, processed electronically, and converted from electronics back to optics before departing from the node. Second-generation optical network architectures are based on wavelength add-drop multiplexers (WADMs), where traffic can be added and dropped at WADM locations. A WADM allows selected wavelength channels on a fiber to be terminated, while other wavelengths pass through untouched. The amount of bypass traffic in the network is significantly higher than the amount of traffic that needs to be
dropped at a specific node; hence, by using WADMs, the overall network cost can be reduced. WADMs are primarily used to build optical WDM ring networks, which are expected to be deployed in metropolitan areas. In order to build a mesh network consisting of multi-wavelength fiber links, appropriate fiber interconnection devices are needed. Third-generation optical network architectures are based on all-optical interconnection devices. These devices fall into three broad categories: passive star couplers, passive routers, and active switches. The passive star is a broadcast device: a signal arriving on a given wavelength on an input fiber port of the star coupler has its power divided equally among all output ports of the star coupler. A passive router can separately route each of several wavelengths arriving on an input fiber to the same wavelength on different output fibers. The passive router is a static device; thus, its routing configuration is fixed. An active switch also routes wavelengths from input fibers to output fibers and can support simultaneous connections. Unlike a passive router, the active switch can be reconfigured to change the interconnection pattern of incoming and outgoing wavelengths. In these third-generation optical networks, data is allowed to bypass intermediate nodes without undergoing conversion to electronics, thereby reducing the costs associated with providing high-capacity electronic switching and routing capabilities at each node. All-optical systems are expected to provide optical circuit-switched connections, or lightpaths, between edge routers over an optical core network. However, since these optical circuit switching (OCS) connections are static, they may not be able to accommodate the bursty nature of Internet traffic in an efficient manner. To provide the highest possible utilization in the optical core, nodes would need to provide packet switching at the optical level, but optical packet switching is likely to be infeasible in the near future due to technological constraints. Optical burst switching is the alternative to all-optical circuit switching and all-optical packet switching.

OBS Network

In optical burst switching, packets are concatenated into transport units referred to as bursts. The bursts are then switched through the optical core network in an all-optical manner. Optical burst-switched networks allow for a greater degree of statistical multiplexing and are better suited for handling bursty traffic than OCS networks.
V. Christy Reena Raj, (M.E), Applied Electronics, Department of ECE, St. Joseph's College of Engg. E-mail: [email protected]
P.G.V. Ramesh, M.Tech, (Ph.D), Associate Professor, Department of ECE, St. Joseph's College of Engg. E-mail: [email protected]
Dr. Prita Nair, Associate Professor, Department of Physics, SSN College of Engg.
Figure 1: The use of offset time in OBS

Optical burst switching is designed to achieve a balance between optical circuit switching and optical packet switching. In an optical burst-switched network, a data burst consisting of multiple IP packets is switched through the network all-optically. A control packet is transmitted ahead of the burst in order to configure the switches along the burst's route. The offset time in Figure 1 allows the control packet to be processed and the switch to be set up before the burst arrives at the intermediate node; thus, no electronic or optical buffering is necessary at the intermediate nodes while the control packet is being processed. The control packet may also specify the duration of the burst, in order to let the node know when it may reconfigure its switch for the next burst arrival.
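The offset must at least cover the control-packet processing at every hop plus the switch configuration time. The short sketch below only illustrates this relation; the per-hop processing and switch setup figures are assumed values, not parameters given in this paper.

```python
# Illustrative only: per-hop BHP processing and switch setup times are assumed
# values for the sake of the example, not figures taken from the paper.
def base_offset_time(num_hops, bhp_processing_s=50e-6, switch_setup_s=10e-6):
    """Minimum offset between the control packet and its data burst.

    The burst must not arrive at any core node before that node has finished
    processing the control packet and configuring its switch fabric.
    """
    return num_hops * bhp_processing_s + switch_setup_s

if __name__ == "__main__":
    # A burst crossing 4 core nodes needs roughly 210 microseconds of offset
    # under the assumed per-hop figures above.
    print(base_offset_time(num_hops=4))
```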
OBS Network Architecture

An optical burst-switched network consists of optical burst switching nodes that are interconnected via fiber links. Each fiber link is capable of supporting multiple wavelength channels using wavelength division multiplexing (WDM). Nodes in an OBS network can be either edge nodes or core nodes, as shown in Figure 2. Edge nodes are responsible for assembling packets into bursts and scheduling the bursts for transmission on outgoing wavelength channels. The core nodes are primarily responsible for switching bursts from input ports to output ports based on the burst header packets, and for handling burst contentions. The ingress edge node assembles incoming packets from the client terminals into bursts [1-2].

Figure 2: OBS Network Architecture

The assembled bursts are transmitted all-optically over OBS core routers without any storage at intermediate nodes within the core. The egress edge node, upon receiving a burst, disassembles it into packets and forwards the packets to the destination client terminals.

Congestion in OBS

A network is said to be congested when the users of the network collectively demand more resources than the network has to offer. Congestion in a network leads to delayed packet arrivals, packet drops, and even denial of service for certain hosts. In optical burst-switched networks, there exists the possibility that bursts may contend with one another at intermediate nodes. Contention occurs if multiple bursts from different input ports are destined for the same output port at the same time [3]. Typically, contention in traditional electronic packet-switched networks is handled through buffering; however, in the optical domain it is more difficult to implement buffers, since there is no optical equivalent of random-access memory. In this paper, we discuss techniques that minimize congestion and packet dropping so as to make packet transmission efficient.

Burst Assembly

Burst assembly is the process of assembling incoming data from the higher layer into bursts at the ingress edge node of the OBS network. As packets arrive from the higher layer, they are stored in electronic buffers according to their destination and class. The burst assembly mechanism must then place these packets into bursts. In timer-based burst assembly approaches, a burst is created and sent into the optical network at periodic time intervals. A timer-based scheme is used to provide uniform gaps between successive bursts from the same ingress node into the core network. Here, the length of the burst varies as the load varies.

Figure 3: Timeline of the OBS Timer-Based Burst Assembly Algorithm
A burst is created and injected into the OBS network at periodic time intervals in timer-based algorithms. These algorithms use a timer to indicate an assembly cycle of each queue, and a fixed time threshold Tth is the primary criterion to create a burst. Thereby, they provide almost uniform gaps between successive bursts from the same queue into the OBS network; however, the burst length varies with the offered load. To avoid injecting small or even empty bursts, the assembly-time-threshold algorithm can be described as follows:
1. Initialize: Ti = 0.
2. Waiting: when a packet with length pkt_size arrives at queue i, insert the packet into queue i (if queue i is empty, start its timer).
3. For i = 0 to N: if Ti ≥ ξ × Tth, encapsulate all packets of queue i into a burst, store the burst in the electronic buffer, reset Ti = 0, and return to step 2; else, return to step 2.
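The following is a minimal Python sketch of this timer-based assembly loop. The queue and packet structures, and the transmit() callback, are assumptions made for illustration; the actual event-driven timers and optical transmission are abstracted away.

```python
import time
from collections import defaultdict

# Sketch of the assembly-time-threshold check, assuming one queue per destination.
ASSEMBLY_COEFF = 1.0          # encapsulation coefficient (xi)
T_TH = 0.003                  # assembly time threshold Tth, in seconds

queues = defaultdict(list)    # per-destination packet queues
timer_start = {}              # time at which each queue's timer started

def on_packet_arrival(dest, packet):
    """Step 2: insert the packet; start the timer if the queue was empty."""
    if not queues[dest]:
        timer_start[dest] = time.monotonic()
    queues[dest].append(packet)

def assemble_expired_queues(transmit):
    """Step 3: for every queue whose timer reached xi * Tth, form a burst."""
    now = time.monotonic()
    for dest in list(queues):
        if queues[dest] and now - timer_start[dest] >= ASSEMBLY_COEFF * T_TH:
            burst = queues.pop(dest)          # encapsulate all queued packets
            timer_start.pop(dest, None)       # reset Ti = 0
            transmit(dest, burst)
```

Under this scheme the burst length varies with the offered load, while the gap between successive bursts from the same queue stays close to ξ × Tth.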
Time-based algorithms consider only the encapsulation coefficient and the assembly time. The assembly delay (TaT) of time-based algorithms is equal to the product of ξ and Tth. For a given encapsulation coefficient ξ, TaT depends only on Tth; that is, TaT is independent of the real-time traffic. In the timer-based scheme, a timer starts at the beginning of each new assembly cycle; after a fixed time T, all the packets that arrived in this period are assembled into a burst. The timeout value for timer-based schemes should be set carefully: if the value is too large, the packet delay at the edge node might be intolerable; if the value is too small, too many small bursts will be generated, resulting in a higher control overhead.

Characteristics of Residential Broadband Internet Traffic

Although residential broadband Internet access is popular in many parts of the world, only a few studies have examined the characteristics of such traffic. Here we describe observations from monitoring the network activity of more than 20,000 residential DSL customers in an urban area. To ensure privacy, all data is immediately anonymized. We augment the anonymized packet traces with information about DSL-level sessions, IP (re-)assignments, and DSL link bandwidth. Examining the performance and path characteristics of TCP connections shows that most DSL lines fail to utilize their available bandwidth. Examining TCP round-trip times, we found that for many TCP connections the access bandwidth-delay product exceeds the advertised window, thus making it impossible for the connection to saturate the access link. Our RTT analysis also revealed that, surprisingly, the latency from the DSL-connected host to its first Internet hop dominates the WAN path delay: the user's immediate ISP connectivity contributes more to the round-trip times the user experiences than the WAN portion of the path, and the DSL lines are frequently not the bottleneck in bulk-transfer performance.

TCP Implementations in OBS Networks

We consider Reno, New-Reno and Selective Acknowledgements (SACK), the three most common TCP implementations today, in (future) optical burst-switched networks. In general, SACK, which considers multiple triple-duplicate-ACK (TD) losses in one round, is found to perform best in OBS networks, while New-Reno, which improves on Reno in packet-switched networks by fast retransmission in response to partial ACKs, may actually perform worse than Reno. All three TCP implementations react to a timeout (TO) loss in the same way (i.e., using slow start). In OBS networks, where a burst may contain all packets from one round and a burst loss occurs mainly due to contention instead of buffer overflow, such a TO event may no longer imply heavy congestion; in other words, it may be a false TO, or FTO. Such FTOs, which
may be common in OBS networks, especially for fast TCP flows, can significantly degrade the performance of all existing TCP implementations. Accordingly, we also propose a new TCP implementation called Burst TCP (BTCP) which can detect FTOs and react properly and, as a result, improve significantly over the existing TCP implementations. To address the inefficiency of current TCP implementations in dealing with FTOs, BTCP aims to detect FTOs and treat them as TD losses, which is the way these packet losses should be treated. In this section, we describe three FTO detection methods that may be used by BTCP. The first FTO detection method is for a BTCP sender to estimate the maximal number of packets that can be assembled in a burst. Such a method, called burst length estimation (BLE), does not require any changes to OBS networks and is relatively simple to implement. The second method is to let OBS edge nodes send a burst ACK (BACK) to the BTCP sender, containing information about the packets carried in a burst arriving at the ingress and/or egress nodes. This BACK-based method therefore requires that the OBS edge nodes be able to process TCP packets and send BACKs to the BTCP sender, but it achieves a better FTO detection accuracy than BLE. Last but not least, the third method lets a core OBS node at which a burst has to be dropped send the information about the packets contained in the dropped burst to the BTCP senders using a burst NAK (BNAK); this requires OBS core nodes to be able to handle TCP packet processing and NAK sending. The BNAK-based method not only achieves the highest FTO detection accuracy among the three, but also allows the BTCP sender to start TD retransmissions even before an FTO occurs.
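A minimal sketch of the BLE idea only, under the assumption that a timeout whose outstanding window fits into a single burst is classified as a false timeout; the threshold comparison and recovery labels below are illustrative assumptions, not the BTCP implementation itself.

```python
# Sketch of burst-length-estimation (BLE) based FTO classification.
# The segments-per-burst estimate and the reaction labels are assumptions
# made for illustration; the paper does not provide this code.
def classify_timeout(unacked_segments, est_segments_per_burst):
    """Return 'FTO' if the whole outstanding window fits in one burst.

    A single contention loss can then explain all the missing ACKs, so the
    sender reacts as it would to triple-duplicate ACKs instead of falling
    back to slow start.
    """
    if unacked_segments <= est_segments_per_burst:
        return "FTO"      # treat like a TD loss: fast retransmit, halve cwnd
    return "TO"           # genuine timeout: slow start

# Example: 8 outstanding segments, burst estimated to hold 12 segments.
print(classify_timeout(8, 12))   # -> FTO
```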
Congestion Window for TCP Flows in OBS Networks

Burst assembly is one of the factors affecting TCP performance in optical burst switching (OBS) networks. When the TCP congestion window is small, a fixed-delay burst assembler waits unnecessarily long, which increases the end-to-end delay and thus decreases the TCP goodput. On the other hand, when the TCP congestion window becomes larger, the fixed-delay burst assembler may unnecessarily generate a large number of small-sized bursts, which increases the overhead and decreases the correlation gain, again reducing the TCP goodput. Due to recent advances in optical transmission technologies, there is a growing mismatch between extremely high optical transmission rates and the relatively slower switching and processing speeds of optical switches. Optical burst switching (OBS) is a sub-wavelength transfer mode that lies between optical circuit switching and optical packet switching. OBS separates the data and control planes into the optical and electrical domains, respectively, to eliminate the technological problems involved in processing the packet header. In an OBS network, an ingress OBS node assembles data packets emanating from its clients into bursts. A variable-length optical burst is composed of several packets, avoiding small-sized optical packets, so that the requirements for transmission and synchronization in the optical domain can be avoided. Once the optical burst is formed, a control packet which includes the header information is sent, so that the OBS switches along the path from the ingress node to the egress node can be configured beforehand for the incoming burst. When TCP flows over OBS networks are considered, TCP performance is significantly affected by the burst assembly algorithm [3]. The burst assembly mechanism introduces a delay penalty in TCP throughput, since the round-trip times of the TCP flows are increased; this is simply because incoming packets at the ingress node need to wait in the burstifier queue until the optical burst is generated. On the other hand, enlarging the transmission units from single packets to bursts, and thus increasing the number of TCP segments between consecutive loss events, increases the TCP performance; this is the so-called correlation gain.
II. TCP TRAFFIC PROFILING
The TCP profiler provides running online measurements of flow parameters and performance counters, and extracts flow-level traffic statistics. The flow profiler stores an array of counters per sampled flow to measure a set of flow characteristics. After receiving a packet that belongs to the flow sample, it updates one or more of the corresponding counters. These per-flow counters are: 1) flow length, 2) flow RTT, and 3) SPB ratio.
Traffic Model

The network traffic is modeled as the multiplex of a finite number of TCP flows. TCP flow dynamics are a function of the flow congestion window, whose evolution in turn depends on a small set of parameters, i.e., the round-trip time, the segment-per-burst ratio, and the burst loss ratio. The TCP flow profiler detailed in the previous section keeps traffic statistics for these TCP performance parameters as well as the number of active TCP flows. Thus, the aggregated throughput of N active TCP flows can be derived as

R(t) = Σ_{m=1}^{N} X_m(t),

where X_m(t) is the flow rate process that measures the transmission rate of flow m [5].
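A direct transcription of this sum, assuming per-flow rate samples X_m(t) are available from the profiler; the numeric values below are made up for illustration.

```python
# Illustrative: the flow rate samples X_m(t) would come from the profiler's
# per-flow counters (e.g. window/RTT estimates); the values here are invented.
def aggregate_rate(flow_rates_bps):
    """R(t) = sum over the N active flows of X_m(t)."""
    return sum(flow_rates_bps)

sampled_rates = [1.2e6, 0.8e6, 2.5e6]    # three active TCP flows, in bit/s
print(aggregate_rate(sampled_rates))      # predicted aggregate load: 4.5 Mbit/s
```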
Reducing Packet Loss in OBS Networks

Congestion, the main cause of contention in OBS networks, degrades performance because of the saturation of network resources. Contention occurs when multiple bursts are intended for the same outgoing channel of a switch simultaneously. To make the optical network capable of handling enormous Internet traffic, network congestion in OBS networks must be dealt with; the most important concern in OBS networks is the packet loss caused by burst contention. The main contention resolution schemes that have been proposed are optical buffering using fiber delay lines, wavelength conversion, and scheduling mechanisms. The first two techniques, being very sophisticated and immature at the optical hardware level, divert attention towards scheduling techniques. These techniques have been thoroughly investigated, and it is found that most of the schemes are restricted to some specific application or network, rather than offering a flexible solution which properly controls the network traffic volume. Existing conventional schemes discussed in the literature cannot be considered fully preventive for reducing burst dropping: instead of avoiding losses early, they adjust the burst sending rate only after contention or burst loss has occurred. Thus, to improve the burst loss performance of the optical network, we need an efficient scheduling and contention-control method in which the network load is scheduled to maximize the throughput rate. To overcome the restrictions of the previously discussed contention resolution techniques, we propose a new scheduling algorithm. In this technique, the packets arriving by a given time are formed into a burst and transmitted to the core nodes. Information about the various traffic flows is sent back to the previous core nodes, and finally to the ingress edge routers, as an acknowledgment, so that the ingress edge node can deflect the existing bursts accordingly and adjust the subsequent burst sending rate according to the available bandwidth in the network. In an OBS network, the edge routers have all the capability and intelligence, while the core routers perform a simple cut-through switching task; the feedback information from the network can therefore be used to control congestion. In the proposed contention resolution scheme, each ingress node adjusts its burst sending rate based on the congestion information received from the network.

TCP over OBS

Transmission Control Protocol (TCP) transmission over optical burst-switched (OBS) networks has been studied with various burst assembly and burst scheduling algorithms to enhance the efficiency of TCP-over-OBS transmission. However, it still remains an open problem, since the relatively high burst loss ratio experienced in OBS networks is incompatible with the TCP congestion control mechanism. Enhancing TCP throughput over an OBS network is critical, since TCP traffic represents a dominant part of Internet traffic.

Key Issues of TCP over OBS

Several works analyze the performance of TCP in OBS networks. This performance is closely related to the retransmission timeout (RTO) of TCP and to how the different versions of TCP treat this event. The rate of a TCP connection can be approximated by the TCP transmission window size divided by the round-trip time (RTT). The RTT of a connection crossing an OBS network is related to the values assigned to the burst size and burst timeout parameters: the RTT decreases with these parameters, since it takes less time to send the OBS payload. The burst size is the dominant parameter for a heavily loaded destination egress node, because OBS bursts are sent as soon as the burst size is reached. On the other hand, the burst timeout is the dominant parameter on a link with low traffic load, because data is delivered only when the burst timeout timer expires. But it must also be taken into consideration how the OBS network can affect the TCP
window size. The TCP window size is adjusted, in a way that depends on the TCP version used, upon two events: RTO and duplicated ACKs, both produced by packet losses. Packet losses in an OBS network are due to OBS burst drops when the bursts cannot be allocated a free transmission time. This blocking probability depends on the wavelengths available for data and signalling, on the patterns of the incoming traffic, and on the burst timeout and burst size parameters; it is not easy to measure how all these parameters affect the window size and throughput without simulation. Traffic profiling is vital in commercial enterprise-class networks, because it serves as a basis for network management and capacity planning, and for gaining insight into user-perceived performance. Monitoring devices are typically deployed at the network edge to capture packets from monitored flows and extract flow-level traffic statistics. A monitoring device operates in a passive mode (i.e., it does not inject or alter traffic but measures existing traffic passing through), and it is able to extract detailed traffic statistics at the network points where traffic measurements are carried out. These statistics provide flow performance variables such as throughput and loss ratio [4].

Flow Sampling Architecture

Traffic measurements are typically performed on sampled packet sub-streams, a process called traffic sampling, in order to limit resource consumption. These measurements are subsequently used to compile flow-level statistics of active TCP flows. The TCP flow profiler proposed in this work employs the hash-based sampling technique, which is unbiased and has a straightforward and efficient implementation. This technique performs a random (Bernoulli) selection of flows and retains all packets received from sampled flows. Selected (or sampled) flows are called the flow sample, and they account for a small percentage (typically 1/1000) of all active flows. The flow sample is stored in a hash table, which maintains a set of per-flow counters updated on a packet-by-packet basis. This hash table is indexed by the flow tuple. For every packet received by the burstifier, the flow profiler determines whether a flow record is active for that flow ID. If a flow record is active, the flow statistics are updated on reception of the packet. If not, the profiler must decide whether the packet will be retained. Since hash algorithms are designed to evenly distribute a stream of values, the flow ID can be assumed uniformly distributed and thus serves as an unbiased criterion for flow selection. Hash-based sampling is carried out in parallel with the burst assembly process, in which the packet headers are inspected in order to be assigned to the appropriate assembly queue.
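A sketch of the hash-based selection step follows. The flow-tuple fields and the 1/1000 sampling ratio come from the text; the specific hash function, bucket rule, and record layout are assumptions made for illustration.

```python
import hashlib
from dataclasses import dataclass

N_BUCKETS = 1000               # ~1/1000 of flows are sampled, as in the text

@dataclass
class FlowRecord:
    length: int = 0            # TCP data packets seen so far
    rtt: float = 0.0           # estimated once, from the three-way handshake
    spb: float = 0.0           # segments-per-burst ratio (updated at assembly)

flow_table = {}                # hash table keyed by the hashed flow tuple

def flow_id(src, dst, sport, dport):
    """Hash the flow tuple so the IDs are (approximately) uniformly spread."""
    key = f"{src}:{sport}-{dst}:{dport}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:8], "big")

def on_packet(src, dst, sport, dport):
    """Bernoulli-style selection on the hashed flow ID; update if sampled."""
    fid = flow_id(src, dst, sport, dport)
    if fid in flow_table:
        flow_table[fid].length += 1          # flow already in the sample
    elif fid % N_BUCKETS == 0:               # roughly 1/1000 of flow IDs match
        flow_table[fid] = FlowRecord(length=1)
    # otherwise the packet belongs to an unsampled flow and is ignored
```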
Flow Performance Counters

The flow profiler stores an array of counters per sampled flow to measure a set of flow characteristics. After receiving a packet that belongs to the flow sample, it updates one or more of the corresponding counters. These per-flow counters are as follows:
1) Flow length: the number of TCP data packets transmitted by the flow; it is incremented on a packet-by-packet basis.
2) Flow RTT: the round-trip time from the flow source to the destination; it is calculated once, during the three-way handshake period. The profiler must keep track of the time each control packet (SYN, SYN-ACK, and ACK) was received; the flow RTT is then estimated as the sum of the profiler-to-server and client-to-profiler round-trip times.
3) SPB ratio: the number of segments of the flow assembled in a burst. The flow SPB is determined by the flow sending rate and the duration of the burst assembly timer (Tbat).
The storage requirement for each per-flow record is less than 100 bytes, since only the above flow counters and the flow ID have to be stored. In this paper a timer-based burst assembly technique is used, and control information is transmitted on a separate wavelength reserved only for control messages. In timer-based burst assembly approaches, a burst is created and sent into the optical network at periodic time intervals; a timer-based scheme provides uniform gaps between successive bursts from the same ingress node into the core network, while the burst length varies with the load.

Reservation of Resources Based on Prediction

The prediction of future burst sizes using the sampling method detailed in the previous sections can be used to notify the core nodes in advance of upcoming traffic changes per prediction interval. In this section, we propose a resource reservation scheme in which edge nodes send a single SETUP (or refresh) message per end destination per prediction interval, in order to communicate to the core nodes across the path the aggregate traffic of the next prediction interval. This allows core nodes to reschedule their resources in view of the updated (predicted) traffic conditions; in this way, link utilization can be optimized and burst losses minimized. A number of one-way reservation schemes have been proposed for OBS networks, including just-enough-time (JET), just-in-time (JIT), and Horizon. Resource reservation in one-way OBS networks is typically handled by link schedulers, which assign wavelengths to incoming bursts based on an online scheduling algorithm. However, online scheduling algorithms such as latest available unscheduled channel (LAUC) are best-effort in nature and often make sub-optimal decisions. Figure 4 shows the working of the LAUC algorithm. The scheduler maintains a table containing, for each wavelength, the latest time up to which that wavelength is already scheduled. It then picks a wavelength which is available and has the latest such scheduled time [5-7]. This scheduler is the easiest to implement compared to all other scheduling algorithms.
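A sketch of the LAUC decision just described, assuming the scheduler keeps only the latest scheduled (horizon) time per wavelength; void-filling variants would need additional per-gap state and are not shown.

```python
# Sketch of latest available unscheduled channel (LAUC) selection.
# The per-wavelength "horizon" table is the only scheduler state assumed here.
def lauc_select(horizons, burst_start):
    """Pick the free wavelength whose horizon is latest but still <= burst_start.

    horizons: dict mapping wavelength id -> latest time already scheduled on it.
    Returns the chosen wavelength id, or None if the burst must be dropped.
    """
    feasible = {w: t for w, t in horizons.items() if t <= burst_start}
    if not feasible:
        return None
    # The latest horizon leaves the smallest idle gap (void) before the burst.
    return max(feasible, key=feasible.get)

horizons = {0: 1.0e-3, 1: 2.4e-3, 2: 3.1e-3}       # times in seconds
print(lauc_select(horizons, burst_start=2.5e-3))    # -> wavelength 1
```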
Figure 4: The LAUC Algorithm

At the reception of a burst header packet (BHP), the link scheduler searches for unscheduled channels that can service the corresponding burst and selects the best one available at that time. Since the scheduler has no information about future arrivals, bursts are never blocked as long as a feasible wavelength can be found, even if dropping a burst would maximize the overall number of accepted ones. A SETUP message per source-destination pair is sent at the beginning of each prediction interval to notify the core routers of the upcoming burst sizes. Thus, core routers can use batch scheduling algorithms, achieving a decreased burst loss ratio due to the more efficient use of resources. It is important to note that the use of one SETUP message does not affect the different classes of service that may be employed in the network; the single SETUP message is enough to carry the burst overhead information (burst start time, predicted burst size) for all priority classes per source-destination pair. The proposed reservation protocol is based on the observation that the arrival times of the bursts at the core nodes are periodic, with a period equal to the burst assembly time (Tbat). This periodicity allows burst arrival times at individual core nodes to be accurately predicted.
Figure 5: Timing considerations of the proposed predictive reservation protocol

The proposed reservation protocol augments the standard one-way JET signaling with a SETUP message that carries the predicted burst size. One such message is sent per source-destination pair at the beginning of the prediction interval. As seen in Figure 5, the SETUP message propagates downstream and notifies the link schedulers along the path of the burst arrival times as well as the predicted burst sizes for the following prediction interval. At the reception of this SETUP message by a core node, resources are reserved for all the bursts of the prediction interval, while the actual bursts arrive at a later time. This scheme allows schedulers to obtain a priori knowledge of all bursts competing for each outgoing link and to use a more efficient batch channel scheduling algorithm. The wavelength allocation in each link scheduler is performed as soon as a new SETUP message is received (Figure 8 shows the acknowledgment received about the reception of a burst). Upon reception of the SETUP message, a scheduling algorithm is executed, looking for a scheduling solution that satisfies the requirements of all source-destination pairs competing for the same outgoing link. In order to accommodate the updated (or new) burst size, rescheduling of previously scheduled bursts on a different wavelength, or even burst dropping, may be required. The arrival times of the bursts at the core nodes are periodic, with a period of Tbat [5]. Thus, the link-state timing information can be mapped into a [0, Tbat) interval using modular (residue) arithmetic. The timing information of burst bi, i.e., its start time Si and its end time Ei, is represented as

Si = start_time(i) mod Tbat
Ei = (Si + length(i)) mod Tbat

where start_time(i) and length(i) are the arrival time and the predicted length of burst bi.
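Because arrivals repeat with period Tbat, the two formulas fold every burst into a single [0, Tbat) window. A direct transcription follows; the numeric values are assumed for illustration only.

```python
T_BAT = 0.003   # assembly period in seconds (Tbat = 3 ms, as in the evaluation)

def burst_interval(start_time, length):
    """Map a burst's start and end times into the [0, T_BAT) window."""
    s = start_time % T_BAT            # Si = start_time(i) mod Tbat
    e = (s + length) % T_BAT          # Ei = (Si + length(i)) mod Tbat
    return s, e

# A burst arriving at t = 10.0042 s with a predicted length of 0.5 ms:
print(burst_interval(10.0042, 0.0005))
```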
III. PERFORMANCE EVALUATION
In this section, we evaluate the proposed TCP-specific prediction mechanism, as well as the proposed predictive reservation protocol, using the ns-2 simulation platform. In this experiment, the TCP flow profiler is evaluated as a prediction mechanism in an OBS network using a simple three-node topology, and the performance gains obtained from the TCP prediction mechanism are measured.

Evaluation of the Traffic Predictor

To evaluate the proposed TCP-specific profiling and prediction mechanism, a simple three-node network topology was modeled, consisting of two edge nodes and one core node. A timer-based burst assembly scheme was implemented at the edge nodes with a time threshold (Tbat) of 3 ms, while a JET signaling scheme was employed for one-way resource reservations. The network round-trip time was set to 15 ms, while all clients had an access rate uniformly selected among 20 Mbps, 50 Mbps, and 100 Mbps.
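For reference, the evaluation settings quoted above can be collected as follows; this is only a summary of the stated values, not the actual ns-2 script used by the authors.

```python
# Summary of the simulation settings stated in the text (not an ns-2 script).
SIMULATION = {
    "topology": "2 edge nodes + 1 core node",
    "burst_assembly": "timer-based",
    "t_bat_ms": 3,                              # assembly time threshold Tbat
    "signaling": "JET (one-way reservation)",
    "network_rtt_ms": 15,
    "client_access_rates_mbps": [20, 50, 100],  # uniformly selected per client
}
```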
Figure 6: ns-2 simulation showing the network nodes
Figure 7: Transmission of a burst from node0 to node3
Figure 8: The acknowledgment (shown in red)
Figure 9: Transmission of an acknowledgment from node1
Figure 10: Successful transmission of a burst without drops

REFERENCES
[1] C. Qiao and M. Yoo, "Optical burst switching (OBS)—A new paradigm for an optical Internet," J. High Speed Netw., vol. 8, no. 1, pp. 69–84.
[2] X. Yu, C. Qiao, Y. Liu, and D. Towsley, "Performance evaluation of TCP implementations in OBS networks," Tech. Rep. 2003-13, CSE Department, SUNY Buffalo, Buffalo, NY, 2003.
[3] B. Shihada, P.-H. Ho, and Q. Zhang, "A novel congestion detection scheme in TCP over OBS networks," J. Lightwave Technol., vol. 27, no. 4, pp. 386–395, 2009.
[4] Q. Zhang, V. M. Vokkarane, Y. Wang, and J. P. Jue, "Analysis of TCP over optical burst-switched networks with burst retransmission," in Proc. IEEE GLOBECOM, vol. 4, pp. 1–6, 2005.
[5] K. Ramantas and K. Vlachos, "A TCP-specific traffic profiling and prediction scheme for performance optimization in OBS networks," J. Opt. Commun. Netw., vol. 3, no. 12, Dec. 2011.
[6] R. R. C. Bikram, N. Charbonneau, and V. M. Vokkarane, "Coordinated multi-layer loss recovery in TCP over optical burst-switched (OBS) networks," in Proc. IEEE Int. Conf. Communications, 2010, pp. 1–5.
[7] E. Arkin and E. Silverberg, "Scheduling jobs with fixed start and end times," Discrete Appl. Math., vol. 18, no. 1, pp. 1–8, 1987.