A Simulation Study of the Effect of MPLS on Latency over a Wide Area Network (WAN)

Adeyinka A. Adewale, Samuel N. John, and Charles Ndujiuba
Department of Electrical and Information Engineering, Covenant University, Ota, Ogun State, Nigeria
Abstract - A scenario of a service provider network under heavy congestion due to heavy traffic flow is used to confirm the ability of MPLS to significantly improve latency, especially in a highly congested network. The service provider network was congested by sending heavy packet streams through the ingress and egress ports of the network. The network was simulated, and ping tests were carried out from a company's head office to one of its branch offices via the service provider network, first without MPLS implemented and then with MPLS enabled. Graphical results show that latency was reduced drastically with MPLS engaged as the packet-forwarding technique on the same Wide Area Network (WAN).

Keywords: Service provider network, Latency, MPLS, Congestion
1 Introduction
Ethernet latency can be defined as the time it takes for a network packet to get to a destination, or the time it takes to come back from its destination [1]. It is the delay from the start of packet transmission at the sender to the end of packet reception at the receiver, and it also refers to the time taken to deliver an entire data unit (packet) [2]. Latency in a packet-switched network is stated as either one-way latency or round-trip time (RTT). One-way latency is the time required to send a packet from the source to the destination; it can be estimated as the RTT divided by two (RTT/2), i.e., the one-way latency from the source to the destination plus the one-way latency from the destination back to the source, divided by two [1][11]. Latency also refers to any of several kinds of delays typically incurred in the processing of network data, such as the time an application must wait for data to arrive at its destination. Ethernet latency is also known as end-to-end latency, the cumulative effect of the individual latencies along the end-to-end network path. Since latency is cumulative, adding more links and routers between the sender and receiver will increase it. Routers create the most latency of any device on the end-to-end path, and packet queuing due to link congestion is most often the reason for large amounts of latency through a router [5]. A low latency network is one that generally experiences small delay times, which makes a fast internet connection possible, while a high latency network generally
suffers from long delay times, which results in a slow internet connection [3]. Latency affects all network applications to some degree; how much depends on the application's programming model. Latency impacts an application's performance by forcing the application to stall while waiting for the arrival of a packet before it can continue to the next step in its processing. Excessive latency creates bottlenecks that prevent data from filling the network pipe (bandwidth) and delays packet arrival, limiting the performance of network applications [5]. This hinders the high-quality network performance needed by time-sensitive applications such as VoIP, online games, and algorithmic trading [1]. Latency is one of the two key elements that affect a network's performance, alongside bandwidth, which is the capacity of the network [4]. Speed and capacity are networking concepts that are commonly misunderstood: latency describes how fast an internet connection is, while how much data can be transmitted per second is determined by the bandwidth [6]. When it comes to the web browsing experience, it turns out that latency, not bandwidth, is the more likely constraining factor at present [7]. A large-bandwidth connection only gives the ability to send or receive more data in parallel, not faster, as the data still needs to travel the distance and experience the normal delay [8]. MPLS (Multi-Protocol Label Switching) is an internet-based technology that uses short, fixed-length labels to forward packets through the network. MPLS has the attributes of both layer 2 switching and layer 3 routing, which makes it a very efficient protocol [9][10]. "Label switching" indicates that the packets are no longer switched as IPv4 packets, IPv6 packets, or even layer 2 frames; they are switched as labeled packets [12]. The MPLS labels are advertised between routers so that they can build a label-to-label mapping.
These labels are attached to the IP packets, enabling the routers to forward the traffic by looking at the label and not the destination IP address; the packets are forwarded by label switching instead of by IP routing [12]. The fact that MPLS uses labels, and no longer the destination IP address, to forward packets has contributed to its popularity [12]. Multi-protocol labeled packets are switched after a label lookup and swap instead of a lookup in the IP routing table. Label lookup and label switching in MPLS are faster than a routing table or RIB (Routing Information Base) lookup because they can take place directly within the switching fabric rather than in the CPU [14], [15]. Devices used in an MPLS network include customer-edge (CE) routers, the network devices at the customer location that interface with the service provider; provider-edge (PE) routers, the devices at the edge of the service provider network that interface with the customer devices; and the provider or core (P) routers, the devices that build the core of the MPLS-enabled network. The PE devices are often also called label switching router edge (LSR-Edge) devices because they sit at the edge of the MPLS-enabled network, while the provider routers' main function is to label-switch traffic based on the outermost MPLS label imposed on each packet, for which reason they are often referred to as label switching routers (LSRs) [13].
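To make the label-switching idea concrete, the sketch below contrasts a plain-Python model of an LSR's label-to-label mapping with a traditional longest-prefix IP lookup. The table contents, label values, and interface names are illustrative assumptions, not taken from the simulated network.

```python
import ipaddress

# Illustrative routing table for a traditional IP lookup (longest-prefix match).
ip_routes = {
    ipaddress.ip_network("10.1.0.0/16"): "eth0",
    ipaddress.ip_network("10.1.2.0/24"): "eth1",
}

def ip_lookup(dst: str) -> str:
    """Longest-prefix match over the whole routing table (repeated at every hop)."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ip_routes if addr in net]
    return ip_routes[max(matches, key=lambda n: n.prefixlen)]

# Illustrative label forwarding table of one LSR:
# incoming label -> (outgoing label, outgoing interface).
lfib = {
    16: (17, "eth1"),
    18: (20, "eth0"),
}

def label_switch(in_label: int) -> tuple:
    """Exact-match lookup: swap the incoming label and forward."""
    out_label, iface = lfib[in_label]
    return out_label, iface

print(ip_lookup("10.1.2.5"))   # longest prefix (the /24) wins: eth1
print(label_switch(16))        # label 16 is swapped for 17, out eth1
```

The exact-match dictionary lookup models why label switching avoids the per-hop longest-prefix search that IP routing requires.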
2 Network design

The scenario of the service provider network used to show the effect of MPLS was simulated with the GNS3 software. GNS3 (Graphical Network Simulator 3) is a network simulator that allows the emulation of complex networks and gives the user a realistic feel when configuring the various devices. It is a good tool for testing and implementing new infrastructure and devices in an existing architecture, and it is an open-source, free program that may be used on multiple operating systems [14]. GNS3 provides an estimated throughput of 1,000 packets per second in a virtual environment; a physical router provides a hundred to a thousand times greater throughput. The devices used in this scenario include routers (customer routers, provider-edge routers, and core routers), Ethernet switches (access and core switches), IP phones, laptop computers, and printers. The simulated service provider network is made up of five companies (A-E) and their respective branches, connected to the service provider via optical fiber cables as shown in Figure 1.

Figure 1: Companies (LANs) and branches (LANs) connected to the service provider network

The topology area in Figure 2 below shows the design of the local area network (LAN) of one company's headquarters connected to the LAN of one of its branches via the service provider network. The LAN at the company's headquarters consists of the customer router and five Cisco Ethernet switches representing five Virtual Local Area Networks (VLANs) for five departments, each consisting of IP phones, laptop computers, and printers. The main office (headquarters) is linked with the service provider office and the branch office via optical fiber, while the LANs at the companies and their branches use Fast Ethernet twisted-pair cables. The LAN at a branch office is the same as the LAN at the headquarters.

Figure 2: Network topology area

The local area network (LAN) of each branch and headquarters is linked with the service provider network via the provider-edge (PE) routers, while the core routers swap the labeled packets within the service provider (MPLS) network.

3 Implementation

Figure 3 below shows the running configuration of one of the customer-edge routers.
Figure 3: Running configuration on the customer-edge router

Figure 4 below shows all the working interfaces and their IP addresses on the customer-edge router, using the 'show ip interface brief' command.

Figure 4: Configured interfaces on the customer-edge router

Figure 5 below shows the directly connected routes of the customer-edge router and the EIGRP (Enhanced Interior Gateway Routing Protocol) learned routes that together form its routing table, which the router uses to route packets across the network. These routes can be viewed with the 'show ip route' command.

Figure 5: IP routes on the customer-edge router

The provider-edge/customer-edge routers and the provider/core routers in the service provider network were configured with IP addresses, and MPLS was disabled and re-enabled on them at different points of the testing process. Figure 6 below shows the IP address configuration of the provider-edge router.

Figure 6: IP address configuration on the provider-edge router

Figure 7: MPLS configuration on the core router

Figure 8: Disabling of MPLS (using the 'no mpls ip' command)

Figure 9 below shows the directly connected routes of the core router and the EIGRP learned routes that form the routing table the router uses to route packets across the network. This configuration can be viewed with the 'show ip route' command.

Figure 9: IP routes of the core router

Figure 10 below shows all the working interfaces on the core router and their IP addresses, using the 'show ip interface brief' command.

Figure 10: Configured interfaces on the core router

Figure 11: MPLS forwarding table on the core router

Figure 12: MPLS forwarding table on the core router

Figure 13: A ping test from headquarters to branch office

After the ping test, 6000 packets were sent from PE-router 1 to PE-router 2, repeated 10000 times, with MPLS disabled. This keeps the service provider network congested for the period it takes PE-router 1 to send the 6000 packets to PE-router 2.

Figure 14: Traffic build-up on PE-router 1

6000 packets were likewise sent from PE-router 2 to PE-router 1, repeated 10000 times, which keeps the network congested for the same period.

Figure 15: Traffic build-up on PE-router 2

A ping test was then carried out from the headquarters to the branch office and latency was calculated from the round-trip time (RTT) as Latency = RTT/2. The process was repeated with MPLS enabled, and the latency values were recorded. The simulated network was then tested again with network congestion levels of 9000, 12000, 15000, and 18000 packets, each sent 10000 times, recording the latency values with MPLS disabled and with MPLS enabled.
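The testing procedure just described (congest the core, ping, take latency as RTT/2, average over five trials, and repeat per congestion level with MPLS disabled and enabled) can be sketched as follows. The RTT value here comes from a placeholder stub; in the actual study the RTTs come from ping tests on the GNS3 topology.

```python
def one_way_latency(rtt_ms: float) -> float:
    """One-way latency estimated as half the round-trip time (RTT/2)."""
    return rtt_ms / 2.0

def average_delay(measure_rtt, trials: int = 5) -> float:
    """Average one-way latency over a number of ping trials."""
    delays = [one_way_latency(measure_rtt()) for _ in range(trials)]
    return sum(delays) / len(delays)

def fake_ping_rtt() -> float:
    # Placeholder standing in for a real ping across the congested core;
    # returns a round-trip time in milliseconds.
    return 750.0

congestion_levels = [6000, 9000, 12000, 15000, 18000]
results = {"disabled": {}, "enabled": {}}
for mode in results:
    for packets in congestion_levels:
        # In the study: congest the core by sending `packets` packets
        # between PE-router 1 and PE-router 2, repeated 10000 times,
        # with MPLS disabled or enabled, then run the ping tests.
        results[mode][packets] = average_delay(fake_ping_rtt)

print(results["disabled"][6000])  # 375.0 with the 750 ms placeholder RTT
```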
4 Testing and results
Network tools like ping and traceroute were used to measure latency by determining the time it takes a given network packet to travel from source to destination and back (the RTT) and then dividing that time by two (RTT/2); the most common technique used is the ping test.
4.1 Measuring network latency
After the network was designed and simulated, a ping test was carried out from the headquarters to the branch office to confirm that the branch office could be reached from the headquarters via the service provider network, and vice versa.
Figure 16: Ping tests from HQ to branch office (MPLS disabled)
Figure 17: Ping test from HQ to branch office (MPLS enabled)

The results of the simulation and tests are tabulated in Table 1 below. Figure 18 and Figure 19 are graphical realizations of the results. The graphical comparison shows a sharp rise in latency as the network becomes congested, but the reverse is the case when MPLS is enabled, which shows a drastic reduction in latency. It can also be inferred from the graph (Figure 19) that the latency with MPLS enabled decreases with increasing core network congestion, which implies that MPLS is a good technology for a congested WAN core network.

Table 1: Results of traffic congestion with different numbers of test packets

Delay (RTT/2) in ms (MPLS Disabled)
Test         6000     9000     12000    15000    18000
1st          331      401      543.5    749      607.5
2nd          301.5    382.5    511      558      554
3rd          420      394      491.5    742.5    530
4th          431.5    450      511.5    533.5    690
5th          390.5    380      501      649.5    550
Avg (D/5)    374.9    401.5    511.7    646.5    586.3

Delay (RTT/2) in ms (MPLS Enabled)
Test         6000     9000     12000    15000    18000
1st          142.5    157      188.5    158      208
2nd          157.5    151.5    195.5    159.5    163
3rd          244      246.5    196      144.5    144
4th          226      200      298      260      133.5
5th          169      190      174      142.5    131.5
Avg (D/5)    187.8    189      210.4    172.9    156
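As a quick consistency check on Table 1, the snippet below recomputes each column's average delay (the Avg row) from the five recorded trials. The values are transcribed from the table; the dictionary layout is just an illustrative way of holding them.

```python
# Per-column trial delays (ms) transcribed from Table 1,
# keyed by the number of congestion test packets.
disabled = {
    6000:  [331, 301.5, 420, 431.5, 390.5],
    9000:  [401, 382.5, 394, 450, 380],
    12000: [543.5, 511, 491.5, 511.5, 501],
    15000: [749, 558, 742.5, 533.5, 649.5],
    18000: [607.5, 554, 530, 690, 550],
}
enabled = {
    6000:  [142.5, 157.5, 244, 226, 169],
    9000:  [157, 151.5, 246.5, 200, 190],
    12000: [188.5, 195.5, 196, 298, 174],
    15000: [158, 159.5, 144.5, 260, 142.5],
    18000: [208, 163, 144, 133.5, 131.5],
}

def averages(columns: dict) -> dict:
    """Average delay (D/5) per congestion level, rounded to one decimal."""
    return {k: round(sum(v) / len(v), 1) for k, v in columns.items()}

print(averages(disabled))  # 6000 -> 374.9, matching the table's Avg row
print(averages(enabled))
```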
Figure 19: Graph of latency against the number of packets, for the MPLS-enabled and MPLS-disabled scenarios
5 Conclusion
Results obtained from the simulation show that latency in a plain IP network, that is, with MPLS disabled, rises sharply as the number of packets in the core of the service provider network increases, while it drops drastically when MPLS is enabled, even as the number of packets in the core increases. From the simulation, it is clear that MPLS is a better technique for improving latency than traditional IP routing, in that it takes less time to send data from a source to its destination. MPLS would therefore make the current internet architecture more efficient if applied to it. Moreover, the potential of MPLS technology is still untapped in most parts of the world with respect to the services it can provide, such as MPLS VPN (Virtual Private Network) and MPLS TE (Traffic Engineering), among others. Enterprises and service providers can improve the rate at which they achieve business targets by implementing and maximizing the capabilities of MPLS in their networks.
6 References
[1] "Introduction to Ethernet Latency". Aliso Viejo, CA: QLogic Corporation, 2011.
[2] "Low latency", Wikipedia. Retrieved March 30, 2014, from http://en.wikipedia.org/wiki/Low_latency
[3] Horney, C., “Quality of Service and Multi-Protocol Label Switching”. Irvine, CA: Nuntius Systems Inc., 2014
Figure 18: Bar chart of latency against the number of packets, for the MPLS-enabled and MPLS-disabled scenarios
[4] Mitchell, B., "Network Bandwidth and Latency", About.com Wireless/Networking. Retrieved March 28, 2014, from http://compnetworking.about.com/od/speedtests/a/network_latency.htm
[5] Smutz, E., "Network Latency". Retrieved March 30, 2014, from http://smutz.us/techtips/NetworkLatency.html
[6] "Latency versus Bandwidth - What is it?", DSL FAQ, DSL Reports ISP Information. Retrieved March 30, 2014, from http://www.dslreports.com/faq/694
[7] Grigorik, I., "Latency: The New Web Performance Bottleneck", July 2012. http://www.igvita.com/2012/07/19/latency-the-new-web-performance-bottleneck/
[8] Hoffman, B., "Bandwidth, Latency, and the Size of your Pipe", December 2011. http://zoompf.com/blog/2011/12/idont-care-how-big-yours-is
[9] Teng, Y., "Multi-Protocol Label Switching", lecture, Computer Science and Engineering, University of Maryland Baltimore County, Baltimore, MD, March 2010.
[10] "Traffic Engineering and QoS with MPLS and its applications", http://utta.csc.ncsu.edu/csc570_fall10/wrap/MPLS_Report.pdf, 2014.
[11] "Latency (engineering)", Wikipedia. http://en.wikipedia.org/wiki/Latency_(engineering), 2014.
[12] De Ghein, L., "MPLS Fundamentals". Indianapolis, IN: Cisco Press, 2007.
[13] "MPLS Basic MPLS Configuration Guide". San Jose, CA: Cisco Systems, Inc., 2008.
[14] Welsh, C., "GNS3 Network Simulation Guide". Birmingham, UK: Packt Publishing, October 2013.