SONEP: A Software-Defined Optical Network Emulation Platform

Siamak Azodolmolky∗, Martin Nordal Petersen†, Anna Manolova Fagertun†, Philipp Wieder∗, Sarah Renée Ruepp† and Ramin Yahyapour∗

∗Gesellschaft für Wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG), Göttingen 37077, Germany, E-mail: {Siamak.Azodolmolky, Philipp.Wieder, Ramin.Yahyapour}@gwdg.de
†Technical University of Denmark (DTU) Fotonik, Oersteds Plads 343, 2800 Kongens Lyngby, Denmark, E-mail: {mnpe, anva, srru}@fotonik.dtu.dk

Abstract—Network emulation has been one of the tools of choice for conducting experiments on commodity hardware. In the absence of an easy-to-use optical network test-bed, researchers can significantly benefit from the availability of a flexible and programmable optical network emulation platform. Exploiting lightweight system virtualization, which has recently become available in modern operating systems, in this work we present the architecture of a Software-Defined Network (SDN) emulation platform for transport optical networks and investigate its usage in a use-case scenario. To the best of our knowledge, this is the first time that an SDN-based emulation platform has been proposed for modeling and performance evaluation of optical networks. Coupled with the recent trend of extending SDN towards transport (optical) networks, the presented tool can facilitate the evaluation of innovative ideas before actual implementation and deployment. In addition to the architecture of SONEP, a use-case scenario evaluating the quality of transmission (QoT) of alien wavelengths in transport optical networks, along with performance results, is reported in this work.

Index Terms—Network emulation, Software-defined network, Transport SDN, Emulated WDM, Linux namespaces

I. INTRODUCTION

Significant advances in optical technologies enable transport networks to provide more intelligent and flexible multiplexing and switching functions in addition to basic data transport and survivability. Transport network elements are also becoming more intelligent and flexibly manageable. The growth in traffic volumes, changing traffic profiles, and new types of applications have prompted service providers to rethink not only how to optimally engineer their IP and optical backbone networks, but also how to ease their operational and management overhead. Starting from campus Ethernet networks [1], the next step for software-defined networking (SDN) ([2], [3] to name a few) was the data center. Centralized SDN controllers have demonstrated value by analyzing the entire data center network as an abstracted and virtualized representation, simulating any network changes ahead of time and then automatically configuring multiple switches on demand. Faster and more deterministic convergence compared to a distributed control plane, increased operational efficiency (with links running at 95% utilization instead of 25%), and rapid innovation supported by simulating the network in software before actual deployment are key benefits of SDN in the WAN, as reported for Google's G-scale network [4], [5].

Fig. 1. Transport SDN can provide an abstract view of a multi-layer network as a single, flat network topology graph [7].

Extending the notion of SDN from packet switching (L2+) to transport (circuit switching, L0/L1) networks can help SDN demonstrate its real power by managing multiple elements from multiple vendors across multiple layers of the network, including the IP and Ethernet layers, the OTN switching layer, and down to the optical transmission layer [6]. In order for SDN to be truly useful in multi-domain, multi-vendor, and multi-layer networks, it needs to extend its control to include the emerging next-generation converged optical transport networks. Efficiently operating multi-layer, multi-vendor, and multi-domain networks is the key challenge on the path of SDN from the packet domain towards carrier (transport) networks. Transport SDN, as depicted in Fig. 1, can represent a multi-layer network as a single virtualized, abstracted topology [7]. While there are several software tools and utilities, in particular network emulation tools [8], for researchers in the packet switching domain of SDN/OpenFlow, researchers interested in modeling transport optical networks have no access to a similar tool-chain for evaluating their innovative ideas in an emulation environment. Network emulation can be an attractive, cost-effective tool for network researchers. In this work we report our experience with the design and development of a software-defined optical network emulation platform (SONEP), which can be used for early design and evaluation of SDN-based optical networks. To the best of our knowledge, this is the first time that an effort has been dedicated to the design and development of an SDN-based optical network emulation platform.

Furthermore, the performance evaluation of a use-case scenario using SONEP is presented. The remainder of this paper is organized as follows. Section II is an overview of existing network emulation solutions. In Section III, the main building blocks of our proposed emulation platform are presented. A use-case scenario, which is under investigation in the framework of the MOMoT project¹, along with selected results generated using the SONEP framework, is discussed in Section IV. Finally, Section V provides directions for our future work and concludes this paper.

II. RELATED WORK

In addition to analytic models, discrete-event simulations, emulation platforms, and experimental test-beds are general means to evaluate the performance of novel algorithms or to reproduce the results of existing research. Current simulation options (e.g., ns-2, OPNET, and OMNET++) provide flexibility with easily reproducible results but lack "realism". Test-beds provide realism but lack flexibility (e.g., Emulab lacks support for arbitrary topologies). Moreover, the results of test-bed experiments can be hard to reproduce. First, an experiment may not be feasible on a test-bed due to topology restrictions (e.g., GENI only supports tree topologies), or it may not be possible to change the OpenFlow firmware of a switch in the test-bed. There are also possible issues with the availability of the test-bed. Finally, since the test-bed may not be available indefinitely, the research results may not be reproducible in the future. In a network emulation platform (e.g., Mininet [8], [9]), the core of the network is like a simulator in that it processes events; the difference is that these events happen in continuous time. A typical network emulation tool is software running on a host that configures and runs (to the extent possible) everything that researchers would find in a real network: the links, switches, packets, and servers. The emulated servers (like test-beds) run real code (e.g., OS kernel, network protocols) rather than a discrete-event "model". Unlike a simulator, the protocol code (e.g., the TCP stack) is the same code that would run on real systems, and it captures implementation quirks such as OS interactions, lock conflicts, and resource limitations that simulation "models" inherently ignore and abstract away in the interest of simplification. Network emulators support arbitrary topologies, and their virtual "hardware" costs less than real test-beds. Network emulators can be classified as 1) full-system emulations (e.g., DieCast [10]), or VMs connected to each other using a software switch (Open vSwitch), which use one full virtual machine per host, or 2) container-based emulations (e.g., Mininet, virtual Emulab [11], Trellis [12], NetKit [13]) that employ lightweight process-level virtualization, in which many aspects of the system are shared: page tables, kernel data structures, and the file system.

¹ Multi-domain Optical Modeling Tool, an open call project under the GÉANT project (GN3plus) (http://www.geant.net/opencall), within the network architecture and optical projects.

By sharing these resources, lightweight OS-level containers are able to achieve better scalability than VM-based systems by providing a larger number of small virtual hosts on a single system [8], [14]. High-fidelity network emulation platforms (e.g., Mininet) couple a resource-isolating emulator with a monitoring agent to verify properties of the emulation run. An ideal emulator is indistinguishable from hardware, in that no code changes are required to port an experiment and no performance differences result. Motivated by the lack of optical network emulation, in this work we present an SDN-based optical network emulation platform (SONEP) using lightweight Linux namespace containers for modeling and performance evaluation of SDN transport optical networks.

III. ARCHITECTURE OF SONEP

The overall architecture of SONEP is depicted in Fig. 2. The SONEP emulation manager runs in user space and interfaces with the end user through a command line interface (CLI). It also includes the namespace node manager and commander (explained below) to manage the namespace-based lightweight system virtualization nodes. Typical Linux networking tools (e.g., ip, TC/Netem, brctl, ebtables, iperf) running in user space are available to control the various characteristics of the emulated network. Note that both the namespace nodes and the user space have access to the shared file system; therefore, any program residing in the shared file system is accessible both in user space and in the network emulation environment. The emulation manager, virtual open transport switch (OTS) [7], and emulated WDM links are explained in the following sub-sections.

A. Emulation manager

Namespace support appeared in the mainline Linux kernel, starting with version 2.6.24, through the clone() system call. A namespace can have its own network stack, filesystem mounts, and process hierarchy. The SONEP emulation manager was extended to support these Linux network namespaces (or Linux containers) using two core modules named namespace node manager (NNM) and namespace node commander (NNC). The NNM performs the clone() system call; protocols and applications can run in these nodes without modification. The NNC is the interface that sends commands to the namespace processes for execution. In addition to these two core modules, the normal user-space networking tools (e.g., ip, TC/Netem, brctl) are available to control and configure the different networking requirements of the emulated networks. Currently there is no GUI for accessing the SONEP emulation manager, and a simple command line interface (CLI) is used to create the emulated network scenario. Each emulation consists of packet switching and optical transport emulated networks. The key building blocks of these networks are SDN-based virtual OTSes (VOTSes), emulated (D)WDM and/or virtual Ethernet links, emulated hosts, and OpenFlow switches. Virtual Ethernet devices are created and installed into nodes and connected to the networks; they form the boundary between the physical host and the namespace nodes, as illustrated by the sketch below.
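The following minimal sketch illustrates the NNM/NNC roles under stated assumptions: instead of the clone()-based implementation used by SONEP, it mimics the same behaviour with the user-space ip tool listed above, and all node and interface names are illustrative.

# Illustrative sketch only: the actual NNM/NNC modules call clone() with the
# namespace flags directly, whereas here the equivalent behaviour is mimicked
# with the user-space "ip" tool that SONEP also relies on.
import subprocess

def create_node(name: str) -> None:
    """NNM-like step: create a named network namespace acting as an emulated node."""
    subprocess.run(["ip", "netns", "add", name], check=True)

def node_exec(name: str, *cmd: str) -> str:
    """NNC-like step: execute a command inside the namespace node and return its output."""
    out = subprocess.run(["ip", "netns", "exec", name, *cmd],
                         check=True, capture_output=True, text=True)
    return out.stdout

def attach_veth(name: str, host_if: str, node_if: str) -> None:
    """Create a veth pair and move one end into the node; the host-side end
    forms the boundary interface that SONEP later bridges and shapes."""
    subprocess.run(["ip", "link", "add", host_if, "type", "veth",
                    "peer", "name", node_if], check=True)
    subprocess.run(["ip", "link", "set", node_if, "netns", name], check=True)
    subprocess.run(["ip", "link", "set", host_if, "up"], check=True)
    subprocess.run(["ip", "netns", "exec", name, "ip", "link", "set", node_if, "up"], check=True)

# Example: an emulated VOTS node with one transport-port interface.
create_node("vots1")
attach_veth("vots1", host_if="veth-vots1-0", node_if="ch0")
print(node_exec("vots1", "ip", "link", "show"))

Running such commands requires root privileges on the emulation host.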

In the case of VOTS nodes, these virtual Ethernet interfaces play the role of individual optical transport ports (channels). Real network interfaces may also be installed into nodes, linking the virtual world with the real world. Distributing the emulation across several physical machines is one way to alleviate the processing bottleneck encountered on a single machine, especially as more complex and CPU-intensive emulated nodes are used. Further study is needed to quantify the scalability of this distributed approach; however, the same technique is used in recent versions of Mininet to provide support for distributed emulation. Time synchronization between machines is another design challenge that requires further investigation in distributed emulation.

B. Virtual OTS

The virtual OTS (VOTS) is an emulated software transport switch that can run either in a central data center or on top of one or more converged WDM/OTN switching devices, as long as those devices support bandwidth virtualization or a similar abstraction of the optical wavelengths. In effect, the virtualization of the transport layer is achieved by abstracting the interface between the packet and circuit layers. The VOTS works with any SDN controller via the OpenFlow protocol (or extended OpenFlow). It may also use OF-Config and possibly other Web 2.0 protocols to support topology and resource discovery, alarms, and service monitoring, which are critical to manageable carrier-class transport networks. Applications can utilize the northbound API of an SDN controller to request provisioning of circuit cross-connects or aggregation of packet interfaces into optical trunks with the required capacity [7].

Fig. 2. Architecture of SONEP and a sample converged (packet switching and optical transport) emulated network.

In SONEP, we used the Stanford reference implementation of the OpenFlow soft switch as a baseline, to which additional services can be added incrementally. The VOTS consists of three components: 1) a discovery agent, which notifies the controller of network element state changes and handles discovery and registration of the SDN controller; 2) a control agent, which monitors and propagates notifications and alarms to the controller; and 3) a data plane agent (extended OpenFlow), which is responsible for programming the NE datapath to create/update/release circuits and label switched paths (LSPs). The reference implementation is a user-space soft switch, which can simply be executed inside an emulated namespace and controlled by any compliant SDN controller.

C. Link emulation

Network link emulators shape, delay, or drop packets arriving at or leaving a specific network interface to match the desired characteristics of the emulated network in terms of bandwidth, latency, and packet loss. Delayline [15] is a user-level library providing these features. Dummynet [16] runs on FreeBSD and is integrated with the FreeBSD firewall IPFW. NISTNet [17] was initially developed for Linux 2.4 and was later ported to Linux 2.6. Linux 2.6+ also provides Netem [18], a network emulation facility built into Linux's Traffic Control (TC) subsystem. Among these link emulators, we initially considered Dummynet, NISTNet, and TC/Netem because 1) these solutions are of production quality (no longer prototypes) in Unix/Linux distributions and are being directly or indirectly used in network emulators by the research community, and 2) they are freely available in Linux distributions. TC/Netem provides better and more precise behavior in terms of bandwidth, delay, and transmission error emulation (drop, reordering, duplication, and corruption) [19]. Therefore, we used TC/Netem as our base model to mimic the behavior of WDM links. Using the network isolation feature of namespace containers, we modeled the emulated WDM links by creating virtual Ethernet ports (i.e., veth interfaces), with one end in the host system and the other end in the namespace node, corresponding to the number of required channels per WDM link. Each of these virtual interfaces plays the role of an individual WDM channel that can be controlled by the SDN controller (and SDN-based network applications). The SONEP emulation manager bridges the veths together on the host side using Linux Ethernet bridging (brctl), and Ethernet bridging tables (ebtables) are used to manage the connectivity between the devices. The bandwidth, delay, and error rate (in terms of packet drops) can be easily controlled with TC/Netem. However, since the bandwidth of the virtual Ethernet interfaces is typically less than the bandwidth of WDM links (e.g., OTUx), the control and management program can uniformly scale the link bandwidths down for emulation and report back the bandwidth-related results after properly scaling them up. Using centralized timing of packet arrivals, the network control application is also able to emulate the timing of the parallel WDM channels. In this way we emulate the "behavior" of a WDM link using packet-based Ethernet interfaces, as sketched below.
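The following sketch shows how the host-side veth ends of one emulated WDM channel could be bridged and shaped; the interface names, bridge name, and scaling factor are assumptions for illustration (not taken from the SONEP code base), and only the brctl and tc/netem commands mentioned above are used.

# Hedged sketch of setting one emulated WDM channel's characteristics with
# Linux bridging and TC/Netem; names and the scale factor are illustrative.
import subprocess

SCALE = 1000  # e.g. emulate a 10 Gb/s channel as 10 Mb/s and scale results back up

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

def bridge_channel(bridge: str, *host_ifs: str) -> None:
    """Bridge the host-side veth ends of one emulated WDM channel together."""
    run("brctl", "addbr", bridge)
    for dev in host_ifs:
        run("brctl", "addif", bridge, dev)
    run("ip", "link", "set", bridge, "up")

def shape_channel(dev: str, rate_gbps: float, delay_ms: float, loss_pct: float) -> None:
    """Apply scaled bandwidth, propagation delay and loss to one channel with TC/Netem."""
    rate_mbit = rate_gbps * 1000 / SCALE
    run("tc", "qdisc", "replace", "dev", dev, "root", "netem",
        "rate", f"{rate_mbit}mbit",
        "delay", f"{delay_ms}ms",
        "loss", f"{loss_pct}%")

# Example: channel 0 of the link between two VOTS nodes, a 10 Gb/s channel scaled
# down, 1 ms one-way delay, and a loss rate that can later be driven by QoT results.
bridge_channel("wdm0-ch0", "veth-vots1-0", "veth-vots2-0")
shape_channel("veth-vots1-0", rate_gbps=10.0, delay_ms=1.0, loss_pct=0.0)

Results measured over such a scaled channel (e.g., with iperf) would then be multiplied back by the same factor, as described above.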

IV. PERFORMANCE EVALUATION

Short-range interfaces (850, 1310, or 1550 nm) are used to connect IP routers or Ethernet switches into an optical transport network. The (D)WDM transponders convert these optical signals into signals suitable for long-haul transmission lightpaths. There is a potential use-case where the colored interface (DWDM transponder) resides inside the client equipment (e.g., an IP router with a built-in colored interface). This potentially saves equipment costs, increases provisioning speed, increases reliability, allows greater transparency, and potentially supports multiple bit rates and different client formats [20]. The colored light signal is "alien" to the optical transport network, and so these optical signals are called "alien wavelengths" (AW). In our previous work we experimentally evaluated the application of AW in field trials [21], [22]. In the scope of the Multi-domain Optical Modeling Tool project (MOMoT), we investigate the potential of providing an AW service in the GÉANT community. A multi-domain modeling tool will be developed which will facilitate the deployment of AW in the national research and educational networks (NRENs) and the GÉANT network by evaluating the Quality of Transmission (QoT) of the signal, considering the current state of the network and the requirements of the connection. Referring to Fig. 2, we can assume that IP router "E" is equipped with a colored interface and wants to establish an end-to-end path to router "F". This use-case is an interesting application of the Transport SDN framework, in which the network application is able to monitor and properly control the networking devices. Since the application can control the DWDM optics of routers E and F, the alien wavelengths in fact become virtual transponders. The control application then has full visibility over layer 1 performance, alarms, current state, etc., and is able to set the transmission parameters such as wavelength and power.

A. Experiment setup

The network emulation setup of the presented MOMoT use case is depicted in Fig. 3. Two VOTSes, along with end-to-end test terminals, are connected using an emulated WDM link with 16 channels. Each channel is assigned to a separate IP network, and therefore the connectivity of individual channels can be tested by changing the IP addresses of the test terminals (see the sketch below). Our custom (C-based) SDN controller (i.e., PARHAM in Fig. 3) is responsible for establishing the extended OpenFlow session with the VOTSes. A network and link state database is also maintained inside the PARHAM controller. In order to estimate the QoT of the end-to-end WDM link, we have developed a QoT estimator network application (Net. App.), which runs on top of the SDN controller. The initial functionality of this Net. App. is to evaluate the group velocity dispersion (GVD) and self-phase modulation (SPM) of the WDM links. This functionality will be extended in the course of the MOMoT project to consider other physical layer impairments and to estimate the quality of the deployed AW in terms of Q-factor.
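As a hedged illustration of the per-channel test described above, the sketch below maps each emulated channel to its own subnet, re-addresses the two test terminals, and pings across the link. The node names (term1, term2), interface name (eth0), and the 10.0.<ch>.0/24 addressing plan are assumptions for illustration, not taken from the actual setup.

# Illustrative per-channel connectivity check; names and subnets are assumed.
import subprocess

def ns_run(node: str, *cmd: str) -> subprocess.CompletedProcess:
    return subprocess.run(["ip", "netns", "exec", node, *cmd],
                          capture_output=True, text=True)

def select_channel(node: str, iface: str, channel: int, host_id: int) -> None:
    """Re-address a test terminal so that its traffic uses the given channel's subnet."""
    ns_run(node, "ip", "addr", "flush", "dev", iface)
    ns_run(node, "ip", "addr", "add", f"10.0.{channel}.{host_id}/24", "dev", iface)

def channel_up(channel: int) -> bool:
    """Ping across one channel between the two test terminals."""
    select_channel("term1", "eth0", channel, 1)
    select_channel("term2", "eth0", channel, 2)
    return ns_run("term1", "ping", "-c", "3", f"10.0.{channel}.2").returncode == 0

print([ch for ch in range(16) if channel_up(ch)])  # channels with end-to-end connectivity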

Fig. 3. Network emulation setup of MOMoT’s QoT estimator use-case.

In this setup, the AW in the VOTSes can be directly managed from the SDN controller and the QoT estimator. The involved steps are: 1) discovery of the router interface (transponder profile) in the SDN controller (through the feature request/reply part of extended OpenFlow); 2) computation of the proper transmission parameters by the QoT estimator, using the centralized view of the network and its state; 3) configuration of the interface parameters (e.g., OTN TTI parameters for source and destination verification, FEC type, and wavelength); and 4) based on the estimated QoT and logging, the SONEP manager properly updates the parameters of the emulated WDM link. Since the network emulation has shared access to the file system, the QoT log is shared with SONEP (point A in Fig. 3). This log information provides a mechanism for the SONEP manager to properly control the characteristics of the emulated link (point B in Fig. 3). Thus, the degradation of the emulated link can be updated at run time, and this is reflected in the end-to-end session of the test terminals in terms of increased packet loss or lack of connectivity due to unacceptable (i.e., below-threshold) QoT of the WDM link.

B. Results

The current version of our QoT Net. App. is able to monitor and report the impact of GVD on SPM, with particular focus on the SPM-induced frequency chirp. In the normal dispersion regime of an optical fiber link, the combined effects of GVD and SPM can be used for pulse compression. In this section we report the temporal and spectral changes that occur when the effects of GVD are included in the description of SPM. The QoT estimator is implemented in C with an interface to OCTAVE (an open-source scientific computing language, compatible with MATLAB). Using the split-step Fourier method (accelerated by the FFTW C libraries), the non-linear Schrödinger equation (NLSE) is solved [23] to compute the propagation of the signal inside the fiber, taking the nonlinear characteristics of the fiber into account.
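As an illustration of the underlying method (not the C/OCTAVE implementation used in SONEP), the sketch below applies a symmetric split-step Fourier scheme to the lossless NLSE with GVD and SPM only; the fiber parameters (beta2, gamma, length) are example values and not those used to produce Fig. 4 and Fig. 5.

# Minimal split-step Fourier sketch for GVD + SPM pulse propagation.
# Parameter values are illustrative only, not those used by the SONEP QoT estimator.
import numpy as np

def ssfm(A0, dt, length_km, beta2, gamma, n_steps=2000):
    """Propagate a complex field envelope A0 (sampled every dt ps) over length_km
    of fiber with GVD (beta2, ps^2/km) and SPM (gamma, 1/(W km)).
    Loss and higher-order dispersion are deliberately omitted."""
    n = A0.size
    dz = length_km / n_steps
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)           # angular frequency grid (rad/ps)
    half_disp = np.exp(0.5j * beta2 * w**2 * (dz / 2))  # half-step dispersion operator
    A = A0.copy()
    for _ in range(n_steps):
        A = np.fft.ifft(half_disp * np.fft.fft(A))      # dispersion, half step
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # nonlinear (SPM) full step
        A = np.fft.ifft(half_disp * np.fft.fft(A))      # dispersion, half step
    return A

# Un-chirped Gaussian input pulse: T0 = 10 ps width, P0 = 10 mW peak power.
T0, P0 = 10.0, 0.01
t = np.linspace(-200, 200, 4096)                        # time grid in ps
A0 = np.sqrt(P0) * np.exp(-t**2 / (2 * T0**2))
A_out = ssfm(A0, dt=t[1] - t[0], length_km=80.0, beta2=0.1, gamma=1.3)
spectrum = np.abs(np.fft.fftshift(np.fft.fft(A_out)))**2  # pulse spectrum after propagation

Higher-order dispersion, fiber loss, and the other impairments that the MOMoT QoT estimator will eventually consider are left out to keep the sketch short.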

The QoT estimator uses the network and link state information, which is reported by the VOTSes to the SDN controller and maintained in the controller repository. Fig. 4 depicts the shape and the spectrum of an initially un-chirped Gaussian pulse (pulse width T0 = 10 ps and launch power P0 = 10 mW). T/T0 is the time scale normalized to the initial pulse width (T0), and the spectral evolution of the signal is measured by (ν − ν0)T0. The effect of SPM is to increase the number of oscillations seen near the trailing edge of the pulse; the intensity does not become zero at the oscillation minima. The effect of GVD is also evident in this figure: in the absence of GVD, a symmetric two-peak spectrum would be expected, whereas GVD introduces spectral asymmetry without affecting the two-peak spectral structure.

Fig. 4. Pulse shape and spectrum of un-chirped Gaussian pulses propagating exactly at the zero-dispersion wavelength.

By increasing the signal power to 100 mW, with the same initial pulse width (T0 = 10 ps), the pulse evolution exhibits qualitatively different features. As shown in Fig. 5, the pulse develops an oscillatory structure with deep modulation. The GVD effects become more important as the pulse propagates inside the fiber, due to rapid temporal variations, and the pulse energy becomes concentrated in two spectral bands. The spectral and temporal evolution also depends on whether the dispersion-compensating fiber (DCF) is placed before or after the standard fiber; in the case of post-compensation (standard fiber + DCF), the pulse develops an oscillating tail and exhibits spectral narrowing [24].

Fig. 5. The impact of increased launch power on the pulse shape and spectrum of un-chirped Gaussian pulses.

V. CONCLUSIONS AND FUTURE RESEARCH

The lack of a flexible and programmable optical network emulation platform was the main motivation behind the design and initial development of SONEP as an SDN-based Optical Network Emulation Platform. While there are several software tools and utilities, in particular network emulation tools, for researchers in the packet switching domain of SDN/OpenFlow, researchers interested in the design and modeling of transport optical networks have no access to similar utilities. The ongoing research on how to extend SDN towards optical transport networks is another motivation behind the design of SONEP. In this work the architecture of SONEP, which is mainly based on lightweight virtualization, was presented. SONEP provides a local environment for optical network innovation that complements shared global test-beds, with interactive prototyping, scalability, and a path to hardware deployment. Combined with SDN, it provides an easier and faster path from prototyping and small-scale deployment to real system implementation. Thanks to its architecture, novel proposals in line with transport SDN and the OTS abstraction can be implemented and evaluated in the SONEP environment. In the framework of the MOMoT project, we demonstrated a sample network application that evaluates the QoT of optical signal propagation along the fiber, taking the nonlinear characteristics of the fiber into account. Using network emulation, it is possible to observe the impact of the physical layer on end-to-end application performance. Since both the emulated network and the SONEP engine have access to the shared file system, the monitoring results of the link can be logged, and SONEP can then apply the changes to the emulated links in terms of an increased packet drop rate, or even completely disconnect the link if the QoT is below a certain threshold. The presented QoT estimator, as the core application of the MOMoT project, should consider the impact of other physical layer impairments (e.g., cross-phase modulation, ASE noise, and four-wave mixing) and furthermore map them to a single figure of merit for QoT evaluation. Enhancement of SONEP is another important task, which will shape part of our future research plan.

ACKNOWLEDGMENT

The research leading to these results has been partially funded by the EC FP7 under grant agreement no. 605243 (Multi-gigabit European Research and Education Network and Associated Services, MOMoT open call project of GN3plus).

REFERENCES

[1] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: Enabling innovation in campus networks," SIGCOMM Comput. Commun. Rev., vol. 38, no. 2, pp. 69–74, Mar. 2008.
[2] T. Koponen, M. Casado, N. Gude, J. Stribling, L. Poutievski, M. Zhu, R. Ramanathan, Y. Iwata, H. Inoue, T. Hama et al., "Onix: A distributed control platform for large-scale production networks," in Proceedings of the 9th USENIX Conference on Operating Systems Design and Implementation, 2010, pp. 1–6.
[3] A. Lara, A. Kolasani, and B. Ramamurthy, "Network innovation using OpenFlow: A survey," IEEE Communications Surveys & Tutorials, vol. PP, no. 99, pp. 1–20, 2013.
[4] S. Jain, A. Kumar, S. Mandal, J. Ong, L. Poutievski, A. Singh, S. Venkata, J. Wanderer, J. Zhou, M. Zhu, J. Zolla, U. Hölzle, S. Stuart, and A. Vahdat, "B4: Experience with a globally-deployed software defined WAN," in Proceedings of the ACM SIGCOMM 2013 Conference on SIGCOMM, ser. SIGCOMM '13. New York, NY, USA: ACM, 2013, pp. 3–14.
[5] X. Zhao, V. Vusirikala, B. Koley, V. Kamalov, and T. Hofmeister, "The prospect of inter-data-center optical networks," IEEE Communications Magazine, vol. 51, no. 9, pp. 32–38, 2013.
[6] M. Shirazipour, Y. Zhang, N. Beheshti, G. Lefebvre, and M. Tatipamula, "OpenFlow and multi-layer extensions: Overview and next steps," in Software Defined Networking (EWSDN), 2012 European Workshop on, 2012, pp. 13–17.
[7] A. Sadasivarao, S. Syed, P. Pan, C. Liou, A. Lake, C. Guok, and I. Monga, "Open Transport Switch: A software defined networking architecture for transport networks," in HotSDN, 2013, pp. 115–120.
[8] B. Lantz, B. Heller, and N. McKeown, "A network in a laptop: Rapid prototyping for software-defined networks," in Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks, ser. Hotnets-IX. New York, NY, USA: ACM, 2010, pp. 19:1–19:6.
[9] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, "Reproducible network experiments using container-based emulation," in Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies, ser. CoNEXT '12. New York, NY, USA: ACM, 2012, pp. 253–264.
[10] D. Gupta, K. V. Vishwanath, M. McNett, A. Vahdat, K. Yocum, A. Snoeren, and G. M. Voelker, "DieCast: Testing distributed systems with an accurate scale model," ACM Trans. Comput. Syst., vol. 29, no. 2, pp. 4:1–4:48, May 2011.
[11] M. Hibler, R. Ricci, L. Stoller, J. Duerig, S. Guruprasad, T. Stack, K. Webb, and J. Lepreau, "Large-scale virtualization in the Emulab network testbed," in USENIX 2008 Annual Technical Conference, ser. ATC'08. Berkeley, CA, USA: USENIX Association, 2008, pp. 113–128.
[12] S. Bhatia, M. Motiwala, W. Muhlbauer, Y. Mundada, V. Valancius, A. Bavier, N. Feamster, L. Peterson, and J. Rexford, "Trellis: A platform for building flexible, fast virtual networks on commodity hardware," in Proceedings of the 2008 ACM CoNEXT Conference, ser. CoNEXT '08. New York, NY, USA: ACM, 2008, pp. 72:1–72:6.
[13] M. Pizzonia and M. Rimondini, "Netkit: Easy emulation of complex networks on inexpensive hardware," in Proceedings of the 4th International Conference on Testbeds and Research Infrastructures for the Development of Networks & Communities, ser. TridentCom '08. Brussels, Belgium: ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), 2008, pp. 7:1–7:10.
[14] S. Soltesz, H. Pötzl, M. E. Fiuczynski, A. Bavier, and L. Peterson, "Container-based operating system virtualization: A scalable, high-performance alternative to hypervisors," SIGOPS Oper. Syst. Rev., vol. 41, no. 3, pp. 275–287, Mar. 2007.
[15] D. B. Ingham and G. D. Parrington, "Delayline: A wide-area network emulation tool," Computing Systems, vol. 7, pp. 313–332, 1994.
[16] L. Rizzo, "Dummynet: A simple approach to the evaluation of network protocols," SIGCOMM Comput. Commun. Rev., vol. 27, no. 1, pp. 31–41, Jan. 1997.
[17] M. Carson and D. Santay, "NIST Net: A Linux-based network emulation tool," SIGCOMM Comput. Commun. Rev., vol. 33, no. 3, pp. 111–126, Jul. 2003.
[18] S. Hemminger, "Network emulation with NetEm," in LCA 2005, Australia's 6th National Linux Conference (linux.conf.au), M. Pool, Ed. Sydney, NSW, Australia: Linux Australia, Apr. 2005.
[19] L. Nussbaum and O. Richard, "A comparative study of network link emulators," in Proceedings of the 2009 Spring Simulation Multiconference, ser. SpringSim '09. San Diego, CA, USA: Society for Computer Simulation International, 2009, pp. 85:1–85:8.
[20] A. Lord, Y. Zhou, P. Wright, P. Willis, C. Look, G. Jeon, S. Nathan, and A. Hotchkiss, "Managed alien wavelength service requirements and demonstration," in Optical Communication (ECOC), 2011 37th European Conference and Exhibition on, 2011, pp. 1–3.
[21] R. Nuijts, L. Bjorn, M. Petersen, and A. Manolova, "Design and OAM&P aspects of a DWDM system equipped with a 40 Gb/s PM-QPSK alien wavelength and adjacent 10 Gb/s channels," in Proceedings of the TERENA Networking Conference, 2011.
[22] A. Manolova Fagertun, S. Ruepp, M. Pedersen, and B. Skjoldstrup, "Field trial of 40 Gb/s optical transport network using open WDM interfaces," in OptoElectronics and Communications Conference held jointly with 2013 International Conference on Photonics in Switching (OECC/PS), 2013, pp. 1–2.
[23] C. Menyuk, "Nonlinear pulse propagation in birefringent optical fibers," IEEE Journal of Quantum Electronics, vol. 23, no. 2, pp. 174–176, 1987.
[24] C. Menyuk, "Pulse propagation in an elliptically birefringent Kerr medium," IEEE Journal of Quantum Electronics, vol. 25, no. 12, pp. 2674–2682, 1989.
