
SDN Orchestration of OpenFlow and GMPLS Flexi-grid Networks with a Stateful Hierarchical PCE [Invited]

Ramon Casellas, Raül Muñoz, Ricardo Martínez, Ricard Vilalta, Lei Liu, Takehiro Tsuritani, Itsuro Morita, Víctor López, Óscar González de Dios, and Juan Pedro Fernández-Palacios

Abstract—New and emerging use cases, such as the interconnection of geographically remote data centers, are drawing attention to the need for provisioning end-to-end connectivity services spanning multiple and heterogeneous network domains. This heterogeneity is due not only to the data transmission and switching technology (the so-called data plane) but also to the deployed control plane, which may be used within each domain to dynamically automate the setup and recovery of such services. The choice of a control plane is affected by factors such as availability, maturity, operators' preferences and the ability to satisfy a list of functional requirements. Given the current developments around OpenFlow and SDN, along with the need to account for existing deployments based on GMPLS, the problem of heterogeneous control plane interworking needs to be solved. The adopted solution must equally address the specific issues of multi-domain networks, such as limited domain topology visibility, given the scalability and confidentiality constraints that characterize them. In this setting, we propose a functional and protocol architecture for such interworking, based on the key concepts of network abstraction and overarching control, implemented in terms of a hierarchical stateful PCE, which provides the orchestration and coordination layer. In the proposed architecture, the PCEP and BGP-LS protocols are extended to support OpenFlow addresses and datapath identifiers, unifying both GMPLS and OpenFlow domains. The solution is deployed in an experimental testbed and validated. Although the main scope of the approach is the interworking of OpenFlow and GMPLS, the same approach can be directly applied to a wide range of multi-domain scenarios, with either homogeneous or heterogeneous control technologies.

Index Terms—Path Computation Element (PCE), Software Defined Networking (SDN), Hierarchical PCE (H-PCE), Orchestration, Stateful, Optical OpenFlow, Optical Network Control and Management.

I. INTRODUCTION

Cloud applications such as off-site data backup or virtual machine migration involve an increasing amount of data traffic between remote and geographically dispersed data centers (DCs), requiring efficient network architectures in terms of cost, energy consumption and reliability. Such architectures may combine flexible, fine-grained and adaptive intra-DC traffic control (regarding forwarding entries and policies) in a very dynamic context with long-haul, potentially multi-domain, aggregated inter-DC transport.


Manuscript received June 30, 2014. R. Casellas, R. Muñoz, R. Martínez and R. Vilalta are with CTTC, Spain (e-mail: [email protected]). L. Liu is with the University of California, Davis, One Shields Ave., Davis, CA 95616, USA. T. Tsuritani and I. Morita are with Photonic Networks, KDDI R&D Laboratories, Inc., Saitama, Japan. V. López, Ó. González de Dios and J. P. Fernández-Palacios are with Telefónica, Don Ramón de la Cruz 82-84, Madrid, Spain.

For the former, the application of a control plane based on Software Defined Networking (SDN) and OpenFlow fulfills such requirements whereas, for the latter, a GMPLS/PCE control plane, with its maturity, carrier-grade features and multi-domain support, accounts for existing deployments, slow technology migrations and the need for a return on investment. This scenario, shown in Figure 1, calls for interworking solutions at the control plane level and, in particular, for the interworking of the GMPLS and OpenFlow (GMPLS/OF) control planes, which are, as of today, the de facto standards.

The GMPLS/OF interworking alternatives show varying degrees of integration and flexibility. Straightforward approaches are characterized by the adaptation of one control model into the other, whereas more advanced interworking requires the definition of common device and network models (e.g., a subset of attributes for the network elements) and of coordination and orchestration functions. The adaptation of a GMPLS network into an OpenFlow control architecture was addressed in [1], where a GMPLS domain was modeled as a logical OpenFlow node: a proxy OpenFlow agent translated OpenFlow logical circuit flow operations between GMPLS client interfaces into requests for the GMPLS-driven setup of connections, referred to as Label Switched Paths (LSPs). More generically, such interworking can be seen as the orchestration (i.e., the automated configuration, coordination and management of complex systems, commonly with transactional semantics and rollback capabilities) of heterogeneous control plane technologies. This orchestration has become a domain of application of SDN, not only because of its flexible, dynamic and decoupled centralized control (a common model of operation, given the specifics of the optical technology, such as the wavelength continuity constraint or physical impairments), but also because of the potential simplification and better integration with operation and business support systems, by means of open and standard interfaces and the reuse of existing functional entities.

In this paper, we focus on such control orchestration, defining an SDN architecture and interfaces that address the considered DC-interconnection and GMPLS/OpenFlow interworking. We extend our work reported in [2], detailing further the control plane aspects, procedures and experimental results, and adding a new section on the topology aggregation and abstraction concept, considering the use of BGP-LS as the topology dissemination protocol. A centralized "controller of controllers" or orchestrator (as opposed to a mesh of controllers) handles the automation of end-to-end connectivity provisioning, working at a higher, abstracted level and covering inter-domain aspects. Specific per-domain controllers map the abstracted control plane functions into the specifics of the underlying control plane technology, such as network resource discovery, topology management, connectivity provisioning and monitoring.

Fig. 1. DC interconnection as a use case for the applicability of a stateful H-PCE for orchestration across heterogeneous (OpenFlow/GMPLS) domains.

The approach relies on an adapted interface and protocol that abstract the particular control plane technology of a given domain. In this sense, the proposed architecture applies the same abstraction and generalization principles that OpenFlow/SDN have applied to data networks: much like OpenFlow identifies an abstracted, generic model of a packet switch that can be used regardless of a particular vendor or technology, and provides a protocol (the OpenFlow protocol) to query and set its forwarding state, our approach defines a generic functional model of a "control plane" for the provisioning of connectivity. The actual instantiation of the architecture in this work relies on a set of stateful PCEs [3], arranged in a hierarchical PCE (H-PCE) manner [4].

It should be noted that actual data center deployments are typically based on Ethernet, and core transport networks aggregate traffic using optical connections. Nonetheless, the scope of the work is the orchestration of OF and GMPLS control plane functions, and such orchestration needs to be clearly defined and fully understood for a single-layer case before addressing the aspects bound to a multi-layer network. From an architectural point of view, the architecture proposed in this paper would not change if the whole data plane were Ethernet based, albeit single layer, barring the technology-specific extensions that would apply in both the OpenFlow and GMPLS domains (i.e., although the proposed architecture can be applied, without loss of generality, to a diversity of data plane technologies, we focus on our current implementation, which targets flexi-grid optical networks, as detailed later). Although the interconnection of remote data centers is a particularly interesting use case of the architecture, the main focus of the paper is the orchestration of OpenFlow and GMPLS and, in this sense, we did not make any assumptions on the topology or structure of the OpenFlow (intra-) domain (e.g., the presence of racks, rack connectivity or a tree-like topology) and we did not extend OpenFlow to instantiate or refer to virtual machine instances or top-of-rack (ToR) switches.

Finally, extending the architecture to support a multi-layer network is ongoing work, in order to support the dynamic addition and removal of virtual ports at the OpenFlow nodes that correspond to dynamic optical LSPs (connections). The child PCE within the GMPLS domain needs to be aware of TE links that are induced by optical LSPs that are instantiated and removed dynamically and that can be used by OpenFlow flow tables in the edge domains. In that case, the instantiation of connectivity cannot proceed as a flat sequential transaction but must be hierarchical, starting with the lower layers.

A. Hierarchical Path Computation Elements

In the H-PCE architecture, a common deployment model is the configuration of a child PCE (cPCE) per network domain, attached to a unique parent PCE (pPCE). The pPCE is responsible for the selection of the domains to be traversed by a newly provisioned service. Such domain selection is based on a high-level, abstracted knowledge of intra- and inter-domain connectivity and topology. The topology abstraction, needed for scalability and confidentiality reasons, is based on a selection of relevant Traffic Engineering (TE) attributes and is usually represented as a directed graph of virtual links and nodes, as allowed by the domain-internal policy, as detailed in Section III-B. Child PCEs are responsible for segment expansion (i.e., computation) in their respective domains. The architecture has been proposed for path computation in multi-domain optical networks, e.g., as in [5], as well as in multi-layer networks [6]. This paper addresses not only the path computation function, extended to work in the aforementioned heterogeneous networks, but also the actual instantiation and provisioning of services, by adapting the recent work on stateful PCEs [7]. A stateful PCE (sPCE) is a PCE that is able to consider the network status in terms of links and nodes (the Traffic Engineering Database, or TED) and the status of active connections (the database of Label Switched Paths, or LSPDB).


A sPCE is said to be active if it is able to recommend or modify/affect the state of existing connections. Finally, a stateful, active PCE may have instantiation capabilities [8], that is, the ability to trigger, upon request or autonomously, the establishment of connections. Such instantiation capabilities may drive the provisioning of GMPLS LSPs or directly configure the forwarding of network elements using OpenFlow.

Let us note, for the sake of completeness, that the hierarchical approach can have more than two levels and be recursive (that is, a given child PCE may be a parent PCE for a given set of children). In particular, the transport network can be segmented into domains, which may or may not result from the deployment of different vendors. On the other hand, even if a network is segmented, a logically centralized PCE can indeed compute multi-domain paths if scalability is not a concern (typically in homogeneous networks with a single vendor). In the following, and matching the testbed implementation, we assume a single transport domain. Likewise, note that there are different architectural options: it is possible to use the stateful H-PCE only for path computation and, once the path is known, provide it to a provisioning manager that proceeds with the instantiation; alternatively, the stateful H-PCE architecture is used for both the computation and the instantiation. We adopt the latter for several reasons: i) notably in the case of GMPLS, the lack of an open, standard and dedicated protocol to request instantiation (i.e., the analogue of a northbound interface for a GMPLS controller); ii) the fact that, in any case, the provisioning manager would also need several interfaces (not just to the ingress node), roughly mapping to the hierarchical setting of PCEs; and iii) the possibility of reusing an open and standard protocol for provisioning, which only requires an active adjacency (PCEP session), reusing object formats and encodings.

B. Flexi-grid DWDM optical networks

The term flexi-grid [9] refers to the updated set of nominal central frequencies (grid), lower channel spacing values and optical spectrum management considerations that have been defined in order to allow an efficient and flexible allocation of optical spectral bandwidth for high bit-rate systems. A key concept is the Frequency Slot (FS), a variable-size optical frequency range that can be allocated to a data connection, characterized by its nominal central frequency, selected from the set of reference frequencies (f = 193.1 THz + n × 6.25 GHz), and its slot width (m × 12.5 GHz, m ≥ 1).
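For illustration, the following minimal sketch (a hypothetical helper, not part of the paper's implementation) computes the frequency range of a frequency slot from its (n, m) parameters:

```python
# Frequency slot arithmetic per the grid defined above:
# central frequency f = 193.1 THz + n * 6.25 GHz, width = m * 12.5 GHz.

def frequency_slot(n: int, m: int):
    """Return (central, lower_edge, upper_edge) in THz for slot (n, m)."""
    if m < 1:
        raise ValueError("slot width parameter m must be >= 1")
    central = 193.1 + n * 0.00625        # 6.25 GHz granularity, in THz
    half_width = (m * 0.0125) / 2.0      # m units of 12.5 GHz, in THz
    return central, central - half_width, central + half_width

# Example: n = 10, m = 4 yields a 50 GHz-wide slot centred at 193.1625 THz.
print(frequency_slot(10, 4))
```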

From a networking perspective, a flexible grid network is assumed to be a layered network [10], [11] in which the flexi-grid layer (also referred to as the media layer) is the server layer and the signal layer is the client layer. In the media layer, switching is based on a frequency slot; media elements (fibers, amplifiers, filters, switching matrices) only direct the optical signal or affect its properties, and do not modify the properties of the information that has been modulated to produce the optical signal. The media channel is a media association that represents both the topology (i.e., the path through the media) and the resource (frequency slot) that it occupies. As a topological construct, it represents an (effective) frequency slot supported by a concatenation of media elements (fibers, amplifiers, filters, switching matrices, etc.); the term identifies the end-to-end physical-layer entity with its corresponding (one or more) frequency slots, local to each link filter. Media channels can thus be dimensioned to contain an Optical Tributary Signal (e.g., an Optical Channel, OCh-P) [10], allocating as much optical spectrum as required and allowing higher bit rates than currently deployed fixed-grid systems. In this work, we consider the dynamic establishment and release of media channels; the actual client signal mapping is out of scope.

II. NETWORK REFERENCE MODEL AND CONTROL PLANE ARCHITECTURE

The considered network reference model is composed of OpenFlow islands interconnected by a GMPLS-controlled transport network. Let us also note that, even when under the control of a single administrative entity, transport networks may be segmented for technical or scalability reasons (e.g., in the form of vendor islands). We assume that domain interconnection is done by means of "border links" (two devices, one per domain, interconnected by a shared link) rather than "border nodes" (a single network element belonging to both domains, so that a given node would be administered and managed by two different entities). In consequence, a single node is bound to a single control plane technology. In any case, such multi-domain networks are characterized by the fact that no single entity has full topology (TE) visibility, affecting optimality and efficient resource usage.

Our main contribution (cf. Figure 2) extends the H-PCE architecture, first, to support OpenFlow and, second, with stateful capabilities, while considering different alternatives for abstracted topology dissemination. In the architecture, the pPCE orchestrates the provisioning of services with generalized identifiers (e.g., covering both GMPLS and OpenFlow) and each cPCE acts as a middleware for its domain control layer. Each cPCE controls its domain either integrated with an OpenFlow controller, based on our previous work [12], or delegating the actual establishment and release of connections to an underlying GMPLS/PCE control plane [13]. The latter is notably the case for the core optical network, typically comprising tens of nodes and carrying long-haul, high-capacity and low-latency traffic, and is backwards compatible for carriers that deploy GMPLS-based transport services and upgrade their networks to flexi-grid. All cPCEs have the Path Computation Element Communication Protocol (PCEP) as their northbound interface (NBI). PCEP is also the NBI for the path computation client (PCC) module of a GMPLS-controlled edge node, and it is used for provisioning, rerouting, delegating and reporting. OpenFlow is used as the southbound interface (SBI) of the cPCEs in OpenFlow domains.

III. PROPOSED CONTROL PLANE PROCEDURES AND PROTOCOL EXTENSIONS

A. Initial synchronization

Each child PCE is configured to have a persistent PCEP connection with its configured parent, and capabilities are exchanged during session setup. In particular, child PCE and domain identifiers are conveyed in PCE_ID and DOMAIN_ID type-length-value tuples (TLVs), which are attached to the PCEP OPEN object of the Open message.
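As an illustration, the sketch below encodes such TLVs. The generic TLV layout (16-bit type, 16-bit length, value padded to a 4-byte boundary) follows RFC 5440; the concrete type codes and values are placeholders, not those used in the testbed:

```python
import struct

def pcep_tlv(tlv_type: int, value: bytes) -> bytes:
    """Encode a PCEP TLV: type (2 bytes), length (2 bytes), padded value."""
    header = struct.pack("!HH", tlv_type, len(value))
    padding = b"\x00" * ((4 - len(value) % 4) % 4)
    return header + value + padding

TLV_PCE_ID = 0xFF01     # placeholder type code (assumption)
TLV_DOMAIN_ID = 0xFF02  # placeholder type code (assumption)

# Announce a child PCE identifier and the domain it is responsible for.
open_tlvs = (pcep_tlv(TLV_PCE_ID, struct.pack("!I", 0x0A000001)) +
             pcep_tlv(TLV_DOMAIN_ID, struct.pack("!I", 1)))
```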

Fig. 2. Simplified block diagram and architecture. The cPCE in an OpenFlow domain acts as an OpenFlow controller, while the cPCE in a GMPLS domain delegates the actual establishment to the underlying control plane.

In summary, each cPCE announces its identifier and the domains it is responsible for. To enable endpoint location and to allow the parent to manage aggregated reachability information, per-domain endpoint reachability is announced using TLVs that contain, notably, classless inter-domain routing (CIDR) IPv4 prefix sub-objects for the node identifiers within the GMPLS domain, while a list of endpoint datapath ids is encoded in TLVs for OpenFlow domains (active polling could also be used; we have not considered it, since endpoint location is quasi-static information).
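A hypothetical sketch of how the pPCE could resolve an endpoint to its domain from this announced reachability information (CIDR prefixes for GMPLS domains, datapath-id sets for OpenFlow domains; the identifiers below are illustrative):

```python
import ipaddress

reachability = {
    "gmpls-core": [ipaddress.ip_network("10.0.50.0/24")],                   # CIDR prefixes
    "of-domain-1": {"aa-bb-cc-dd-ee-ff-00-01", "aa-bb-cc-dd-ee-ff-00-0d"},  # datapath ids
}

def locate_domain(endpoint: str):
    """Return the domain announcing reachability for the given endpoint."""
    for domain, entries in reachability.items():
        if isinstance(entries, set):             # OpenFlow: exact datapath-id match
            if endpoint in entries:
                return domain
        else:                                    # GMPLS: prefix-based lookup
            try:
                addr = ipaddress.ip_address(endpoint)
            except ValueError:
                continue
            if any(addr in net for net in entries):
                return domain
    return None

print(locate_domain("10.0.50.7"))   # -> "gmpls-core"
```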

B. Topology Management and Domain Abstraction

Within each domain, the way the designated child PCE obtains a copy of the TED depends on the underlying control plane. Let us note that the PCE architecture does not define or constrain how this TED is obtained. In our implementation, the PCE within the GMPLS transport domain obtains the TED by parsing the Node and Link TLVs of the OSPF-TE Link State Advertisements (LSAs). Domain border nodes (ABRs and ASBRs) are learnt from Summary and External OSPF LSAs, and inter-domain links from OSPF-TE Inter-AS-TE-v2 LSAs [14]. For OpenFlow domains, the PCE obtains the TED from the integrated OpenFlow controller; the details of the mechanism are given in Section III-D.

All cPCEs cooperate in a distributed way to construct a parent TED, composed of abstracted domains. Topology aggregation refers to the procedure by which a domain is represented at a higher level, and this representation is always a trade-off between confidentiality concerns (not disclosing topology internals), performance and scalability, given the number of network elements to manage at higher levels. A related term is TE network abstraction, i.e., the synthesis of the TE attribute information reported for each domain. This provides aggregated TE reachability information and a subsequently abstracted topology representation, in terms of virtual links and nodes (a virtual topology). This virtual topology does not represent all possible connectivity options, but provides a connectivity matrix based on the TE attributes currently reported from within each domain. While abstraction uses available TE information, it may also be subject to network policy and management choices; thus, not all potential connectivity is necessarily advertised. Each cPCE is thus responsible for dynamically updating its own virtual mesh, as a result of a change in the domain or periodically: a mesh of paths between border nodes is computed, and several metrics are bound to each resulting path, such as the additive TE metric. Consequently, each path induces a virtual link.

The procedures and protocol mechanisms for disseminating and constructing the pPCE TED may rely on a number of mechanisms:

1) Dedicated Interior Gateway Protocol (IGP) instance: In this case, the pPCE joins the IGP instance of each child PCE domain, while the attributes of the inter-domain links may be distributed within a domain by TE extensions to the IGP, as in [14]. However, this approach has important practical limitations, would break the domain confidentiality principles and is subject to scalability issues. Alternatively, [4] points out that in Automatically Switched Optical Network (ASON) models it is possible to consider a separate instance of an IGP running within the parent domain with the participation of the child PCEs. This option has been left for further study.

2) PCEP notifications: A second option, as proposed in [5], is the embedding of both intra-domain and inter-domain LSAs in PCEP Notifications. The cPCEs use the already active PCEP connection to forward link and node updates to the pPCE. This approach was experimentally demonstrated in a multi-partner testbed [15].


3) BGP-LS: Finally, the north-bound distribution of TE information can be done by means of the BGP-LS protocol [16], [17]. With this approach, a BGP speaker is located in each domain and announces TE links and nodes to a listening BGP speaker in the parent domain. Note that a separate policy can be configured to decide which information may be exported; a BGP-LS speaker may decide to announce, for example, the whole topology, just a subset, or an abstracted representation of the domain connectivity.

Let us detail the encoding of the abstracted TE information from a protocol perspective. The dissemination of topology is carried out, asynchronously, from the cPCE to the pPCE after the initial establishment of a BGP session (see Figure 3). The BGP protocol was extended, first, by using Multi-Protocol extensions (MP-BGP), so that BGP UPDATE messages can be used to announce elements other than IPv4 network layer reachability information (NLRI). Second, the path attributes field of the UPDATE message, designed to convey a set of attributes associated with a path, may now contain new attributes. In summary, three new attributes are combined to disseminate topological information. The MP_REACH attribute conveys descriptors of nodes or links, declaring them to the pPCE, which may instantiate them in its TED; in particular, it conveys the node identifier or the node name for a node, or the domain, local and remote identifiers for a link. The MP_UNREACH attribute, on the other hand, is used to withdraw a topological element from the parent TED. Finally, for a given topological element identified by its descriptors, the LINK_STATE attribute lists the TE attributes of interest; common attributes are the TE metric, the maximum link bandwidth, Shared Risk Link Groups (SRLGs), etc.

In this context, we have extended the BGP-LS protocol for mainly two purposes: first, to support the notion of an OpenFlow address, e.g., the encoding of an OpenFlow datapath identifier; second, to convey, in bitmap form, the status of the nominal central frequencies that characterize a flexi-grid link. More concretely, the main design rationale was to be as little disruptive as possible, and the extensions are as follows. First, when announcing or withdrawing a link, the IGP router-id field, in both the local node and remote node descriptors, is of variable length, since the original intent of this field was to accommodate OSPFv2, OSPFv3 and IS-IS; this allows us to encode the OpenFlow datapath id directly. Second, for announcing nodes, the same approach is used: in the BGP MP NLRI, the IGP router-id field carries the datapath id and, in the link-state attribute for the node, we added a new TLV (routerid_openflow, type code 1023), wrapped in the LINK_STATE TLV. We also extended the BGP-LS draft to cover optical aspects; in particular, we added a BGP-LS attribute that corresponds to a bitmap-encoded status of the nominal central frequencies (TLV type = 1200). The latter two approaches (PCEP notifications and BGP-LS) have been used in the experimental evaluation of the architecture. It can be argued that using PCEP to convey topological information is outside its main scope and that the use of BGP-LS should be preferred.

It should be noted that there are potential scalability issues associated with a full-mesh abstraction mechanism.
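To make the full-mesh mechanism concrete, the following is a minimal sketch (our own illustration, assuming a networkx graph with a te_metric edge attribute; it is not the testbed code) of the virtual-link computation performed by a cPCE:

```python
import itertools
import networkx as nx

def abstract_domain(topology: nx.Graph, border_nodes: list) -> list:
    """Compute one virtual TE link per ordered pair of border nodes."""
    virtual_links = []
    for src, dst in itertools.permutations(border_nodes, 2):
        try:
            path = nx.dijkstra_path(topology, src, dst, weight="te_metric")
        except nx.NetworkXNoPath:
            continue  # no intra-domain connectivity: no virtual link
        metric = nx.path_weight(topology, path, weight="te_metric")
        virtual_links.append({"local": src, "remote": dst,
                              "te_metric": metric, "path": path})
    return virtual_links  # N border nodes yield up to N(N-1) virtual links
```

Note that 12 border nodes yield up to 12 × 11 = 132 virtual links, matching the evaluation figures quoted below.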
It is known that such a mesh abstraction has a complexity of O(N^2), with N the number of border nodes.

Fig. 3. Construction of a pPCE abstracted TED by having BGP-LS adjacencies with the cPCEs.
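Relatedly, for the BGP-LS dissemination shown in Fig. 3, a hypothetical encoding of the per-link frequency availability bitmap (the LINK_STATE TLV with type code 1200 introduced above) could look as follows; the MSB-first bit ordering is our assumption:

```python
import struct

def ncf_bitmap_tlv(available: set, num_freqs: int = 128) -> bytes:
    """Encode availability of nominal central frequencies as a bitmap TLV."""
    bits = bytearray(num_freqs // 8)
    for n in available:
        bits[n // 8] |= 0x80 >> (n % 8)   # one bit per frequency, 1 = free
    return struct.pack("!HH", 1200, len(bits)) + bytes(bits)

tlv = ncf_bitmap_tlv({0, 1, 2, 64})       # 16-byte bitmap for 128 frequencies
```

This compactness is what makes BGP-LS updates significantly smaller than their XML-encoded PCNtf counterparts, as quantified in Section IV.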

We have evaluated this full-mesh abstraction in [5], showing that, e.g., for a random 50-node network, the abstraction considering 2 border nodes (2 virtual links) took 1.2 ms, while for 12 border nodes (132 virtual links) it took 144 ms. More generically, the notion of abstraction is still a research topic, and it is expected that, in deployments, such procedures will be controlled by policy. It is important to note that the architecture is generic and applies regardless of the selected abstraction approach (with a trade-off in optimality): the implementation is based on the full-mesh virtual link approach, but the cPCE could deploy arbitrarily complex, policy-controlled abstraction mechanisms. Finally, the abstraction does not directly account for path diversity, in the sense that a virtual link maps to a single path. The path is nonetheless dynamically computed, so, in case of a link failure, the next iteration will select a feasible path that avoids the failed link.

C. End to End Connection Setup and Release

Figure 4 shows the main procedure for the setup of an end-to-end media channel connection between two endpoint nodes located at remote OF islands. The parent PCE NBI receives the request from the operator or Network Management System (NMS), specifying the endpoints and the requested parameters. Following such an NBI request (1), the pPCE requests the virtual shortest path tree (VSPT) from the source node to the ingress domain border nodes and the inverse VSPT (iVSPT) from the destination domain border nodes to the destination node (2), and uses the VSPTs and virtual links to perform domain selection (3). Once the domain selection is complete, the pPCE proceeds to request the segment expansion from the corresponding cPCEs (4). Each cPCE returns usable paths and frequency slots (4), although, ultimately, the pPCE is responsible for the end-to-end Routing and Spectrum Assignment (RSA).


For this, the pPCE combines the segments into an end-to-end ERO (5) and requests the establishment of the segments with PCInitiate messages (6). OpenFlow domains use a modified OpenFlow protocol to program the ROADMs, and GMPLS domains delegate the establishment to the underlying control plane (7). Finally, PCRpt messages communicate the establishment (8) within the GMPLS domain, from cPCE to pPCE and from pPCE to the NBI.

For the automated setup and release of connections in our use scenario, PCEP has been extended for initiation, delegation and topology discovery in GMPLS and OpenFlow networks. The ENDPOINTS object supports both numbered and unnumbered interfaces as well as OpenFlow datapath ids. Explicit Route Objects (EROs), defined for the RSVP-TE protocol and used within PCEP to convey path routes, can now include sub-objects that refer to OpenFlow entities (nodes as datapath ids, and interfaces). PCEP has also been extended to support RSA: to convey the selected modulation format, FEC and frequency slots, and to announce reachability and aggregated topology dissemination, notably in the cPCE-to-pPCE direction. The release of a connection is done in a similar way: the pPCE decides to release a connection and sends the appropriate PCInitiate messages to the cPCEs.
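As an illustration of step (5), the sketch below (our own simplification; the data structures are illustrative, not the PCEP object encodings) shows how the pPCE can concatenate per-domain segments into a single ERO while ensuring that the same frequency slot is usable end to end:

```python
def combine_segments(segments):
    """segments: list of (hop_list, usable_n_set) tuples returned by cPCEs."""
    usable = set.intersection(*(n_set for _, n_set in segments))
    if not usable:
        raise RuntimeError("no common frequency slot across all domains")
    n = min(usable)                                  # first-fit assignment
    ero = [hop for hops, _ in segments for hop in hops]
    return ero, n

# Example: an OpenFlow segment and a GMPLS segment sharing slots {3, 7}.
ero, n = combine_segments([(["aa-bb-cc-dd-ee-ff-00-01", "aa-bb-cc-dd-ee-ff-00-0d"],
                            {1, 3, 7}),
                           (["10.0.50.1", "10.0.50.14"], {3, 7, 9})])
```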

D. OpenFlow extensions for flexi-grid networks

The extensions of OpenFlow for Optical Circuit Switching (OCS) were partially addressed by the group that conceived the original OpenFlow specification. The document covering the extensions to the OpenFlow protocol in support of circuit switching [18] described the requirements of an OpenFlow circuit switch and extended OFP v1.0 messages and data structures to support basic circuit switching and signal adaptations, introducing the concepts of a circuit flow and a circuit switch flow table (or cross-connect table). Despite its shortcomings, the circuit switching addendum has been used, mainly in the context of research activities, as the basis to extend OpenFlow to cover optical networks, including those based on the ITU-T DWDM flexi-grid [19]:
• The initial handshake between an agent and its controller(s) is adapted to convey, in the payload of the OFP_FEATURES_REPLY message, the number of circuit ports, the supported capabilities and a variable list of port descriptors for each of the circuit ports including, mainly, the port name, identifier and basic switching type.
• This initial handshake, along with the subsequent status messages sent by the OF agent to the controller (described later), is commonly used as a mechanism for automated topology discovery and dynamic updates. This mechanism allows the controller to build a directed network connectivity graph that can be used for path computation and Routing and Wavelength / Spectrum Assignment (RWA/RSA). Of course, this does not preclude the case in which the controller authoritatively manages the network topology and the link / node status.
• The basic circuit flow table is extended to support a diversity of technologies and related parameters. For example, in a flexi-grid network, the entries contain the characterization of the switched frequency slots, in terms of nominal central frequency and frequency slot width (the so-called n and m parameters) and, optionally, signal types, modulation formats, etc.



• If neither a supervisory channel nor a discovery protocol between neighbors can be assumed (thus precluding the ability to inject test messages for port identifier correlation), local and remote port identifiers are typically pre-configured, either at the nodes or at the controller.
• A range of port number identifiers is reserved to specify mapper ports that map, e.g., Ethernet packets to Time Division Multiplexing (TDM) time-slots, used mainly in signal adaptation.
• Common procedures are clarified. To name a few: in order to guarantee the liveness of the connection between a network element and the controller, echo request and echo reply messages are used; the result of cross-connection request operations (described next) is explicitly acknowledged rather than assumed to be working; etc.
• A new message is defined and used for the configuration of cross-connections, that is, for the addition and removal of circuit flows. This message (OFPT_CFLOW_MOD) includes a structure that conveys associations of the type (incoming port, incoming wavelength channel, outgoing port, outgoing wavelength channel). The message also contains a set of actions, of limited use in the scope of circuit switching, except when inserting or removing packet flows to/from circuit flows. For optical networks, newer extensions better encode the parameters that characterize a service, such as the central wavelength in a fixed DWDM grid, or the frequency slot in a flexible grid.
• Finally, specific extensions address additional capabilities such as: central frequency and spectrum range; the number of ports and wavelength channels of switches; peering connectivity inside and across multiple domains; signal types; optical constraints; etc. A new asynchronous message (OFP_CPORT_STATUS) is used to report the dynamic status of the optical ports, adding specific new reasons to report bandwidth (i.e., wavelength) changes.
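A hypothetical byte layout for a flexi-grid cross-connection request, loosely modelled on the OFPT_CFLOW_MOD message extended with the (n, m) slot parameters, is sketched below; the field order and widths are illustrative assumptions, not the addendum's actual wire format:

```python
import struct

def cflow_mod(in_port: int, out_port: int, n: int, m: int, add: bool = True) -> bytes:
    """Build an illustrative cross-connect request payload."""
    command = 0 if add else 1              # 0: add circuit flow, 1: remove
    # n is signed (frequencies below 193.1 THz), m is unsigned (>= 1).
    return struct.pack("!HHhHH", command, in_port, n, m, out_port)

msg = cflow_mod(in_port=1, out_port=3, n=10, m=4)  # 50 GHz slot at 193.1625 THz
```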

IV. EXPERIMENTAL PERFORMANCE EVALUATION

A. Testbed description

A control plane testbed has been deployed (that is, with emulated optical hardware), composed of three domains. The source and destination domains represent local and remote DC domains, and are controlled by an integrated cPCE/OpenFlow controller, directly using the OpenFlow protocol with extensions for flexi-grid, circuit-switched optical networks. Both domains have 14 optical nodes (i.e., optical cross-connects or OXCs), with datapath ids of the form aa-bb-cc-dd-ee-ff-00-01 to -0d and ff-ee-dd-cc-bb-aa-00-01 to -0d (the topology is shown in Figure 6). On the other hand, the core transport backbone domain is controlled by a GMPLS/PCE control plane instance, with persistent connections to path computation client (PCC) agents located at each transport node. The GMPLS domain also has 14 nodes, with node ids 10.0.50.1 to 10.0.50.14 (two of these nodes are also border nodes towards the OpenFlow domains), and emulates a Japanese national flexi-grid network (see Figure 5). All links are considered basic flexi-grid optical links with the finest granularity in the selection of nominal central frequencies and frequency slot widths. All links are homogeneous, having 128 usable nominal central frequencies, ranging from n = 0 to n = 127.

Fig. 4. Main procedures and message flow for the pPCE-driven setup of an end-to-end flexi-grid connection. Two steps are clearly identified: first, the pPCE performs hierarchical path computation, requesting segment expansion. Second, the pPCE segments the ERO on a per-domain basis and delegates the establishment of the intra-domain segments to the cPCEs; the end-to-end media channel results from the concatenation of segments, since the pPCE ensures the same frequency slot. Note that both the segment expansion and the segment establishment can be parallelized to minimize latency.

Fig. 5. Emulated 14 node Japanese topology used to model the GMPLS/PCE controlled transport domain.

The BV-ROADMs do not present any asymmetry and are assumed to be able to switch from any incoming port to any outgoing port.

B. Topology aggregation and abstraction

As stated earlier, a single cPCE is located at each domain. Both approaches for topology abstraction and northbound distribution can be used: either PCEP Notification (PCNtf) messages announcing a virtual link mesh, with each link encoded using XML, or BGP-LS. In both cases, the algorithms used by the cPCE to summarize the network are the same.


Fig. 6. Emulated topology of OpenFlow domain 1, composed of 14 nodes with datapath identifiers aa-bb-cc-dd-ee-ff-00-01 to -0d. The topology of OpenFlow domain 2 is symmetrical.

Regular updates (in addition to the initial synchronization) are sent to the pPCE every 60 s. Topology aggregation results in a 3-node mesh in each OpenFlow domain and a simple mesh between the two GMPLS border nodes (in short, the size of the virtual mesh layout depends on the number of selected border nodes; several border nodes were selected in the OpenFlow domains to validate the topology summarization algorithm). Figure 7 shows the use of the BGP-LS protocol and the resulting parent TED.

Fig. 7. Topology abstraction mechanism deployed in the testbed. (a) Wireshark capture of a BGP-LS update message, sent from a cPCE to the pPCE, announcing an OpenFlow node. (b) Resulting parent topology (TED) used by the pPCE for domain selection. The topology shows the level of abstraction of each domain; for example, the GMPLS domain is seen as a potential TE link between the border nodes 10.0.50.1 and 10.0.50.2.

C. Experimental Results and analysis

To evaluate the control plane overhead, we set up and release 100 connections with random source and destination nodes in remote OpenFlow domains, this time using PCNtf messages for topology notifications, given that their XML encoding is expected to imply more overhead. The requests arrive in a deterministic manner, with 1 s between operations. All the child PCEs execute the same algorithm, based on a Dijkstra shortest path with a spectrum continuity check: it finds the shortest path with respect to the additive TE metric, maintaining the set of nominal central frequencies that are available from the source to the current pivot node. Before relaxing a node, the algorithm checks whether a sufficient number of contiguous nominal central frequencies is available from the source, the number depending on the requested spectrum width. At the parent PCE, the algorithm is essentially the same, ensuring spectrum contiguity at the parent level and using a first-fit approach for end-to-end spectrum assignment.
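A minimal sketch of this child PCE algorithm (our own reconstruction from the description above, assuming an adjacency dict with per-link sets of available nominal central frequencies; pruning states by cost alone is a simplification):

```python
import heapq
import itertools

def has_contiguous(freqs: set, count: int) -> bool:
    """True if `freqs` contains `count` consecutive central frequencies."""
    run = 0
    for n in range(min(freqs, default=0), max(freqs, default=-1) + 1):
        run = run + 1 if n in freqs else 0
        if run >= count:
            return True
    return False

def rsa_dijkstra(adj, src, dst, needed):
    """adj[u] = list of (v, te_metric, available_n_set). Returns a feasible
    shortest path, its cost, and the frequencies surviving along it."""
    tie = itertools.count()                      # tiebreaker for the heap
    heap = [(0, next(tie), src, [src], None)]
    settled = set()
    while heap:
        cost, _, u, path, freqs = heapq.heappop(heap)
        if u == dst:
            return path, cost, freqs
        if u in settled:
            continue
        settled.add(u)
        for v, metric, avail in adj.get(u, []):
            surviving = avail if freqs is None else freqs & avail
            if has_contiguous(surviving, needed):   # spectrum continuity check
                heapq.heappush(heap, (cost + metric, next(tie),
                                      v, path + [v], surviving))
    return None
```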

For the selected topologies (tens of nodes, 128 nominal central frequencies per link, Dijkstra-based algorithms), the path computation time, with the PCE running on modest hardware (e.g., an Intel(R) Core(TM)2 Duo E8400 CPU), is below 1 ms: one to two orders of magnitude smaller than the measured end-to-end setup time, which is mainly determined by network latency, and even smaller when accounting for the hardware configuration delay.

As a first validation, Figure 8 shows the Wireshark capture with the involved PCEs and the messages exchanged between the child and parent PCEs, both for the path computation part and for the actual establishment part. As expected, the test lasts roughly 229 seconds, with request 1 starting at T = 22 s, after the initial synchronization. Since operators are expected to dimension their networks so that blocking a connection is a rare event, we focus here on evaluating, in addition to the control plane overhead, the setup delay as seen by the pPCE northbound entity (e.g., the NMS). The total provisioning time can be decomposed into the sum of different components, namely i) the path computation latency, considering the domain selection and the segment expansion, and ii) the path establishment itself, delegated to the OpenFlow controller or the GMPLS control plane.

It is worth noting that, given the inherent parallelization of an OpenFlow control plane, in which an OpenFlow controller can program the cross-connections of the nodes in parallel, ensuring that all operations were successful before reporting the result, the measured setup delays (between sending the initial pPCE NBI PCInitiate and receiving the corresponding PCRpt message that acknowledges the establishment) are mainly determined by the H-PCE latency (computation and instantiation) and the setup delay within the GMPLS domain, which emulates a nationwide Japanese topology. Since the OF domains are attached to the same pair of nodes in the testbed, the setup delay within the GMPLS domain is quite constant, evaluated at approximately 30 ms as seen by the cPCE of that domain. The histogram of the path setup delay and its cumulative distribution function (c.d.f.) can be seen in Figure 9; the average setup value, from a control-plane-only perspective, is around 40 ms. That said, flexi-grid optical hardware may require a non-negligible configuration time (which can reach the order of seconds) and optical connections may need to be validated before actual traffic is transported. Consequently, these values are provided as a performance indicator, but real deployments would definitely show higher values.

During the test, the pPCE processes 4835 TCP segments (1584502 bytes, Tx = 2486 segments, Rx = 2369), corresponding to 2135 PCEP messages of several types (Open, KeepAlive, PCReq, PCRep, PCInitiate, PCRpt and PCNtf topology notifications). The average packet size, as reported by Wireshark, is 326.35 bytes, yielding 16.7 packets/s. In this test, 51% of the PCEP messages have 80-159 bytes, and 42% have 640-1279 bytes. The reason is that, in this scenario, report messages (PCRpt) and notification messages (PCNtf) are relatively large, since they convey the available central frequencies and topology information. That said, this overhead is lower when using BGP-LS, given the bitmap encoding of nominal central frequencies, which significantly reduces the message size. The table below summarizes the number of IP packets and the measured data rates in both the child-to-parent (c-p) and parent-to-child (p-c) directions, for the three involved domains.



Fig. 9. Histogram and c.d.f. of the end-to-end setup delay, as seen by a network operator.

              OpenFlow 1   GMPLS   OpenFlow 2
c-p Packets        884      599         880
p-c Packets        935      598         939
c-p Kb/s            18      5.1          18
p-c Kb/s           4.7      3.8         4.8

Finally, Figure 10 shows the histogram and c.d.f. of the IPv4 packet sizes (for TCP/PCEP).



Fig. 8. Wireshark capture showing the process of setting up an end-to-end flexi-grid connection.

Fig. 10. Histogram and c.d.f. of the IPv4 packet sizes in the 100-connection test.

V. CONCLUSIONS

We have designed, implemented and validated an SDN-based orchestration mechanism that addresses the problem of GMPLS and OpenFlow interworking in the scope of multi-domain networks with heterogeneous control planes, as illustrated by our main DC interconnection use case. The mechanism involves the definition of functional and protocol architectures, which are based on the concepts of a hierarchical, stateful PCE and network abstraction. The approach requires specific control plane extensions, namely to address flexi-grid DWDM networks and to support OpenFlow identifiers. The proposed protocol architecture relies heavily on the mature, stable and relatively feature-complete PCEP protocol, either with extensions for topology discovery, management and abstraction, or combined with a dedicated protocol such as BGP-LS.

We have implemented and deployed the architecture in a control plane testbed. Although there is certainly value in experimentally validating and demonstrating the approach,


extensive numerical results assessing its applicability to optical networks and evaluating its scalability are still required. We have obtained meaningful indicators, such as the path computation and provisioning delays (from a control plane perspective) and the order of magnitude of the control plane overhead that the approach requires. In view of the results, we believe that the architecture can be considered an industry-ready solution to the GMPLS/OpenFlow interworking problem.

ACKNOWLEDGMENTS

Work funded partially by the Spanish MINECO project FARO (TEC2012-38119), the EU FP7 project IDEALIST (317999) and the EU-Japan FP7 project STRAUSS (FP7-ICT-2013-EU-Japan 608528). The authors would like to thank the members of the PACE CSA project (619712) regarding PCE applicability and best current practices.


REFERENCES

[1] S. Azodolmolky, R. Nejabati, E. Escalona, R. Jayakumar, N. Efstathiou, and D. Simeonidou, "Integrated OpenFlow-GMPLS control plane: an overlay model for software defined packet over optical networks," OSA Optics Express, vol. 19, no. 26, pp. B421–B428, December 2011.
[2] R. Casellas, R. Muñoz, R. Martínez, R. Vilalta, L. Liu, T. Tsuritani, I. Morita, V. López, O. González de Dios, and J. P. Fernández-Palacios, "SDN based provisioning orchestration of OpenFlow/GMPLS flexi-grid networks with a stateful hierarchical PCE," in Proc. Optical Fiber Communication Conference and Exposition (OFC), San Francisco, California, USA, March 2014.
[3] A. Farrel, J. P. Vasseur, and J. Ash, "A Path Computation Element (PCE)-Based Architecture," RFC 4655, Internet Engineering Task Force, Aug. 2006.
[4] D. King and A. Farrel, "The Application of the Path Computation Element Architecture to the Determination of a Sequence of Domains in MPLS and GMPLS," RFC 6805, Internet Engineering Task Force, Nov. 2012. [Online]. Available: http://www.ietf.org/rfc/rfc6805.txt
[5] R. Casellas, R. Martínez, R. Muñoz, L. Liu, T. Tsuritani, I. Morita, and M. Tsurusawa, "Dynamic virtual link mesh topology aggregation in multi-domain translucent WSON with hierarchical-PCE," OSA Optics Express, vol. 19, no. 26, December 2011.
[6] R. Casellas, R. Martínez, R. Muñoz, L. Liu, T. Tsuritani, and I. Morita, "Inter-layer traffic engineering with hierarchical-PCE in MPLS-TP over wavelength switched optical networks," OSA Optics Express, vol. 20, no. 28, December 2012.
[7] E. Crabbe, J. Medved, I. Minei, and R. Varga, "PCEP extensions for stateful PCE," Internet Engineering Task Force, March 2013.
[8] E. Crabbe, I. Minei, S. Sivabalan, and R. Varga, "PCEP extensions for PCE-initiated LSP setup in a stateful PCE model," Internet Engineering Task Force, April 2013.
[9] ITU-T Recommendation G.694.1, "Spectral grids for WDM applications: DWDM frequency grid," 2012.
[10] ITU-T Recommendation G.872, "Architecture of optical transport networks," 2012.
[11] ITU-T Recommendation G.800, "Unified functional architecture of transport networks," 2012.
[12] R. Casellas, R. Martínez, R. Muñoz, R. Vilalta, L. Liu, T. Tsuritani, and I. Morita, "Control and management of flexi-grid optical networks with an integrated stateful PCE and OpenFlow controller [Invited]," IEEE/OSA Journal of Optical Communications and Networking, vol. 5, no. 10, pp. A57–A65, November 2013.
[13] R. Casellas, R. Martínez, R. Muñoz, L. Liu, T. Tsuritani, and I. Morita, "Dynamic provisioning via a stateful PCE with instantiation capabilities in GMPLS-controlled flexi-grid DWDM networks," in Proc. European Conference and Exhibition on Optical Communication (ECOC), 2013.
[14] M. Chen, R. Zhang, and X. Duan, "OSPF Extensions in Support of Inter-Autonomous System (AS) MPLS and GMPLS Traffic Engineering," RFC 5392, Internet Engineering Task Force, Jan. 2009. [Online]. Available: http://www.ietf.org/rfc/rfc5392.txt
[15] F. Paolucci, O. González de Dios, R. Casellas, S. Duhovnikov, P. Castoldi, R. Muñoz, and R. Martínez, "Experimenting hierarchical PCE architecture in a distributed multi-platform control plane testbed," in Proc. Optical Fiber Communication Conference (OFC), March 2012.
[16] H. Gredler, J. Medved, S. Previdi, A. Farrel, and S. Ray, "North-bound distribution of link-state and TE information using BGP," Internet Engineering Task Force, May 2014.
[17] M. Cuaresma, F. Muñoz, S. Martínez, A. Mayoral, O. González de Dios, V. López, and J. Fernández-Palacios, "Experimental demonstration of H-PCE with BGP-LS in elastic optical networks," in Proc. European Conference and Exhibition on Optical Communication (ECOC), 2013.
[18] S. Das, "Extensions to the OpenFlow protocol in support of circuit switching," addendum to OpenFlow Protocol Specification (v1.0), June 2010.
[19] L. Liu, R. Muñoz, R. Casellas, T. Tsuritani, R. Martínez, and I. Morita, "OpenSlice: an OpenFlow-based control plane for spectrum sliced elastic optical path networks," Optics Express, vol. 21, no. 4, pp. 4194–4204, 2013.

Ramon Casellas (SM'12) graduated in telecommunications engineering in 1999 from both the UPC-BarcelonaTech university and ENST Telecom ParisTech, within an Erasmus/Socrates double-degree program. After working as an undergraduate researcher at both France Telecom R&D and British Telecom Labs, he completed a Ph.D. degree in 2002. He worked as an associate professor at the networks and computer science department of the ENST (Paris) and joined the CTTC Optical Networking Area in 2006, with a Torres Quevedo research grant. He currently is a senior research associate and the coordinator of the ADRENALINE testbed. He has been involved in several international R&D and technology transfer projects. His research interests include network control and management, the GMPLS/PCE architecture and protocols, software defined networking and traffic engineering mechanisms. He contributes to IETF standardization within the CCAMP and PCE working groups. He is a member of the IEEE Communications Society and a member of the Internet Society.

Raül Muñoz (SM'12) graduated in telecommunications engineering in 2001 and received a Ph.D. degree in telecommunications in 2005, both from the Universitat Politècnica de Catalunya (UPC), Spain. After working as an undergraduate researcher at Telecom Italia Lab (Turin, Italy) in 2000, and as an assistant professor at the UPC in 2001, he joined the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC) in 2002. Currently, he is a senior research associate. Since 2000, he has participated in several R&D projects (FP7 IP STRONGEST and NoE BONE, FP6 IP NOBEL and NOBEL2, FP5 LION, etc.) and technology transfer projects. He leads the Spanish project DORADO, and previously led the projects RESPLANDOR and GAUDI. He has published over 50 journal and international conference papers in this field. His research interests include the GMPLS architecture, protocols and traffic engineering algorithms (provisioning, protection and restoration strategies) for highly dynamic optical transport networks.

Ricardo Martínez graduated in telecommunications engineering in 2002 and received a Ph.D. degree in telecommunications in 2007, both from the Technical University of Catalonia, Spain. Since June 2007 he has been a research associate in the CTTC Optical Networking Area. He has participated in several Spanish, EU-funded (EU FP6, FP7 and CELTIC) and industrial projects in the context of optical networking. His research interests include the GMPLS architecture, distributed control schemes, and packet and optical transport networks. He has published over 80 papers in scientific conferences and peer-reviewed journals.


Ricard Vilalta obtained his telecommunications engineering degree from the Technical University of Catalonia in 2007. He has also studied Audiovisual Communication at UOC (Open University of Catalonia) and holds a master's degree in technology-based business innovation and administration from Barcelona University (UB). During 2006-2009, he worked as a research engineer and software developer at TriaGnoSys GmbH (Munich, Germany). In 2009, he obtained a Torres Quevedo research grant at E2M (Sabadell). Since 2010, he has been a researcher at CTTC, in the Optical Networks and Systems Department. In 2013, he obtained his Ph.D. at BarcelonaTech (UPC). His research is focused on optical network virtualization and optical OpenFlow.

Lei Liu (M'11) received the B.E. and Ph.D. degrees from the Beijing University of Posts and Telecommunications (BUPT), Beijing, China, in 2004 and 2009, respectively. From 2009 to 2012, he was a research engineer at KDDI R&D Laboratories Inc., Japan, where he was engaged in research and development on intelligent optical networks and their control and management technologies. He is now with the University of California, Davis, USA. His main research interests include wavelength switched optical networks, elastic optical networks, network control and management techniques (such as MPLS/GMPLS, PCE and OpenFlow), software-defined networking (SDN), network architectures and grid/cloud computing. He has coauthored more than 100 peer-reviewed papers on these topics in international journals and conferences, and is the first author of more than 50 of them. He has presented several invited talks at international conferences, including prestigious ones such as ECOC'11, ECOC'12 and Globecom'12.

Víctor López received the M.Sc. (Hons.) degree in telecommunications engineering from the Universidad de Alcalá de Henares, Spain, in 2005 and the Ph.D. (Hons.) degree in computer science and telecommunications engineering from the Universidad Autónoma de Madrid (UAM), Madrid, Spain, in 2009. The results of his Ph.D. thesis were awarded the national COIT prize 2009 of the Telefónica foundation in networks and telecommunications systems. In 2004, he joined Telefónica I+D as a researcher, where he was involved in next generation networks for metro, core and access, and participated in several European Union projects (NOBEL, MUSE, MUPBED). In 2006, he joined the High-Performance Computing and Networking Research Group (UAM) as a researcher in the ePhoton/One+ Network of Excellence. He worked as an assistant professor at UAM, where he was involved in optical metro-core projects (BONE, MAINS). In 2011, he joined Telefónica I+D as a technology specialist. He has co-authored more than 100 publications and contributed to IETF drafts. His research interests include the integration of Internet services over IP/MPLS and optical networks, and control plane technologies (PCE, SDN, GMPLS).

Óscar González de Dios received his Ph.D. with honors from the University of Valladolid (2012) and his M.S. in telecommunications engineering also from the University of Valladolid (2000). He has 14 years of experience at Telefónica I+D, where he has been involved in numerous European R&D projects (recently STRONGEST, ONE and IDEALIST). He has coordinated the BANITS 2 project and acted as WP leader in others. He has co-authored more than 40 research papers and is currently active in the IETF CCAMP and PCE WGs, as well as ITU-T Study Group 15. His main research interests include photonic networks, flexi-grid, inter-domain routing, PCE, OBS, automatic network configuration, end-to-end MPLS, TCP performance and SDN. He was Guest Editor for the IEEE Communications Magazine Special Issue on the Feature Topic "Advances in Network Planning," published in January and February 2014.

Takehiro Tsuritani was born in Ishikawa, Japan, in November 1971. He received the M.E. and Ph.D. degrees in electronics engineering from Tohoku University, Miyagi, Japan, in 1997 and 2006, respectively. He joined Kokusai Denshin Denwa (KDD) Company, Limited (currently KDDI Corporation), Tokyo, Japan, in 1997. Since 1998, he has been working at their Research and Development Laboratories (currently KDDI R&D Laboratories Inc.) and has been engaged in research on long-haul wavelength division multiplexing (WDM) transmission systems and the design and modeling of photonic networks.

Juan Pedro Fernández-Palacios received the M.S. in telecommunications engineering from the Polytechnic University of Valencia in 2000. In September 2000 he joined Telefónica I+D, where he is currently leading the Core Network Evolution unit. He has been involved in several European projects such as NOBEL, NOBEL-2, STRONGEST and MAINS, as well as in the design of core network architectures in the Telefónica Group. Currently he is coordinating the FP7 project IDEALIST as well as the standardization activities within the CAON cluster, where he is acting as co-chair.

Itsuro Morita received the B.E., M.E., and Dr. Eng. degrees in electronics engineering from the Tokyo Institute of Technology, Tokyo, Japan, in 1990, 1992, and 2005, respectively. He joined Kokusai Denshin Denwa (KDD) Company Ltd. (currently KDDI Corporation), Tokyo, in 1992 and, since 1994, he has been with their Research and Development Laboratories. He has been engaged in research on long-distance and high-speed optical communication systems, and photonic networks. In 1998, he was on leave at Stanford University, Stanford, CA.
