Design and Implementation of Reliable Multi-layer Service Network

Shigeo Urushidani and Michihiro Aoki
Research and Development Center for Academic Networks
National Institute of Informatics, NII
2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan
{urushi and aoki-m}@nii.ac.jp

Abstract—This paper describes the design and implementation of the new Japanese academic backbone network, called SINET4, which started full operation in April 2011. The network was designed to serve as a higher-speed and more reliable infrastructure than the previous network, SINET3, as well as to provide a variety of multi-layer network services. The network was subjected to the disastrous March 11 earthquake during the migration from SINET3 to SINET4 but did not suffer serious damage thanks to the new network design. We present the network design concept, the networking technologies applied, and the effects of the earthquake.

Keywords—multi-layer services; converged network, high availability

I. INTRODUCTION

The required specifications for leading-edge academic backbone networks [1–3] are quite different from those for regular commercial networks in terms of network bandwidth, network performance, service integration, and so on. The reason for this is that the former networks accommodate cutting-edge research and education resources, such as huge experimental devices, e.g., LHD [4], LHC [5], and ITER [6]; distributed advanced sensors and observation devices, e.g., radio telescopes for e-VLBI [7] and earthquake sensors [8]; supercomputers for shared use, e.g., the K computer [9]; and advanced communication devices, e.g., non-compressed HDTV transmission systems [10] and t-Room [11]. Very-high-speed backbone networks with high-speed international links are therefore essential for researchers in universities and research institutions who use the above-mentioned devices. Academic backbone networks are also required to provide virtual private networks (VPNs) in multiple layers for collaborative research and to provide network resources on demand, such as bandwidth on demand, for experiments. The Japanese academic backbone network, called the Science Information Network (SINET), started operating as an Internet backbone network in 1992, and the third version (SINET3 [12, 13]), which was launched in 2007, gradually expanded its service menu over a single network platform by introducing several advanced networking technologies. As the use of advanced network services has increased nationwide, the network is required to become faster and more reliable. Regional disparities in accessibility to the network must also be alleviated.

In this paper, we describe the design and implementation of the newly launched Japanese academic backbone network, called SINET4. The remainder of this paper is organized as follows. Section II describes the required specifications, design concept, and network structure. Section III describes the networking technologies applied to shape the network design. Section IV details the technologies for high reliability. Section V shows the effect of the Great East Japan Earthquake during the migration from SINET3 to SINET4. In Section VI, we present our conclusion.

II. NETWORK ARCHITECTURE

A. Overall Requirements
SINET4 is an academic backbone network operated by the National Institute of Informatics (NII) for more than 700 universities and research institutions. It provides a variety of network services in terms of layer, VPN, QoS, and so on (Table I). The major access interface to the network has been shifting from Gigabit Ethernet (GE) to 10 Gigabit Ethernet (10GE): more than 100 10GE access interfaces were in use as of April 2011. As a basic network service, the network provides commercial Internet access via major domestic Internet exchange points (JPIX and JPNAP) and contracted global ISPs. As other IP services, the network provides IPv6 services (native/dual-stack/tunnel), IPv4 full-route information for BGP users and network researchers, IPv6 multicast services mainly for remote lectures, and priority-based QoS services based on IP addresses and application port numbers. As VPN services, the network provides L3VPN, L2VPN, virtual private LAN service (VPLS), and L1VPN services. Use of the L3VPN service, which was the only VPN service in 2003, has been shifting to L2VPN/VPLS services, but about 30 projects still use L3VPN. Layer-2-based VPN services, which are L2VPN for two sites and VPLS for more than two sites, are currently the fastest growing VPN services. We are now developing L2VPN/VPLS on-demand (L2OD) services with VLAN-based QoS control so that approved project users can freely form virtual experimental networks. We also provide layer-1 on-demand (L1OD) services, which provide end-to-end dedicated layer-1 paths during specified durations with a granularity of 150 Mbps [13]. The L1OD service has set up and released more than 1,000 layer-1 paths so far. For example, an e-VLBI project [7] plans to use an 8-Gbps bandwidth over a 10GE interface per antenna at night and on weekends this year.

TABLE I. NETWORK SERVICES IN SINET4

  Service menu                            Status    Note
  Access interface
    E/FE/GE (T)                           Yes
    GE (LX)                               Yes
    10GE (LR)                             Yes
  Layer-3 service
    Commercial Internet access            Yes       Via IXs and global ISPs
    IPv6                                  Yes       Native/dual-stack/tunnel
    IPv4 full-route information           Yes
    IPv4/IPv6 multicast                   Yes
    IPv4/IPv6 multicast (QoS)             Yes
    Application-based QoS                 Yes
    L3VPN                                 Yes
    L3VPN (QoS)                           Yes
    Multicast in L3VPN                    Planned
  Layer-2 service
    L2VPN/VPLS                            Yes       Fastest growing service
    L2VPN/VPLS (QoS)                      Yes
    L2VPN/VPLS on demand                  Planned
  Layer-1 service
    L1 on demand                          Yes       Over 1,000 paths so far
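As a concrete illustration of the 150-Mbps granularity of the L1OD service, the following minimal sketch rounds a requested bandwidth, such as the 8-Gbps e-VLBI transfer mentioned above, up to whole VC-4 units. The function name and the 64-VC-4 cap assumed for a 10GE client are illustrative, not part of the actual L1OD implementation.

```python
import math

VC4_MBPS = 150  # VC-4 granularity used by the L1OD service (nominally about 149.76 Mbps)

def vc4_units(requested_mbps: float) -> int:
    """Round a requested layer-1 bandwidth up to whole VC-4 units."""
    return math.ceil(requested_mbps / VC4_MBPS)

# Example: the 8-Gbps e-VLBI transfer over a 10GE access interface.
request_mbps = 8000
units = vc4_units(request_mbps)
print(f"{request_mbps} Mbps -> {units} x VC-4 = {units * VC4_MBPS} Mbps")  # 54 x VC-4 = 8100 Mbps

# Assumed cap: a 10GE client mapped via GFP/VCAT carries at most 64 VC-4s (roughly an STM-64 payload).
assert units <= 64, "request exceeds what a single 10GE access interface can carry"
```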

As the academic lifeline network in Japan, SINET4 is required to be more reliable and stable than ever before. In the previous versions, edge nodes that accommodated the access links of universities and research institutions were located at selected user organizations called node organizations. While the node organizations enjoyed a high-speed access environment, there were some problems regarding the stability and operability of the edge nodes. For example, each edge node was negatively affected by the annual planned power outages required for legally mandated inspections of the electric systems of the corresponding node organization. We needed to send a power-generator truck to maintain connectivity for the other organizations connected to the edge node during these outages. We also had a limited entrance time window for the maintenance of each edge node, e.g., from 9 AM to 5 PM on weekdays. Even for an emergency response, we sometimes needed to spend a significant amount of time negotiating with the security guards. We were also worried about possible damage in the case of natural disasters such as earthquakes. We therefore decided to place every edge node at commercial data centers that resolve these problems. We also had to alleviate regional disparities in accessibility to our network. Historically, 13 of the 47 prefectures did not have edge nodes, and user organizations in those 13 prefectures had to connect their access links to the nearest edge nodes located in other prefectures, which resulted in large differences in access link bandwidth and cost between user organizations. We therefore decided to place edge nodes in all prefectures, while optimizing the network topology as well as moving node locations from node organizations to commercial data centers. With SINET4, we aimed to achieve higher network bandwidth at a reasonable cost; to provide a variety of network services nationwide; to make the network, including the edge nodes, even more stable than SINET3; and to alleviate the regional disparities in accessibility to the network.

B. Structural Features of SINET4
Figure 1 shows the structural change from SINET3 (left side) to SINET4 (right side). In the structure of SINET3, there were 62 edge nodes, which were located at node organizations and accommodated the access links of user organizations, and 12 core nodes, which were co-located at telecom carriers' buildings and transferred the traffic from the edge nodes. The core links between the core nodes had a maximum speed of 40 Gbps (STM256), but only between Tokyo, Nagoya, and Osaka. The bandwidths of the edge links between the edge and core nodes were 1 to 20 Gbps. In the structure of SINET4 (right side), both the edge nodes and the core nodes are placed at commercial data centers. Here, each core node also accommodates access links of user organizations. Each prefecture has one edge/core node in principle, and three prefectures have two nodes. At the same time, we refined the entire network topology to optimize the node locations. We aggregated SINET3's 12 core nodes into 8 and its 62 edge nodes into 29, which eventually resulted in a very-high-speed backbone network at a reasonable cost. We also decided to place an additional 13 edge nodes in prefectures that previously had none. To date, we have already installed 4 of these 13 nodes and will complete the installation of the remaining 9 before March 2012. As for core links, SINET4 has 40-Gbps (STM256) links from Sendai to Fukuoka as of April 2011, and will add a direct 40-Gbps link between Tokyo and Osaka and upgrade the link between Tokyo and Sapporo to 40 Gbps, which will eventually lead to a nationwide 40-Gbps backbone (Fig. 2). The core links are configured to have redundant routes for high service availability and form five loops as a whole. As for edge links, each link bandwidth depends on the expected traffic volume from the area and ranges from a minimum of 2.4 Gbps (STM16) to a maximum of 40 Gbps (STM256). Here, every edge/core link is a dispersed duplexed link, which we explain in more detail in Section IV. As for access links between the edge/core nodes and the previous node organizations, SINET4 introduced WDM-based access links, composed of dark fibers and WDM devices, with a maximum speed of 40 Gbps (four 10GEs). Interfaces can easily be added to the WDM devices on a 10GE basis as requests arise. This combination of dark fiber and WDM devices can transmit data up to about 40 km without amplifiers, which covers most of the access links between the data centers and the previous node organizations; in fact, that was one of the criteria for data center selection. These WDM-based access links will be kept by NII for several years for smooth migration of user access links from SINET3 edge nodes to SINET4 nodes. We also carried out joint procurement of dark-fiber-based access links for other user organizations, which allowed them to get faster access links (interfaces of either 1GE or 10GE) at reasonable cost. In order to provide a variety of network services, SINET4 deployed node architecture and technologies similar to those used in SINET3, and we procured new equipment for the edge and core nodes. The details are described in Sections III and IV. As for new network services, we have focused on the expansion of resource on-demand services, such as L1/L2 on-demand services, in collaboration with equipment vendors. As an optional extra, we can support cloud-type services by collaborating with the data centers.
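The statement that the core links "form five loops" amounts to saying that no single core-link failure can partition the core. The sketch below checks that property with a simple graph search; the node and link lists are only an illustrative approximation of the topology in Fig. 2, not the exact SINET4 link set.

```python
# Illustrative core nodes and links (an approximation of Fig. 2, not the exact SINET4 link set).
NODES = ["Sapporo", "Sendai", "Tokyo", "Kanazawa", "Nagoya", "Osaka", "Hiroshima", "Fukuoka"]
LINKS = [("Sapporo", "Sendai"), ("Sapporo", "Tokyo"), ("Sendai", "Tokyo"),
         ("Tokyo", "Kanazawa"), ("Tokyo", "Nagoya"), ("Kanazawa", "Osaka"),
         ("Nagoya", "Osaka"), ("Osaka", "Hiroshima"), ("Osaka", "Fukuoka"),
         ("Hiroshima", "Fukuoka")]

def connected(nodes, links):
    """Depth-first search: True if every node is reachable from the first one."""
    adjacency = {n: set() for n in nodes}
    for a, b in links:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for neighbor in adjacency[stack.pop()]:
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return len(seen) == len(nodes)

# The looped core topology should survive the loss of any single core link.
for failed in LINKS:
    remaining = [l for l in LINKS if l != failed]
    assert connected(NODES, remaining), f"failure of {failed} would partition the core"
print("core topology tolerates any single core-link failure")
```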

Figure 1. Structural change from SINET3 to SINET4.

Figure 2. Network topology of SINET4 (domestic core and edge nodes, plus international links to Los Angeles, New York, and Singapore).

TABLE II. MAIN CRITERIA FOR DATA CENTER SELECTION

  Neutrality:
    - No restrictions for telecom carriers
    - No restrictions for equipment vendors
  Secure power supply:
    - No interruption to power supply due to planned power outages under legal mandatory inspection
    - Power supply from an emergency power supply system for at least ten hours in case of blackouts
  Natural disaster resistance:
    - Endurance to earthquakes equivalent in intensity to the Great Hanshin Awaji Earthquake in 1995
    - Located 5 meters or more above sea level for seaboard cities
  Security:
    - Access securely controlled 24 hours a day, 7 days a week, 365 days a year
    - Emergency admission within 2 hours of an admission request
  Location:
    - Preferably located near previous node organizations so as to install WDM-based access links without amplifiers

C. Criteria for Data Centers
We selected commercial data centers to improve the reliability and operability of the network by defining several criteria (Table II). First, the data centers must be neutral with respect to telecom carriers and equipment vendors so that all user organizations can connect their own access links, and sometimes small equipment, to our edge nodes. Next, the data centers must be resistant to blackouts and natural disasters. We required a secure power supply from an emergency power supply system for at least ten hours in the case of blackouts, as well as no interruption to the power supply due to planned power outages under legal mandatory inspection. We also required endurance to earthquakes equivalent in intensity to the Great Hanshin Awaji Earthquake in 1995. As for security, we required emergency admission within 2 hours in addition to secure access control 24 hours a day, 7 days a week, 365 days a year. We also required that the data centers preferably be located near the previous node organizations so that WDM-based access links could be installed without amplifiers between the data centers and the previous node organizations.
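A compact way to apply the criteria of Table II during site selection is to evaluate each candidate programmatically. The sketch below is purely illustrative (the class and field names, and the 40-km figure for amplifier-free WDM reach taken from Section II.B, are assumptions, not an actual NII tool); since the location item is only a preference in the paper, it is reported separately from the required criteria.

```python
from dataclasses import dataclass

@dataclass
class DataCenterCandidate:
    """Illustrative record of a candidate site, mirroring the Table II items."""
    carrier_neutral: bool
    vendor_neutral: bool
    exempt_from_planned_outages: bool
    emergency_power_hours: float
    quake_resistant: bool            # endurance comparable to the 1995 Great Hanshin Awaji Earthquake
    elevation_m: float               # checked only for seaboard cities
    staffed_access_control_24x365: bool
    emergency_admission_hours: float
    km_to_previous_node_org: float   # WDM-based access links reach about 40 km without amplifiers

def evaluate(dc: DataCenterCandidate, seaboard: bool) -> tuple[bool, bool]:
    """Return (meets the required criteria, meets the location preference)."""
    required = (dc.carrier_neutral and dc.vendor_neutral
                and dc.exempt_from_planned_outages
                and dc.emergency_power_hours >= 10
                and dc.quake_resistant
                and (not seaboard or dc.elevation_m >= 5)
                and dc.staffed_access_control_24x365
                and dc.emergency_admission_hours <= 2)
    preferred = dc.km_to_previous_node_org <= 40
    return required, preferred
```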

III. NETWORK TECHNOLOGIES FOR MULTI-LAYER SERVICES

A. Service Provision by Virtual Service Networks
SINET4 provides a variety of multi-layer network services on a single network platform. In order to avoid any instability due to functional upgrades or failure recovery actions of coexisting network services, the network forms virtually separated service networks for each layer, VPN, and so on. Currently it has five virtual service networks, for IPv4/IPv6 dual-stack (including commercial Internet access), L3VPN, L2VPN/VPLS, L2OD, and L1OD services (Fig. 3). Here, we use the same virtual service network for L2VPN and VPLS. VPNs for each research project are formed in the corresponding virtual service networks. Each virtual service network applies its own routing and signaling protocols and has its own high-availability functions in case of link and node failure. Although this virtual separation enables us to manage each service network independently, we also need to manage how the physical network resources are assigned between the layer-2/3 and layer-1 services. The L1OD server (described later) manages this resource assignment by setting the bandwidth of each link that is available for L1OD services.

Figure 3. Virtual service network separation.
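Because the L1OD server arbitrates physical capacity between the layer-1 on-demand service and the packet services by setting the bandwidth available to L1OD on each link, its bookkeeping can be pictured as the per-link budget below. This is only a sketch under assumed names; LinkBudget and its methods are not the real server's API.

```python
from dataclasses import dataclass

@dataclass
class LinkBudget:
    """Per-link bandwidth bookkeeping between L1OD paths and the layer-2/3 path (illustrative)."""
    total_mbps: int          # physical link capacity, e.g. 40000 for an STM256 core link
    l1od_limit_mbps: int     # share the operator makes available to L1OD on this link
    l1od_in_use_mbps: int = 0

    def can_admit(self, request_mbps: int) -> bool:
        return self.l1od_in_use_mbps + request_mbps <= self.l1od_limit_mbps

    def admit(self, request_mbps: int) -> int:
        """Admit an L1OD request; return the bandwidth left for layer-2/3 traffic."""
        if not self.can_admit(request_mbps):
            raise ValueError("insufficient L1OD bandwidth on this link")
        self.l1od_in_use_mbps += request_mbps
        # The layer-2/3 path is then shrunk hitlessly (LCAS) to whatever L1OD is not using.
        return self.total_mbps - self.l1od_in_use_mbps

# Example: a 40-Gbps core link with 10 Gbps offered to L1OD.
link = LinkBudget(total_mbps=40_000, l1od_limit_mbps=10_000)
print(link.admit(8_100))   # 8.1 Gbps (54 x VC-4) admitted; 31,900 Mbps left for layer-2/3
```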

B. Networking Technologies for Each Network Service
Figure 4 shows the main components and technologies used for accommodating the above-mentioned virtual service networks in SINET4. Each edge node is composed of a layer-1 switch and a layer-2 multiplexer, and each core node is composed of layer-1 switches and an IP router. Through recent procurements, we decided to use NEC's layer-1 switches (UN5000), Alaxala's layer-2 multiplexers (AX6600), and Juniper Networks' IP routers (MX960). The access links of user organizations are connected to the edge nodes with Ethernet-family interfaces. For user organizations using WDM access links, interfaces for layer-2/3 services and those for layer-1 services are multiplexed at the campuses and separated at the data centers. The former interfaces are connected to the layer-2 multiplexers, and the latter are connected to the layer-1 switches. Each layer-2 multiplexer tags Ethernet packets with internal VLAN tags corresponding to each virtual service network. Each edge layer-1 switch receives the layer-2/3 service packets over 10GE interfaces and accommodates them into a layer-1 path for layer-2/3 services between the edge and core layer-1 switches by using generic framing procedure (GFP) [14] and virtual concatenation (VCAT) [15] technologies. The opposite IP router receives and distributes the layer-2/3 service packets to its logical systems (or virtual routers) corresponding to each virtual service network after examining their VLAN tags. These VLAN tags are removed for layer-3 service packets but not for layer-2 service packets in the IP router. For each layer-2/3 VPN packet, the IP router uses multi-protocol label switching (MPLS) labels to transfer the packet. The IP router then attaches to each service packet another VLAN tag corresponding to each virtual service network between the IP routers. The core layer-1 switch receives the layer-2/3 service packets over 10GE interfaces and accommodates them into a layer-1 path for layer-2/3 services between core layer-1 switches by using GFP and VCAT. When bandwidth is needed for layer-1 paths on the same links, the bandwidth of each layer-1 path for layer-2/3 services can be varied without any packet loss by using the link capacity adjustment scheme (LCAS) [16]. Here, we use VC-4 (about 150 Mbps) as the bandwidth increase/decrease granularity.
While each layer-2/3 VPN is established statically, a layer-1 VPN using layer-1 paths is established on demand because layer-1 services require dedicated network resources. For this dynamic resource assignment, we developed an L1OD server that receives user requests, calculates appropriate routes, controls the layer-1 switches through the layer-1 operation system via a CORBA interface [19], and manages the network resources [13]. On-demand layer-1 paths are created by users directly making requests to the L1OD server via a simple Web screen, through which the destinations, durations, and route attributes (minimum-delay routes or maximum-bandwidth routes) are specified. The L1OD server establishes these layer-1 paths only for the specified durations. The layer-1 switches set up and release these layer-1 paths by using the generalized MPLS (GMPLS) protocols [17, 18] between them. Depending on the required bandwidth and the available bandwidth of each link for L1OD services, the L1OD server changes the bandwidths of the layer-1 paths for layer-2/3 services by using LCAS. We are also developing an L2OD server that sets up layer-2 paths in response to user requests via a Web screen similar to that of the L1OD service. We will use the NETCONF interface [20] to control the layer-2 multiplexers and the IP routers.

Figure 4. Network components and technologies.
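Putting the pieces of the L1OD flow together (Web request, VC-4 rounding, hitless LCAS resize of the layer-2/3 path, and GMPLS path setup for the requested duration), a schematic of the server-side handling might look like the sketch below. Here compute_route, adjust_l23_path, and setup_gmpls_path are hypothetical placeholders for the real route calculation, LCAS control, and CORBA/GMPLS signalling described in the text, not actual interfaces of the L1OD server.

```python
from dataclasses import dataclass
from datetime import datetime
import math

VC4_MBPS = 150  # L1OD bandwidth granularity

@dataclass
class L1ODRequest:
    src: str                        # e.g. an e-VLBI antenna site
    dst: str                        # e.g. the correlator site
    bandwidth_mbps: int
    start: datetime
    end: datetime
    route_attr: str = "min_delay"   # or "max_bandwidth", as offered on the Web screen

def handle_request(req, topology, compute_route, adjust_l23_path, setup_gmpls_path):
    """Illustrative admission flow; the three callables stand in for the real server internals."""
    units = math.ceil(req.bandwidth_mbps / VC4_MBPS)        # round up to VC-4 granularity
    route = compute_route(topology, req.src, req.dst, units, req.route_attr)
    if route is None:
        return None                                         # admission control rejects the request
    for link in route:
        adjust_l23_path(link, shrink_by_vc4=units)          # hitless resize of the layer-2/3 path via LCAS
    setup_gmpls_path(route, units, req.start, req.end)      # GMPLS signalling between the layer-1 switches
    return route
```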

IV. DESIGN AND TECHNOLOGIES FOR HIGH AVAILABILITY

A. Link Redundancy and Link-Down Detection
We constructed every edge/core link as a dispersed duplexed link composed of primary and secondary (or standby) circuits, each of which takes a route that is geographically different from the other. For example, the primary circuit of the core link between Tokyo and Sapporo runs along the Pacific Ocean side and the secondary circuit runs along the Japan Sea side. When a failure occurs on the route of the primary circuit, the secondary circuit becomes active within 50 msec. If both the primary and secondary circuits fail, the link goes down. The layer-1 switches detect the link failure and inform the IP routers or layer-2 multiplexers by forcing the corresponding client link down (Fig. 5). Triggered by this, the IP routers quickly divert the traffic to other routes by using OSPF or MPLS functions (described later).

Figure 5. Link redundancy and link-down detection.
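The behaviour described above (protection switch to the surviving circuit within about 50 ms, and a forced client link down only when both circuits are lost) can be summarized by the small state model below. It is a didactic sketch, not the layer-1 switch's actual logic, and the class and method names are assumptions.

```python
class DuplexedLink:
    """Illustrative model of a dispersed duplexed edge/core link."""

    def __init__(self, name):
        self.name = name
        self.circuit_up = {"primary": True, "secondary": True}
        self.client_port_up = True    # the 10GE seen by the IP router / layer-2 multiplexer

    def circuit_failed(self, which):
        self.circuit_up[which] = False
        if any(self.circuit_up.values()):
            # Layer-1 protection switch: traffic moves to the surviving circuit within ~50 ms.
            other = "secondary" if which == "primary" else "primary"
            print(f"{self.name}: switched to {other} circuit")
        else:
            # Both circuits lost: force the client port down so OSPF / MPLS FRR react at layer 2/3.
            self.client_port_up = False
            print(f"{self.name}: forced link down towards IP router")

link = DuplexedLink("Tokyo-Sendai")
link.circuit_failed("primary")     # secondary takes over
link.circuit_failed("secondary")   # forced link down -> IP-level rerouting
```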

B. High-Availability Functions for Multi-layer Services
Our backbone network has sufficient redundancy and easily enables traffic to be diverted in different directions to ensure high availability in the case of link and node failure. Each virtual service network has individual high-availability functions depending on the applied forwarding technologies. For IPv4/IPv6 dual-stack services, the corresponding logical systems perform IP route recalculation by the OSPFv2 and OSPFv3 protocols, and for L3VPN, L2VPN/VPLS, and L2OD services, the logical systems perform MPLS protection and fast reroute (FRR) [21, 22] in each virtual service network. For stable VPN service recovery between the logical systems, we in principle use protected MPLS paths composed of primary and secondary paths (Fig. 6). The route of each primary path between logical systems is the smallest-delay route in the network, and the route of each secondary path is the smallest-delay route disjoint from the corresponding primary path. Both primary and secondary routes are strictly specified manually. In addition, we use fast reroute functions, which push additional MPLS labels onto MPLS packets, for fast partial recovery and to divert traffic away from multiple failures. For a single failure, the network uses FRR for local recovery and then sends an RSVP PathErr message to the ingress logical system (Tokyo's in Fig. 6), which then carries out the MPLS protection switch. If the primary path is recovered after the failure, the ingress logical system switches the traffic back from the secondary path to the primary path within an hour of detecting the recovery. If another failure occurs on the secondary path, the logical system that detects the failure (Kanazawa's in Fig. 6) uses FRR for partial recovery. For layer-1 services, the assigned resources, including routes for layer-1 paths, are calculated by the L1OD server when performing admission control [13]. If a failure occurs before the layer-1 path setup, the L1OD server can recalculate the assigned network resources, and if a failure occurs during the service, the layer-1 switches can use GMPLS LSP rerouting functions [23], but only for mission-critical applications.

Figure 6. MPLS protection and fast reroute.
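The recovery sequence for a protected VPN path (local FRR detour, RSVP PathErr to the ingress, protection switch to the secondary path, revert to the primary within about an hour of recovery, and FRR-only handling of a further failure on the secondary) can be traced with the schematic below. The event strings and data layout are illustrative and do not represent actual router behaviour captured from the network.

```python
def on_failure(lsp, failed_link):
    """Illustrative recovery sequence for one protected MPLS LSP (event names simplified)."""
    events = []
    if failed_link not in lsp["routes"][lsp["active"]]:
        return events                                                # failure does not touch this LSP
    events.append(f"FRR: local detour pushed around {failed_link}")  # fast partial recovery
    if lsp["active"] == "primary":
        events.append("RSVP PathErr sent to ingress logical system")
        lsp["active"] = "secondary"                                  # end-to-end protection switch
        events.append("ingress switched traffic to the secondary path")
    # A further failure while already on the secondary path is covered by FRR alone.
    return events

def on_primary_recovered(lsp):
    # Revertive behaviour: traffic moves back within about an hour of detecting recovery.
    lsp["active"] = "primary"
    return ["ingress reverted traffic to the primary path (within ~1 hour)"]

lsp = {"active": "primary",
       "routes": {"primary":   ["Tokyo-Nagoya", "Nagoya-Osaka"],
                  "secondary": ["Tokyo-Kanazawa", "Kanazawa-Osaka"]}}
print(on_failure(lsp, "Nagoya-Osaka"))      # FRR, PathErr, protection switch
print(on_failure(lsp, "Kanazawa-Osaka"))    # FRR only, since traffic is already on the secondary
```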

V. EFFECTS OF THE GREAT EAST JAPAN EARTHQUAKE

The construction of SINET4, which took place while SINET3 was still operating, was completed by the end of January 2011. We started the migration from SINET3 to SINET4 in early February 2011 and completed it by the end of March 2011. To clarify, this migration means that the access links of each user organization connected to SINET3 were moved to SINET4. Fortunately, we had almost completed the migration in the Tohoku area by the time the earthquake struck the area on March 11. The backbone network itself did not suffer serious damage thanks to the highly reliable network design, including equipment housing in data centers, dispersed duplexed links, and high-availability functions for multi-layer services.

Figure 7 shows the network situation right after the earthquake, with a focus on the Sendai core node, which is connected to the edge nodes in the Tohoku area. Solid lines show the primary circuits between the nodes and broken lines show the secondary circuits. For the core links, both the primary and secondary circuits were affected between Sendai and Tokyo and between Sendai and Kanazawa, but only the primary circuits were affected, and the secondary circuits became active, between Sendai and Sapporo (in Hokkaido) and between Sapporo and Tokyo. The backbone network could therefore keep the routes between arbitrary core nodes open, and the Tohoku and Hokkaido areas were not isolated. The affected secondary circuits were repaired in about 57 hours, but the repair of the primary circuits took more than a month. For the edge links, only the primary circuits were affected, and the secondary circuits became active between Sendai and the other cities. Therefore, none of the prefectures that had edge nodes were isolated. The affected primary circuits were repaired in a couple of days. Because we installed all of the edge and core nodes in commercial data centers, the nodes survived even in areas where blackouts took place. The blackouts lasted about 96, 28, and 17 hours in Sendai, Yamagata, and Hirosaki, respectively. The three data centers in these cities continued to supply power to the equipment through emergency power supply systems, which were refueled until commercial power was restored. Such prolonged power supply from emergency power supply systems was of course a first for the data centers. The surviving links and nodes successfully diverted the IPv4/IPv6 packets to other routes by OSPFv2 and OSPFv3, saved the VPN packets by MPLS protection and FRR, and did not stop the services. As for the layer-1 services, we set the available bandwidth for L1OD services to zero as an emergency measure until the secondary circuit between Sendai and Tokyo was repaired. As the bandwidth between these cities will be upgraded to 40 Gbps by next March, this measure will not be necessary if the same situation occurs after the upgrade.

Figure 7. Network situation after the Great East Japan Earthquake.

VI. CONCLUSION

In this paper, we described the required specifications, design concept, network structure, applied transfer technologies, and high-availability functions of the new SINET4. We also reported on the effects of the Great East Japan Earthquake of March 11, 2011, and showed that our reliable network design worked very well and kept the services in operation.

ACKNOWLEDGMENT

We wish to thank all of the members of the Network Group at NII for their support of SINET4. We are also grateful to Mr. Yasuhiro Kimura of NTT Communications, Mr. Takuro Sono of IIJ, Mr. Takeshi Mizumoto of NTT East, and Mr. Akihiro Sato of NTT ME for their continuous cooperation and support.

REFERENCES

[1] Internet2: http://www.internet2.edu/.
[2] GÉANT2: http://www.geant2.net/.
[3] SINET4: http://www.sinet.ad.jp/index_en.html?lang=english.
[4] Large Helical Device (LHD): e.g., http://www.sinet.ad.jp/case-examples/nifs.
[5] ATLAS at the Large Hadron Collider (LHC): http://www.atlas.ch/.
[6] Y. Nagayama, M. Emoto, Y. Kozaki, H. Nakanishi, S. Sudo, T. Yamamoto, K. Hiraki, and S. Urushidani, "A proposal for the ITER remote participation system in Japan," Fusion Engineering and Design, vol. 85, pp. 535–539, 2010.
[7] N. Kawaguchi, "Trial on the efficient use of trunk communication lines for VLBI in Japan," 7th International eVLBI Workshop, Jun. 2008.
[8] JDXnet: http://www.sinet.ad.jp/case-examples/eri.
[9] K computer: http://www.nsc.riken.jp/project-eng.html.
[10] K. Harada, T. Kawano, K. Zaima, S. Hatta, and S. Meno, "Uncompressed HDTV over IP transmission system using ultra-high-speed IP streaming technology," NTT Technical Review, vol. 1, no. 1, pp. 84–89, Apr. 2003.
[11] t-Room: http://www.mirainodenwa.com/.
[12] S. Urushidani, S. Abe, Y. Ji, K. Fukuda, M. Koibuchi, M. Nakamura, S. Yamada, R. Hayashi, I. Inoue, and K. Shiomoto, "Design of versatile academic infrastructure for multilayer network services," IEEE JSAC, vol. 27, no. 3, pp. 253–267, Apr. 2009.
[13] S. Urushidani, K. Shimizu, R. Hayashi, H. Tanuma, K. Fukuda, Y. Ji, M. Koibuchi, S. Abe, M. Nakamura, S. Yamada, I. Inoue, and K. Shiomoto, "Implementation and evaluation of layer-1 bandwidth-on-demand capabilities in SINET3," ICC 2009, Jun. 2009.
[14] ITU-T Recommendation G.7041, "Generic framing procedure (GFP)," Aug. 2005.
[15] ITU-T Recommendation G.707, "Network node interface for the synchronous digital hierarchy (SDH)," Dec. 2003.
[16] ITU-T Recommendation G.7042, "Link capacity adjustment scheme (LCAS) for virtual concatenated signals," Mar. 2006.
[17] L. Berger, "GMPLS Signaling Resource ReserVation Protocol-Traffic Engineering (RSVP-TE) Extensions," RFC 3473, Jan. 2003.
[18] E. Mannie and D. Papadimitriou, "GMPLS Extensions for SONET and SDH Control," RFC 3946, Oct. 2004.
[19] TM FORUM 814A version 2.1, "TM FORUM MTNM implementation statement template and guidelines: NML-EML interface for management of SONET/SDH/WDM/ATM transport networks," Aug. 2002.
[20] R. Enns, "NETCONF configuration protocol," RFC 4741, Dec. 2006.
[21] V. Sharma and F. Hellstrand, "Framework for Multi-Protocol Label Switching (MPLS)-based Recovery," RFC 3469, Feb. 2003.
[22] P. Pan, G. Swallow, and A. Atlas, "Fast Reroute Extensions to RSVP-TE for LSP Tunnels," RFC 4090, May 2005.
[23] E. Mannie and D. Papadimitriou, "Recovery (Protection and Restoration) Terminology for Generalized Multi-Protocol Label Switching (GMPLS)," RFC 4427, Mar. 2006.
