Modeling and Simulation of Data Center Networks

Kashif Bilal, Samee U. Khan, Marc Manzano, Eusebi Calle, Sajjad A. Madani, Khizar Hayat, Dan Chen, Lizhe Wang and Rajiv Ranjan

K. Bilal · S. U. Khan, North Dakota State University, Fargo, ND, USA (e-mail: [email protected])
M. Manzano · E. Calle, University of Girona, Girona, Spain
S. A. Madani · K. Hayat, COMSATS Institute of Information Technology, Islamabad, Pakistan
D. Chen · L. Wang, Chinese Academy of Sciences, Beijing, China
R. Ranjan, Australian National University, Canberra, Australia

© Springer Science+Business Media New York 2015. S. U. Khan, A. Y. Zomaya (eds.), Handbook on Data Centers, DOI 10.1007/978-1-4939-2092-1_31

1 Data Centers and Cloud Computing

Cloud computing is projected as the major paradigm shift in the Information and Communication Technology (ICT) sector [1]. In recent years, the cloud market has experienced enormous growth and adoption, and cloud adoption is expected to increase further in the coming years. As reported by Gartner [2], the Software as a Service (SaaS) market is expected to rise to $32.3 billion in 2016 (from $13.4 billion in 2011). Similarly, Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) are projected to rise from $7.6 billion in 2011 to $35.5 billion in 2016. Cloud computing has been adopted in almost all of the major sectors, such as business, research, health, agriculture, e-commerce, and social life.

A data center is a repository that holds computation and storage resources interconnected using a network and communication infrastructure [3]. Data centers constitute the foundations and building blocks of cloud computing. The continuous evolution of cloud services, as well as their increased demand, mandates growth in data center resources to deliver the expected services and the required Quality of Service (QoS). Various cloud service providers already host hundreds of thousands of servers in their respective data centers. Google is estimated to host around 0.9 million servers in its data centers. Similarly, Amazon is reported to operate around 0.45 million servers to support Amazon Web Services (AWS). The number of servers in Microsoft data centers doubles every 14 months [4]. The projected number of resources required to accommodate future service demands within data centers is mounting. The "scale-out" and "scale-up" approaches alone cannot deliver a viable solution for the escalating resource demands. Scale-out is the common approach adopted by data center designers, adding inexpensive commodity hardware to the data center resource pool. The scale-up approach focuses on adding more power and complexity to enterprise-level equipment, which is expensive and power-hungry [6]. Increasing the computational and storage resources is currently not a major challenge in data center scalability; the major challenge is how to interconnect these commodity resources so as to deliver the required QoS. Besides scaling out the data centers, the energy consumption and the resulting Operational Expenses (OpEx) of data centers also pose serious challenges. Environmental aspects, the enormous amount of Green House Gas (GHG) emissions by data centers, and increasing energy costs worsen the problem. These problems mandate revisions in the design and operation of data centers.

Data Center Networks (DCNs) play a pivotal role in determining the performance bounds and the Capital Expenditure (CapEx) of a data center. The legacy ThreeTier DCN architecture is unable to accommodate the growing demands and scalability requirements within data centers [3]. Various novel DCN architectures have been proposed in the recent past to handle the growth trend and scalability demands within data centers [4]. Besides electrical network technology, optical, wireless, and hybrid DCN architectures have also been proposed [16, 17]. Moreover, intra-network traffic within the data center is growing: it has been estimated that around 70 % of the network traffic will flow within the data centers [7]. Various cloud and data center applications follow several communication patterns, such as one-to-one, one-to-many, many-to-many, and all-to-all traffic flows [8]. The traffic patterns within data centers are fairly different from those observed in other types of telecommunication networks. Therefore, the traffic optimization techniques proposed for such networks are inapplicable within data centers. Finally, it has been observed that the main DCN architectures have a low capacity to maintain an acceptable level of connectivity under different types of failures [5].

All of the aforementioned challenges require detailed analysis and quantification of various issues within a data center. Simulation is an appropriate approach for such detailed analysis and quantification, since experiments comprising realistic DCN scenarios are economically unviable. Simulation can help to quantify and compare the behavior of a network under a given workload and traffic pattern. Unfortunately, network models and simulators that can quantify data center networks and their varying traffic patterns at a detailed level are currently scarce. Moreover, current network simulators, such as ns-2, ns-3, or OMNeT++, lack data center architectural models and simulation capabilities. Therefore, we implemented the state-of-the-art DCN architectures in the ns-3 simulator to carry out DCN simulations and a comparative analysis of major DCNs [3].


We implemented the three major DCN architectures, namely: (a) the legacy ThreeTier [10], (b) DCell [8], and (c) FatTree [19]. We implemented six traffic patterns to observe the behavior of the three DCN architectures under a specified workload, and we carried out extensive simulations to perform a comparative analysis of the three considered DCN architectures.

2 DCN Architectures

Based on the packet routing model, the DCN architectures can be classified into two major categories, namely: (a) switch-centric and (b) server-centric networks. The switch-centric networks rely on network switches and routers to perform packet forwarding and routing. The ThreeTier, FatTree, VL2, and JellyFish DCN architectures are examples of switch-centric networks [4]. The server-centric networks utilize computational servers to relay and route the network packets. A server-centric network may be purely server-based or hybrid (using an amalgam of network switches and computational servers for traffic routing). CamCube is a pure server-based DCN architecture that relies solely on computational servers for packet forwarding [9]. DCell, BCube, and FiConn are examples of hybrid server-centric DCN architectures [8].

The legacy ThreeTier architecture is currently the most commonly deployed network topology within data centers. The ThreeTier architecture is comprised of a single layer of computational servers and a three-layered hierarchy of network switches and routers (see Fig. 1) [10]. The computational servers are grouped in racks. Typically, around 40 servers within a rack are connected to a Top of Rack (ToR) switch [11]. The ToRs connecting the servers within the individual racks make up the first layer of the network hierarchy, called the access layer. Multiple access layer switches are connected to the aggregate layer switches, which make up the second layer of switches within the ThreeTier network hierarchy. A single access layer switch is connected to multiple aggregate layer switches. The high-end enterprise-level core switches make up the topmost layer of the ThreeTier network hierarchy, called the core layer. A single core layer switch is connected to all of the aggregate layer switches within the data center. The intra-rack traffic flow is handled by the access layer switches. The traffic among racks whose ToRs are connected to the same aggregate layer switch passes through the aggregate layer switches. The inter-rack traffic where the ToRs of the source and destination racks are connected to different aggregate layer switches passes through the core layer switches. Higher layers of the ThreeTier network architecture experience higher oversubscription ratios. The oversubscription ratio is the ratio of the worst-case available bandwidth among the end hosts to the total bisection bandwidth of the network topology [19].

Fig. 1 ThreeTier DCN architecture (core, aggregation, and access network layers)

The FatTree DCN architecture is a Clos-based arrangement of commodity network switches that delivers a 1:1 oversubscription ratio [19]. The computational servers and commodity network switches are arranged in a hierarchical manner similar to the ThreeTier architecture. However, the number of network devices and the interconnection topology differ from the ThreeTier architecture (see Fig. 2). The number of pods (or modules), represented by 'k', determines the number of devices in each layer of the topology. There are (k/2)^2 switches in the core layer of the FatTree architecture. The aggregate and access layers each contain k/2 switches in each pod. Each access switch connects k/2 computational servers. Each pod in the FatTree therefore contains k switches (arranged in two layers) and (k/2)^2 computational servers. The FatTree DCN architecture exhibits better scalability, throughput, and energy efficiency compared to the ThreeTier DCN. The FatTree architecture uses a custom addressing and routing scheme [19].

Fig. 2 FatTree DCN architecture (core, aggregation, and edge layers; pods 0–3)

The DCell is a hybrid server-centric DCN architecture [8]. DCell follows a recursively built topology where a server is directly connected to multiple servers in units called dcells (see Fig. 3). The dcell_0 constitutes the building block of the DCell architecture, in which n servers are interconnected using a commodity network switch (n is a small number, usually less than eight). n + 1 dcell_0 cells build a level-1 cell called a dcell_1. A dcell_0 is connected to all other dcell_0 cells within a dcell_1. Similarly, multiple lower-layer dcell_(L−1) cells constitute a higher-level dcell_L cell. DCell is an extremely scalable DCN architecture that may scale to millions of servers with a level-3 DCell having only six servers in each dcell_0.

Fig. 3 DCell DCN architecture (a dcell_2 composed of dcell_1 and dcell_0 cells)

3 DCN Graph Modeling

DCN architectures can be represented as multilayered hierarchical graphs [13]. The computational servers, storage devices, and network devices represent the vertices of the graph. The network links connecting the devices represent the edges of the graph. Table 1 presents the variables used in DCN models.

Table 1 Variables used in the DCN modeling

Variable   Represents
ν          Set of vertices (servers, switches, and routers) in the graph
ε          Network links connecting the various devices
P_i        A pod/module in the topology, representing a set of servers and middle-layer switches
C          Core layer switches
δ          Servers
α          Access layer switch
γ          Aggregate layer switch
k          Total number of pods/modules in the topology
n          Total number of servers connected to a single access layer switch
s          Total number of servers connected to the switch in a dcell_0
m          Total number of access layer switches in each pod
q          Total number of aggregate layer switches in each pod
r          Total number of core layer switches in the topology

3.1 ThreeTier DCN Model

The ThreeTier DCN can be represented as

DCN_TT = (ν, ε),    (1)

where ν represents the nodes in the ThreeTier graph (computational servers, network switches, and routers), and ε represents the network links interconnecting the devices. The servers, access, and aggregate layer switches are arranged in k modules/pods (P_i^k) together with a single layer of r core switches (C_i^r) (see Fig. 1 with four modules and a core layer):

ν = P_i^k ∪ C_i^r.    (2)

Each module or pod P_i is organized in three distinct layers of nodes, namely: (a) the aggregate layer (l^g), (b) the access layer (l^c), and (c) the server layer (l^s). The nodes in each layer within a pod can be represented as

P_i = {l^s_{mα×nδ} ∪ l^c_{mα} ∪ l^g_{qγ}},    (3)

where δ represents the servers, α represents the access layer switches, and γ represents the aggregate layer switches. |P_i| denotes the total number of nodes within a pod:

|P_i| = \left( \sum_{1}^{m} n \right) + m + q.    (4)

The total number of nodes in a ThreeTier architecture having k pods can be calculated as

|ν| = \left( \sum_{i=1}^{k} |P_i| \right) + |C|.    (5)

There are three layers of edges (network links) interconnecting the four layers of ThreeTier architecture nodes:

ε = {§, ά, C|},    (6)

where § represents the edges that connect the servers to the access switches, ά the edges that connect the access and aggregate layer switches, and C| the edges that connect the aggregate and core switches. Aggregate switches are also connected to each other within a pod; these edges are represented by γ̄. The set of edges within the ThreeTier architecture can thus be represented by

ε = {§_(∀δ,α), ά_(∀α,∀γ), γ̄_(∀γ,∀γ), C|_(∀γ,∀C)}.    (7)

The total number of edges within a ThreeTier DCN can be calculated as

|ε| = \sum_{1}^{k} \left( \sum_{1}^{m} n + \sum_{1}^{m} q + \frac{q(q-1)}{2} + \sum_{1}^{q} r \right).    (8)
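As a quick sanity check of the ThreeTier model, the counts of Eqs. (4), (5), and (8) can be evaluated directly from the topology parameters. The following is a minimal, self-contained C++ sketch; the function and variable names are ours and are not part of the model or of our ns-3 code.

```cpp
#include <cstdint>
#include <iostream>

// Node and edge counts of the ThreeTier model, following Eqs. (4), (5), and (8).
// k: pods, m: access switches per pod, n: servers per access switch,
// q: aggregate switches per pod, r: core switches.
struct ThreeTierCounts {
  uint64_t nodes;
  uint64_t edges;
};

ThreeTierCounts ThreeTierModel (uint64_t k, uint64_t m, uint64_t n,
                                uint64_t q, uint64_t r)
{
  uint64_t podNodes = m * n + m + q;          // |P_i|, Eq. (4)
  uint64_t nodes = k * podNodes + r;          // |ν|,  Eq. (5)
  uint64_t edgesPerPod = m * n                // server-access links
                       + m * q                // access-aggregate links
                       + q * (q - 1) / 2      // aggregate-aggregate links
                       + q * r;               // aggregate-core links
  uint64_t edges = k * edgesPerPod;           // |ε|,  Eq. (8)
  return {nodes, edges};
}

int main ()
{
  // Illustrative values only: 4 pods, 2 access switches per pod, 40 servers
  // per rack, 2 aggregate switches per pod, 2 core switches.
  ThreeTierCounts c = ThreeTierModel (4, 2, 40, 2, 2);
  std::cout << "nodes: " << c.nodes << ", edges: " << c.edges << std::endl;
  // Prints "nodes: 338, edges: 356".
  return 0;
}
```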

3.2 FatTree DCN Model

As discussed in Sect. 2, the FatTree is also a multi-layered DCN architecture similar to the ThreeTier architecture. However, the number of devices and the interconnection pattern among the devices in the various layers differ considerably between the two architectures. The FatTree architecture follows a Clos topology for the network interconnection. The number of nodes in each layer of the FatTree topology is fixed and is determined by the number of pods 'k':

n = m = q = k/2,    (9)

r = (k/2)^2.    (10)

Similar to the ThreeTier architecture, the FatTree DCN can be modeled as

DCN_FT = (ν, ε),    (11)

where ν, P_i, |P_i|, and |ν| can be modeled using Eqs. (2)–(5), respectively. However, the aggregate layer switches within a FatTree are not connected to each other. Moreover, contrary to the ThreeTier architecture, each core layer switch is connected to a single aggregate layer switch in each pod:

ε = §_(∀δ,α) ∪ ά_(∀α,∀γ) ∪ C|_(∀C,γ_i),    (12)

and the total number of edges can be calculated as

|ε| = \sum_{1}^{k} \left( \sum_{1}^{m} n + \sum_{1}^{m} q \right) + \sum_{1}^{r} k.    (13)
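Because every layer size in the FatTree follows from k alone (Eqs. (9) and (10)), the totals collapse to closed forms in k. The short C++ sketch below (illustrative only; the names are ours) prints the server, switch, and edge counts implied by Eqs. (9)–(13).

```cpp
#include <cstdint>
#include <iostream>

// FatTree counts derived from the number of pods k (k even, k >= 4),
// using n = m = q = k/2 and r = (k/2)^2, cf. Eqs. (9)-(13).
void FatTreeModel (uint64_t k)
{
  uint64_t half = k / 2;
  uint64_t servers  = k * half * half;        // k pods x (k/2 access) x (k/2 servers)
  uint64_t switches = k * (half + half)       // access + aggregate switches
                    + half * half;            // core switches
  uint64_t edges = k * (half * half           // server-access links
                      + half * half)          // access-aggregate links
                 + (half * half) * k;         // core-aggregate links, Eq. (13)
  std::cout << "k=" << k << " servers=" << servers
            << " switches=" << switches << " edges=" << edges << std::endl;
}

int main ()
{
  FatTreeModel (4);    // 16 servers, 20 switches, 48 edges
  FatTreeModel (48);   // 27648 servers
  return 0;
}
```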

3.3 DCell DCN Model

DCell uses a recursively built topology, where a single server in a dcell is directly connected to servers in other dcells for server-based routing (see Fig. 3). The graph model of the DCell DCN architecture can be represented as:

DCN_DC = (ν, ε),    (14)

ν = {∂_i, ∂_{i+1}, . . . , ∂_L},    (15)

where 0 ≤ i ≤ L, ∂_0 represents a dcell_0, and L denotes the highest level.

∂_0 = {δ ∪ α},    (16)

where δ represents the set of 's' servers within a dcell_0 and α represents the single switch connecting the servers.

∂_l = {x_l · ∂_{l−1}},    (17)

where 1 ≤ l ≤ L, and x_l is the total number of ∂_{l−1} cells in a ∂_l. A dcell_1 can be represented by

∂_1 = {x_1 · ∂_0},    (18)

x_1 = s + 1,    (19)

and similarly, for l ≥ 2:

x_l = \left( \prod_{i=1}^{l-1} x_i \right) \times s + 1.    (20)

A 3-level DCell can accommodate around 3.6 million servers with s = 6. The total number of nodes in a 3-level DCell can be calculated as:

|ν^3| = \sum_{1}^{x_3} \sum_{1}^{x_2} \sum_{1}^{x_1} (s + 1),    (21)

and the total number of edges in a 3-level DCell can be calculated as:

|ε^3| = \sum_{1}^{x_3} \sum_{1}^{x_2} \sum_{1}^{x_1} s + \sum_{1}^{x_3} \sum_{1}^{x_2} \frac{x_1(x_1 - 1)}{2} + \sum_{1}^{x_3} \frac{x_2(x_2 - 1)}{2} + \frac{x_3(x_3 - 1)}{2}.    (22)

The total number of vertices in an l-level DCell is:

|ν| = \left( \prod_{i=1}^{l} x_i \right)(s + 1),    (23)

and the total number of edges can be calculated as:

|ε| = s \prod_{i=1}^{l} x_i + \frac{1}{2} \sum_{j=1}^{l} \left[ \left( \prod_{y=j}^{l} x_y \right)(x_j - 1) \right].    (24)
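To make the recursion concrete, the C++ sketch below evaluates Eqs. (19), (20), (23), and (24) for an l-level DCell (the helper names are ours, not part of our ns-3 implementation). With s = 6 and three levels it yields roughly 3.26 million servers and 3.8 million nodes, consistent with the figures quoted later in Sect. 4.3.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Sizes of an l-level DCell with s servers per dcell_0, following Eqs. (19)-(24).
void DCellModel (uint64_t s, unsigned levels)
{
  std::vector<uint64_t> x (levels + 1, 0);   // x[l]: number of dcell_{l-1} in a dcell_l
  uint64_t prod = 1;                         // running product x_1 * ... * x_l
  for (unsigned l = 1; l <= levels; ++l)
    {
      x[l] = prod * s + 1;                   // Eq. (19) for l = 1, Eq. (20) for l >= 2
      prod *= x[l];
    }
  uint64_t cells0  = prod;                   // number of dcell_0 cells
  uint64_t servers = cells0 * s;
  uint64_t nodes   = cells0 * (s + 1);       // Eq. (23): servers plus one switch per dcell_0
  uint64_t edges   = servers;                // server-switch links inside each dcell_0
  uint64_t above   = cells0;
  for (unsigned l = 1; l <= levels; ++l)
    {
      above /= x[l];                         // number of dcell_l cells in the topology
      edges += above * x[l] * (x[l] - 1) / 2;   // level-l links, cf. Eq. (24)
    }
  std::cout << "s=" << s << " servers=" << servers
            << " nodes=" << nodes << " edges=" << edges << std::endl;
}

int main ()
{
  DCellModel (6, 3);   // about 3.26 million servers and 3.8 million nodes
  return 0;
}
```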

4 DCNs Implementation in ns-3

We implemented three major DCN architectures, namely: (a) ThreeTier, (b) FatTree, and (c) DCell (see Sect. 2 for details). We used the ns-3 discrete-event simulator to implement the three DCN architectures. For each of the three considered DCN architectures we implemented (a) the interconnection topology, (b) the customized addressing scheme, and (c) the customized routing logic. Moreover, we implemented six traffic patterns to observe the behavior of the considered DCNs under various network conditions and traffic loads.

Around 2003–2004, ns-2 was the most widely used network simulator for network research [14]. However, to address the outdated code design and the scalability limitations of ns-2, a new simulator called ns-3 was introduced. ns-3 is not an evolution of ns-2; it was written from scratch. Some of the salient features of the ns-3 simulator are as follows [3, 14, 18]. The ns-3 simulator offers the modeling of realistic network scenarios and uses an implementation of the real Internet Protocol (IP). Moreover, ns-3 offers an implementation of the Berkeley Software Distribution (BSD) sockets interface and the installation of multiple network interfaces on a single node. Simulated packets in ns-3 contain real network bytes. Furthermore, ns-3 can capture network traces, which can be analyzed using various network tools, such as Wireshark. We implemented our DCN models using the ns-3.13 release; at the time of writing, ns-3.18 is the latest stable release of the ns-3 simulator [18].

One of the major drawbacks of the ns-3 simulator is that it does not provide an Ethernet switch implementation. The BridgeNetDevice is the most closely related bridge implementation that can be used to simulate an Ethernet switch. However, the BridgeNetDevice is designed for CSMA devices and does not work with point-to-point devices. Therefore, we implemented a point-to-point based switch for our simulations in ns-3 [18].
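As a flavor of the ns-3 building blocks involved, the sketch below wires a few nodes to a single hub node over point-to-point links, installs the Internet stack, and lets global routing populate the forwarding tables. It is only a minimal illustration of the helpers we rely on (PointToPointHelper, InternetStackHelper, Ipv4AddressHelper, Ipv4GlobalRoutingHelper); it is not our point-to-point switch implementation, and the rates, delays, and addresses are arbitrary examples.

```cpp
#include <sstream>
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  // Four "server" nodes and one hub node connected by point-to-point links.
  NodeContainer servers, hub;
  servers.Create (4);
  hub.Create (1);

  PointToPointHelper p2p;
  p2p.SetDeviceAttribute ("DataRate", StringValue ("1Gbps"));
  p2p.SetChannelAttribute ("Delay", StringValue ("10us"));

  InternetStackHelper stack;
  stack.Install (servers);
  stack.Install (hub);

  // One /24 subnet per point-to-point link (example addressing only).
  Ipv4AddressHelper address;
  for (uint32_t i = 0; i < servers.GetN (); ++i)
    {
      NetDeviceContainer link = p2p.Install (servers.Get (i), hub.Get (0));
      std::ostringstream subnet;
      subnet << "10.0." << i << ".0";
      address.SetBase (subnet.str ().c_str (), "255.255.255.0");
      address.Assign (link);
    }

  Ipv4GlobalRoutingHelper::PopulateRoutingTables ();
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}
```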

4.1 ThreeTier DCN Implementation Details

We offer a customizable implementation of the ThreeTier architecture. The simulation parameters can be configured to simulate the ThreeTier architecture with devices arranged in four layers and with different oversubscription ratios. Users can define the number of pods/modules in the topology. Each pod contains (a) servers arranged in racks, (b) a number of access layer switches, and (c) a number of aggregate layer switches. Users can specify the required number of servers in each rack. All of the servers within a rack are connected to an access layer (ToR) switch. The bandwidth of the network links interconnecting the servers with the ToR switch can be configured; the default is 1 Gbps. All of the ToRs within a single pod/module are connected to all of the aggregate layer switches. The default bandwidth of the network links interconnecting the ToRs with the aggregate layer switches is 10 Gbps, which is also configurable. The number of devices and the bandwidth of the links in each of the layers (server, access, and aggregate) of the ThreeTier architecture remain the same across all pods. The core layer of the ThreeTier architecture is the topmost layer, used to connect the various pods to each other. Users can specify the number of core switches in the topology. Each core layer switch is connected to all of the aggregate layer switches. The network switches used in the aggregate and core layers of the ThreeTier architecture are typically high-end enterprise-level switches. One of the major features of such high-end switches is support for Equal Cost Multi Path (ECMP) routing [15]. We added ECMP support for the aggregate and core layer switches to obtain realistic results, using the ns-3 Ipv4GlobalRoutingHelper class for routing with ECMP support in the ThreeTier architecture. It is worth mentioning that the performance and results of the ThreeTier architecture depend heavily on the oversubscription ratio at each layer and on the use of ECMP: the throughput and network delay fluctuate substantially when the oversubscription ratio is varied or ECMP routing is toggled. We configured each device with a real IP address for the simulation. The IP addressing scheme is also customizable, and users can assign network addresses of their choice to the devices. Figure 4a depicts the topology setup for the ThreeTier architecture with k = 4.
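For reference, the sketch below shows how ECMP-style random selection among equal-cost routes can be enabled in ns-3's global routing. The attribute name is the one exposed by ns3::Ipv4GlobalRouting in the releases we used; it should be verified against the ns-3 version at hand, and the default must be set before the Internet stacks are installed.

```cpp
#include "ns3/core-module.h"
#include "ns3/internet-module.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  // Turn on random ECMP among equal-cost routes. Must run before any
  // InternetStackHelper::Install () call, so that the default takes effect
  // on the Ipv4GlobalRouting objects created for each node.
  Config::SetDefault ("ns3::Ipv4GlobalRouting::RandomEcmpRouting",
                      BooleanValue (true));

  // ... build nodes, install Internet stacks, create links, assign addresses ...

  Ipv4GlobalRoutingHelper::PopulateRoutingTables ();
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}
```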

4.2 FatTree DCN Implementation Details

The FatTree DCN is based on the Clos interconnection topology. The number of devices in each layer of the FatTree architecture is fixed, based on the number of pods 'k'. Contrary to the ThreeTier architecture, the user only needs to configure the total number of pods for the FatTree simulation. The implementation of the FatTree architecture creates k pods and the required devices and interconnections within each pod. The value of k must be (a) greater than or equal to four and (b) an even number. The network bandwidth of the interconnecting links is configurable, with a default value of 1 Gbps. All of the network links use the same bandwidth value, contrary to the ThreeTier architecture, where the bandwidth of the links connecting the servers to the ToRs usually differs from that of the links connecting the ToRs to the aggregate layer switches.


Fig. 4 DCN topologies in ns-3: (a) ThreeTier topology, (b) DCell topology

Fig. 5 Simulation of a k = 4 FatTree in ns-3

The FatTree architecture uses a custom network addressing scheme. The network address of a server or a node depends on the location of the node: the address of a server is based on the number of the pod that contains the server and the number of the access switch that connects it. We implemented the custom network addressing scheme of the FatTree architecture, and each of the nodes within the FatTree is assigned an address as specified. Figures 5 and 6 depict the assignment of the network addresses within each pod of a FatTree with k = 4.

Fig. 6 Assignment of IP addresses in a k = 4 FatTree

The FatTree architecture uses a two-level routing scheme for packet forwarding. The packet forwarding decision is based on a two-level prefix lookup. The FatTree uses a primary prefix routing table and a secondary suffix table. First, the longest prefix match is checked in the primary prefix table. If a match is found for the destination address of the packet, then the packet is forwarded to the port specified in the routing table. If no prefix matches, then the longest matching suffix is looked up in the secondary table, and the packet is forwarded to the port specified in the secondary routing table. The switches in the access and aggregate layers use the same algorithm for routing table generation (please see Algorithm 1 in [12]). The core layer switches use a different algorithm (please see Algorithm 2 in [12]) for core switch routing table generation. We implemented both of these algorithms for our FatTree DCN implementation.
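The two-level lookup itself can be expressed compactly. The stand-alone C++ sketch below illustrates the idea with simple string matching on dotted addresses; the table layout and helper names are ours, and the sketch deliberately simplifies the prefix/suffix masks used by the actual scheme in [12].

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Illustrative two-level lookup: longest-prefix match in the primary table,
// falling back to longest-suffix match in the secondary table (cf. [12]).
struct TwoLevelTable {
  std::vector<std::pair<std::string, int> > prefixes;  // e.g. {"10.2.0.", port}
  std::vector<std::pair<std::string, int> > suffixes;  // e.g. {".2", port}

  int Lookup (const std::string &dst) const
  {
    int port = -1;
    std::size_t best = 0;
    for (const auto &p : prefixes)            // longest matching prefix wins
      if (dst.compare (0, p.first.size (), p.first) == 0 && p.first.size () > best)
        { best = p.first.size (); port = p.second; }
    if (port != -1)
      return port;
    best = 0;
    for (const auto &s : suffixes)            // otherwise, longest matching suffix
      if (dst.size () >= s.first.size ()
          && dst.compare (dst.size () - s.first.size (),
                          s.first.size (), s.first) == 0
          && s.first.size () > best)
        { best = s.first.size (); port = s.second; }
    return port;                              // -1 if nothing matches
  }
};
```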

4.3 DCell DCN Implementation Details

The DCell is a recursively built, highly scalable, hybrid server-centric DCN architecture; its building blocks are described in Sects. 2 and 3. We offer a customizable implementation of the DCell architecture. We have implemented a 3-level DCell topology that can accommodate millions of servers with fewer than eight nodes in each dcell_0. The user can configure the number of servers in a dcell_0. The number of servers in the DCell increases exponentially with the number of servers in the dcell_0. Table 2 presents the total number of servers in the DCell topology; as can be observed, having only eight servers per dcell_0 leads to a DCell comprising more than 27 million servers. Because of the exponential increase in the number of servers and upper-level dcells, we also enabled configuration of the number of dcells at each level. The user can configure the number of servers per dcell_0, the number of dcell_0 cells in each dcell_1, the number of dcell_1 cells in each dcell_2, and the number of dcell_2 cells in the dcell_3 to control the number of servers in the resulting DCell. Each dcell_0 contains an Ethernet switch that is used for packet forwarding among the servers within the dcell_0.

Table 2 Number of servers in the DCell topology

Servers per dcell_0   Total servers in a 3-level DCell   Total nodes (including switches) in a 3-level DCell
2                     1,806                              2,709
3                     24,492                             32,656
4                     176,820                            221,025
5                     865,830                            1,038,996
6                     3,263,442                          3,807,349
7                     10,192,056                         11,648,064
8                     27,630,792                         31,084,641

The traffic forwarding among servers in different dcells is performed by the servers themselves, which are directly interconnected. Each server node is equipped with multiple network interfaces to connect directly to the switch and to other servers. The default bandwidth of the network links is 1 Gbps, which is also configurable. We used a realistic IP address assignment for each server in the DCell architecture implementation. The DCell does not specify any custom addressing scheme. However, the DCell routing scheme takes into account the placement of the server within the dcells. For instance, the NodeId (3, 1, 2) specifies server number 2 in dcell_0 number 1 of dcell_1 number 3. We implemented programming routines that can find the IP address of a specified server number and vice versa. The DCell uses a custom routing protocol, called the DCell Fault-tolerant Routing (DFR) protocol, for packet forwarding. The DFR is a recursive, source-based routing protocol. When a node wants to initiate communication with some other node, the DFR is invoked to calculate the end-to-end path for the flow. The DFR first calculates the link connecting the dcells of the source and the destination node. It then calculates the path from the source to that link and from the link to the destination; the combination of these paths is the end-to-end path. We implemented the DFR protocol. The path output by the DFR consists of NodeIds instead of IP addresses, so we use a custom routine to convert the NodeId-based path into an IP-based path. Because the DFR is a source-based routing protocol, we place the complete source-to-destination path in an extra header in each packet; each intermediate node parses the header and decides the forwarding port/link for the next hop. Unfortunately, the algorithm listing for the DCell in the original paper is incomplete and erroneous (please see Fig. 3 in [8]). In Sect. 4.1.1 of the original paper [8], it is stated that if (S_{k−m} < d_{k−m}), then the link interconnecting the sub-dcells can be calculated as ([S_{k−m}, d_{k−m} − 1], [d_{k−m}, S_{k−m}]). The else clause for this if statement is not given in the original paper, so a direct implementation of the DFR would be erroneous and incomplete. We derived the else clause for this scenario and implemented the complete DFR algorithm for traffic routing. Moreover, the example of a path calculated using the DFR in the original paper contains a typographical mistake.
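To illustrate the recursive structure described above (without reproducing the corrected DFR itself), the skeleton below assembles a path from the link that joins the two sub-dcells and the two recursively computed sub-paths. NodeIds are written as index vectors, and GetLink is deliberately left as a declaration only: it stands for the link-selection rule of [8] discussed above, whose completed form is part of our implementation.

```cpp
#include <utility>
#include <vector>

// A NodeId is the vector of dcell indices, e.g. {3, 1, 2} = server 2 in
// dcell_0 number 1 of dcell_1 number 3 (most significant level first).
using NodeId = std::vector<int>;
using Path = std::vector<NodeId>;

// Placeholder: returns the two servers whose direct link joins the sub-dcells
// containing src and dst at index 'level' (the rule of Sect. 4.1.1 in [8]).
std::pair<NodeId, NodeId> GetLink (const NodeId &src, const NodeId &dst, int level);

// Recursive path construction in the spirit of DCellRouting/DFR [8]: find the
// index at which src and dst diverge, take the link joining the two sub-dcells,
// and recurse on both sides of that link.
Path DCellRoute (const NodeId &src, const NodeId &dst)
{
  int level = 0;
  while (level < (int) src.size () && src[level] == dst[level])
    ++level;                                  // first index where the ids differ
  if (level >= (int) src.size () - 1)
    return {src, dst};                        // same dcell_0: the switch forwards locally
  std::pair<NodeId, NodeId> link = GetLink (src, dst, level);
  Path left = DCellRoute (src, link.first);   // path to the near end of the link
  Path right = DCellRoute (link.second, dst); // path from the far end of the link
  left.insert (left.end (), right.begin (), right.end ());
  return left;
}
```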


References

1. IBM, IBM Data Center Networking: Planning for Virtualization and Cloud Computing, 2011. Online: http://www.redbooks.ibm.com/redbooks/pdfs/sg247928.pdf
2. Gartner, Market Trends: Platform as a Service, Worldwide, 2012–2016, 2H12 Update, 2012.
3. K. Bilal, S. U. Khan, L. Zhang, H. Li, K. Hayat, S. A. Madani, N. Min-Allah, L. Wang, and D. Chen, "Quantitative Comparisons of the State of the Art Data Center Architectures," Concurrency and Computation: Practice and Experience (DOI: 10.1002/cpe.2963).
4. K. Bilal, S. U. R. Malik, O. Khalid, A. Hameed, E. Alvarez, V. Wijaysekara, R. Irfan, S. Shrestha, D. Dwivedy, M. Ali, U. S. Khan, A. Abbas, N. Jalil, and S. U. Khan, "A Taxonomy and Survey on Green Data Center Networks," Future Generation Computer Systems. (Forthcoming.)
5. M. Manzano, K. Bilal, E. Calle, and S. U. Khan, "On the Connectivity of Data Center Networks," IEEE Communications Letters. (Forthcoming.)
6. K. Yoshiaki and M. Nishihara, "Survey on Data Center Networking Technologies," IEICE Transactions on Communications, Vol. 96, No. 3, 2013.
7. P. Mahadevan, P. Sharma, S. Banerjee, and P. Ranganathan, "Energy Aware Network Operations," IEEE INFOCOM Workshops 2009, pp. 1–6.
8. C. Guo, H. Wu, K. Tan, L. Shi, Y. Zhang, and S. Lu, "DCell: A Scalable and Fault-tolerant Network Structure for Data Centers," ACM SIGCOMM Computer Communication Review, Vol. 38, No. 4, 2008, pp. 75–86.
9. H. Abu-Libdeh, P. Costa, A. Rowstron, G. O'Shea, and A. Donnelly, "Symbiotic Routing in Future Data Centers," ACM SIGCOMM 2010 Conference, New Delhi, India, 2010, pp. 51–62.
10. Cisco, Cisco Data Center Infrastructure 2.5 Design Guide, Cisco Press, 2010.
11. A. Greenberg, J. Hamilton, N. Kandula, C. Kim, and S. Sengupta, "VL2: A Scalable and Flexible Data Center Network," ACM SIGCOMM Computer Communication Review, Vol. 39, No. 4, 2009, pp. 51–62.
12. M. Al-Fares, A. Loukissas, and A. Vahdat, "A Scalable, Commodity Data Center Network Architecture," ACM SIGCOMM 2008 Conference on Data Communication, Seattle, WA, 2008, pp. 63–74.
13. K. Bilal, M. Manzano, S. U. Khan, E. Calle, K. Li, and A. Y. Zomaya, "On the Characterization of the Structural Robustness of Data Center Networks," IEEE Transactions on Cloud Computing. (Forthcoming.)
14. G. Carneiro, ns-3, Network Simulator 3, 2010. Online: http://www.nsnam.org/tutorials/NS-3LABMEETING-1.pdf
15. C. Hopps, Analysis of an Equal-Cost Multi-Path Algorithm, RFC 2992, Internet Engineering Task Force, 2000.
16. C. Kachris and L. Tomkos, "A Survey on Optical Interconnects for Data Centers," IEEE Communications Surveys & Tutorials, Vol. 14, No. 4, 2012, pp. 1021–1036.
17. X. Zhou, Z. Zhang, Y. Zhu, Y. Li, S. Kumar, and A. Vahdat, "Mirror Mirror On The Ceiling: Flexible Wireless Links For Data Centers," ACM SIGCOMM Computer Communication Review, Vol. 42, No. 4, 2012, pp. 443–454.
18. ns-3 Simulator. Online: http://www.nsnam.org/
19. M. Al-Fares, A. Loukissas, and A. Vahdat, "A Scalable, Commodity Data Center Network Architecture," ACM SIGCOMM, Seattle, Washington, USA, August 17–22, 2008, pp. 63–74.
