Strict-Sense Nonblocking Networks with Three Multiplexing and Switching Levels

Wojciech Kabaciński, Senior Member, IEEE, Janusz Kleban, Member, IEEE, Marek Michalski, Member, IEEE, Mariusz Żal, Member, IEEE
Poznan University of Technology, Chair of Communication and Computer Networks, ul. Polanka 3, 60-965 Poznań, Poland
e-mail: {wojciech.kabacinski; marek.michalski; janusz.kleban; mariusz.zal}@put.poznan.pl, http://nss.et.put.poznan.pl

Abstract—Changes in traffic characteristics impose new requirements on the interconnection networks of large data centers. These requirements force changes in the network architecture and in the technology used. The traditional tree topology is being replaced by the Clos network, also called the Leaf-Spine topology, and optical interconnection networks are added in parallel to conventional electronic switching networks. Circuit switching is also used in such networks instead of packet switching. In this paper we analyze a three-stage Clos network operating in circuit switching mode in which three multiplexing schemes are used simultaneously on input, output, and interstage links. Strict-sense nonblocking conditions for this network with three multiplexing levels (space, wavelength, and mode) are derived and proved. Until now, only networks with at most two multiplexing levels (space and time, or space and wavelength) have been considered.

Index Terms—Data Center Network, Clos network, nonblocking conditions

I. INTRODUCTION

In recent years we have observed a rapid data explosion demanding more large-scale data centers. It is caused by new multimedia Internet services, the Internet of Things, cloud computing, sensor networks, computer graphics, the use of computers for business purposes, etc. This trend is expected to continue for the next few years. The development of smart mobile terminals, tablets, and common access to broadband services (video on demand, IPTV, videoconferencing, cloud computing) changes how services are made available to users and how data are stored in the network. Currently, web pages, films, music, pictures, and other data are stored not on private storage devices or servers, but in big data centers. Such centers contain tens and even hundreds of thousands of servers [1]. When a data center is considered as the place where data accessible by applications and users are stored, most traffic in a data center flows from servers to users and from users to servers (so-called North-South traffic). But data centers are now turning into computers in which computing resources are fungible [2]. This new approach changes the traffic characteristics: more traffic is now located between CPUs and memories inside a data center (so-called East-West traffic) than between a data center and its users. These changes impose new requirements on the network infrastructure. According to data presented by Cisco, total IP traffic should reach 1.4 zettabytes by 2017, while traffic inside data centers has already reached a volume of 2.6 zettabytes per

year, and it is expected that it will reach 7.7 zettabytes per year in 2017 [3]. The same White Paper shows that this high volume of data center traffic is due to data exchange between servers inside the data center. It is reported that 76% of all data center traffic stays inside the data center, and it is predicted that this proportion will still hold in 2017. This internal traffic results from functions such as replication, backup, read/write procedures, and task division in parallel processing, and is caused by the functional separation of application servers, storage, and databases. The remaining 24% of traffic is divided between traffic to servers in external data centers (7%) and traffic to users (17%). The big volume of traffic served inside data centers requires a very efficient and large-scale network, not only connecting users to data centers and data centers to one another, but also serving the internal traffic. This network is called a Data Center Network (DCN). DCNs must serve this big volume of traffic with high throughput and low latency. Moreover, the latency should remain low even if the data center continues to grow to hundreds of thousands of servers. It is also expected that power consumption will be significantly reduced in future networks. Currently, data center power consumption is the main part of the overall consumption of cloud computing [4], and it was about 2.2% of global electrical power consumption in 2008 [5]. In 2011, data centers accounted for the smallest share of total ICT power consumption, about 17%; however, data center power requirements are expected to increase most rapidly and to grow to approximately 23% in 2020 [6]. Therefore, the way current DCNs are designed must change. Methods for designing such new DCNs, their control, and data routing are the subject of many research activities and projects [7]. The main principles and considerations of data center designs related to DCNs include [8]:
Scalability – allows more hardware to be added to enlarge storage and processing capacity. The scalability of the data center network is crucial for its continuous development.
Incremental scalability – makes it possible to add a small amount of new equipment with minimal impact on the system.
Bisection bandwidth – the bandwidth between two segments split from the original network using an arbitrary partition.
Aggregated throughput (system throughput) – the sum of the aggregated data rates.

Oversubscription – the ratio of the maximal aggregate bandwidth between the end hosts in the worst case to the total bisection bandwidth.
Energy consumption – one of the most challenging issues in data centers, because the total power consumption of IT hardware, as well as of the HVAC (Heating, Ventilation, and Air Conditioning) equipment, increases rapidly.
Reliability – essential in DCN design.
Latency – mainly influences the performance of the offered services.
One method to increase DCN efficiency is to use the three-stage Clos network [9], [10], and also to introduce circuit switching [11], [12]. Optical transmission is commonly used to connect switches in DCNs [13], [14]. Wavelength Division Multiplexing (WDM) is also often used to take advantage of the capacity of fibers. Recently, a new multiplexing method which uses different modes in multimode fibers was proposed [15], [16], [17], [18]. It is called Mode Division Multiplexing (MDM) and can be used separately on each wavelength of a WDM system. Two multiplexing methods which can be used in each fiber link make it possible to use three levels of multiplexing (space, wavelength, and mode). These three multiplexing levels can also be used in switching. In this paper we analyze the three-stage Clos network in which three multiplexing schemes are used on all links and which operates in circuit switching mode. Strict-sense nonblocking conditions for this network are derived and proved.
Combinatorial properties of three-stage Clos networks have been the subject of many research papers. Strict-sense nonblocking conditions for space-division switching fabrics were first given by C. Clos in 1953 [19]. After the introduction of time-division multiplexing in transmission systems, this multiplexing scheme was also used in switching. Three-stage switching networks with space-division and time-division switching were presented in [20]. Strict-sense nonblocking operation of such networks was first considered by A. Jajszczyk [21]. Another example of switching networks with two multiplexing levels is the network with space-division and wavelength-division multiplexing, used, for instance, in optical switches and in the optical switching networks of optical cross-connect systems. Their nonblocking operation was considered by Y. Yang [22], [23]. A survey of different combinatorial properties of Clos networks, not only strict-sense nonblocking but also wide-sense nonblocking and rearrangeable, can be found in [24], [25], [26]. In the case of optical networks, nonblocking conditions and control algorithms with crosstalk constraints were considered, for instance, in [27]. Until now, considerations of Clos networks have been limited to cases with at most two multiplexing levels, i.e., apart from space-division switching, switching in only one more domain was used (time or wavelength). In this paper we consider three-stage Clos networks with three multiplexing levels. To our knowledge, such networks have not been considered in the literature yet.
The paper is organized as follows. In Section II we present changes in the DCN architecture and point out where the network considered in the paper may be used. In Section

III the network architecture and its components are described. Strict-sense nonblocking conditions for the proposed network are derived and proved in the next section. The paper ends with conclusions and remarks on future works.

II. DATA CENTRE NETWORKS

The most popular design of DCNs is currently based on a layered approach. The following layers are considered in data center implementations: core, aggregation, and access. The core layer provides high-speed packet switching between aggregation modules and serves all incoming and outgoing flows. The aggregation layer transports server-to-server multi-tier traffic; it consolidates L2 traffic in a high-speed packet switching fabric and provides service module integration. The access layer enables servers to be physically attached to the network. Generally, DCNs can be classified into three classes, namely: switch-based networks, direct networks, and hybrid networks employing optical circuit switching as well as electrical packet switching to establish the required connections [8]. The solution discussed in this paper may be employed in switch-based or hybrid networks.
Current data centers mostly use switch-based networks with a multi-level tree of switches. Usually two or three levels of switches are used in the interconnection networks adopted in existing terascale data centers. In these cases the DCN is built as a fat-tree 2-Tier or 3-Tier architecture and is able to deliver communication between tens of thousands of modules (see Fig. 1). Within the fat-tree topology, "fatter" links are used from the leaves towards the root. The network is organized as follows: servers are connected through the leaf switches, also called top-of-rack (ToR) switches, using 1 Gbps Ethernet links. The ToR switches support communication within the rack. The second layer consists of aggregation switches with 10 Gbps links that interconnect the ToR switches in the tree topology. In the 3-Tier architecture these layer-2 switches are connected to the core switches using 10 Gbps or 100 Gbps links. The core switches are more powerful than the layer-2 switches, but they may become a bottleneck. To alleviate this problem, in the classical fat-tree architecture groups of servers are organized into PODs, and each core switch is connected to the other POD switches. The fat-tree topology is very easy to scale. This kind of DCN is also fault tolerant, because a ToR switch can be connected to two or more aggregation switches, which in turn are connected to at least two core switches. Capacity can be increased by adding more switches in each tier. The main problem with this architecture is the necessity to choose the "best path" from a set of alternative paths using, e.g., the SPF (Shortest Path First) mechanism. Some possible alternative paths may be blocked by the spanning-tree protocol to prevent connection loops. To avoid this problem, another interconnection pattern, proposed by C. Clos [19], may be used in the DCN. This architecture is called Leaf-Spine (see Fig. 2) [10]. In this case the DCN consists of ToR switches and core switches. The ToR switches work as leaf switches, and they are connected to the core switches representing the spine. In this topology every leaf switch is connected to each spine switch, creating a full mesh.

Fig. 1. A DCN based on the fat-tree architecture

Fig. 2. A DCN architecture based on the Clos network

Fig. 3. A DCN architecture with electronic and optical switching

Fig. 4. A switch with three multiplexing levels

No direct connections between the leaf switches, or between the spine switches, are established. A Leaf-Spine DCN can be built from identical and inexpensive devices.
The main problems of current DCN solutions are, in general, low capacity, higher delay as the network grows in capacity and requires more store-and-forward operations, and high power consumption [28]. Optical transmission is used to increase link capacity. However, electronic switching in the nodes requires electrical/optical (E/O) and optical/electrical (O/E) conversions, which are power hungry and increase overall power consumption. Electronic switching also imposes limits on link speed. A possible solution for reducing power consumption is to use optics for switching as well. However, Optical Packet Switching (OPS), although a subject of intensive research in recent years, is not mature yet and cannot be used in practical networks. The switching time of optical switching elements is also not sufficient for optical packet switching at current optical link transmission speeds. Instead of OPS, the new approach is to use optical circuit switching (OCS) in parallel with electronic packet switching (EPS) to create a hybrid DCN. Routes with high traffic should be implemented using OCS, while less occupied routes are still built using traditional EPS. Several architectures have been proposed for using OCS in DCNs, such as c-Through [29] or Helios [30]. A survey of optical switching in data centers can be found, for instance, in [11]. The idea of a DCN based on EPS and OCS is shown in Fig. 3. This problem is also the subject of several European projects [31]. The multipath interconnection architecture can be used in both domains (OCS and EPS), as presented in Fig. 3.
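To make the hybrid EPS/OCS idea concrete, the sketch below shows one simple, hypothetical placement rule: flows whose measured demand exceeds a threshold are carried over optical circuits, while the remaining flows stay on the packet-switched fabric. The threshold, the flow records, and the function name are illustrative assumptions; the cited architectures [29], [30] use their own measurement and scheduling mechanisms.

```python
# Illustrative sketch: assign heavy flows to OCS circuits, light flows to EPS.
# Threshold and data are hypothetical; real systems (e.g. c-Through, Helios)
# use their own measurement and scheduling mechanisms.

OCS_THRESHOLD_MBPS = 1000  # assumed cut-off for "high traffic" routes

def split_flows(flows):
    """Partition flows into OCS and EPS sets by average demand (Mbps)."""
    ocs, eps = [], []
    for flow in flows:
        (ocs if flow["demand_mbps"] >= OCS_THRESHOLD_MBPS else eps).append(flow)
    return ocs, eps

if __name__ == "__main__":
    sample = [
        {"src": "rack1", "dst": "rack7", "demand_mbps": 4000},  # bulk replication
        {"src": "rack2", "dst": "rack3", "demand_mbps": 15},    # short queries
    ]
    ocs_flows, eps_flows = split_flows(sample)
    print(len(ocs_flows), "flows on OCS,", len(eps_flows), "flows on EPS")
```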

Additionally, multiple multiplexing schemes may be used to increase the transmission speed of optical links. In this paper we concentrate on the OCS part of such an architecture and consider the three-stage Clos network in which switching is performed at three multiplexing and switching levels. We derive nonblocking conditions for this network, which can be used for dimensioning the network architecture in data centers.

III. SWITCHES AND A SWITCHING FABRIC

In this Section we describe the switch operation and the switching fabric architecture. As already mentioned in Section I, in current transmission systems different multiplexing technologies may be used at the same time. In this paper we assume that wavelength-division multiplexing is used in each fiber, and that on each wavelength a mode-division multiplexing scheme further enlarges the transmission capacity of the system. The general idea of the switch considered in this paper is presented in Fig. 4. It contains n input and n output fibers, numbered from 1 to n; they are also called inputs and outputs. In each fiber, signals are transmitted on l wavelengths denoted by λ1 to λl. On each wavelength, k different modes, denoted by m1 to mk, can be used for transmitting user data. In this paper, λi[mj] denotes a channel transmitted on mode mj of wavelength λi. Channels in input and output fibers are called input and output channels, respectively. In the switch of Fig. 4, any input channel can be switched to any of the output channels. We call this switch a switch with three levels of multiplexing, since we can switch signals between three multiplexing systems: space-division (from any input fiber to any output fiber), wavelength-division (from any input wavelength to any output wavelength), and mode-division (from any input mode to any output mode).
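As a quick illustration of the channel structure just described, the following sketch enumerates the n·l·k channels of such a switch and labels each one by its fiber, wavelength λi, and mode mj. The parameter values are arbitrary examples, not values used later in the paper.

```python
from itertools import product

# Illustrative parameters: n fibers, l wavelengths per fiber, k modes per wavelength.
n, l, k = 4, 2, 3  # arbitrary example values

# Every channel is identified by (fiber, wavelength index i, mode index j),
# i.e. channel lambda_i[m_j] on a given fiber.
channels = [(fiber, i, j)
            for fiber, i, j in product(range(1, n + 1),
                                       range(1, l + 1),
                                       range(1, k + 1))]

print(len(channels), "input channels in total (n * l * k =", n * l * k, ")")
print("example channel:", channels[0], "-> fiber 1, wavelength lambda_1, mode m_1")
```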

Fig. 5. Possible functional architecture of the switch shown in Fig. 4 (wavelength demultiplexers, mode demultiplexers, space switch, mode and wavelength converters, mode multiplexers, wavelength multiplexers)

Fig. 6. A three-stage switching fabric with three multiplexing levels

A possible functional architecture of such a switch is shown in Fig. 5. At the input side, signals from the input fibers are demultiplexed in both the wavelength and the mode domains. All signals are then switched by a space switch of capacity nmkλl × nmkλl. Then the signals are converted by wavelength converters and mode converters to prepare them for transmission in the output fiber. In Fig. 5 one functional block of mode and wavelength converters is shown. In a practical implementation such conversion may be done separately and, depending on technical possibilities, may also be located behind the first stage of multiplexers. For instance, at the output of the space switch we may have mode converters which prepare the signals so that each group of signals multiplexed by one multiplexer carries signals on different modes. Then the signals multiplexed in the mode domain are converted by wavelength converters so that they can be multiplexed into one output fiber. It should be noted that the wavelength converters must maintain the modes of the signals multiplexed on a wavelength during conversion. We will not consider the technical implementation of such functions here; it should, however, be noted that some works in this area have already been published [18], [32], [33].
Switches of the type presented in Fig. 4 may be used for constructing switching fabrics of greater capacity. We will discuss the three-stage Clos network, which is currently also considered for use in DCNs, as stated in Section II and shown in Fig. 2. The structure of the asymmetrical switching fabric is presented in Fig. 6. It contains r1 switches in the first stage, p switches in the second stage, and r2 switches in the third stage. Switches in the first stage, also called input switches, are numbered from 1 to r1 and will be denoted by Ii, 1 ≤ i ≤ r1. Switches in the third stage, also called output switches, are numbered from 1 to r2 and will be denoted by Oj, 1 ≤ j ≤ r2. Switches in the second stage, also called center stage switches, are numbered from 1 to p. Switches in adjacent stages are connected by interstage links. Each input switch has n1 inputs and p outputs, while each output switch has p inputs and n2 outputs. Center stage switches have a capacity of r1 × r2 links. Each input link of an input switch carries λl1 wavelengths, and each wavelength carries mk1 modes. Each output link of an output switch carries λl2 wavelengths, with mk2 modes on each wavelength. Finally, each interstage link carries signals on λl0 wavelengths, with mk0 modes on each wavelength.
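The fabric dimensions above determine the total number of channels on each side of the network: N1 = r1·n1·mk1·λl1 input channels, N2 = r2·n2·mk2·λl2 output channels, and mk0·λl0 channels per interstage link. The short sketch below simply records this bookkeeping for an illustrative parameter set; the variable names and values are assumptions made for the example, not values taken from the paper.

```python
# Illustrative bookkeeping for the three-stage fabric of Fig. 6.
# All numeric values are arbitrary examples.

r1, n1, mk1, ll1 = 4, 8, 2, 4   # input side: switches, fibers, modes, wavelengths
r2, n2, mk2, ll2 = 4, 8, 2, 4   # output side
p,  mk0, ll0     = 16, 2, 4     # center stage switches, interstage modes/wavelengths

N1 = r1 * n1 * mk1 * ll1        # total input channels
N2 = r2 * n2 * mk2 * ll2        # total output channels
interstage_channels = mk0 * ll0 # channels on each interstage link

print("N1 =", N1, "input channels, N2 =", N2, "output channels")
print("each interstage link carries", interstage_channels, "channels")
```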

IV. STRICT-SENSE NONBLOCKING CONDITIONS

In this Section we discuss the strict-sense nonblocking operation of the three-stage switching fabric with three multiplexing levels presented in the previous Section. In the switching fabric, a connection can be set up between any input channel and any output channel. However, when combinatorial properties such as nonblocking conditions are considered, it is only important from which input switch to which output switch the connection has to be set up. Therefore, we will denote a connection from any input channel λx[my] of switch Ii to any output channel λv[mw] of switch Oj by (Ii, Oj). The strict-sense nonblocking conditions are given by the following theorem.

Theorem 1. The three-stage switching fabric with three multiplexing levels presented in Fig. 6 is nonblocking in the strict sense if and only if

p \geq \min\left\{ \left\lfloor \frac{n_1 m_{k_1} \lambda_{l_1} - 1}{m_{k_0} \lambda_{l_0}} \right\rfloor + \left\lfloor \frac{n_2 m_{k_2} \lambda_{l_2} - 1}{m_{k_0} \lambda_{l_0}} \right\rfloor + 1;\; \left\lfloor \frac{r_1 n_1 m_{k_1} \lambda_{l_1} - 1}{m_{k_0} \lambda_{l_0}} \right\rfloor + 1;\; \left\lfloor \frac{r_2 n_2 m_{k_2} \lambda_{l_2} - 1}{m_{k_0} \lambda_{l_0}} \right\rfloor + 1 \right\}.   (1)
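For dimensioning purposes, condition (1) is easy to evaluate numerically. The sketch below is a direct, minimal implementation of the right-hand side of (1); the function name and the sample parameters are illustrative assumptions.

```python
from math import floor

def min_center_switches(n1, mk1, ll1, n2, mk2, ll2, r1, r2, mk0, ll0):
    """Smallest p satisfying the strict-sense nonblocking condition (1)."""
    c0 = mk0 * ll0  # channels per interstage link
    p1 = floor((n1 * mk1 * ll1 - 1) / c0) + floor((n2 * mk2 * ll2 - 1) / c0) + 1
    p2 = floor((r1 * n1 * mk1 * ll1 - 1) / c0) + 1
    p3 = floor((r2 * n2 * mk2 * ll2 - 1) / c0) + 1
    return min(p1, p2, p3)

# Example: a symmetric fabric with n = 16 fibers per switch, mk*ll = 8 channels
# per fiber and mk0*ll0 = 8 channels per interstage link (illustrative values).
print(min_center_switches(n1=16, mk1=2, ll1=4, n2=16, mk2=2, ll2=4,
                          r1=32, r2=32, mk0=2, ll0=4))  # -> 31
```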

Proof. The theorem provides sufficient and necessary conditions for the strict-sense nonblocking operation of the switching fabric. We use a similar approach to Clos [19]. Sufficiency can be proved by showing the worst state of the switching fabric. Let us assume that the new connection is (Ii, Oj). In the worst case, in the first stage switch Ii there may be at most n1 mk1 λl1 − 1 connections to outputs in the r2 − 1 switches of the third stage other than switch Oj. The number of such connections is also limited by the number of output channels in those switches, i.e., to not more than (r2 − 1) n2 mk2 λl2. Since between any input switch and any center stage switch we have mk0 λl0 channels, the number of middle stage switches in which all channels of the interstage links are occupied by these connections is equal to

a_1 = \left\lfloor \frac{\min\{n_1 m_{k_1} \lambda_{l_1} - 1;\ (r_2 - 1) n_2 m_{k_2} \lambda_{l_2}\}}{m_{k_0} \lambda_{l_0}} \right\rfloor.   (2)

Similarly, to the third stage switch Oj there may be at most n2 mk2 λl2 − 1 connections from the first stage switches other than switch Ii, but no more than (r1 − 1) n1 mk1 λl1 such connections can be set up. These connections may occupy another set of

a_2 = \left\lfloor \frac{\min\{n_2 m_{k_2} \lambda_{l_2} - 1;\ (r_1 - 1) n_1 m_{k_1} \lambda_{l_1}\}}{m_{k_0} \lambda_{l_0}} \right\rfloor   (3)

middle stage switches. When the values inside the floor brackets in (2) and (3) are integers, all channels of the corresponding interstage links are occupied and only one input channel is left free in switches Ii and Oj. Otherwise, ⌊(nx mkx λlx − 1)/(mk0 λl0)⌋ middle stage switches may be unavailable from switch Ii (x = 1) or Oj (x = 2), and the remaining connections may occupy only part of the channels to one additional center stage switch. In the worst case these sets of a1 and a2 center stage switches are disjoint, and one more switch is needed to set up the connection (Ii, Oj). We therefore have

p \geq \left\lfloor \frac{\min\{n_1 m_{k_1} \lambda_{l_1} - 1;\ (r_2 - 1) n_2 m_{k_2} \lambda_{l_2}\}}{m_{k_0} \lambda_{l_0}} \right\rfloor + \left\lfloor \frac{\min\{n_2 m_{k_2} \lambda_{l_2} - 1;\ (r_1 - 1) n_1 m_{k_1} \lambda_{l_1}\}}{m_{k_0} \lambda_{l_0}} \right\rfloor + 1.   (4)

When n1 mk1 λl1 − 1 and n2 mk2 λl2 − 1 are the minima we have

p \geq p_1 = \left\lfloor \frac{n_1 m_{k_1} \lambda_{l_1} - 1}{m_{k_0} \lambda_{l_0}} \right\rfloor + \left\lfloor \frac{n_2 m_{k_2} \lambda_{l_2} - 1}{m_{k_0} \lambda_{l_0}} \right\rfloor + 1.   (5)

When n1 mk1 λl1 − 1 and (r1 − 1) n1 mk1 λl1 are the minima we have

p \geq p_2 = \left\lfloor \frac{n_1 m_{k_1} \lambda_{l_1} - 1}{m_{k_0} \lambda_{l_0}} \right\rfloor + \left\lfloor \frac{(r_1 - 1) n_1 m_{k_1} \lambda_{l_1}}{m_{k_0} \lambda_{l_0}} \right\rfloor + 1.   (6)
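As a quick numeric check of (2), (3), and (5), consider an illustrative symmetric case (all values assumed for the example only): n1 = n2 = 16, mk1 λl1 = mk2 λl2 = 8 channels per input/output fiber, mk0 λl0 = 8 channels per interstage link, and r1 = r2 = 32, so the first terms are the minima in (2) and (3).

```python
from math import floor

# Illustrative worst-case counting for (2), (3) and (5); values are assumptions.
n_channels_in  = 16 * 8      # n1 * mk1 * ll1 = 128 input channels of switch Ii
n_channels_out = 16 * 8      # n2 * mk2 * ll2 = 128 output channels of switch Oj
c0 = 8                       # mk0 * ll0 channels per interstage link

a1 = floor((n_channels_in - 1) / c0)    # middle switches blocked from Ii: 15
a2 = floor((n_channels_out - 1) / c0)   # middle switches blocked towards Oj: 15
p1 = a1 + a2 + 1                        # one extra switch for the new connection

print(a1, a2, p1)  # -> 15 15 31
```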

However, it should be noted that the total number of connections which can be set up in this case cannot be greater than the total number of input channels. Therefore, we can reduce equation (6) to

p_2 \leq \left\lfloor \frac{r_1 n_1 m_{k_1} \lambda_{l_1} - 1}{m_{k_0} \lambda_{l_0}} \right\rfloor + 1.   (7)

When (r2 − 1) n2 mk2 λl2 and n2 mk2 λl2 − 1 are the minima we have

p \geq p_3 = \left\lfloor \frac{(r_2 - 1) n_2 m_{k_2} \lambda_{l_2}}{m_{k_0} \lambda_{l_0}} \right\rfloor + \left\lfloor \frac{n_2 m_{k_2} \lambda_{l_2} - 1}{m_{k_0} \lambda_{l_0}} \right\rfloor + 1,   (8)

and, using the same arguments as in equations (6) and (7), we get

p_3 \leq \left\lfloor \frac{r_2 n_2 m_{k_2} \lambda_{l_2} - 1}{m_{k_0} \lambda_{l_0}} \right\rfloor + 1.   (9)

It should be noted that (r2 − 1) n2 mk2 λl2 and (r1 − 1) n1 mk1 λl1 cannot both be minima at the same time, so finally we obtain

p \geq \min\{p_1;\ p_2;\ p_3\},   (10)

i.e., the equation given in (1). Necessity can be proved by showing a set of events leading to a blocking state when fewer middle stage switches are used than given by equation (1). This can be done easily by setting up connections from switch Ii through the center stage switches numbered from 1 to a1 until all channels in the interstage links between these middle stage switches and switch Ii are fully occupied. Next, connections to switch Oj are set up through the next set of switches, numbered from a1 + 1 to a1 + a2, until all channels in the interstage links between these middle stage switches and switch Oj are fully occupied. In this state, any new connection (Ii, Oj) will have to be set up through the next middle stage switch, i.e., switch a1 + a2 + 1 is needed. □

From Theorem 1 we can derive strict-sense nonblocking conditions for symmetrical three-stage Clos networks. If we put n1 = n2 = n, mk0 = mk1 = mk2 = mk, and λl0 = λl1 = λl2 = λl in (1), we obtain

p \geq 2 \left\lfloor \frac{n m_k \lambda_l - 1}{m_k \lambda_l} \right\rfloor + 1.   (11)

It should also be noted that when there is no mode multiplexing, i.e., mk0 = mk1 = mk2 = 1, and no wavelength multiplexing, i.e., λl0 = λl1 = λl2 = 1, we have the asymmetrical space-division three-stage Clos network, and we obtain the known nonblocking conditions for this case:

p \geq \min\{n_1 + n_2 - 1;\ n_1 r_1;\ n_2 r_2\}.   (12)

V. NUMERICAL EXAMPLES

TABLE I
COMPARISON OF THE REQUIRED NUMBER OF CENTER STAGE SWITCHES p AND THE INTERSTAGE LINK CAPACITY mk0·λl0 IN SWITCHING FABRICS OF THE SAME CAPACITY BUT DIFFERENT mk, λl, r, n

Network capacity | Selected values of mk, λl, r, n | mk0·λl0 = 4 | 8 | 16 | 32 | 64 | 128
N = 4096 | mk·λl = 8, r = 32, n = 16 | 127 | 63 | 31 | 15 | 7 | 1
N = 4096 | mk·λl = 8, r = 16, n = 32 | 255 | 127 | 63 | 31 | 15 | 7
N = 4096 | mk·λl = 16, r = n = 16 | 255 | 127 | 63 | 31 | 15 | 7
N = 4096 | mk·λl = 32, r = 16, n = 8 | 255 | 127 | 63 | 31 | 15 | 7
N = 4096 | mk·λl = 32, r = 8, n = 16 | 511 | 255 | 127 | 63 | 31 | 15
N = 4096 | mk·λl = 64, r = n = 8 | 511 | 255 | 127 | 63 | 31 | 15
N = 4096 | mk·λl = 128, r = 8, n = 4 | 511 | 255 | 127 | 63 | 31 | 15
N = 4096 | mk·λl = 128, r = 4, n = 8 | 1023 | 511 | 255 | 127 | 63 | 31

In Table I we present some examples showing the relationship between the required number of center stage switches and the number of wavelength and mode channels (mk0·λl0) on the interstage links, for symmetrical networks of capacity N = N1 = N2 = 4096. In these examples the switching fabric parameters are denoted by mk = mk1 = mk2, λl = λl1 = λl2, r = r1 = r2, and n = n1 = n2. As can be seen from the table, when, for the same capacity, the number of channels per input fiber (mk·λl) is lower and r·n is greater, the number of required center stage switches is lower. This number also decreases when mk0·λl0 increases. It may be concluded that it is better to have more fibers with fewer multiplexed channels at the input/output side and more channels in the interstage fibers; in this case the number of required center stage switches is lower. However, the capacity of each switch is then higher, since the switching fabric has a greater r. Therefore, there is also a tradeoff between the number of center stage switches, their capacity, and their cost. This will require a more complex study and depends also on the technology used for designing the switches. It should also be pointed out that the switching fabric considered in the paper ensures nonblocking operation provided that the requested input and output channels are free.
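The trend just described can be checked directly from condition (11). The short sketch below (the function name and the 128-channel value are illustrative assumptions) prints the minimum p for a fixed number of channels per first-stage switch while the interstage link capacity mk0·λl0 grows.

```python
from math import floor

def symmetric_p(n_channels, interstage_channels):
    """Minimum p for a symmetric fabric according to condition (11)."""
    return 2 * floor((n_channels - 1) / interstage_channels) + 1

# Trend discussed above: for a fixed number of channels per first-stage switch
# (here 128, an illustrative value), p drops as the interstage capacity grows.
for c0 in (2, 4, 8, 16, 32, 64):
    print(c0, symmetric_p(128, c0))
```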

The number of ports used for connecting ToR routers to the optical switching fabric is another design option. It depends on data center traffic characteristics, services, and the division of this traffic between the EPS and OCS domains. However, this problem is out of the scope of this paper and will be the subject of further research and simulation analysis.

VI. CONCLUSIONS AND FUTURE WORKS

In this paper we studied the strict-sense nonblocking operation of three-stage Clos networks in which the input, output, and interstage links operate with three multiplexing levels. Switching is also performed at three multiplexing levels, i.e., space, wavelength, and mode. The considered network may be used in large data centers, large optical cross-connect systems, or other optical switching applications. We derived and proved the conditions under which this network is nonblocking in the strict sense. The presented results are a generalization of earlier known results to switching networks employing three multiplexing and switching levels; to the best of our knowledge, only switching fabrics with two multiplexing and switching levels have been considered before. Practical implementations of switches which can perform switching at these three levels are not available on the market yet; however, recent publications on MDM and WDM transmission systems, multiplexers, and demultiplexers suggest that such technology may be available in the near future.

ACKNOWLEDGEMENTS

The work described in this paper was financed from the funds of the Ministry of Science and Higher Education for the year 2015.

REFERENCES

[1] R. Miller, "Estimate: Amazon Cloud Backed by 450,000 Servers," Data Center Knowledge, http://www.datacenterknowledge.com/archives/2012/03/14/estimate-amazon-cloud-backed-by-450000servers/, March 14, 2012.
[2] T. Hoff, "Changing Architectures: New Datacenter Networks Will Set Your Code And Data Free," http://highscalability.com/blog/2012/9/4/changing-architectures-newdatacenter-networks-will-set-your.html, 2012.
[3] Cisco Systems, "Cisco Global Cloud Index: Forecast and Methodology, 2012–2017," http://www.cisco.com/c/en/us/solutions/collateral/serviceprovider/global-cloud-index-gci/Cloud_Index_White_Paper.html, 2013.
[4] Greenpeace International, "Make IT Green: Cloud Computing and its Contribution to Climate Change," 2010.
[5] J. Koomey, "Worldwide electricity used in data centers," Environmental Research Letters, vol. 3, no. 034008, 1 August 2008.
[6] L. Neves, J. Krajewski, P. Jung, and M. Bockemuehl, "GeSI SMARTer 2020: The Role of ICT in Driving a Sustainable Future," Global e-Sustainability Initiative (GeSI) 2012 report, http://gesi.org/SMARTer2020, 5 September 2014.
[7] D. Abts and J. Kim, High Performance Datacenter Networks, Morgan & Claypool Publishers, 2011.
[8] K. Wu, J. Xiao, and L. M. Ni, "Rethinking the architecture design of data center networks," Frontiers of Computer Science, vol. 6, no. 5, pp. 596–603, 2012.
[9] A. Greenberg, J. R. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. A. Maltz, P. Patel, and S. Sengupta, "VL2: A Scalable and Flexible Data Center Network," in Proceedings of the ACM SIGCOMM 2009 Conference on Data Communication, Barcelona, Spain, Aug. 2009.
[10] S. Hogg, "Clos Networks: What's Old Is New Again," NetworkWorld, http://www.networkworld.com/article/2226122/cisco-subnetlosnetworks–what-s-old-is-new-again/cisco-subnet/clos-networks–what-sold-is-new-again.html, Jan. 11, 2014.
[11] C. Kachris and I. Tomkos, "A survey on optical interconnects for data centers," IEEE Communications Surveys & Tutorials, vol. 14, no. 4, pp. 1021–1036, 2012.
[12] G. Porter, R. Strong, N. Farrington, A. Forencich, P. Chen-Sun, T. Rosing, Y. Fainman, G. Papen, and A. Vahdat, "Integrating Microsecond Circuit Switching into the Data Center," in Proceedings of the ACM SIGCOMM 2013 Conference on Data Communication, pp. 447–458, 2013.
[13] C. Kachris, K. Kanonakis, and I. Tomkos, "Optical interconnection networks in data centers: recent trends and future challenges," IEEE Communications Magazine, vol. 51, no. 9, pp. 39–45, 2013.
[14] K. Chen, A. Singla, A. Singh, K. Ramachandran, L. Xu, Y. Zhang, H. Wen, and Y. Chen, "OSA: An Optical Switching Architecture for Data Center Networks with Unprecedented Flexibility," in Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI 12), pp. 239–252, San Jose, CA, 2012.
[15] R. Ryf, S. Randel, A. H. Gnauck, C. Bolle, A. Sierra, et al., "Mode-division multiplexing over 96 km of few-mode fiber using coherent 6×6 MIMO processing," Journal of Lightwave Technology, vol. 30, no. 4, pp. 521–536, February 2012.
[16] P. Boffi, P. Martelli, A. Gatto, and M. Martinelli, "Optical vortices: an innovative approach to increase spectral efficiency by fiber mode-division multiplexing," Proc. SPIE 8647, Next-Generation Optical Communication: Components, Sub-Systems, and Systems II, 864705, 2013.
[17] S. Ö. Arik, J. M. Kahn, and K.-P. Ho, "MIMO Signal Processing for Mode-Division Multiplexing," IEEE Signal Processing Magazine, vol. 31, pp. 25–34, March 2014 (Invited Paper).
[18] K.-P. Ho, J. M. Kahn, and J. P. Wilde, "Wavelength-Selective Switches for Mode-Division Multiplexing: Scaling and Performance Analysis," Journal of Lightwave Technology, vol. 32, no. 22, pp. 3724–3735, November 2014.
[19] C. Clos, "A study of non-blocking switching networks," The Bell System Technical Journal, pp. 406–424, 1953.
[20] P. Charransol, J. Hauri, C. Athènes, and D. Hardy, "Development of a time division switching network usable in a very large range of capacities," IEEE Transactions on Communications, vol. COM-27, no. 7, pp. 982–988, July 1979.
[21] A. Jajszczyk, "On nonblocking switching networks composed of digital symmetrical matrices," IEEE Transactions on Communications, vol. COM-31, no. 1, pp. 2–9, January 1983.
[22] Y. Yang, J. Wang, and C. Qiao, "Nonblocking WDM multicast switching networks," IEEE Transactions on Parallel and Distributed Systems, vol. 11, no. 12, pp. 1274–1287, Dec. 2000.
[23] X. Qin and Y. Yang, "Nonblocking WDM switching networks with full and limited wavelength conversion," IEEE Transactions on Communications, vol. 50, no. 12, pp. 2032–2041, Dec. 2002.
[24] F. K. Hwang, The Mathematical Theory of Nonblocking Switching Networks, 2nd ed., World Scientific Publishing Co., Singapore, 2004.
[25] W. Kabaciński, Nonblocking Electronic and Photonic Switching Fabrics, Springer, 2005.
[26] A. Pattavina, Switching Theory – Architectures and Performance in Broadband ATM Networks, John Wiley & Sons, England, 1998.
[27] G. Danilewicz, W. Kabaciński, M. Michalski, and M. Żal, "A New Control Algorithm for Wide-Sense Nonblocking Multiplane Photonic Banyan-Type Switching Fabrics with Zero Crosstalk," IEEE Journal on Selected Areas in Communications (JSAC), vol. 26, no. 3, pp. 54–64, April 2008.
[28] K. Chen, C. Hu, X. Zhang, K. Zheng, Y. Chen, and A. V. Vasilakos, "Survey on routing in data centers: insights and future directions," IEEE Network, vol. 25, no. 4, pp. 6–10, July 2011.
[29] G. Wang, D. G. Andersen, M. Kaminsky, K. Papagiannaki, T. E. Ng, M. Kozuch, and M. Ryan, "c-Through: Part-time Optics in Data Centers," in Proceedings of the ACM SIGCOMM 2010 Conference, pp. 327–338, 2010.
[30] G. Wang, N. Farrington, G. Porter, S. Radhakrishnan, H. H. Bazzaz, V. Subramanya, Y. Fainman, G. Papen, and A. Vahdat, "Helios: a hybrid electrical/optical switch architecture for modular data centers," in Proceedings of the ACM SIGCOMM 2010 Conference, pp. 339–350, 2010.
[31] LIGHTNESS, "Low latency and high throughput dynamic network infrastructures for high performance datacentre interconnects," European Project, http://www.ict-lightness.eu/, 2013.
[32] L.-W. Luo, N. Ophir, C. Chen, L. H. Gabrielli, C. B. Poitras, K. Bergman, and M. Lipson, "Simultaneous Mode and Wavelength Division Multiplexing On-Chip," arXiv:1306.2378, http://arxiv.org/abs/1306.2378, June 2013.
[33] D. J. Richardson, J. M. Fini, and L. E. Nelson, "Space-division multiplexing in optical fibres," Nature Photonics, vol. 7, pp. 354–362, April 2013.
