2013 11th International Conference on Frontiers of Information Technology
GSAN: Green Cloud-Simulation for Storage Area Networks

Mohsin Ikram, Qurrat-ul-ain Babar, Zahid Anwar, Asad Waqar Malik
{12msitmikram, qurrat.ain, zahid.anwar, asad.malik}@seecs.edu.pk
National University of Sciences and Technology (NUST), Islamabad, Pakistan
Abstract—The drive towards utilizing green and renewable energy to power large IT data centers is rapidly gaining momentum. Considerable research has gone into developing simulation models to comprehensively study the energy dynamics of different types of data center equipment in order to better understand how best to conserve energy. Most of this research has targeted the modeling of servers and switches, while relatively little work is available on Storage Area Networks (SAN). Storage Area Networks play a vital role in the data center architecture by using specialized storage hardware to manage data. In this paper, we have extended the GreenCloud simulator to compute the energy consumed by different SAN components. A simulation environment for SAN is presented using the 'Dynamic Voltage and Frequency Scaling' (DVFS) and 'Dynamic Shutdown' (DNS) techniques. The simulation results obtained through two SAN prototype models show that the energy consumption of SAN components varies with the network topology and the type of technique used in scheduling the workloads. Our experimental simulations revealed an increasing percentage of energy savings in DNS mode for both topologies as the number of servers grows, with savings of up to 35.6%.

Keywords—Green Cloud, SAN, SAN Simulator, Power consumption, Energy consumption

I. INTRODUCTION

A Storage Area Network (SAN) is a high speed, optically connected collection of computers and storage devices whose purpose is to store, manage and protect data efficiently. A SAN stores data in block format on disk drives assembled into an array implemented using 'Redundant Array of Independent/Inexpensive Disks' (RAID) technology, which is controlled by a dedicated controller management layer. A SAN involves the following essential components for its operation, which distinguish it from ordinary data center networks:

a) Small Computer Storage Interconnect (SCSI) or internet Small Computer Storage Interconnect (iSCSI);
b) Fibre Channel (FC), which involves optical cables and separate SAN protocols to transmit data.

A SAN is composed of layers that are quite different from one another and perform dedicated tasks. A SAN uses a common protocol language for communication, handshaking and negotiation: the Fibre Channel protocol for hardware intercommunication, and the SCSI protocol used by device software and applications to communicate with hard drives arranged in a RAID architecture for efficient read, write and update operations.

This tells us that a SAN uses Fibre Channel (FC) protocols and Fibre Channel switches, which are different from regular TCP/IP networks and Ethernet-based switches. FC is intended for high speed data access and transmission, enabling effective storage management at the disk drive level, while the Internet Protocol is used for moving files over longer distances. Beyond this, FC has many other advantages for SAN technology. The three essential layers are the Host, Fabric and Storage Array layers (a minimal illustrative sketch of this layer stack is given below):

The Host layer comprises servers with Host Bus Adapters (HBA) and associated Gigabit Interface Converters (GBIC). Implementation software enables the HBA to communicate with the Fabric layer. The HBA is an I/O adapter card that fits inside the servers connected to the Fabric layer. The GBIC converts the data inside the server into optical laser pulses and electronic pulses, and the fibre optic layer carries these signals.

The Fabric layer, also known as the middle layer, is the actual SAN network that connects SAN devices via cables and includes hubs, switches and the physical network. In the Storage Array the data actually resides and is stored in terms of RAID. This layer is intelligent and preserves data and its integrity by creating various data replicas, and it involves device status monitoring.

Most SAN designs currently follow a plug-and-play concept so that scalability can be achieved as needed instead of a monolithic arrangement. A SAN comprises racks and arrays of disks controlled by RAID disk controllers and managed by an assembled array of server boards in a chassis, with each board representing a single linked server. This chassis, along with the storage array of disks, is linked to a combined display mechanism for monitoring their collective and individual working and for performing maintenance activity. The data is transmitted using optical fibre cable but, unlike a typical network implementation, communication is based on SCSI and SAN protocols for internal SAN communication, management and implementation. This is because SAN devices communicate differently and have different protocols and port-level access monitoring, referred to as Fibre Channel, rather than the typical TCP/IP network communication. All SAN device layers are closely tied and linked together as a stack, with each layer having a different dedicated functionality to enable a collective representation of the SAN.
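To make the layered organization just described concrete, the following Python sketch models a SAN as a simple host/fabric/storage stack. It is purely illustrative; the class and field names are our own assumptions and are not part of the GreenCloud simulator.

from dataclasses import dataclass, field
from typing import List

@dataclass
class HostLayer:
    # Servers with Host Bus Adapters (HBA) and GBIC converters
    servers: int
    hba_per_server: int = 1

@dataclass
class FabricLayer:
    # The actual SAN network: Fibre Channel switches, hubs and cabling
    fc_switches: List[str] = field(default_factory=list)

@dataclass
class StorageLayer:
    # Disk array where the data resides, organized as RAID
    raid_level: str = "RAID5"
    disks: int = 0

@dataclass
class SAN:
    host: HostLayer
    fabric: FabricLayer
    storage: StorageLayer

# Hypothetical example: a small SAN with 2 servers, 4 FC switches and a RAID array
san = SAN(HostLayer(servers=2),
          FabricLayer(fc_switches=["L3-top", "L3-a", "L3-b", "L2"]),
          StorageLayer(raid_level="RAID5", disks=12))
print(san)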
As data centers have achieved increasing popularity as providers of resources, and since a SAN is part of a data center, energy consumption becomes a major concern. It is one of the main contributing factors towards data center operational expenses [1, 2]. Consequently, in order to keep the operational expenses of data centers to a minimum, it is important to keep the power and energy consumed by the data centers to the lowest possible level. Recent research in energy optimization has focused mostly on the processing elements. Many studies show that data center architecture and energy-aware scheduling techniques can be improved to significantly reduce the energy consumed by the components [3].

In this paper, simulation models are presented to show how SAN components contribute towards the energy consumption of a data center. GreenCloud is a packet level simulator built as an extension of the network simulator NS-2 [4]. GreenCloud extracts, aggregates, and makes available information about the energy consumed by the computing and communication elements of the data center [5]. Our configurations were designed to evaluate energy modeling for SAN. Two model scenarios are experimentally tested through simulation, with variations, in order to extend the GreenCloud data center application to energy modeling of Storage Area Networks. This shows that GreenCloud can successfully be configured or programmed for different data center architectures and used for calculating energy utilization in different scenarios.

The rest of the paper is organized as follows: Section II presents related work in green cloud computing and related energy models; Section III describes the simulation environment and presents the experimental results; Section IV analyzes the SAN workload; Section V concludes the paper.
II. RELATED WORK

Green cloud computing is growing as a new domain that studies technology and optimizes communication and resource management to save energy. This is a major concern of most cloud computing organizations. Research is being carried out in this field to devise methodologies to further optimize the overall power consumption of the data center and its components. Although research is ongoing in this field, the power consumption of SAN components has remained unexplored until now, which is the basis of our research. The initial research on power savings was dedicated to the hardware components of the data center. Two of the widely used techniques are Dynamic Voltage and Frequency Scaling (DVFS) and Dynamic Power Management (DPM) [6]. The two techniques work differently: DVFS takes the computing load into account and regulates the power consumption accordingly, while DPM is also referred to as a power-down mechanism. The same techniques have been applied to different green cloud architectures in [7] and interesting results have been gathered. Other prospective research areas have also been identified, such as virtual network topologies.

DPM optimizes energy consumption by using a scheduling technique for job consolidation; this mechanism increases the number of unloaded servers so that they can be turned off or put into sleep mode [8]. However, such techniques have limitations, as idle servers may still consume a major portion of energy, which can be as high as about two thirds of the power drawn at peak workload [9].

Another technique to minimize power consumption has been proposed by Pinheiro et al. [10]. This technique manages clusters of physical machines so as to reduce their energy consumption while sustaining the required level of quality of service. The GreenCloud simulator enables communication-level, data-centric infrastructure simulation, providing fine-grained control over code-level parameters for a cloud computing environment [15]. The role of the computing servers in the tool includes task execution, with capacity expressed in million instructions per second, memory storage and scheduling mechanisms. The simulator models workloads as job sequences comprising different sets of tasks.

Power can also be saved with reduced disk access and energy-efficient models. The energy consumption of data centers in the U.S. was estimated at 100 Terawatt-hours per year in 2011, costing more than 10 billion dollars [16]. One more reason for the high energy consumption of data centers is the cooling equipment and its associated strategies. An energy dissipation density of 150 to 200 Watts per square foot for cooling accounts for about one third of data center power consumption [17]. Disk power consumption was studied by Freitas and Wilcke [18], in which the overall disk power consumption was shown to be the sum of the spindle motor power, the seek power and the control-logic interface power, with each of the three factors contributing about one third of the total disk power utilization.
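As a quick illustration of the disk power breakdown reported in [18] and the idle-server overhead noted in [9], the following Python sketch estimates how much power is still drawn by an idle component; the peak power figures used here are placeholders for illustration only, not measurements from this paper.

# Hypothetical peak power figures (placeholders, not from the paper)
DISK_PEAK_W = 12.0      # total disk power at full activity
SERVER_PEAK_W = 300.0   # server power at peak workload

# Per Freitas and Wilcke [18]: spindle, seek and control-logic power
# each contribute roughly one third of the total disk power.
spindle_w = seek_w = control_logic_w = DISK_PEAK_W / 3.0
disk_total_w = spindle_w + seek_w + control_logic_w

# Per [9]: an idle server can still draw about 2/3 of its peak power,
# which is why shutting idle servers down (DPM / DNS) saves energy.
idle_server_w = (2.0 / 3.0) * SERVER_PEAK_W

print(f"disk total: {disk_total_w:.1f} W "
      f"(spindle/seek/logic each {spindle_w:.1f} W)")
print(f"idle server draw: {idle_server_w:.0f} W of {SERVER_PEAK_W:.0f} W peak; "
      f"a DPM shutdown avoids that draw entirely")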
III. SIMULATION OF STORAGE AREA NETWORK

To simulate a SAN using an energy-aware cloud computing tool, the architecture and link settings that exist in a SAN have been configured in the simulator. The simulator is designed to simulate workload distribution together with the energy utilization of servers, switches and links [5], which we have applied to a SAN in order to simulate a realistic packet-level communication pattern. There are other models, such as BCube, that have a server-oriented architecture in which switches act as the interconnect for data centers. BCube is a model for data centers; however, we have focused this research on Storage Area Networks, which are distinct in themselves. Instead of using data center technologies such as BCube, VL2 or DCell, we have chosen Storage Area Networks because of their practical use and their benefits such as scalability and high performance.

A. Experimental Setup

By having a multi-switched redundant fabric it is easier to preserve constant connectivity between SAN components even if one of the switches becomes unavailable [12]. A similar SAN architecture [13], providing data center SAN services for Medical Center hosted applications with redundancy, is available; the SANs of that model used IBM DS4400 and DS4500 controllers. Aameek et al. [14] presented another realistic model, a test-bed for Storage Area Networks called HARMONY.

We have designed two realistic models for demonstrating and analyzing the energy utilized in relation to the Storage Area Network architectures discussed above. Both models are configured inside the GreenCloud simulator. GreenCloud is a complex simulation tool built as an extension of the NS-2 network simulator, with a considerable part coded in C++ and the remainder written as command-line scripts. GreenCloud also offers power management facilities for evaluating different architectures under different power management modes. The supported energy management modes in the simulator are Dynamic Shutdown (DNS) and Dynamic Voltage / Frequency Scaling (DVFS). Servers are responsible for task execution. Servers are single core with a predefined processing limit expressed in Million Instructions Per Second (MIPS). If both power management modes are enabled, any idle server behaves according to DNS during the simulation; otherwise it is shifted to DVFS mode. We have simulated two different Storage Area Network topologies to measure energy consumption and task distribution. In both cases we simulated architectures in GreenCloud based on real topologies similar to [12] and [13]; we made some essential changes to match the SAN architecture, along with some assumptions imposed by the simulation tool, which was originally designed for data center energy analysis. We configured the three-tier model and limited the number of L3 and L2 switches to resemble a small Storage Area Network topology. The switch configuration in such tools is meant to mimic the switching behavior of the topology, so the internal switch configurations were not altered, keeping the original switching behavior intact. The main difference is that SAN switches support the Fibre Channel protocol and, like other switches, are capable of many-to-many communication. The link bandwidth has been configured to 3.2 Gbit/s and the propagation time through a link to 5 ns in order to match SAN optical links. The per-server computation load for tasks is set to the default of 1,000,000 MIPS and the task size is 850 Kbytes. The speed of light through a fibre optic link corresponds to a delay of 1 ms per 200 km for a good quality cable [11], so the value is calculated as:
1 ms / 200 km
10 km = 50 microseconds = 0.05 ms
1 km = 0.05 / 10 = 0.005 ms
1000 m = 0.005 ms
1 m = 0.005 / 1000 ms = 5 ns

Similarly, from the SAN cable matrix [11] it is noted that there are three types of SAN cables: Multimode 50 μm (short wave, up to 500 m), Single-mode 9 μm (long wave, up to 100 km) and Multimode 62.5 μm (short wave, up to 300 m), with bandwidths of 100-400 MB/s, 100-400 MB/s and 100-200 MB/s respectively. In our scenario we have considered a 400 MB/s bandwidth, which in the code is set to the value 3.2 Gbit/s, calculated as:

400 MB/s = 0.4 GB/s = 0.4 × 8 Gbit/s = 3.2 Gbit/s
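The two unit conversions above can be checked with a few lines of Python; this is a sanity-check sketch of the arithmetic only, not simulator code.

# Propagation delay: 1 ms per 200 km of fibre [11]
ms_per_km = 1.0 / 200.0            # = 0.005 ms per km
ns_per_m = ms_per_km * 1e6 / 1000  # ms/km -> ns/m
assert abs(ns_per_m - 5.0) < 1e-9  # 5 ns per metre, as used for the links

# Link bandwidth: 400 MB/s expressed in Gbit/s
gbit_per_s = 400 / 1000 * 8        # 400 MB/s = 0.4 GB/s = 3.2 Gbit/s
assert abs(gbit_per_s - 3.2) < 1e-9

print(f"propagation delay: {ns_per_m:.1f} ns/m, bandwidth: {gbit_per_s:.1f} Gbit/s")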
We have simulated the SAN architectures and compared the energy outcomes in terms of the number of servers and the power management schemes. The first SAN prototype, or "SAN Island" design, is composed of one top-of-rack SAN L3 switch which is connected to two internal SAN L3 switches (one pair). This switch pair is connected to a SAN L2 switch that links to the SAN servers. We analyzed the energy consumption for an increasing server count and different power management modes. A storage array of disks with its controller is linked separately to the paired switches. This array is implemented with RAID (Redundant Array of Independent Disks) storage technology; GreenCloud itself does not model RAID storage.

Fig. 1. SAN Simulation topology – I

The second SAN architectural design prototype is quite similar to the first but has an additional pair of switches at the second level, which is connected architecturally to the servers and the RAID storage modules. The second SAN prototype is designed because such architectures are useful when considering the SAN zoning concept, which involves different name-server databases across an intermediate switch layer, or SAN islands; this concept is not discussed further in our experimental setup. The SAN architecture under consideration is shown in Figure 2.

Fig. 2. SAN Simulation topology – II
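For reference, the following Python snippet summarizes the two simulated prototypes and the main simulation parameters described above as plain data. It is a descriptive sketch only; the names are our own and this is not the actual GreenCloud/NS-2 configuration syntax.

# Common simulation parameters taken from the text above
common = {
    "link_bandwidth_gbps": 3.2,    # 400 MB/s optical FC link
    "propagation_delay_ns_per_m": 5,
    "server_capacity_mips": 1_000_000,
    "task_size_kbytes": 850,
    "power_modes": ["None", "DVFS", "DNS", "DVFS+DNS"],
}

# Prototype I ("SAN Island"): 1 top L3 switch, 1 pair of internal L3
# switches, 1 L2 switch linking the servers and the RAID array.
prototype_1 = {"l3_top": 1, "l3_pair": 2, "l2": 1, "servers": [2, 5, 10, 50]}

# Prototype II: Prototype I plus an additional switch pair at the
# second level, connected to the servers and RAID storage modules.
prototype_2 = {"l3_top": 1, "l3_pair": 4, "l2": 1, "servers": [2, 5, 10, 50]}

for name, proto in [("Prototype I", prototype_1), ("Prototype II", prototype_2)]:
    n_switches = proto["l3_top"] + proto["l3_pair"] + proto["l2"]
    print(f"{name}: {n_switches} switches, server counts {proto['servers']}")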
B. Results and Outcomes

SAN Prototype Design-I

The outcomes of the different scenarios with respect to server quantity and prototype design for Design 1 are given below. Let "S" denote a server and "Sw" a switch in our topology. Table I shows the outcome for Design 1 with only 2 servers. From the simulation we observed that when DVFS is enabled for the servers, both servers share the tasks, with S1 = 31 tasks and S2 = 4 tasks, for a total of 35 tasks (31 + 4). The SAN energy consumption in Watt-hours (W-h) under the different power management modes is as follows:

TABLE I. Topology 1 energy consumption with number of servers = 2, tasks = 35 and average tasks per server = 17.5

Power Management Mode           | SAN Load % | Avg Load per Server % | Total Energy (W-h) | Server Energy (W-h)
Serv=> No,       Switch=> No    | 28.5       | 0.3                   | 151.3              | 7.6
Serv=> DVFS,     Switch=> No    | 18.1       | 0.2                   | 150.4              | 6.7
Serv=> DNS,      Switch=> No    | 28.5       | 0.3                   | 146.6              | 2.9
Serv=> DVFS+DNS, Switch=> No    | 18.1       | 0.2                   | 150.0              | 6.3
Serv=> No,       Switch=> DVFS  | 28.5       | 0.3                   | 151.3              | 7.6
Serv=> DVFS+DNS, Switch=> DVFS  | 18.1       | 0.2                   | 150.0              | 6.3
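A simple way to read Table I is to compute the relative saving of each power management mode against the baseline with no power management; the short Python sketch below does this for the 2-server case, with the values copied from Table I.

# Total energy (W-h) for Topology 1 with 2 servers, from Table I
baseline_wh = 151.3          # Serv: No, Switch: No
dns_wh = 146.6               # Serv: DNS, Switch: No
dvfs_wh = 150.4              # Serv: DVFS, Switch: No

def saving_pct(baseline, managed):
    """Relative energy saving of a power-managed run vs. the baseline."""
    return 100.0 * (baseline - managed) / baseline

print(f"DNS saving:  {saving_pct(baseline_wh, dns_wh):.1f}%")   # ~3.1%
print(f"DVFS saving: {saving_pct(baseline_wh, dvfs_wh):.1f}%")  # ~0.6%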
Fig. 3. Distribution of energy consumption of SAN Design-I with DNS (Total Energy = 216.7 W-h). Breakdown: Server Energy 71.7 W-h, Switch Energy Core 47.1 W-h, Switch Energy Aggregation 94.1 W-h, Switch Energy Access 2.8 W-h. (Servers for disk management in RAID: 50; Power Mgmt. (Servers): DNS; Power Mgmt. (Switches): No.)
TABLE II. Topology 1 energy consumption with number of servers = 5, tasks = 86 and average tasks per server = 17.2

Power Management Mode           | SAN Load % | Avg Load per Server % | Total Energy (W-h) | Server Energy (W-h)
Serv=> No,       Switch=> No    | 28.5       | 0.3                   | 162.8              | 19.1
Serv=> DVFS,     Switch=> No    | 18.1       | 0.2                   | 161.0              | 17.3
Serv=> DNS,      Switch=> No    | 28.5       | 0.3                   | 150.9              | 7.2
Serv=> DVFS+DNS, Switch=> No    | 18.1       | 0.2                   | 160.0              | 16.3
Serv=> No,       Switch=> DVFS  | 28.5       | 0.3                   | 162.8              | 19.1
Serv=> DVFS+DNS, Switch=> DVFS  | 18.1       | 0.2                   | 160.0              | 16.3
TABLE III. Topology 1 energy consumption with number of servers = 10, tasks = 174 and average tasks per server = 17.4

Power Management Mode           | SAN Load % | Avg Load per Server % | Total Energy (W-h) | Server Energy (W-h)
Serv=> No,       Switch=> No    | 28.7       | 0.3                   | 182.0              | 38.3
Serv=> DVFS,     Switch=> No    | 18.3       | 0.2                   | 181.3              | 37.6
Serv=> DNS,      Switch=> No    | 28.7       | 0.3                   | 158.1              | 14.4
Serv=> DVFS+DNS, Switch=> No    | 18.3       | 0.2                   | 179.6              | 35.9
Serv=> No,       Switch=> DVFS  | 28.7       | 0.3                   | 182.0              | 38.3
Serv=> DVFS+DNS, Switch=> DVFS  | 18.3       | 0.2                   | 179.6              | 35.9

Fig. 4. Distribution of energy consumption of SAN Design-I with DNS (Total Energy = 158.1 W-h). Breakdown: Server Energy 14.4 W-h, Switch Energy Core 47.1 W-h, Switch Energy Aggregation 94.1 W-h, Switch Energy Access 2.5 W-h. (Servers for disk management in RAID: 10; Power Mgmt. (Servers): DNS; Power Mgmt. (Switches): No.)
TABLE IV. Topology 1 energy consumption with number of servers = 50, tasks = 865 and average tasks per server = 17.3

Power Management Mode           | SAN Load % | Avg Load per Server % | Total Energy (W-h) | Server Energy (W-h)
Serv=> No,       Switch=> No    | 28.6       | 0.3                   | 335.1              | 191.1
Serv=> DVFS,     Switch=> No    | 18.2       | 0.2                   | 774.4              | 630.4
Serv=> DNS,      Switch=> No    | 28.6       | 0.3                   | 215.7              | 71.7
Serv=> DVFS+DNS, Switch=> No    | 18.2       | 0.2                   | 767.0              | 623.0
Serv=> No,       Switch=> DVFS  | 28.6       | 0.3                   | 335.1              | 191.1
Serv=> DVFS+DNS, Switch=> DVFS  | 18.2       | 0.2                   | 767.0              | 623.0
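Pulling the baseline and DNS totals out of Tables I-IV shows how the relative saving from Dynamic Shutdown grows with the server count; the sketch below recomputes, from the table values, the 3.1%-35.6% range quoted later in the conclusion.

# (servers, total W-h with no power management, total W-h with DNS), Tables I-IV
design_1 = [(2, 151.3, 146.6), (5, 162.8, 150.9), (10, 182.0, 158.1), (50, 335.1, 215.7)]

for servers, baseline, dns in design_1:
    saving = 100.0 * (baseline - dns) / baseline
    print(f"{servers:>2} servers: DNS saves {saving:4.1f}% "
          f"({baseline:.1f} -> {dns:.1f} W-h)")
# Expected output: roughly 3.1%, 7.3%, 13.1% and 35.6%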
SAN Prototype Design-II

Similarly, the GreenCloud simulation outcomes for the different scenarios of SAN Prototype Design 2 (4 internal switches) are shown below.

TABLE V. Topology 2 energy consumption with number of servers = 2, tasks = 35 and average tasks per server = 17.5

Power Management Mode           | SAN Load % | Avg Load per Server % | Total Energy (W-h) | Server Energy (W-h)
Serv=> No,       Switch=> No    | 28.5       | 0.3                   | 245.5              | 7.6
Serv=> DVFS,     Switch=> No    | 18.1       | 0.2                   | 244.6              | 6.7
Serv=> DNS,      Switch=> No    | 28.5       | 0.3                   | 240.8              | 2.9
Serv=> DVFS+DNS, Switch=> No    | 18.1       | 0.2                   | 244.2              | 6.3
Serv=> No,       Switch=> DVFS  | 28.5       | 0.3                   | 245.5              | 7.6
Serv=> DVFS+DNS, Switch=> DVFS  | 18.1       | 0.2                   | 244.2              | 6.3
TABLE VI. Topology 2 energy consumption with number of servers = 5, tasks = 86 and average tasks per server = 17.2

Power Management Mode           | SAN Load % | Avg Load per Server % | Total Energy (W-h) | Server Energy (W-h)
Serv=> No,       Switch=> No    | 28.5       | 0.3                   | 257.0              | 19.1
Serv=> DVFS,     Switch=> No    | 18.1       | 0.2                   | 255.2              | 17.3
Serv=> DNS,      Switch=> No    | 28.5       | 0.3                   | 245.1              | 7.2
Serv=> DVFS+DNS, Switch=> No    | 18.1       | 0.2                   | 254.2              | 16.3
Serv=> No,       Switch=> DVFS  | 28.5       | 0.3                   | 257.0              | 19.1
Serv=> DVFS+DNS, Switch=> DVFS  | 18.1       | 0.2                   | 254.2              | 16.3
TABLE VII. Topology 2 energy consumption with number of servers = 10, tasks = 174 and average tasks per server = 17.4

Power Management Mode           | SAN Load % | Avg Load per Server % | Total Energy (W-h) | Server Energy (W-h)
Serv=> No,       Switch=> No    | 28.7       | 0.3                   | 276.2              | 38.3
Serv=> DVFS,     Switch=> No    | 18.3       | 0.2                   | 275.5              | 37.6
Serv=> DNS,      Switch=> No    | 28.7       | 0.3                   | 252.3              | 14.4
Serv=> DVFS+DNS, Switch=> No    | 18.3       | 0.2                   | 273.8              | 35.9
Serv=> No,       Switch=> DVFS  | 28.7       | 0.3                   | 276.2              | 38.3
Serv=> DVFS+DNS, Switch=> DVFS  | 18.3       | 0.2                   | 273.8              | 35.9
TABLE VIII. Topology 2 energy consumption with number of servers = 50, tasks = 865 and average tasks per server = 17.3

Power Management Mode           | SAN Load % | Avg Load per Server % | Total Energy (W-h) | Server Energy (W-h)
Serv=> No,       Switch=> No    | 28.6       | 0.3                   | 429.3              | 191.1
Serv=> DVFS,     Switch=> No    | 18.2       | 0.2                   | 868.6              | 630.4
Serv=> DNS,      Switch=> No    | 28.6       | 0.3                   | 309.9              | 71.7
Serv=> DVFS+DNS, Switch=> No    | 18.2       | 0.2                   | 861.2              | 623.0
Serv=> No,       Switch=> DVFS  | 28.6       | 0.3                   | 429.3              | 191.1
Serv=> DVFS+DNS, Switch=> DVFS  | 18.2       | 0.2                   | 861.2              | 623.0

Fig. 5. Distribution of energy consumption of SAN Design-I without DNS enabled (Total Energy = 335.1 W-h). Breakdown: Server Energy 191.1 W-h, Switch Energy Core 47.1 W-h, Switch Energy Aggregation 94.1 W-h, Switch Energy Access 2.8 W-h. (Servers for disk management in RAID: 50; Power Mgmt. (Servers): No; Power Mgmt. (Switches): No.)

Fig. 6. Distribution of energy consumption of SAN Design-I without DNS enabled (Total Energy = 182 W-h). Breakdown: Server Energy 38.3 W-h, Switch Energy Core 47.1 W-h, Switch Energy Aggregation 94.1 W-h, Switch Energy Access 2.5 W-h. (Servers for disk management in RAID: 10; Power Mgmt. (Servers): No; Power Mgmt. (Switches): No.)

TABLE IX. Switch (L3 / L2) energy consumption (W-h)

SAN Model Type | L3 Top Switch Energy | L3 Pair Switch Energy | L2 Switch Energy
SAN Model I    | 47.1                 | 94.1                  | 2.5
SAN Model II   | 47.1                 | 188.3                 | 2.5
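The totals in the tables above can be cross-checked against Table IX: for each design, the total energy is approximately the server energy plus the three switch-layer energies. The following Python sketch performs that check for the 2-server baseline rows of Tables I and V.

# Switch energies (W-h) from Table IX
switch_energy = {
    "Model I":  {"l3_top": 47.1, "l3_pair": 94.1, "l2": 2.5},
    "Model II": {"l3_top": 47.1, "l3_pair": 188.3, "l2": 2.5},
}

# Baseline rows (no power management, 2 servers): server energy and
# reported total, from Table I (Model I) and Table V (Model II).
baseline = {"Model I": (7.6, 151.3), "Model II": (7.6, 245.5)}

for model, (server_wh, reported_total) in baseline.items():
    switches_wh = sum(switch_energy[model].values())
    total = server_wh + switches_wh
    print(f"{model}: servers {server_wh} + switches {switches_wh:.1f} "
          f"= {total:.1f} W-h (table reports {reported_total})")
# Model II exceeds Model I by about 2 x 47.1 W-h, the added switch pair.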
IV. SAN WORKLOAD

By increasing the maximum server computing capacity from 1 MIPS to 1.5 MIPS in the server parametric settings, the workload for Model I was studied. This reflected an increase in SAN load as well as in server energy, although the energy increase was minimal. Comparing 2 servers with 5 servers, the overall load dropped from 29% to 26.9%. By enabling DNS the energy consumption of the servers was considerably reduced, as expected.

TABLE X. Server load table for 1 MIPS

           | Total Tasks | SAN Load % | Server Power (W-h) | Server Load (MIPS)
2 Servers  | 35          | 28.5       | 7.6                | 1.00
5 Servers  | 86          | 28.5       | 19.1               | 1.00
10 Servers | 174         | 28.7       | 38.3               | 1.00
50 Servers | 865         | 28.6       | 191.1              | 1.00

Fig. 7. Server load for 1 MIPS (server power and SAN load on the primary axis, 0-250, and total tasks on the secondary axis, 0-1000, for 2, 5, 10 and 50 servers).
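Table X suggests that, at a fixed 1 MIPS server load, both the task count and the server energy scale almost linearly with the number of servers; the short sketch below computes the per-server averages from the table values.

# (servers, total tasks, server power in W-h) at 1 MIPS, from Table X
table_x = [(2, 35, 7.6), (5, 86, 19.1), (10, 174, 38.3), (50, 865, 191.1)]

for servers, tasks, power_wh in table_x:
    print(f"{servers:>2} servers: {tasks / servers:5.1f} tasks/server, "
          f"{power_wh / servers:4.2f} W-h/server")
# Roughly 17.2-17.5 tasks and ~3.8 W-h per server in every configuration.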
This implies that as the load is increased the energy consumption of the servers also increases, but only marginally; the server computational load is high compared to the small increase in energy consumption. It was also evident from the simulation results that as the number of servers was increased there was a considerable increase in the number of tasks performed by the servers. The simulation assumes a static load per server in MIPS, and we extracted the outcomes using 1 MIPS and 1.5 MIPS server load settings. The outcomes are illustrated in Figure 7 and Figure 8.

TABLE XI. Server load table for 1.5 MIPS

           | Total Tasks | SAN Load % | Server Power (W-h) | Server Load (MIPS)
2 Servers  | 52          | 29.0       | 7.7                | 1.50
5 Servers  | 121         | 26.9       | 19.0               | 1.50
10 Servers | 260         | 28.7       | 38.3               | 1.50
50 Servers | 1298        | 28.7       | 191.2              | 1.50

Fig. 8. Server load for 1.5 MIPS (server power and SAN load on the primary axis, 0-250, and total tasks on the secondary axis, 0-1500, for 2, 5, 10 and 50 servers).

Fig. 9. Energy of idle servers clipped in DNS mode.
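To compare the two load settings, the sketch below puts Tables X and XI side by side and reports how the task count and server power change when the per-server load rises from 1 MIPS to 1.5 MIPS.

# servers -> (total tasks, server power W-h) from Table X (1 MIPS) and Table XI (1.5 MIPS)
mips_1_0 = {2: (35, 7.6), 5: (86, 19.1), 10: (174, 38.3), 50: (865, 191.1)}
mips_1_5 = {2: (52, 7.7), 5: (121, 19.0), 10: (260, 38.3), 50: (1298, 191.2)}

for servers in sorted(mips_1_0):
    t0, p0 = mips_1_0[servers]
    t1, p1 = mips_1_5[servers]
    print(f"{servers:>2} servers: tasks x{t1 / t0:.2f}, "
          f"server energy {p0:.1f} -> {p1:.1f} W-h")
# Tasks grow by roughly the 1.5x load factor, while the server energy
# stays almost unchanged, matching the marginal increase noted above.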
V. CONCLUSION
In this paper, we designed and analyzed Storage Area Network models for energy consumption using the GreenCloud simulator. From the simulation we conclude that as the number of servers increases, both the server energy and the total energy increase. Comparing the two designs, the total energy increases by 94.1 W-h, which is the energy consumed by the two switches added in the second design, while the energy consumed by the servers is the same in both designs. This increase in energy consumption is therefore due to the increase in switch energy consumption: one switch consumes 47.1 W-h, so the switch pair added in the second design accounts for roughly twice that amount. It is evident from the results that less energy is consumed when the DNS (Dynamic Shutdown) power management mode is enabled. In DNS mode idle servers are shut down to avoid drawing excessive energy, thus reducing the server energy consumption.

Results showed that the total energy savings of SAN topology I in DNS mode increased from 3.1% (2 servers) up to 35.6% (50 servers). Results for SAN topology II showed that the total energy savings in DNS mode increased from 1.91% (2 servers) up to 27.81% (50 servers). This makes it evident that the percentage of energy savings in DNS mode grows for both topologies as the number of servers is increased; thus, when DNS is enabled there is more energy saving in the SAN architecture for an increasing number of servers. Storage Area Networks consist of switches, servers, a RAID storage module and optical links, and these components collectively contribute to the energy consumption of the overall SAN structure in use. Finally, regarding RAID technologies, RAID 5 and RAID 10 (that is, RAID 1+0) are essential to note due to their advantages; the RAID technology should be chosen based on the type of storage network, especially for critical applications. The total energy is the sum of the energy consumed by the switches (FC layer), the energy consumed by the servers (Host layer) and the energy consumed by the disk array (Storage layer). Future work can therefore be to simulate RAID energy utilization for Storage Area Networks, making such technologies more energy efficient.

REFERENCES

[1] X. Fan, W.-D. Weber, and L. A. Barroso, "Power Provisioning for a Warehouse-sized Computer," in Proceedings of the ACM International Symposium on Computer Architecture, San Diego, CA, June 2007.
[2] R. Raghavendra, P. Ranganathan, V. Talwar, Z. Wang, and X. Zhu, "No 'Power' Struggles: Coordinated Multi-level Power Management for the Data Center," ASPLOS 2008.
[3] P. Mahadevan, P. Sharma, S. Banerjee, and P. Ranganathan, "Energy Aware Network Operations," IEEE INFOCOM, pp. 1-6, 2009.
[4] The Network Simulator ns-2. [Online]. Available: http://www.isi.edu/nsnam/ns/
[5] D. Kliazovich, P. Bouvry, Y. Audzevich, and S. U. Khan, "GreenCloud: A Packet-level Simulator of Energy-aware Cloud Computing Data Centers," 53rd IEEE Global Communications Conference (GLOBECOM), Miami, FL, USA, December 2010.
[6] T. Horvath, T. Abdelzaher, K. Skadron, and X. Liu, "Dynamic Voltage Scaling in Multitier Web Servers with End-to-End Delay Control," IEEE Transactions on Computers, vol. 56, no. 4, pp. 444-458, 2007.
[7] R. Buyya, A. Beloglazov, and J. H. Abawajy, "Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges," Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2010), Las Vegas, USA, July 12-15, 2010.
[8] B. Li, J. Li, J. Huai, T. Wo, Q. Li, and L. Zhong, "EnaCloud: An Energy-Saving Application Live Placement Approach for Cloud Computing Environments," IEEE International Conference on Cloud Computing, Bangalore, India, 2009.
[9] G. Chen, W. He, J. Liu, S. Nath, L. Rigas, L. Xiao, and F. Zhao, "Energy-aware server provisioning and load dispatching for connection-intensive internet services," in Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation, Berkeley, CA, USA, 2008.
[10] E. Pinheiro, R. Bianchini, E. V. Carrera, and T. Heath, "Load balancing and unbalancing for power and performance in cluster-based systems," in Proceedings of the Workshop on Compilers and Operating Systems for Low Power (COLP'01), Sep. 2001, pp. 182-195. [Online]. Available: http://research.ac.upc.es/pact01/colp/paper04.pdf
[11] C. Poelker and A. Nikitin, Storage Area Networks for Dummies, 2nd Edition, Wiley Publishing Inc.
[12] B. M. Posey, "A Crash Course in Storage Area Networks, Part 5," Oct. 2012. [Online]. Available: http://www.windowsnetworking.com/articlestutorials/netgeneral/Crash-Course-Storage-Area-Networking-Part5.html
[13] "Storage Area Network (SAN) Technology," University of Miami Leonard M. Miller School of Medicine, Information Technology. [Online]. Available: http://it.med.miami.edu/x443.xml
[14] A. Singh, M. Korupolu, and D. Mohapatra, "Server-Storage Virtualization: Integration and Load Balancing in Data Centers," International Conference for High Performance Computing, Networking, Storage and Analysis, Austin, Texas, USA, 2008.
[15] D. Kliazovich, P. Bouvry, and S. U. Khan, "Simulation and Performance Analysis of Data Intensive and Workload Intensive Cloud Computing Data Centers," in Optical Interconnects for Future Data Center Networks, Springer Science+Business Media, New York, 2013.
[16] R. Kaushik, L. Cherkasova, R. Campbell, and K. Nahrstedt, "Lightning: Self-Adaptive, Energy-Conserving, Multi-Zoned, Commodity Green Cloud Storage System," 19th ACM International Symposium on High Performance Distributed Computing, Chicago, Illinois, USA, 2010.
[17] Q. Zhu and Y. Zhou, "Power-aware storage cache management," IEEE Transactions on Computers, IEEE Computer Society, May 2005.
[18] R. F. Freitas and W. W. Wilcke, "Storage-class memory: The next storage system technology," IBM Journal of Research and Development, July 2008.