VMotion Over Distance for Microsoft, Oracle, and SAP Enabled by VCE Vblock 1, EMC Symmetrix VMAX, EMC CLARiiON, and EMC VPLEX Metro An Architectural Overview
Abstract
This white paper describes the design, deployment, and validation of a virtualized application environment incorporating VMware vSphere, Oracle E-Business Suite Release 12, SAP, Microsoft SharePoint 2007, and Microsoft SQL Server 2008 online transaction processing (OLTP) workloads on virtualized storage presented by EMC® VPLEX™ Metro.

May 2010
Copyright © 2010 EMC Corporation. All rights reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners. Part number: H6983
Table of Contents
Executive summary
  Business case
  Product overview
  Key results
Introduction
  Introduction to this white paper
  Purpose
  Scope
  Audience
  Terminology
Configuration
  Overview
  Physical environment
  Hardware resources
  Software resources
Common elements in this distributed virtualized data center test environment
  Introduction to the common elements
  Contents
VMware vSphere
  VMware vSphere overview
  VMware vSphere configuration
EMC Symmetrix VMAX
  EMC Symmetrix VMAX overview
  EMC Symmetrix VMAX configuration
EMC CLARiiON CX4-480
  EMC CLARiiON CX4-480 overview
  EMC CLARiiON CX4-480 configuration
VCE Vblock 1
  VCE Vblock 1 overview
  VCE Vblock 1 configuration
VPLEX Metro
  VPLEX Metro overview
  SAN design for VPLEX Metro
  VPLEX Metro features for storage usage
  Storage best practice – partition alignment
  Distributed mirroring – DR1 device
  VPLEX Metro back-end zoning
  VPLEX Metro front-end zoning
  VPLEX Metro WAN connectivity
  Migration to VPLEX Metro using LUN encapsulation – disruptive to host access
  Migration to VPLEX Metro using VMware Storage VMotion – nondisruptive to host access
  Migration to VPLEX Metro DR1 – disruptive to host access
  Migration from Site A to Site B VPLEX Metro LUN – nondisruptive to host access
VPLEX Metro administration
  Introduction to VPLEX Metro administration
  VPLEX Metro administration procedure
Microsoft Office SharePoint Server 2007
  Microsoft SharePoint Server 2007 overview
Microsoft SharePoint Server 2007 configuration
  Microsoft SharePoint Server 2007 configuration overview
  Microsoft SharePoint Server 2007 design considerations
  Microsoft SharePoint Server 2007 farm virtual machine configurations
  Virtual machine configuration and resource allocation
  Testing approach—SharePoint farm user load profile
Validation of the virtualized SharePoint Server 2007 environment
  Test summary
  Validation without encapsulation to VPLEX
  Validation with VMotion running between local and remote sites
  Validation of cross-site VMotion
Microsoft SQL Server 2008
  Microsoft SQL Server 2008 overview
Microsoft SQL Server 2008 configuration
  Design considerations
  SQL Server test application
  OLTP workloads
  Key components of SQL Server testing
  Partitioning the SQL database
  Broker and customer file groups partitioning
  Broker and customer file groups
Validation of the virtualized SQL Server 2008 environment
  Test summary
  Validation prior to encapsulation
  Validation after encapsulation
  Validation of cross-site VMotion
SAP
  SAP overview
SAP configuration
  SAP ERP 6.0
  SAP BW 7.0
  Business scenario
  Design considerations
Validation of the virtualized SAP environment
  Test objectives
  Test scenario
  Test procedure
  Test results
Oracle
  Oracle overview
Oracle configuration
  Configuration of the Oracle E-Business Suite environment
  Design considerations
  Oracle E-Business Suite Database Server
  Oracle E-Business Suite Application Servers 1 and 2
  Oracle E-Business Suite Infrastructure Server
Validation of the virtualized Oracle environment
  Tuning and baseline tests
  Baseline test
  Encapsulate RDM (Raw Device Mapping) to vStorage
  VMotion migration test
  100 km distance simulation for FC
  Batch process test
Conclusion
  Summary
  Findings
  Next steps
References
  White papers
  Product documentation
  Other documentation
Executive summary

Business case
As companies increasingly realize the business and technical benefits of virtualizing their servers and applications, they are looking to apply the same model to their storage systems. Server virtualization allows hardware resources to be pooled into resource groups and dynamically allocated to application workloads, providing a flexible and fluid infrastructure. Storage, too, must evolve beyond simple consolidation into virtual storage, which allows storage resources to be aggregated and virtualized into a dynamic storage infrastructure that complements the dynamic virtual server infrastructure.

EMC delivers a virtual storage solution that builds on fully automated storage tiering to address the need for mobility and flexibility in the underlying storage infrastructure. It does so through federation: delivering cooperating pools of storage resources. Federation enables IT to quickly and efficiently support the business through pools of resources that can be dynamically allocated. This flexibility elevates the value IT offers to the business, since applications and data can be moved for better support of services. Together, cooperating pools of server applications and storage enable a new model of computing: IT as a service.

To proactively avoid potential disaster threats, such as forecasted weather events, IT departments must overcome the challenges that storage virtualization introduces with distance. Until now, this has been impossible without relying on array replication between the data center locations and a site failover process.
Product overview
EMC® VPLEX™ Metro enables disparate storage arrays at two separate locations to appear as a single, shared array to application hosts, allowing for easy migration and planned relocation of application servers and application data, whether physical or virtual, within and between data centers across distances of up to 100 km. VPLEX Metro enables companies to ensure effective information distribution by sharing and pooling storage resources across multiple hosts over synchronous distances.

VPLEX Metro empowers companies with new ways to manage their virtual environment over synchronous distances so they can:
• Transparently share and balance resources across physical data centers
• Ensure instant, real-time data access for remote users
• Increase protection to reduce unplanned application outages

Transparently share and balance resources within and across physical data centers

Using VPLEX Metro, IT departments can migrate and relocate virtual machines, applications, and data within, across, and between data centers. VPLEX Metro works in conjunction with VMware VMotion and Storage VMotion to:
• Enable administrators to use standard management tools to easily distribute running applications between two sites, making it easy to load balance operations
• Transparently move running applications and data between sites, eliminating service disruption during scheduled downtime events
• Easily add or remove storage, so that the actual location of data on a single array becomes much less important; with virtual storage, incorporating new storage systems into the IT environment is faster and simpler
• Accelerate private cloud deployment by creating a seamless, multi-site storage layer that can be hosted onsite or shifted to a hosting provider

Ensure instant, real-time data access for remote users

Using VPLEX Metro, data is distributed and access is shared across sites, enabling IT environments to:
• Provide concurrent read and write access to data by multiple hosts across two locations
• Provide real-time data access to remote physical data centers without local storage
• Share storage in geographically dispersed environments up to 100 km apart

Increase protection to reduce unplanned application outages

Using VPLEX Metro, IT can increase high availability and workload resiliency across sites, while also proactively avoiding potential disaster threats such as forecasted weather events:
• With an n+1 cluster architecture, VPLEX Metro ensures continuous data access at each site in the event of a component failure within either VPLEX cluster, and provides heterogeneous storage mirroring between array types and the Virtual Computing Environment coalition (VCE) Vblock 1. For more information about supported arrays, refer to the EMC Support Matrix.
• Combined with VMware VMotion between geographically dispersed VMware clusters, VPLEX Metro enables IT to respond proactively to a potential threat before it becomes a disaster by moving workloads nondisruptively from one site to another.
Key results
This solution, enabled by VPLEX Metro, solves a major IT challenge in a way that could not easily have been achieved before now. Traditionally, customers were tasked with migrating data and applications between geographically dispersed data centers through a series of manual tasks and activities. Customers would either make physical backups or use data replication services to transfer application data to the alternate site. Applications had to be stopped and could not be restarted until testing and verification were complete.

With VPLEX Metro, these migration challenges can be resolved quickly and easily. Once the distributed RAID 1 device (DR1) is established, applications can be started immediately at the remote site, even before all the data has been copied over. VPLEX Metro provides companies with a more effective way of managing their virtual storage environments by integrating transparently with existing applications and infrastructure, and by providing the ability to migrate data between remote data centers with no interruption in service.

With VPLEX Metro leveraged in this solution, companies can:
• Easily migrate applications in real time from one site to another, with no downtime or disruption, using standard infrastructure tools such as VMware VMotion and Storage VMotion
• Provide an application-transparent and nondisruptive solution for disaster avoidance and data migration, reducing the operational impact of more traditional solutions, such as tape backup and data replication, from days or weeks to minutes or hours
• Transparently share and balance resources between geographically dispersed data centers with standard infrastructure tools
Introduction

Introduction to this white paper
This white paper begins by briefly describing the technology and components used in the environment. Next, the white paper discusses the common elements that supported this distributed virtualized data center test environment. The document goes on to outline the configuration of the Microsoft SharePoint, SQL, SAP, and Oracle applications used in this solution. The white paper closes by summarizing the testing methodology and validated results. This white paper includes the following sections:
• Configuration
• Common elements in this distributed virtualized data center test environment
• VMware vSphere
• EMC Symmetrix VMAX
• EMC CLARiiON CX4-480
• VCE Vblock 1
• VPLEX Metro
• VPLEX Metro administration
• Microsoft Office SharePoint Server 2007
• Microsoft SQL Server 2008
• SAP
• Oracle
• Conclusion
• References

Purpose
The purpose of this document is to provide readers with an overall understanding of the VPLEX Metro technology and how it can be used with tools such as VMware VMotion and Storage VMotion to provide effective resource distribution and sharing between data centers across distances of up to 100 km with no downtime or disruption.
Scope
The scope of this white paper is to document the:
• Environment configuration for multiple applications utilizing virtualized storage presented by EMC VPLEX Metro
• Migration from directly-accessed, SAN-attached storage to a virtualized storage environment presented by EMC VPLEX Metro
• Application functionality within a geographically dispersed VPLEX Metro virtualized storage environment
Audience
This white paper is intended for:
• Field personnel who are tasked with implementing a multi-application virtualized data center utilizing VPLEX Metro as the local and distributed federation platform
• Customers, including IT planners, storage architects, and administrators, involved in evaluating, acquiring, managing, operating, or designing an EMC multi-application virtualized data center
• EMC staff and partners, for guidance and the development of proposals
Terminology
The following table defines terms used in this document.

Term | Definition
CNA | Converged Network Adapter
COM | Communication—identifies inter- and intra-cluster communication links
DR | Disaster recovery
FCoE | Fibre Channel over Ethernet
HA | High availability
Metro-Plex | Multiple clusters connected within metropolitan area network (MAN) distances—for example, the same building, site, or campus, with a maximum distance of 100 km apart
OATS | Oracle Application Testing Suite server
OLTP | Online transaction processing
SAP ABAP | SAP Advanced Business Application Programming
SAP BI | SAP Business Intelligence
SAP CI | SAP Central Instance
SAP ERP | SAP Enterprise Resource Planning
UCS | Cisco Unified Computing System
VCE | Virtual Computing Environment coalition, consisting of Cisco and EMC, with VMware, representing a joint collaboration on services and partner enablement that reduces the risk ("derisks") of the infrastructure virtualization journey to the private cloud
VM | Virtual machine. A software implementation of a machine that executes programs like a physical machine
VMDK | Virtual Machine Disk format. A VMDK file stores the contents of a virtual machine's hard disk drive and can be accessed in the same way as a physical hard disk
VPLEX Metro | Provides distributed federation within, across, and between two clusters, within synchronous distances
Configuration

Overview
The following section identifies and briefly describes the technology and components used in the environment.
Physical environment
The following diagram illustrates the overall physical architecture of the environment.
[Figure: Overall physical architecture. Data Center A hosts the Microsoft (SharePoint web front ends, SQL Server, and Excel Services), Oracle, and SAP (ERP and BI database and central instance) virtual machines on EMC Symmetrix VMAX and EMC CLARiiON CX4 storage. Data Center B hosts the equivalent virtual machines on a VCE Vblock 1 with EMC CLARiiON CX4. The sites are connected by a TCP/IP network and Fibre Channel SAN switches, with an EMC VPLEX Metro cluster at each site presenting virtualized LUNs, plus a VPLEX management network between the clusters. Note from the diagram: EMC VPLEX Metro back-end storage at Data Center B is provided by the Vblock. Legend: BI = Business Intelligence; CI = Central Instance; ERP = Enterprise Resource Planning.]
Hardware resources
The hardware used to validate the solution is listed in the following table.

Equipment | Quantity | Configuration
Intel x86-based servers | 5 | Quad CPU, 96 GB RAM, dual 10 Gb Converged Network Adapters (CNAs)
VCE Vblock 1 | 1 | Cisco Unified Computing System (UCS), Cisco Nexus 6120 switches, EMC CLARiiON CX4
EMC Symmetrix VMAX™ | 1 | Fibre Channel (FC) connectivity, 450 GB 15k FC drives
EMC CLARiiON® CX4-480 | 1 | FC connectivity, 450 GB 15k FC drives
EMC VPLEX Metro | 2 | VPLEX Metro storage clusters, dual-engine, four-director midsize configuration
WAN emulator | 1 | 1 GbE, 100 km distance
Fibre Channel SAN distance emulator | 1 | 1/2/4 Gb FC, 100 km distance

Software resources
The software used to validate the solution is listed in the following table.

Software | Version
VMware vSphere | 4.0 U1 Enterprise Plus, build 208167
VMware vCenter | 4.0 U1, build 186498
EMC PowerPath®/VE | 5.4.1, build 33
Red Hat Enterprise Linux | 5.3
DB2 | 9.1 for Linux, UNIX, and Windows
Microsoft Windows | 2008 R2 (Enterprise Edition)
Microsoft SQL Server | 2008
Microsoft Office SharePoint Server | 2007 (SP1 and cumulative updates)
Microsoft Visual Studio Test Suite | 2008
KnowledgeLake Document Loader | 1.1
Microsoft TPC-E BenchCraft kit | MSTPCE 1.9.0-1018
SAP Enterprise Resource Planning | 6.0
SAP Business Warehouse | 7.0
Oracle E-Business Suite | 12.1.1
Oracle RDBMS | 11gR1 (11.1.0.7.0)
Common elements in this distributed virtualized data center test environment

Introduction to the common elements
The virtualized data center environment described in this white paper was designed and deployed with a shared infrastructure in mind. From server to local and distributed federation to network consolidation, all layers of the environment were shared to create the greatest return on infrastructure investment, while achieving the necessary application requirements for functionality and performance. Using server virtualization, based on VMware vSphere, Intel x86-based servers were shared across applications and clustered to achieve redundancy and failover capability. VPLEX Metro was utilized to present shared data stores across the physical data center locations, enabling VMotion migration of application virtual machines (VMs) between the physical sites. Physical Site A storage consisted of a Symmetrix VMAX Single Engine (SE) for the SAP environment, and a CLARiiON CX4-480 for the Microsoft and Oracle environments. Vblock 1 was used for the physical Site B data center infrastructure and storage.
Contents
This section describes the common elements in this distributed virtualized data center test environment:
• VMware vSphere
• EMC Symmetrix VMAX
• EMC CLARiiON CX4-480
• VCE Vblock 1
• VPLEX Metro
• VPLEX Metro administration
VMware vSphere

VMware vSphere overview
VMware vSphere is the industry’s most reliable platform for data center virtualization of the IT infrastructure. It enables the most scalable and efficient use of x86 server hardware in a robust, highly available environment. VMware ESX Server:
• Abstracts server processor, memory, storage, and networking resources into multiple virtual machines, forming the foundation of the VMware vSphere 4 suite
• Partitions physical servers into multiple virtual machines, each representing a complete system with processors, memory, networking, storage, and BIOS
• Shares single-server resources across multiple virtual machines, and clusters ESX Servers for further sharing of resources
VMware vSphere configuration
In this solution, VMware vSphere was configured as follows:
• Site A—Microsoft and Oracle application environment
• Site A—SAP application environment
• Site B—Microsoft, Oracle, and SAP application environment

Site A – Microsoft and Oracle application environment

The virtual infrastructure at Site A for Microsoft and Oracle consists of two enterprise-class servers running VMware vSphere 4 Update 1, each configured as follows:

Part | Description
Memory | 128 GB RAM
CPUs | 4 six-core 2.659 GHz Intel Xeon X7460 processors
SAN and network connections | 2 10 Gb Emulex LightPulse LP21000 CNAs for Fibre Channel and Ethernet connectivity; 2 Broadcom 5708 GbE adapters
High Availability networking | 2 physical 1 Gb/s connections for the VMware service console; 2 physical 10 Gb/s connections on a VLAN for virtual machine application connectivity and VMotion
VMDKs | Virtual machine disks were used for the virtual machines' boot LUNs, as well as the application data LUNs
Site A – SAP application environment

The virtual infrastructure at Site A for SAP consists of two enterprise-class servers running VMware vSphere 4 Update 1, each configured as follows:

Part | Description
Memory | 96 GB RAM
CPUs | 2 quad-core 2.792 GHz Intel Xeon X5560 processors
SAN and network connections | 2 10 Gb Emulex LightPulse LP21000 PCI FCoE CNAs for Fibre Channel and Ethernet connectivity; 2 Broadcom 5708 GbE adapters
High Availability networking | 2 physical 1 Gb/s connections for the VMware service console; 2 physical 10 Gb/s connections on a VLAN for virtual machine application connectivity and VMotion
VMDKs | Virtual machine disks were used for the virtual machines' boot LUNs, as well as the application data LUNs
Site B – Microsoft, Oracle, and SAP application environment

The virtual infrastructure at Site B for all applications consists of Cisco UCS blade servers, as part of Vblock 1, running VMware vSphere 4 Update 1, each configured as follows:

Part | Description
Memory | 48 GB RAM
CPUs | 2 quad-core 2.526 GHz Intel Xeon E5540 processors
SAN and network connections | 2 Cisco UCS M71KR-E (Emulex) FCoE CNAs for Fibre Channel and Ethernet connectivity
High Availability networking | 2 physical 10 Gb/s connections for virtual machine application connectivity, VMotion, and the VMware service console
VMDKs | Virtual machine disks were used for the virtual machines' boot LUNs, as well as the application data LUNs
The following image shows the Site A and Site B clusters.
EMC Symmetrix VMAX

EMC Symmetrix VMAX overview
The EMC Symmetrix VMAX series provides an extensive offering of new features and functionality for the next era of high-availability virtual data centers. With advanced levels of data protection and replication, the Symmetrix VMAX system is at the forefront of enterprise storage area network (SAN) technology. Additionally, the Symmetrix VMAX array has the speed, capacity, and efficiency to transparently optimize service levels without compromising its ability to deliver performance on demand. These capabilities are of the greatest value for large virtualized server deployments such as VMware virtual data centers.

The Symmetrix VMAX system is EMC's high-end storage array, purpose-built to deliver infrastructure services within the next-generation data center. Built for reliability, availability, and scalability, Symmetrix VMAX uses specialized engines, each of which includes two redundant director modules providing parallel access and replicated copies of all critical data.

Symmetrix VMAX's Enginuity™ operating system provides several advanced features, such as:
• Auto-provisioning Groups for simplification of storage management
• Virtual Provisioning™ for ease of use and improved capacity utilization
• Virtual LUN technology for nondisruptive mobility between storage tiers
EMC Symmetrix VMAX configuration
The SAP application environment deployed in this solution used a Symmetrix VMAX array for the primary storage at Site A. Boot and data LUNs were provisioned as detailed in the following table.

Note: See the SAP section of this white paper for the breakdown of the LUN allocation by virtual machine.

Capacity | Number of LUNs | RAID type
500 GB | 2 | RAID 5 (7+1)
250 GB | 6 | RAID 5 (7+1)
85 GB | 8 | RAID 5 (7+1)
65 GB | 2 | RAID 5 (7+1)
32 GB | 4 | RAID 1/0
All drives were 400 GB 15k FC drives. LUNs were presented from the Symmetrix VMAX through two front-end adapter (FA) directors for redundancy and throughput. After encapsulation into VPLEX Metro, devices of the same size and type were presented as DR1s.
EMC CLARiiON CX4-480

EMC CLARiiON CX4-480 overview
The EMC CLARiiON CX4 series delivers industry-leading innovation in midrange storage with the fourth-generation CLARiiON CX storage platform. The unique combination of flexible, scalable hardware design and advanced software capabilities enables EMC CLARiiON CX4 series systems, powered by Intel Xeon processors, to meet the growing and diverse needs of today's midsize and large enterprises. Through innovative technologies like Flash drives, UltraFlex™ technology, and CLARiiON Virtual Provisioning, customers can:
• Decrease costs and energy use
• Optimize availability and virtualization

The EMC CLARiiON CX4 model 480 supports up to 256 highly available, dual-connected hosts and can scale up to 480 disk drives for a maximum capacity of 939 TB. Delivering up to twice the performance and scaling capacity of the previous CLARiiON generation, CLARiiON CX4 is the leading midrange storage solution for a full range of needs, from departmental applications to data-center-class business-critical systems.
EMC CLARiiON CX4-480 configuration
The Oracle and Microsoft application environments deployed in this solution used a CLARiiON CX4-480 array for the primary storage at Site A. Boot and data LUNs were provisioned as detailed in the following tables.

Note: Refer to the Oracle, Microsoft Office SharePoint Server 2007, and Microsoft SQL Server 2008 sections of this white paper for the breakdown of the LUN allocation by virtual machine.

SQL/SharePoint

Capacity | Number of LUNs | RAID type
200 GB | 2 | RAID 5 (4+1)
150 GB | 2 | RAID 5 (4+1)
125 GB | 4 | RAID 5 (4+1)
100 GB | 16 | RAID 5 (4+1)
75 GB | 24 | RAID 5 (4+1)
50 GB | 3 | RAID 5 (4+1)
20 GB | 12 | RAID 1/0
15 GB | 4 | RAID 5 (4+1)
Oracle

Capacity | Number of LUNs | RAID type
500 GB | 1 | RAID 5 (4+1)
150 GB | 2 | RAID 1/0
80 GB | 4 | RAID 5 (4+1)
50 GB | 1 | RAID 5 (4+1)
All drives were 400 GB 15k FC drives. LUNs were presented from the CLARiiON CX4-480 through four storage processor (SP) ports for multipathing support (for redundancy and throughput). After encapsulation into the VPLEX Metro, devices of the same size and type were presented as DR1 devices.
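As a quick cross-check of the provisioning above, the totals implied by the two tables can be tallied in a few lines of Python; the figures are simply restatements of the table contents.

```python
# Tally raw provisioned capacity from the CX4-480 LUN tables above.
# Each entry is (capacity_gb, number_of_luns), copied from the tables.
sql_sharepoint = [(200, 2), (150, 2), (125, 4), (100, 16),
                  (75, 24), (50, 3), (20, 12), (15, 4)]
oracle = [(500, 1), (150, 2), (80, 4), (50, 1)]

def total_gb(groups):
    """Sum capacity * count across all LUN groups."""
    return sum(size * count for size, count in groups)

print(f"SQL/SharePoint: {total_gb(sql_sharepoint)} GB across "
      f"{sum(n for _, n in sql_sharepoint)} LUNs")   # 5050 GB across 67 LUNs
print(f"Oracle: {total_gb(oracle)} GB across "
      f"{sum(n for _, n in oracle)} LUNs")           # 1170 GB across 8 LUNs
```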
VCE Vblock 1

VCE Vblock 1 overview
Vblocks are pre-engineered, tested, and validated units of IT infrastructure that have a defined performance, capacity, and availability profile. Vblocks grew out of an idea to simplify IT infrastructure acquisition, deployment, and operations. While Vblocks are tightly defined to meet specific performance and availability bounds, their value lies in a combination of efficiency, control, and choice.

In Vblock 1, each Cisco UCS chassis contains B-200 series blades, six with 48 GB RAM and two with 96 GB RAM. This offers good price-performance and supports memory-intensive applications, such as in-memory databases, within the Vblock definition. Within a Vblock 1 there are no hard disk drives in the B-200 series blades, as all boot services and storage are provided by the SAN, which, in the case of Vblock 1, is a CX4-480 storage array.
VCE Vblock 1 configuration
A Vblock 1 was used for the computing and storage resources at Site B. This allowed for workload balancing and disaster-avoidance failover capabilities for the applications deployed in the use case. Using a standard minimum configuration for Vblock 1, the compute resources were provided by Cisco UCS B-Series Blade Servers and the storage resources by the CLARiiON CX4-480. For more information about Vblocks, see the Vblock Infrastructure Packages Reference Architecture.

Note: Presenting Vblock storage through VPLEX Metro may reduce certain Vblock management functionality. Consult your EMC representative for information about the potential impact to your Vblock environment.

Four of the 16 blades in Vblock 1 were used in the testing of this environment, as illustrated in the following image.
Two two-node ESX clusters were created at Site B: one to host the Microsoft and Oracle applications, and one to host the SAP application. The storage provided by the Vblock was sized to duplicate the primary site environment. Devices were configured as part of the DR1 devices created in VPLEX Metro, paired with the primary site LUNs.
VPLEX Metro

VPLEX Metro overview
VPLEX Metro is a storage area network (SAN)-based block local and distributed federation solution that allows the physical storage provided by traditional storage arrays to be virtualized, accessed, and managed across the boundaries between data centers. This new form of access, called AccessAnywhere™, removes many of the constraints of physical data center boundaries and their storage arrays. AccessAnywhere storage allows data to be moved, accessed, and mirrored transparently between data centers, effectively allowing storage and applications to work between data centers as though those physical boundaries were not there.

Traditional SAN-based storage access

The following image illustrates traditional SAN-based storage access.

Storage access through a storage virtualization layer

The following image illustrates storage access through a storage virtualization layer.
SAN design for VPLEX Metro
The role of VPLEX Metro in a SAN environment is both as a target and as an initiator: from the host perspective, VPLEX Metro is a target, and from the back-end storage array perspective, VPLEX Metro is an initiator. If an environment is configured so that all LUNs are presented to the hosts through VPLEX Metro, SAN zoning is straightforward: the hosts are zoned into the same SAN as the VPLEX Metro front-end ports, and the storage arrays are zoned into the same SAN as the VPLEX Metro back-end ports. Where hosts need to access the storage arrays directly as well as access VPLEX Metro LUNs (for example, during a migration), the hosts, the VPLEX Metro front-end and back-end ports, and the storage arrays must all be in the same SAN, so that the hosts can see the LUNs from both sources.
VPLEX Metro features for storage usage
VPLEX Metro provides the ability to encapsulate and de-encapsulate existing storage devices while preserving their data. It provides data access and mobility between two VPLEX Metro clusters within synchronous distances. With a unique scale-up and scale-out architecture, VPLEX Metro's advanced data caching and distributed cache coherency provide workload resiliency; automatic sharing, balancing, and failover of the storage domains; and both local and remote data access with predictable service levels.

Note: Any storage volume whose capacity is not a multiple of 4 KB cannot be claimed or encapsulated.
Storage best practice – partition alignment
Storage best practices that apply to directly-accessed storage volumes apply to virtual volumes as well. One important best practice to follow is partition alignment for any x86-based OS platform. Misaligned partitions can consume resources or cause additional work in a storage array, leading to performance loss. With misaligned partitions, I/O operations to an array cross track or cylinder boundaries and lead to multiple read or write requests to satisfy the I/O operation. This can be avoided by aligning partitions on 32 KB boundaries.
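To make the 32 KB rule concrete, the following minimal Python sketch checks whether a partition's starting offset falls on a 32 KB boundary. The sector numbers are hypothetical examples; on a Linux guest, a partition's starting sector can be read from /sys/block/<disk>/<partition>/start.

```python
# Check whether a partition's start offset is aligned to a 32 KB boundary.
SECTOR_SIZE = 512          # bytes per logical sector
ALIGNMENT = 32 * 1024      # the 32 KB boundary recommended above

def is_aligned(start_sector: int) -> bool:
    """Return True if the partition's byte offset is a multiple of 32 KB."""
    return (start_sector * SECTOR_SIZE) % ALIGNMENT == 0

# Sector 63 is the classic misaligned MS-DOS default; 64 and 128 are aligned.
for start in (63, 64, 128):
    offset_kb = start * SECTOR_SIZE / 1024
    status = "aligned" if is_aligned(start) else "MISALIGNED"
    print(f"start sector {start} ({offset_kb:g} KB): {status}")
```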
Distributed mirroring – DR1 device
The distributed mirroring feature of EMC VPLEX Metro-Plex provides the ability to create mirrored virtual volumes, where the mirror legs of the volume are supported by physical storage residing at each site of the Metro-Plex. To the hosts, the DR1 device is a single, logical volume with the same volume identity provided by both clusters. I/O to the device can be issued to either VPLEX Metro cluster concurrently. The two VPLEX Metro clusters use advanced data-caching and distributed-cache coherency to provide workload resiliency, automatic sharing, balancing, and failover of storage domains, and enable both local and remote data access with predictable service levels.
VPLEX Metro back-end zoning
Back-end zoning was configured for throughput and redundancy, with each storage array having multiple front-end adapter (FA) connections (in the case of Symmetrix VMAX) or SP connections (in the case of the CLARiiON CX4-480 and the Vblock) to each VPLEX Metro back-end director. The number of VPLEX Metro ports configured depends on the number of LUNs in use and the amount of data transferred between host and array; each environment should be sized accordingly. Devices were masked to ensure that only the LUNs to be claimed by VPLEX Metro were seen.

Back-end zoning was configured on a Cisco MDS 9500 switch, with two CLARiiON SP ports per VPLEX Metro back-end zone. The VPLEX Metro back-end ports and COM ports can be validated using the VPLEX Command Line Interface (VPlexcli). After back-end zoning was completed, it was necessary to rediscover the storage array. The storage volumes can be checked using VPlexcli or the Management Console.
VPLEX Metro front-end zoning
Front-end zoning was configured for throughput and redundancy, with each ESX host having two FC adapters (through CNAs) and each adapter being zoned to multiple VPLEX Metro front-end director ports. The number of VPLEX Metro ports configured depends on the number of LUNs in use and the amount of data transferred from host to array. Each environment should be sized accordingly. Each server in the application clusters was configured identically, with access to all of the same LUNs. The front-end ports can be enabled only after VPLEX Metro metavolumes are created. The metavolumes contain critical system configuration data. For more information about metavolumes, refer to the EMC VPLEX Installation and Setup Guide.
VPLEX Metro WAN connectivity
The WAN configuration was designed for redundancy and throughput. WAN ports from each director were connected to the multilayer director switch (MDS) fabric at each simulated location. A two-port inter-switch link (ISL) was configured on the FC switches, and those connections were passed through a WAN emulator to introduce latency equivalent to 100 km of distance.
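As a rough sanity check on what 100 km means in latency terms, the sketch below estimates the propagation delay; it assumes the commonly cited velocity factor of about two-thirds of the speed of light for optical fiber (roughly 5 microseconds per kilometer) and ignores switch, serialization, and protocol overhead.

```python
# Estimate propagation delay for a 100 km fiber link.
C_VACUUM_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_VELOCITY_FACTOR = 2 / 3     # typical for optical fiber (assumption)

def one_way_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds."""
    return distance_km / (C_VACUUM_KM_PER_S * FIBER_VELOCITY_FACTOR) * 1000

d = one_way_delay_ms(100)
print(f"one-way: {d:.2f} ms, round trip: {2 * d:.2f} ms")
# Prints approximately: one-way: 0.50 ms, round trip: 1.00 ms
```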
Migration to VPLEX Metro using LUN encapsulation – disruptive to host access
One method that can be used to migrate LUNs from a directly-accessed storage array to VPLEX Metro is encapsulation. In the encapsulation process, VPLEX Metro takes ownership of a LUN that it sees from the original array. Once the LUN is encapsulated, the host can no longer see it directly from the original array; the host must be configured, through zoning, to see the LUN from VPLEX Metro. Encapsulation requires that the virtual machine be removed from the inventory in ESX and that the ESX host perform a rescan to see the "new" LUN and virtual machine, so this is considered a disruptive migration. From a storage utilization perspective, this method requires the least amount of incremental capacity, since the original LUNs are encapsulated and therefore do not require a transit LUN that would take up additional capacity.
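Because this path requires a host rescan and VM re-registration, the host-side steps lend themselves to scripting. The following pyVmomi sketch is illustrative only: the vCenter address, credentials, host name, and datastore path are hypothetical placeholders, error handling is omitted, and it assumes a single datacenter. RescanAllHba, RescanVmfs, and RegisterVM_Task are standard vSphere API methods.

```python
# Hypothetical pyVmomi sketch of the host-side steps after LUN encapsulation:
# rescan so the "new" VPLEX-presented LUN and datastore are seen, then
# re-register the virtual machine from its .vmx path.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local",    # hypothetical vCenter
                  user="administrator", pwd="password")
content = si.RetrieveContent()

host = content.searchIndex.FindByDnsName(None, "esx01.example.local", False)
host.configManager.storageSystem.RescanAllHba()    # pick up the VPLEX LUN
host.configManager.storageSystem.RescanVmfs()      # re-discover the datastore

# Re-register the VM that was removed from inventory before encapsulation.
datacenter = content.rootFolder.childEntity[0]     # assumes one datacenter
datacenter.vmFolder.RegisterVM_Task(
    path="[vplex_datastore] sql01/sql01.vmx",      # hypothetical .vmx path
    asTemplate=False,
    pool=host.parent.resourcePool,                 # host's resource pool
    host=host)

Disconnect(si)
```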
Migration to VPLEX Metro using VMware Storage VMotion – nondisruptive to host access
VMware Storage VMotion can be used to nondisruptively migrate from a directly-accessed storage array to a LUN presented through VPLEX Metro. This is accomplished by presenting both the original LUN and the new VPLEX Metro LUN to the hosts at the same time and then executing VMware Storage VMotion from the original LUN to the VPLEX Metro LUN. Assuming there is no need to revert to the original LUN, that LUN can then be reclaimed by the storage array and the disk capacity made available for other purposes. From a storage utilization perspective, this method requires additional storage capacity during the migration, since the new LUNs need to be created on the VPLEX Metro before executing Storage VMotion from the existing LUN. However, the original LUN can then be destroyed and the capacity added back into the unused pool on the array.
Migration to VPLEX Metro DR1 – disruptive to host access
If downtime is not a concern, data migration to a VPLEX Metro DR1 device can be done without the need for an extra transit LUN. The migration procedure is as follows:
1. Power off the virtual machine and remove it from the vCenter inventory.
2. Encapsulate the original non-VPLEX Metro LUN.
3. Remove the LUNs from the storage group.
4. Add these LUNs to a VPLEX storage group or storage masking.
5. Rescan the storage arrays.
6. Claim the storage volumes with the Application Consistency option.
7. Create an extent and local device.
8. Create the DR1 devices.
9. Add the newly-encapsulated LUN to the DR1 device and create the virtual volumes over the DR1 device.
10. Assign the virtual volume to the host view.
11. Rescan the ESX host to see the new DR1 device.
12. Add the virtual machine to the inventory and power up the virtual machine.
From a storage utilization perspective, this method requires the least amount of incremental capacity since the original LUNs are being encapsulated and so do not require a transit LUN, which takes up additional capacity during the migration. However, additional capacity is needed for the remote device of the DR1 virtual volume.
Migration from Site A to Site B VPLEX Metro LUN – nondisruptive to host access
In some situations, it may be necessary to migrate from a VPLEX Metro LUN (non-DR1) at one site in a Metro-Plex to a VPLEX Metro LUN (non-DR1) at the other site. This can be accomplished through the use of a transit DR1 device spanning the Metro-Plex. The procedure is as follows (steps 2 through 4 are sketched in code after this subsection):
1. Present both the original VPLEX Metro LUN and the VPLEX Metro transit DR1 device to the hosts at both sites.
2. Use VMware Storage VMotion to migrate from the Site A VPLEX Metro LUN to the transit DR1 device.
3. Use VMware VMotion to migrate the virtual machine to the Site B host.
4. Use VMware Storage VMotion to migrate from the transit DR1 device to the Site B VPLEX Metro local LUN.
From a storage utilization perspective, this method requires additional storage capacity during the migration, since a new LUN needs to be created on the VPLEX Metro prior to migrating with VMware Storage VMotion from the existing LUN. However, the original LUN can then be destroyed and the capacity added back into the unused pool on the array.
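For illustration, steps 2 through 4 of the procedure above can be driven through the vSphere API. The following pyVmomi sketch is a minimal, hypothetical example: the vCenter, VM, host, and datastore names are placeholders and error handling is omitted. RelocateVM_Task performs the Storage VMotion moves and MigrateVM_Task performs the host VMotion.

```python
# Hypothetical pyVmomi sketch of the Site A -> Site B migration: Storage
# VMotion to the transit DR1 datastore, VMotion to the Site B host, then
# Storage VMotion to the Site B local VPLEX datastore.
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local",   # hypothetical vCenter
                  user="administrator", pwd="password")
content = si.RetrieveContent()

vm = content.searchIndex.FindByDnsName(None, "oracle-db01.example.local", True)
site_b_host = content.searchIndex.FindByDnsName(None, "esx-b01.example.local",
                                                False)

def find_datastore(name):
    """Locate a datastore by name across all datacenters (helper)."""
    for dc in content.rootFolder.childEntity:
        for ds in dc.datastore:
            if ds.name == name:
                return ds
    raise LookupError(name)

# Step 2: Storage VMotion from the Site A VPLEX LUN to the transit DR1.
WaitForTask(vm.RelocateVM_Task(
    vim.vm.RelocateSpec(datastore=find_datastore("transit_dr1_ds"))))

# Step 3: VMotion the running VM to the Site B host.
WaitForTask(vm.MigrateVM_Task(
    pool=None, host=site_b_host,
    priority=vim.VirtualMachine.MovePriority.defaultPriority))

# Step 4: Storage VMotion from the transit DR1 to the Site B local LUN.
WaitForTask(vm.RelocateVM_Task(
    vim.vm.RelocateSpec(datastore=find_datastore("siteb_vplex_ds"))))

Disconnect(si)
```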
VPLEX Metro administration
Introduction to VPLEX Metro administration
When bringing an existing storage array into a virtualized storage environment, the options are to:
• Encapsulate storage volumes from existing storage arrays that have already been used by hosts, or
• Create a new VPLEX Metro LUN and migrate the existing data to that LUN

From a migration time perspective, encapsulation is much faster (approximately 4 to 5 times faster in this environment) than migration to a new VPLEX Metro LUN via Storage VMotion. The benefit of Storage VMotion is that the application server experiences no downtime, whereas with the encapsulation option the hosts need to rescan and the VMs must be re-registered, which results in downtime.

VPLEX Metro provides an option to encapsulate the existing data using VPlexcli. When application consistency is set (using the -appc flag), the claimed volumes are data-protected and no data is lost.
VPLEX Metro administration procedure
In this solution, administration of VPLEX Metro was done primarily through the Management Console, although the same functionality exists in VPlexcli. On authenticating to the secure web-based GUI, the user is presented with a set of on-screen configuration options, listed in the order of completion. For more information about each step in the workflow, refer to the EMC VPLEX Management Console online help. The following steps summarize the workflow, from the discovery of the arrays up to the storage being visible to the host:

1. Discover available storage. VPLEX Metro automatically discovers storage arrays that are connected to the back-end ports. All arrays connected to each director in the cluster are listed in the Storage Arrays view.

2. Claim storage volumes. Storage volumes must be claimed before they can be used in the cluster (with the exception of the metadata volume, which is created from an unclaimed storage volume). Only after a storage volume is claimed can it be used to create extents, devices, and then virtual volumes.

3. Create extents. Create extents for the selected storage volumes and specify the capacity.

4. Create devices from extents. A simple device is created from one extent and uses storage in one cluster only.

5. Create a virtual volume. Create a virtual volume using the device created in the previous step.

6. Register initiators. When initiators (hosts accessing the storage) are connected directly or through a Fibre Channel fabric, VPLEX Metro automatically discovers them and populates the Initiators view. Once discovered, the initiators must be registered with VPLEX Metro before they can be added to a storage view and access storage. Registering an initiator gives a meaningful name to the port's WWN, typically the server's DNS name, to allow you to easily identify the host.

7. Create a storage view. For storage to be visible to a host, first create a storage view and then add VPLEX Metro front-end ports and virtual volumes to the view. Virtual volumes are not visible to the hosts until they are in a storage view with associated ports and initiators. The Create Storage View wizard enables you to create a storage view and add initiators, ports, and virtual volumes to the view. Once all the components are added, the view automatically becomes active. When a storage view is active, hosts can see the storage and begin I/O to the virtual volumes. After creating a storage view, you can add or remove virtual volumes only through the GUI; to add or remove ports and initiators, use the CLI. For more information, refer to the EMC VPLEX CLI Guide.
For comprehensive information about VPLEX Metro commands, refer to the EMC VPLEX CLI Guide.
Microsoft Office SharePoint Server 2007

Microsoft SharePoint Server 2007 overview
This section covers the following topics:
• Microsoft SharePoint Server 2007 configuration
• Validation of the virtualized SharePoint Server 2007 environment
Microsoft SharePoint Server 2007 configuration

Microsoft SharePoint Server 2007 configuration overview
With customers increasingly moving their SharePoint environments into virtualized infrastructure, server farms may be built across multiple sites backed by complex storage. This leads to two challenges:
• How can an existing SharePoint server move between data centers without interrupting operations on the farm?
• How can storage maintenance costs be reduced?
The virtualized SharePoint Server 2007 farm overcomes these challenges by building on vSphere 4.0 with VPLEX Metro, which enables disparate storage arrays at multiple locations to be presented as a single, shared array to the SharePoint 2007 farm.
Microsoft SharePoint Server 2007 design considerations
In this SharePoint 2007 environment design, the major configuration highlights include:
• The SharePoint farm shared two of the five ESX servers on one site with the virtualized SQL Server and Oracle environments.
• Web front-ends (WFEs) were also configured as query servers in order to improve query performance through a balanced load (recommended for enterprise-level SharePoint farms).
• User request load was balanced across all available WFEs by using a context-sensitive network switch.
The following sections define the SharePoint Server 2007 application architecture for the virtualized data center. Multi-server SharePoint Server 2007 farms use a three-tier web application architecture:
• Web server tier: coordinates user requests and serves web content.
• Application tier: services specific requests, including:
  − Excel
  − Document conversions
  − Central administration
  − Content indexing
• Database tier: manages document content, SharePoint farm configuration, and search databases.
Microsoft SharePoint Server 2007 farm virtual machine configurations
The virtual machine configurations of the SharePoint Server 2007 farm are outlined below.

Three WFE VMs: This division of resources offers the best search performance and redundancy in a virtualized SharePoint farm. Because the WFE and query roles are CPU-intensive, each WFE VM was allocated four virtual CPUs and 4 GB of memory. The query (Search) volume was configured as a 100 GB virtual disk.

Index Server: The Index Server was configured as the sole indexer for the portal, along with a dedicated WFE role. While the index virtual machine is crawling for content, it can use itself as the WFE to crawl. This minimizes network traffic and ensures that SharePoint farm performance does not suffer when a user-addressable WFE is affected by the indexing load. Four virtual CPUs and 6 GB of memory were allocated for the Index Server. Because the indexing process needs to merge index content, which requires double the disk space, a 150 GB virtual search disk was allocated.

Application and Excel Servers: Two virtual CPUs and 2 GB of memory were allocated for the Application and Excel Servers, as these roles require fewer resources.

SQL Server: Four virtual CPUs and 16 GB of memory were allocated for the SQL Server virtual machine, as CPU utilization and memory requirements for SQL Server in a SharePoint farm are high. With more memory allocated to the SQL virtual machine, SQL Server caches SharePoint user data more effectively, reducing the physical IOPS required from storage and improving performance.
Virtual machine configuration and resource allocation
The following table details the virtual machine configuration of the SharePoint farm with allocated resources.
Server Role                          Quantity   vCPUs   Memory (GB)   Boot Disk (GB)   Search Disk (GB)
WFE Servers                          3          4       4             40               100
Index Server                         1          4       6             50               150
Application Server                   1          2       2             40               Not applicable
Excel Server (hosts Central Admin)   1          2       2             40               Not applicable
SQL Server 2008                      1          4       16            40               Not applicable
To summarize, in this virtualized environment, SharePoint 2007 infrastructure resource allocations totaled:
• vCPUs: 24
• Memory: 38 GB
• Boot disk: 290 GB
• Search disk: 450 GB
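These totals can be cross-checked against the per-role figures in the table above: vCPUs = (3 × 4) + 4 + 2 + 2 + 4 = 24; memory = (3 × 4) + 6 + 2 + 2 + 16 = 38 GB; boot disk = (3 × 40) + 50 + 40 + 40 + 40 = 290 GB; and search disk = (3 × 100) + 150 = 450 GB.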
Testing approach—SharePoint farm user load profile
KnowledgeLake DocLoaderLite was used to populate SharePoint with random user data. It copied and distributed documents to the SharePoint farm's document library according to a load profile, while Microsoft Visual Studio Team System (VSTS) emulated the client user load. The following table shows the document distribution in the virtualized SharePoint farm.

Document type   No. of documents   Average doc size (KB)   Percentage
.doc            289,056            261.6                   15.79%
.docx           285,902            110.3                   15.62%
.gif            90,514             76.5                    4.94%
.jpg            71,566             95.0                    3.91%
.mpp            287,140            240.6                   15.69%
.pptx           269,118            199.6                   14.70%
.vsd            262,014            485.4                   14.31%
.xlsx           275,172            27.0                    15.03%
Total           1,830,482          187.0                   100.00%
During validation, a Microsoft heavy user load profile was used to determine the maximum user count that the Microsoft SharePoint 2007 server farm could sustain while keeping average response times within acceptable limits. Microsoft guidance defines a heavy user as one who issues 60 requests per hour, that is, one request every 60 seconds (see http://technet.microsoft.com/en-us/library/cc261795.aspx for additional information on user load guidelines). The user profile in this testing consisted of three user operations:
• 80 percent browse
• 10 percent search
• 10 percent modify
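As a point of reference for the results that follow: at 1 percent concurrency, the farm's validated maximum of 107,400 users corresponds to roughly 1,074 simultaneously active users. At the heavy-profile rate of 60 requests per hour each, that is 1,074 × 60 ÷ 3,600 ≈ 18 requests per second of sustained load against the WFEs.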
Validation of the virtualized SharePoint Server 2007 environment

Test summary
In addition to validating SharePoint Server 2007 operations before and after encapsulation into the VPLEX Metro cluster, the following sections also cover cross-site VMotion testing during the run. A baseline test was performed first to record the SharePoint 2007 farm's base performance. The test then validated the performance impact after the CLARiiON FLARE® LUNs for the SharePoint farm were encapsulated into the VPLEX Metro cluster. VMware VMotion was tested between the local site (Site A) and the remote site (Site B) over a distance of up to 100 km. During test validation, Visual Studio Team System (VSTS) continuously generated workloads against the WFEs (for example, browsing the portal and sub-sites, random document searches, and random replacement of one document with another). These operations kept WFE CPU utilization at around 80 percent in each test session. SharePoint 2007, VPLEX Metro, and VMware performance data were logged for analysis throughout the test run lifecycle. The data presents results from VSTS 2008, which generated a continuous workload (browse/search/modify) against the WFEs of the SharePoint 2007 farm while the SQL and Oracle OLTP workloads were consolidated on the same VMware vSphere 4.0 data center.
Validation without encapsulation to VPLEX
The following image shows the baseline performance, in passed tests per second, of the SharePoint virtual machines before encapsulation into the VPLEX LUNs.
With a mixed user profile of 80/10/10 (browse/search/modify), the virtualized SharePoint farm can support a maximum of 107,400 users at 1 percent concurrency while satisfying Microsoft's acceptable response time criteria, as shown in the following tables.

User activity as percentages (Browse/Search/Modify)   Acceptable response time (seconds)   Baseline response time (seconds)
80/10/10