White Paper

Cisco Unified Computing System with VMware Horizon 6 with View and Virtual SAN Reference Architecture December 2014


Contents

Executive Summary
Solution Overview
  Cisco Unified Computing System
  VMware vSphere
  VMware Virtual SAN
  VMware Horizon 6 with View
System Configuration (Design)
  Cisco UCS Configuration
  VMware Virtual SAN Configuration
  VMware Horizon with View Configuration
Test Results
  Test Summary
  Test 1: 400 VMware View Linked Clones on Four Cisco UCS C240 M3 Servers in a VMware Virtual SAN Cluster
  Test 2: 800 VMware View Linked Clones on Eight Cisco UCS C240 M3 Servers in a VMware Virtual SAN Cluster
  Test 3: 800 VMware View Full Clones on Eight Cisco UCS C240 M3 Servers in a VMware Virtual SAN Cluster
  Test 4: Mixed 400 VMware View Linked Clones and 400 Full Clones on Eight Cisco UCS C240 M3 Servers
  VMware View Operations Tests
  VMware Virtual SAN Availability and Manageability Tests
Test Methodology
  VMware View Planner 3.5
  VMware Virtual SAN Observer
System Sizing
  Virtual Machine Test Image Builds
  Management Blocks
  Host Configuration
Bill of Materials
Conclusion
For More Information
Acknowledgements


Executive Summary

The reference architecture described in this document uses VMware Horizon 6 with View hosted on the Cisco Unified Computing System™ (Cisco UCS®) with VMware Virtual SAN as the hypervisor-converged storage solution. The purpose of this reference architecture is to provide guidance about the following aspects of deploying this joint solution:

● Scalability and performance results while hosting 800 VMware Horizon 6 with View virtual desktops, using industry-standard benchmarking of real-world workloads

● Design and implementation best practices covering Cisco UCS configurations, VMware Virtual SAN storage policies, and data store sizing guidance for hosting VMware Horizon 6 with View virtual desktops

● Availability and resiliency considerations related to proper maintenance procedures and the handling of various failure scenarios with VMware Virtual SAN

This reference architecture uses VMware View Planner as the standardized tool for application-centric benchmarking of real-world workloads. This approach helps ensure that the end-user experience and the performance of the solution components are taken into account. VMware Horizon 6 with View virtual desktops are hosted on Cisco UCS C240 M3 Rack Servers on the basis of their compatibility with VMware Virtual SAN. The results, summarized in Table 1 and described in more detail later in this document, demonstrate that an architecture using VMware Virtual SAN with Cisco UCS allows easy scaling of a VMware Horizon 6 with View virtual desktop environment while maintaining superior performance and manageability.

Table 1. Test Results Summary

| Test Results | Test Summary |
|---|---|
| Deployed 800 desktops (100 desktops per host) | 800 linked clones; 800 full clones; 800 mixed clones (400 linked clones and 400 full clones); 100% concurrency |
| 80 minutes | 800 linked clones deployed |
| 8 minutes | 800 linked clones started |
| 72 minutes | 800 linked clones refreshed |
| 121 minutes | 800 linked clones recomposed |
| Less than 3 milliseconds (ms) average application latency | Standard Microsoft Office applications |
| Less than 15 ms average disk latency | VMware Virtual SAN disk latency |

Main points:

● Linear scalability: Scaled from 400 desktops on 4 nodes to 800 desktops on 8 nodes

● Excellent application response times: For both linked clones and full clones, helps ensure excellent end-user performance for practical workloads

● Proven resiliency and availability: Provides greater application uptime

● Faster desktop operations: Increases IT efficiency

Solution Overview

The VMware Horizon 6 with View hosted on Cisco UCS with VMware Virtual SAN reference architecture supports and closely matches the specifications of the Cisco UCS VMware Virtual SAN Ready Nodes for VMware View virtual desktops. The reference architecture provides scalability and performance guidance for the joint solution. This section provides an overview of the individual components used in the solution. For more information about each product, refer to the respective product documentation.


Cisco Unified Computing System

Cisco UCS is a next-generation data center platform. Cisco UCS unites computing, networking, storage access, and virtualization resources into a single cohesive, integrated architecture that is designed to reduce the total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable, multi-chassis platform that enables all resources to participate in a unified management domain (Figure 1).

Figure 1. Cisco UCS Deployed with Cisco Nexus Products

Cisco UCS represents a radical simplification of traditional architectures that dramatically reduces the number of servers that are required to enable the platform. Cisco UCS helps reduce TCO by automating element management tasks through the use of service profiles that enable just-in-time provisioning. Service profiles increase business agility by quickly aligning computing resources with rapidly changing business and workload requirements. In addition, Cisco UCS delivers end-to-end optimization for virtualized environments, while retaining the capability to support traditional operating system and application stacks in physical environments.


The main advantages of Cisco UCS are:

● Less infrastructure, more intelligent servers: The Cisco UCS architecture enables end-to-end server visibility, management, and control in both virtual and bare-metal environments. The Cisco UCS platform facilitates the move to cloud computing and IT as a service (ITaaS) with fabric-based infrastructure.

● Resource consolidation with Cisco UCS servers: Cisco UCS servers simplify traditional architectures and optimize virtualized environments across the entire system. With Cisco servers, bare-metal and virtualized applications can be supported in the same environment.

● Accelerated server deployment: The smart, programmable infrastructure of Cisco UCS simplifies and accelerates enterprise-class application and service deployment in bare-metal, virtualized, and cloud computing environments. With unified, model-based management, hundreds of servers can be configured as quickly as one server, resulting in a lower cost of ownership and improved business continuity.

● Simplified management: Cisco UCS offers simplified and open management with a large partner ecosystem. Cisco UCS Manager provides embedded management of all software and hardware components in Cisco UCS. With Cisco UCS Central Software, management can be extended globally to thousands of servers across multiple Cisco UCS domains and locations. In addition, Cisco UCS Director unifies management across computing, networking, and storage components in converged-infrastructure solutions.

For more information about the capabilities and features of Cisco UCS technologies, see the For More Information section at the end of this document.

Cisco UCS C-Series Rack Servers

Cisco UCS C-Series Rack Servers deliver unified computing in an industry-standard form factor to reduce TCO and increase agility. Each product addresses different workload challenges through a balance of processing, memory, I/O, and internal storage resources. Cisco UCS C-Series Rack Servers provide the following benefits:

● Form-factor-independent entry point to Cisco UCS

● Simplified and fast deployment of applications

● Extension of unified computing innovations and benefits to rack servers

● Increased customer choice with unique benefits in a familiar rack package

● Reduced TCO and increased business agility

Several Cisco UCS C-Series Rack Server models are available, each optimized for particular types of deployments. For VMware Virtual SAN deployments, disk density is the critical factor in model selection; computing power is also an important consideration. When connected through a Cisco UCS 6200 Series Fabric Interconnect, these servers can be managed by the built-in Cisco UCS Manager, which supplies a fully integrated management process for both rack and blade servers in a single tool. All Cisco UCS servers use leading Intel® Xeon® processors. For Cisco UCS with VMware Virtual SAN, the Cisco UCS C240 M3 Rack Server was considered the optimal choice for the development of this solution. The Cisco UCS C240 M3 Rack Server is a high-density, enterprise-class, 2-socket, 2-rack-unit (2RU) rack server designed for computing-, I/O-, storage-, and memory-intensive standalone and


virtualized applications. The addition of the Intel Xeon processor E5-2600/E5-2600v2 product family delivers an optimal combination of performance, flexibility, and efficiency gains. The Cisco UCS C240 M3 Rack Server:

● Is suitable for nearly all memory-intensive, storage-intensive, 2-socket applications

● Uses a unique Cisco UCS Virtual Interface Card (VIC) 1225: a dual-port 10 Gigabit Ethernet PCI Express (PCIe) adapter that can support up to 256 PCIe standards-compliant virtual interfaces

● Acts as an exceptional building block and entry point for Cisco UCS

● Supports continual innovations in Cisco server technology at all levels of Cisco UCS

The Cisco UCS C240 M3 comes in two models: one supports large form-factor (3.5-inch) hard drives, and the other supports small form-factor (2.5-inch) drives. Because VMware Virtual SAN requires SSDs, the small form-factor model typically is required. For more information, see the Cisco UCS C240 M3 data sheet: http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c240-m3-rackserver/data_sheet_c78-700629.html

The Cisco UCS C240 M3 supports:

● Up to two Intel Xeon E5-2600/E5-2600v2 processors

● Up to 768 GB of RAM in 24 dual in-line memory module (DIMM) slots

● Capacity for up to 24 serial attached SCSI (SAS), serial ATA (SATA), and solid-state disk (SSD) drives for workloads demanding large internal storage

● Five PCIe Generation 3 (Gen 3) slots and four 1 Gigabit Ethernet LAN interfaces on the motherboard

● Trusted platform module (TPM) for authentication and tool-less access

Computing performance is important because the virtual machines using the VMware Virtual SAN data store reside on the same hosts that contribute disk capacity to the data store. This configuration delivers outstanding levels of internal memory and storage expandability, along with exceptional performance. For more information about the capabilities and features of Cisco UCS technologies, see the For More Information section at the end of this document.

Service Profiles

In Cisco UCS, a service profile adds a layer of abstraction to the actual physical hardware. The server is defined in a configuration file that is stored on the Cisco UCS 6248UP 48-Port Fabric Interconnect, and it can be associated with the physical hardware by using a simple operation in Cisco UCS Manager. When the service profile is applied, Cisco UCS Manager configures the server, adapters, fabric extenders, and fabric interconnects according to the specified service profile. The service profile makes the physical hardware transparent to the operating systems and virtual machines running on it, enabling stateless computing and increasing the utilization of data center resources.

A number of parameters can be defined in the service profile, depending on the environment requirements. Administrators can create policies to define specific rules and operating characteristics. These policies can be referenced in service profiles to help ensure consistent configuration across many servers. Updates to a policy can be propagated immediately to all servers that reference that policy in their service profiles or, in the case of firmware updates, at the next power-cycling event.


In addition, the advantages of the service profile can be extended when server-specific parameters, such as the universally unique ID (UUID), MAC address, and worldwide name (WWN), are themselves parameterized and the service profile is converted to a template. The template can be used to rapidly deploy new servers with consistent general parameters and unique server-specific parameters. When combined with templates, service profiles enable the rapid provisioning of servers with consistent operational parameters and high-availability functions. Service profiles can be configured in advance and used to move servers to a new blade, chassis, or rack in the event of a failure.
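The pool-based identity model can be pictured with a short sketch. The following Python is purely illustrative (it is not the Cisco UCS Manager API; the pool formats, policy names, and helper function are hypothetical) and shows how a template combines shared policies with per-server identifiers drawn from pools:

```python
# Illustrative only: models a service profile template that pairs shared
# policies with unique identifiers drawn from pools (not the UCS Manager API).
from dataclasses import dataclass
from itertools import count

uuid_pool = (f"0000-0000000000{n:02x}" for n in count(1))   # hypothetical pool
mac_pool = (f"00:25:B5:00:00:{n:02X}" for n in count(1))    # hypothetical pool

@dataclass
class ServiceProfile:
    name: str
    uuid: str
    mac: str
    boot_policy: str = "SD-card-boot"          # shared policy references
    bios_policy: str = "VSAN-performance"
    local_disk_policy: str = "FlexFlash-enabled"

def from_template(template_name: str, index: int) -> ServiceProfile:
    """Instantiate a profile: shared policies plus unique server-specific identity."""
    return ServiceProfile(
        name=f"{template_name}-{index}",
        uuid=next(uuid_pool),
        mac=next(mac_pool),
    )

# Eight profiles, one per host in the eight-node Virtual SAN cluster described later
profiles = [from_template("vsan-host", i) for i in range(1, 9)]
```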

VMware vSphere

VMware vSphere is the industry-leading virtualization platform for building cloud infrastructure. It enables users to run business-critical applications with confidence and respond quickly to business needs. VMware vSphere accelerates the shift to cloud computing for existing data centers, and it enables compatible public cloud offerings, forming the foundation for the industry's best hybrid cloud models. For more information about the capabilities and features of VMware vSphere, see the For More Information section at the end of this document.

VMware Virtual SAN

VMware Virtual SAN is a hypervisor-converged storage solution that is fully integrated with VMware vSphere. VMware Virtual SAN combines storage and computing for virtual machines into a single device, with storage provided within the hypervisor, instead of using a storage virtual machine that runs alongside the other virtual machines. VMware Virtual SAN aggregates locally attached disks in a VMware vSphere cluster to create a storage solution, called a shared data store, that can be rapidly provisioned from VMware vCenter Server during virtual machine provisioning operations.

VMware Virtual SAN is an object-based storage system that is designed to provide virtual machine-centric storage services and capabilities through a storage policy-based management (SPBM) platform. The SPBM platform and virtual machine storage policies are designed to simplify virtual machine storage placement decisions for VMware vSphere administrators. VMware Virtual SAN is fully integrated with core VMware vSphere enterprise features such as VMware vSphere vMotion, High Availability, and Distributed Resource Scheduler (DRS). Its goal is to provide both high availability and scale-out storage capabilities. In the context of quality of service (QoS), virtual machine storage policies can be created to define the levels of performance and availability required on a per-virtual machine basis.

A VMware Virtual SAN shared data store is constructed with a minimum of three VMware vSphere ESXi hosts, each containing at least one disk group with at least one SSD and one magnetic drive, as shown in Figure 2. Virtual SAN also supports up to seven magnetic drives per disk group and up to five disk groups per host. The VMware virtual machine files are stored on the magnetic drives, and the SSD handles read caching and write buffering. The disk group on each host is joined to a single network partition group that is shared and controlled by the hosts.


Figure 2. VMware Virtual SAN Cluster

The size and capacity of the VMware Virtual SAN shared data store are dictated by the number of magnetic disks per disk group in each VMware vSphere host and by the number of VMware vSphere hosts in the cluster. VMware Virtual SAN is a scale-out solution in which more capacity and performance can be obtained by adding more disks to a disk group, more disk groups to a host, and more hosts to the cluster.

With VMware Virtual SAN, the SPBM platform plays a major role in determining how administrators can use virtual machine storage policies to specify a set of required storage capabilities for a virtual machine or, more specifically, a set of requirements for the application running in the virtual machine. The following VMware Virtual SAN data store capabilities are configured on VMware vCenter Server:

● Number of failures to tolerate

● Number of disk stripes per object

● Flash-memory read cache reservation

● Object-space reservation

● Force provisioning

For more information about the capabilities and features of VMware Virtual SAN, see What’s New in VMware Virtual SAN.
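As a rough illustration of the scale-out sizing described above, the following sketch estimates raw data store capacity from host and disk counts. Only the magnetic disks contribute capacity (the SSDs act as cache), and the per-drive usable size of about 0.82 TB is inferred from the totals reported later in Table 2; it is an assumption, not a measured value:

```python
# Rough raw-capacity estimate for a Virtual SAN shared data store.
# Only magnetic disks count toward capacity; SSDs serve as read/write cache.
def vsan_raw_capacity_tb(hosts: int, disk_groups_per_host: int,
                         hdds_per_group: int, usable_tb_per_hdd: float) -> float:
    return hosts * disk_groups_per_host * hdds_per_group * usable_tb_per_hdd

# 8 hosts x 1 disk group x 4 HDDs at ~0.8175 TB usable each (assumed drive size)
print(round(vsan_raw_capacity_tb(8, 1, 4, 0.8175), 2))   # ~26.16 TB, as in Table 2
```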

VMware Horizon 6 with View

VMware Horizon with View brings the agility of cloud computing to the desktop by transforming desktops into highly available and agile services delivered from the cloud. VMware View delivers virtual sessions that follow end users across devices and locations. It enables fast, secure access to corporate data across a wide range of devices, including the Microsoft Windows, Mac OS, and Linux operating systems for desktop computers, and iOS and Android for tablets (Figure 3).


With VMware vCenter Server, VMware View can be used to create desktops from virtual machines that are running on VMware ESXi hosts and to deploy these desktops to end users. After a desktop is created, authorized end users can use web-based or locally installed client software to connect securely to centralized virtual desktops, back-end physical systems, or terminal servers. VMware View uses the existing Microsoft Active Directory infrastructure for user authentication and management.

Figure 3. VMware Horizon with View Components


VMware View Storage Accelerator

VMware View Storage Accelerator is an in-memory host caching capability that uses the content-based read cache (CBRC) feature of VMware ESXi hosts. CBRC provides a per-host, RAM-based cache for VMware View desktops, which reduces the number of read I/O requests sent to the storage layer. It also addresses boot storms, which can occur when multiple virtual desktops are booted at the same time and can generate a large number of read operations. CBRC is beneficial when administrators or users load applications or data frequently. Note that CBRC was used in all tests that were performed on VMware Horizon with View running on VMware Virtual SAN on Cisco UCS.

VMware vCenter Operations Manager for Horizon

VMware vCenter Operations Manager for Horizon (V4H) simplifies the management of the virtual desktop infrastructure (VDI) and provides end-to-end visibility into its health and performance. It presents data through alerts and metrics displayed on predefined custom dashboards, as shown in Figure 4. Administrators can also create custom dashboards to view environment health metrics in a meaningful format.

Figure 4. VMware vCenter Operations Manager for Horizon Dashboard

VMware vCenter Operations Manager for Horizon extends the capabilities of VMware vCenter Operations Manager Enterprise, and it enables IT administrators and help-desk specialists to monitor and manage VMware Horizon with View environments. The VMware vCenter Operations Manager architecture uses an adapter to pull data from the VMware View Connection Server and Horizon View Agent, as shown in Figure 5. The VMware View adapter obtains the topology from the VMware Horizon environment, collects metrics and other types of information from the desktops, and passes the information to VMware vCenter Operations Manager. Then another VMware vCenter Server adapter pulls data relating to VMware vSphere, networking, storage, and virtual machine performance. VMware vCenter Operations Manager for Horizon provides out-of-the-box dashboards that monitor the health of the VMware Horizon infrastructure and components. These dashboards are accessed using the web-based VMware vCenter Operations Manager console.


Figure 5. VMware vCenter Operations Manager for View Architecture

System Configuration (Design)

This section describes how the VMware Horizon 6 with View hosted on Cisco UCS with VMware Virtual SAN reference architecture components were configured. As shown in Figure 6, 800 Microsoft Windows 7 VMware Horizon 6 with View virtual desktops were hosted on eight Cisco UCS C240 M3 Rack Servers with VMware Virtual SAN 5.5 on VMware vSphere 5.5 U2. In the scalability testing performed with VMware View Planner as the benchmarking tool, the joint solution exhibited linear scalability with exceptional end-user performance. Each host supports 100 virtual desktops, which is the maximum supported configuration with VMware Virtual SAN 5.5. The solution demonstrated linear scalability in expanding a 4-node configuration with 400 virtual desktops to an 8-node configuration with 800 virtual desktops with consistent performance, indicating that this solution can be scaled further, up to the 3200 virtual desktops supported by VMware Virtual SAN on 32 hosts in a single cluster.


Figure 6. VMware View Running on VMware Virtual SAN Using Cisco UCS (Details)

The Cisco UCS VMware Virtual SAN Ready Node specifications outline the optimal number of disks and disk groups that can be configured for testing linked clones, full clones, and a mixed linked-clone and full-clone setup, as shown in Table 2.

Table 2. VMware Virtual SAN Disk Group Configuration

| Type of Virtual Desktops | Number of Disk Groups per Host | Number of SSDs and HDDs in Disk Groups | Used and Total Capacity |
|---|---|---|---|
| 400 linked clones | 1 | 1 SSD and 4 HDDs | 1.76 TB used out of 13.08 TB |
| 800 linked clones | 1 | 1 SSD and 4 HDDs | 3.59 TB used out of 26.16 TB |
| 800 full clones | 2 | 2 SSDs and 12 HDDs | 36.57 TB used out of 78.47 TB |
| 400 linked clones and 400 full clones | 2 | 2 SSDs and 12 HDDs | 20.04 TB used out of 78.47 TB |


Cisco UCS Configuration

In this configuration, VMware ESXi is booted from the on-board Cisco FlexFlash SD card. For more information, see Cisco FlexFlash: Use and Manage Cisco Flexible Flash Internal SD Card for Cisco UCS C-Series Standalone Rack Servers. The Cisco FlexFlash SD card configuration is performed through a local disk policy that is applied to the service profile, as shown in the example in Figure 7.

Figure 7. Local Disk Configuration Policy: Cisco FlexFlash State Enabled

VMware Virtual SAN Storage Controller

With Cisco UCS C240 M3 Rack Servers, the LSI MegaRAID SAS 9271CV-8i controller (Cisco UCS-RAID-9271-AI) is supported in RAID 0 mode. This controller achieves higher performance than other controllers because of its greater (1024) queue depth. Controllers with a queue depth of less than 256 are not supported with VMware Virtual SAN. For more information, see the VMware knowledge base.

For this configuration, a virtual RAID 0 drive must be created for each physical HDD that VMware Virtual SAN uses. To configure virtual RAID 0 with the LSI 9271CV-8i controller:

1. Download the LSI StorCLI software from the LSI website and install it on the VMware ESXi server. For more information, see LSI.

2. Copy vmware-esx-storcli-1.12.13.vib to the /var/log/vmware/ directory.

3. Install the VIB using the following command:

/var/log/vmware # esxcli software vib install -v=vmware-esx-storcli-1.12.13.vib --no-sig-check

4. Run StorCLI commands from the VMware ESXi console or through Secure Shell (SSH) to create a RAID 0 virtual disk for each of the individual disks:

./storcli /c0 add vd each type=raid0 pdcache=off

5. Configure VMware ESXi to mark the SSDs, because ESXi cannot identify SSDs abstracted behind a RAID controller. Use the following command for each SSD device:

esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device <device ID> --option="enable_local enable_ssd"

6. Reboot the hosts to make the changes effective.

Service Profile Configuration

The main configurable parameters of a Cisco UCS service profile are summarized in Table 3.

Table 3. Service Profile Parameters

| Parameter Type | Parameter | Description |
|---|---|---|
| Server hardware | UUID | Obtained from defined UUID pool |
| Server hardware | MAC addresses | Obtained from defined MAC address pool |
| Server hardware | Worldwide port name (WWPN) and worldwide node name (WWNN) | Obtained from defined WWPN and WWNN pools |
| Server hardware | Boot policy | Boot path and order |
| Server hardware | Disk policy | RAID configuration |
| Fabric | LAN | Virtual NICs (vNICs), VLANs, and maximum transmission unit (MTU) |
| Fabric | SAN | Virtual host bus adapters (vHBAs) and virtual SANs (VSANs) |
| Fabric | Quality-of-service (QoS) policy | Class of service (CoS) for Ethernet uplink traffic |
| Operation | Firmware policy | Current and backup versions |
| Operation | BIOS policy | BIOS version and settings |
| Operation | Statistics policy | System data collection |
| Operation | Power-control policy | Blade server power allotment |

For Cisco UCS service profiles for hosts in a VMware Virtual SAN cluster, the policy configuration shown here is recommended. This configuration does not include all Cisco UCS service profile settings. The settings shown here are specific to an implementation of Cisco UCS with VMware Virtual SAN for VMware Horizon with View.


BIOS Policy

The BIOS policy configured for the VMware Virtual SAN environment is aimed at achieving high performance, as shown in the example in Figure 8 and in Table 4.

Figure 8. BIOS Policy Configuration for the VMware Virtual SAN Environment

Table 4. BIOS Policy Settings for the VMware Virtual SAN Environment

| Policy | Settings |
|---|---|
| Processor | Turbo Boost = Enabled; Enhanced Intel SpeedStep = Enabled; Hyperthreading = Enabled; Virtualization Technology (VT) = Enabled; Direct Cache Access = Enabled; CPU Performance = Enterprise; Power Technology = Performance; Energy Performance = Enterprise |
| Intel Directed IO | VT for Directed IO = Enabled |
| Memory | Memory RAS Config = Maximum Performance; Low-Voltage DDR Mode = Performance Mode |

Boot Policy

The boot policy is created with the Secure Digital (SD) card as the preferred boot option after the local CD or DVD boot option (Figure 9).

Figure 9. Boot Policy Configuration

Networking

VMware vSphere Distributed Switch (VDS) is configured for all hosts in the cluster. It allows virtual machines to maintain a consistent network configuration as they migrate across multiple hosts. A separate vNIC is created for each traffic type: virtual machine data, VMware Virtual SAN, VMware vMotion, and management. These vNICs are configured as separate vNIC templates in Cisco UCS and applied as part of the service profile (Table 5).


Table 5. vNIC Template Configuration

| vNIC Template Name | Fabric ID | Comments |
|---|---|---|
| VM-Data_A | Fabric A | MTU = 9000; QoS policy VMData |
| VM-Data_B | Fabric B | MTU = 9000; QoS policy VMData |
| Virtual SAN | Fabric A (with Enable Failover option) | MTU = 9000; QoS policy VSAN |
| vMotion | Fabric A (with Enable Failover option) | MTU = 9000; QoS policy vMotion |
| MGMT | Fabric A (with Enable Failover option) | MTU = 9000; QoS policy MGMT |

The network control policy is set to Cisco Discovery Protocol Enabled, and the dynamic vNIC connection policy is applied with an adapter policy of VMware.

QoS Policies

Table 6 and Figure 10 show the QoS policy and QoS system-class mappings in Cisco UCS for the vNICs.

Table 6. QoS Policy Configuration

| QoS Policy Name | Priority |
|---|---|
| VMData | Gold |
| Virtual SAN | Platinum |
| vMotion | Silver |
| MGMT | Bronze |

Figure 10. QoS System-Class Configuration

VLANs

A dedicated VLAN is recommended for the VMware Virtual SAN VMkernel NIC, and multicast is required in the Layer 2 domain. This requirement is configured on the VLAN as a multicast policy with snooping enabled. The following VLANs were created (the subnet sizing is verified in the sketch after this list):

● VLAN for virtual desktops: a /22 subnet with 1022 usable IP addresses to accommodate all 800 virtual desktops

● VLAN for VMware Virtual SAN: a /28 subnet with 14 usable IP addresses to accommodate the 8 hosts

● VLAN for management components: a /24 subnet with 254 usable IP addresses to accommodate all management components, plus the VMware View Planner desktops used to run the test workflows
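A minimal sketch of the subnet arithmetic behind these choices (the prefix lengths come from the list above; the network addresses themselves are placeholders, not the addresses used in the test environment):

```python
# Usable host addresses per subnet: total addresses minus network and broadcast.
import ipaddress

subnets = {
    "Virtual desktops (/22)": "10.1.0.0/22",   # placeholder network addresses
    "Virtual SAN (/28)": "10.1.4.0/28",
    "Management (/24)": "10.1.5.0/24",
}

for name, prefix in subnets.items():
    net = ipaddress.ip_network(prefix)
    print(f"{name}: {net.num_addresses - 2} usable addresses")
# Prints 1022, 14, and 254 usable addresses, respectively.
```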


VMware Virtual SAN Configuration

VMware Virtual SAN is a VMware ESXi cluster-level feature that is configured using the VMware vSphere Web Client. The first step in enabling VMware Virtual SAN is to select one of the two modes of disk-group creation:

● Automatic: VMware Virtual SAN discovers all the local disks on the hosts and automatically adds them to the VMware Virtual SAN data store.

● Manual: The administrator manually selects the disks to add to the VMware Virtual SAN shared data store.

In this setup, disk groups were created manually, and the storage policies listed in Table 7 were applied depending on whether the VMware Virtual SAN configuration is for linked clones or full clones. These storage policies are tied to the storage requirements of each virtual machine and are used to provide different levels of availability and performance for virtual machines.

Important: Use different policies for different types of virtual machines in the same cluster to meet application requirements.

Table 7. Storage Policies for VMware View

| Policy | Definition | Default (Value Applied) | Maximum |
|---|---|---|---|
| Number of disk stripes per object | Defines the number of magnetic disks across which each replica of a storage object is distributed | 1 | 12 |
| Flash-memory read cache reservation | Defines the flash memory capacity reserved as the read cache for the storage object | 0% | 100% |
| Number of failures to tolerate | Defines the number of host, disk, and network failures that a storage object can tolerate. For n failures tolerated, n + 1 copies of the object are created, and 2n + 1 hosts contributing storage are required | 0 (linked clone); 1 (full clone and replicas) | 3 (in 8-host cluster) |
| Forced provisioning | Determines whether the object is provisioned even when currently available resources do not meet the virtual machine storage policy requirements | Disabled | Enabled |
| Object-space reservation | Defines the percentage of the logical size of the storage object that must be reserved (thick provisioned) upon virtual machine provisioning; the remainder of the storage object is thin provisioned | 0% | 100% |

Default storage policy values are configured for linked clones, full clones, and replicas.
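The failures-to-tolerate policy has direct capacity and host-count implications, which the following small sketch makes explicit using the n + 1 / 2n + 1 relationship from Table 7:

```python
# Number of failures to tolerate (FTT): n failures require n + 1 replicas of
# each object and at least 2n + 1 hosts contributing storage (per Table 7).
def ftt_requirements(n: int) -> dict:
    return {"replicas": n + 1, "min_hosts_contributing_storage": 2 * n + 1}

print(ftt_requirements(0))  # linked-clone value applied: 1 copy, 1 host
print(ftt_requirements(1))  # full clones and replicas: 2 copies, 3 hosts
print(ftt_requirements(3))  # table maximum in an 8-host cluster: 4 copies, 7 hosts
```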


VMware View Configuration

VMware Virtual SAN integrates with the VMware View pod and block design methodology, which consists of the following components (a sizing sketch based on these limits follows the list):

● VMware View Connection Server: A VMware View Connection Server supports up to 2000 concurrent connections. The tests used two VMware View Connection Servers operating in active-active mode; both servers actively broker, and can optionally tunnel, connections.

● VMware View block: VMware View provisions and manages desktops through VMware vCenter Server. Each VMware vCenter instance supports up to 10,000 virtual desktops. The tests used one VMware vCenter Server and one VMware Virtual SAN cluster with eight hosts. Note that the maximum number of VMware High Availability protected virtual machines allowed in a VMware vSphere cluster is 2048 per data store.

● VMware View management block: A separate VMware vSphere cluster was used for the management servers to isolate the volatile desktop workload from the static server workload. For larger deployments, a dedicated VMware vCenter Server for the management and VMware View blocks is recommended.
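A hedged sizing sketch based on the limits quoted in this list (2000 sessions per connection server and 10,000 desktops per VMware vCenter block); the N + 1 connection-server allowance reflects the production recommendation noted later in this document:

```python
# Sketch of pod-and-block sizing from the quoted component limits.
import math

SESSIONS_PER_CONNECTION_SERVER = 2000
DESKTOPS_PER_VCENTER_BLOCK = 10000

def view_pod_sizing(desktops: int) -> dict:
    return {
        "connection_servers": math.ceil(desktops / SESSIONS_PER_CONNECTION_SERVER) + 1,  # N + 1
        "vcenter_blocks": math.ceil(desktops / DESKTOPS_PER_VCENTER_BLOCK),
    }

print(view_pod_sizing(800))   # {'connection_servers': 2, 'vcenter_blocks': 1}: the tested setup
```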

VMware vSphere Clusters

Two VMware Virtual SAN clusters were used in the environment:

● An 8-node VMware Virtual SAN cluster was deployed to support the 800 virtual desktops, as shown in Figure 11 and Table 8.

● A 4-node VMware Virtual SAN cluster was deployed to support the infrastructure, management, and VMware View Planner virtual machines used for scalability testing.

Figure 11. VMware View Running on VMware Virtual SAN Using Cisco UCS


Table 8. VMware Virtual SAN Cluster Configuration

| Property | Setting | Default | Revised |
|---|---|---|---|
| Cluster features | HA | | Enabled |
| Cluster features | DRS | | Enabled |
| VMware vSphere High Availability | Host Monitoring Status | Enabled | |
| VMware vSphere High Availability | Admission Control | Enabled | |
| VMware vSphere High Availability | Admission Control Policy | Host failures the cluster tolerates = 1 | |
| VMware vSphere High Availability | Virtual Machine Options > Virtual Machine Restart Priority | Medium | |
| VMware vSphere High Availability | Virtual Machine Options > Host Isolation Response | Leave powered on | |
| VMware vSphere High Availability | Virtual Machine Monitoring | Disabled | |
| VMware vSphere High Availability | Data Store Heartbeating | Select any, taking into account my preferences (no data store preferred) | |
| VMware vSphere Storage DRS | Automation Level | Fully automated (apply 1, 2, 3 priority recommendations) | |
| VMware vSphere Storage DRS | DRS Groups Manager | | |
| VMware vSphere Storage DRS | Rules | | |
| VMware vSphere Storage DRS | Virtual Machine Options | | |
| VMware vSphere Storage DRS | Power Management | Off | |
| VMware vSphere Storage DRS | Host Options | Default (disabled) | |
| Enhanced VMware vMotion capability | | Disabled | |
| Swap-file location | | Store in the same directory as the virtual machine | |

Properties regarding security, traffic shaping, and NIC teaming can be defined for a port group. The settings used with the port group design are shown in Table 9.

Table 9. Port Group Properties: VMware dvSwitch v5.5

| Property | Setting | Default | Revised |
|---|---|---|---|
| General | Port Binding | Static | |
| Policies: Security | Promiscuous Mode | Reject | |
| Policies: Security | MAC Address Changes | Accept | Reject |
| Policies: Security | Forged Transmits | Accept | Reject |
| Policies: Traffic Shaping | Status | Disabled | |
| Policies: Teaming and Failover | Load Balancing | Route based on the originating virtual port ID | |
| Policies: Teaming and Failover | Failover Detection | Link status only | |
| Policies: Teaming and Failover | Notify Switches | Yes | |
| Policies: Resource Allocation | Network I/O Control | Disabled | Enabled |
| Advanced | Maximum MTU | 1500 | 9000 |


VMware Horizon with View Configuration

The VMware Horizon with View installation included the following core systems:

● Two connection servers (N+1 is recommended for production)

● One VMware vCenter Server with the following roles:

◦ VMware vCenter

◦ VMware vCenter single sign-on (SSO)

◦ VMware vCenter inventory service

● VMware View Composer

Note that VMware View security servers were not used during this testing.

VMware View Global Policies

The VMware View global policy settings used for all system tests are shown in Table 10.

Table 10. VMware View Global Policies

| Policy | Setting |
|---|---|
| USB access | Allow |
| Multimedia redirection (MMR) | Allow |
| Remote mode | Allow |
| PCoIP hardware acceleration | Allow: Medium priority |

VMware View Manager Global Settings

The VMware View Manager global settings that were used are shown in Table 11.

Table 11. VMware View Manager Global Settings

| Attribute | Specification |
|---|---|
| Session timeout | 600 minutes (10 hours) |
| VMware View Administrator session timeout | 30 minutes |
| Auto-update | Enabled |
| Display prelogin message | No |
| Display warning before logout | No |
| Reauthenticate secure tunnel connections after network interruption | No |
| Enable IP Security (IPsec) for security server pairing | Yes |
| Message security mode | Enabled |
| Disable single sign-on for local-mode operations | No |


VMware vCenter Server Settings

VMware View Connection Server uses VMware vCenter Server to provision and manage VMware View desktops. VMware vCenter Server is configured in VMware View Manager as shown in Table 12.

Table 12. VMware View Manager: VMware vCenter Server Configuration

| Attribute | Setting | Specification |
|---|---|---|
| Connect using SSL | vCenter Server Settings > SSL | Yes |
| VMware vCenter port | vCenter Server Settings > Port | 443 |
| VMware View Composer port | View Composer Server Settings > Port | 18443 |
| Enable VMware View Composer | View Composer Server Settings > Co-Installed | Yes |
| Advanced settings | Maximum Concurrent vCenter Provisioning Operations | 20 |
| Advanced settings | Maximum Concurrent Power Operations | 50 |
| Advanced settings | Maximum Concurrent View Composer Maintenance Operations | 12 |
| Advanced settings | Maximum Concurrent View Composer Provisioning Operations | 12 |
| Storage settings | Enable View Storage Accelerator | Selected |
| Storage settings | Default Host Cache Size | 2048 MB |

VMware View Manager Pool Settings

The VMware View Manager pool settings were configured as shown in Tables 13 and 14.

Table 13. VMware View Manager: VMware View Manager Pool Configuration

| Attribute | Specification |
|---|---|
| Pool type | Automated Pool |
| User assignment | Floating |
| Pool definition: VMware vCenter Server | Linked Clones |
| Pool ID | Desktops |
| Display name | Desktops |
| VMware View folder | / |
| Remote desktop power policy | Take no power action |
| Auto logoff time | Never |
| User reset allowed | False |
| Multi-session allowed | False |
| Delete on logoff | Never |
| Display protocol | PCoIP |
| Allow protocol override | False |
| Maximum number of monitors | 1 |
| Maximum resolution | 1920 x 1200 |
| HTML access | Not selected |
| Flash quality level | Do not control |
| Flash throttling level | Disabled |
| Enable provisioning | Enabled |
| Stop provisioning on error | Enabled |
| Provision all desktops upfront | Enabled |


Table 14. VMware View Manager: Test Pool Configuration

| Attribute | Specification |
|---|---|
| Disposable file redirection | Do not redirect |
| Select separate data stores for replica and OS | Not selected |
| Data stores: Storage overcommit | Conservative |
| Use VMware View Storage Accelerator | Selected |
| Reclaim virtual machine disk space* | |
| Disk types | OS disks |
| Regenerate storage accelerator after | 7 days |
| Reclaim virtual machine disk space | |
| Use QuickPrep | Enabled |

* VMware Virtual SAN does not support the space-efficient (SE) sparse disk format.

Test Results

VMware View running on VMware Virtual SAN on the Cisco UCS reference architecture was tested using real-world test scenarios, user workloads, and infrastructure system configurations. The tests performed included the following configurations:

● Test 1: 400 VMware View linked clones on four Cisco UCS C240 M3 servers in a VMware Virtual SAN cluster

● Test 2: 800 VMware View linked clones on eight Cisco UCS C240 M3 servers in a VMware Virtual SAN cluster

● Test 3: 800 VMware View full clones on eight Cisco UCS C240 M3 servers in a VMware Virtual SAN cluster

● Test 4: Mixed 400 VMware View linked clones and 400 full clones on eight Cisco UCS C240 M3 servers

● VMware View operations tests

● VMware Virtual SAN availability and manageability tests

All of these tests and their results are summarized in the sections that follow.

Test Summary

VMware View Planner is a VDI workload generator that automates and measures a typical office worker's activity: use of Microsoft Office applications, web browsing, reading a PDF file, watching a video, and so on. The operations generated include opening a file, browsing the web, modifying files, saving files, closing files, and more. Each VMware View Planner run is iterative, and each iteration is a randomly sequenced workload consisting of these applications and operations. The results of a test run consist of latency statistics collected for these applications and operations across all iterations. In addition to VMware View Planner scores, VMware Virtual SAN Observer and VMware vCenter Operations Manager for Horizon are used as monitoring tools. For more information about the applications used for this testing, see the Test Methodology section later in this document.
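To make the scoring model concrete, the following sketch shows how per-group QoS could be evaluated from collected latency samples: the 95th-percentile latency of Group A and Group B operations is compared against the default thresholds of 1.0 and 6.0 seconds described under Test 1. The sample latencies here are invented for illustration, not measured values:

```python
# Illustrative QoS check: 95th-percentile latency per group versus threshold.
import statistics

THRESHOLDS_S = {"A": 1.0, "B": 6.0}   # Group C carries no latency threshold

# Invented sample latencies (seconds) standing in for View Planner results
samples = {
    "A": [0.24, 0.31, 0.18, 0.70, 0.55, 0.09, 0.33, 0.26],
    "B": [0.71, 3.06, 4.21, 0.59, 3.44, 0.88, 0.78, 1.04],
}

for group, latencies in samples.items():
    p95 = statistics.quantiles(latencies, n=100)[94]     # 95th percentile
    verdict = "PASS" if p95 <= THRESHOLDS_S[group] else "FAIL"
    print(f"Group {group}: 95th percentile = {p95:.2f} s -> {verdict}")
```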


Test 1: 400 VMware View Linked Clones on Four Cisco UCS C240 M3 Servers in a VMware Virtual SAN Cluster

VMware View Planner tests were run on 400 linked clones on four hosts with exceptional user performance, as represented by the VMware View Planner score and latency values. In the VMware View Planner results, QoS is determined for multiple types of applications categorized as Group A, Group B, and Group C user operations:

● Group A applications are interactive, fast-running operations that are CPU bound: browsing a PDF file, modifying a Microsoft Word document, and so on.

● Group B applications are long-running, slow operations that are I/O bound: opening a large document, saving a Microsoft PowerPoint file, and so on.

● Group C consists of background load operations that are used to generate additional load during testing. These operations are not used to determine QoS and hence have no latency thresholds.

The default thresholds are 1.0 second for Group A and 6.0 seconds for Group B. The test results in Figure 12 show that the latency values for the 95th percentile of applications in each group are lower than the required thresholds. These results correspond to the expected end-user performance while 400 linked clones run on four hosts.

Figure 12. VMware View Planner Score: 400 Linked Clones

Test result highlights include:

● Average of 85 percent CPU utilization

● Average of up to 85 GB of RAM used out of 256 GB available

● Average of 16.02 MBps of network bandwidth used

● Average of 13.447 ms of I/O latency per host

● Average of 1983 I/O operations per second (IOPS) per host

The specific latency values for all the applications are shown in Table 15.

Table 15. Application Latency Values: 400 Linked Clones

| Event | Group | Count | Mean | Median | Coefficient of Variation |
|---|---|---|---|---|---|
| 7zip-Compress | C | 1197 | 3.822986 | 3.530973 | 0.313 |
| AdobeReader-Browse | A | 23940 | 0.238951 | 0.196896 | 0.813 |
| AdobeReader-Close | A | 1197 | 0.766411 | 0.750155 | 0.053 |
| AdobeReader-Maximize | A | 2394 | 0.699528 | 0.766001 | 0.219 |
| AdobeReader-Minimize | A | 1197 | 0.312196 | 0.296619 | 0.204 |
| AdobeReader-Open | B | 1197 | 0.712551 | 0.582646 | 1.001 |
| Excel Sort-Close | A | 1197 | 0.307458 | 0.192492 | 1.048 |
| Excel Sort-Compute | A | 31122 | 0.025334 | 0.023062 | 0.426 |
| Excel Sort-Entry | A | 31122 | 0.179093 | 0.147062 | 0.860 |
| Excel Sort-Maximize | A | 3591 | 0.365166 | 0.323991 | 0.316 |
| Excel Sort-Minimize | A | 1197 | 0.000692 | 0.000657 | 0.678 |
| Excel Sort-Open | B | 1197 | 0.593777 | 0.515999 | 0.624 |
| Excel Sort-Save | B | 1197 | 0.578326 | 0.513369 | 0.394 |
| Firefox-Close | A | 1197 | 0.52622 | 0.513906 | 0.052 |
| Firefox-Open | B | 1197 | 1.037588 | 0.84357 | 0.805 |
| IE ApacheDoc-Browse | A | 65835 | 0.085855 | 0.068178 | 2.397 |
| IE ApacheDoc-Close | A | 1197 | 0.005479 | 0.001636 | 8.362 |
| IE ApacheDoc-Open | B | 1197 | 0.882902 | 0.468084 | 3.336 |
| IE WebAlbum-Browse | A | 17955 | 0.26255 | 0.159749 | 2.395 |
| IE WebAlbum-Close | A | 1197 | 0.007337 | 0.001726 | 9.868 |
| IE WebAlbum-Open | B | 1197 | 0.870285 | 0.480918 | 3.008 |
| Outlook-Attachment-Save | B | 5985 | 0.076468 | 0.056133 | 2.510 |
| Outlook-Close | A | 1197 | 0.619196 | 0.554815 | 0.403 |
| Outlook-Open | B | 1197 | 0.777402 | 0.703031 | 0.385 |
| Outlook-Read | A | 11970 | 0.323953 | 0.209812 | 1.951 |
| Outlook-Restore | C | 13167 | 0.386632 | 0.375205 | 0.594 |
| PPTx-AppendSlides | A | 4788 | 0.083413 | 0.064426 | 0.823 |
| PPTx-Close | A | 1197 | 0.548461 | 0.492398 | 0.547 |
| PPTx-Maximize | A | 4788 | 0.00122 | 0.000728 | 7.175 |
| PPTx-Minimize | A | 2394 | 0.000684 | 0.000616 | 1.263 |
| PPTx-ModifySlides | A | 4788 | 0.304398 | 0.268314 | 0.661 |
| PPTx-Open | B | 1197 | 3.062735 | 3.031899 | 0.117 |
| PPTx-RunSlideShow | A | 8379 | 0.341099 | 0.528672 | 0.484 |
| PPTx-SaveAs | C | 1197 | 3.818085 | 2.91416 | 1.148 |
| Video-Close | A | 1197 | 0.069317 | 0.038364 | 1.822 |
| Video-Open | B | 1197 | 0.155579 | 0.048608 | 7.257 |
| Video-Play | C | 1197 | 50.511642 | 50.434445 | 0.005 |
| Word-Close | A | 1197 | 0.572719 | 0.602094 | 0.307 |
| Word-Maximize | A | 3591 | 0.323592 | 0.263979 | 0.378 |
| Word-Minimize | A | 1197 | 0.000679 | 0.000621 | 2.133 |
| Word-Modify | A | 25137 | 0.056807 | 0.059311 | 0.434 |
| Word-Open | B | 1197 | 4.213084 | 3.775295 | 0.608 |
| Word-Save | B | 1197 | 3.44489 | 3.354615 | 0.215 |

The host utilization metrics for CPU, memory, network, and disk I/O that were obtained while running the test are shown in Figures 13, 14, and 15. All hosts had similar utilization on average while hosting 100 virtual desktops each.


Figure 13. Host CPU Utilization from VMware View Planner: 400 Linked Clones, Average CPU Use in Percent

Figure 14. Host Memory Utilization from VMware View Planner: 400 Linked Clones, Average Memory Use in GB

Note that the y-axis memory (average) value in gigabytes ranges up to 90 GB out of the 256 GB available on the host.


Figure 15. Network Utilization from VMware View Planner: 400 Linked Clones, Average Network Use

Disk latency values shown in Figure 16 were obtained from VMware Virtual SAN Observer. Average read and write latency is 14 ms for the host shown in the figure and 13.44 ms on average across all hosts. These values, below the target threshold of 20 ms, correlate with the low application response times measured by VMware View Planner and with the overall result of a better end-user experience. In these tests, an average of 1983 IOPS is generated per host. This value is well below the maximum IOPS capacity for similar VMware Virtual SAN systems based on the Cisco UCS C240 M3, as detailed in the document VMware Virtual SAN with Cisco Unified Computing System Reference Architecture.

Figure 16. Host IOPS and Latency Graph from VMware Virtual SAN Observer: 400 Linked Clones


Test 2: 800 VMware View Linked Clones on Eight Cisco UCS C240 M3 Servers in a VMware Virtual SAN Cluster

VMware View Planner tests were run on 800 linked clones on eight hosts with exceptional user performance. The tests demonstrated linear scalability from 400 desktops on four nodes to 800 desktops on eight nodes. Test result highlights include:

● Average of 80 to 85 percent CPU utilization

● Average of up to 82 GB of RAM used out of 256 GB available

● Average of 17.12 MBps of network bandwidth used

● Average of 14.966 ms of I/O latency per host

● Average of 1616 IOPS per host

The QoS summary, application response times, and host utilization values are shown in Figure 17 and Table 16.

Figure 17. VMware View Planner Score: 800 Linked Clones

Table 16. Application Latency Values: 800 Linked Clones

| Event | Group | Count | Mean | Median | Coefficient of Variation |
|---|---|---|---|---|---|
| 7zip-Compress | C | 2391 | 3.354891 | 3.158507 | 0.303 |
| AdobeReader-Browse | A | 47820 | 0.223914 | 0.183528 | 0.822 |
| AdobeReader-Close | A | 2391 | 0.763632 | 0.750128 | 0.048 |
| AdobeReader-Maximize | A | 4782 | 0.697711 | 0.762164 | 0.218 |
| AdobeReader-Minimize | A | 2391 | 0.313204 | 0.301153 | 0.198 |
| AdobeReader-Open | B | 2391 | 0.665166 | 0.548225 | 1.013 |
| Excel Sort-Close | A | 2391 | 0.290767 | 0.186278 | 1.020 |
| Excel Sort-Compute | A | 62166 | 0.024431 | 0.022728 | 0.386 |
| Excel Sort-Entry | A | 62166 | 0.165498 | 0.140708 | 0.777 |
| Excel Sort-Maximize | A | 7173 | 0.364276 | 0.320248 | 0.321 |
| Excel Sort-Minimize | A | 2391 | 0.000661 | 0.000627 | 0.416 |
| Excel Sort-Open | B | 2391 | 0.548292 | 0.489547 | 0.588 |
| Excel Sort-Save | B | 2391 | 0.543247 | 0.484791 | 0.392 |
| Firefox-Close | A | 2391 | 0.526084 | 0.514294 | 0.052 |
| Firefox-Open | B | 2391 | 0.973367 | 0.785845 | 0.872 |
| IE ApacheDoc-Browse | A | 131505 | 0.082216 | 0.063011 | 2.456 |
| IE ApacheDoc-Close | A | 2391 | 0.005364 | 0.001548 | 8.607 |
| IE ApacheDoc-Open | B | 2391 | 0.782738 | 0.431805 | 3.245 |
| IE WebAlbum-Browse | A | 35865 | 0.250286 | 0.152367 | 2.460 |
| IE WebAlbum-Close | A | 2391 | 0.007503 | 0.001622 | 11.315 |
| IE WebAlbum-Open | B | 2391 | 0.805998 | 0.446739 | 2.963 |
| Outlook-Attachment-Save | B | 11955 | 0.068486 | 0.053753 | 2.305 |
| Outlook-Close | A | 2391 | 0.616925 | 0.554705 | 0.396 |
| Outlook-Open | B | 2391 | 0.735236 | 0.676026 | 0.336 |
| Outlook-Read | A | 23910 | 0.297843 | 0.199803 | 1.739 |
| Outlook-Restore | C | 26301 | 0.346654 | 0.340861 | 0.590 |
| PPTx-AppendSlides | A | 9564 | 0.078069 | 0.062656 | 0.763 |
| PPTx-Close | A | 2391 | 0.518743 | 0.461373 | 0.530 |
| PPTx-Maximize | A | 9564 | 0.001144 | 0.000679 | 7.695 |
| PPTx-Minimize | A | 4782 | 0.00062 | 0.000579 | 0.796 |
| PPTx-ModifySlides | A | 9564 | 0.291094 | 0.255203 | 0.686 |
| PPTx-Open | B | 2391 | 2.813034 | 2.8045 | 0.135 |
| PPTx-RunSlideShow | A | 16737 | 0.337466 | 0.527942 | 0.484 |
| PPTx-SaveAs | C | 2391 | 3.567793 | 2.791217 | 1.084 |
| Video-Close | A | 2391 | 0.067433 | 0.03201 | 2.166 |
| Video-Open | B | 2391 | 0.145677 | 0.045696 | 7.455 |
| Video-Play | C | 2391 | 50.486084 | 50.421127 | 0.005 |
| Word-Close | A | 2391 | 0.551263 | 0.585316 | 0.312 |
| Word-Maximize | A | 7173 | 0.321871 | 0.261876 | 0.379 |
| Word-Minimize | A | 2391 | 0.000609 | 0.000584 | 0.363 |
| Word-Modify | A | 50211 | 0.05865 | 0.065478 | 0.398 |
| Word-Open | B | 2391 | 3.889717 | 3.485008 | 0.578 |
| Word-Save | B | 2391 | 3.198789 | 3.167595 | 0.194 |

The host utilization metrics for CPU, memory, network, and disk I/O that were obtained while running the test are shown in Figures 18, 19, and 20.

Figure 18. Host CPU Utilization from VMware View Planner: 800 Linked Clones, Average CPU Use in Percent


Figure 19. Host Memory Utilization from VMware View Planner: 800 Linked Clones, Average Memory Use in GB

Note that the y-axis memory (average) value in gigabytes ranges up to 82 GB out of the 256 GB available on the host.

Figure 20. Network Utilization from VMware View Planner: 800 Linked Clones, Average Network Use

Disk latency values are obtained from VMware Virtual SAN Observer, as shown in Figure 21. Combined average read and write latency is 16 ms on the host shown here and averages 14.96 ms across all hosts. In these tests, an average of 1616 IOPS is generated per host.


Figure 21. Host IOPS and Latency Graph from VMware Virtual SAN Observer: 800 Linked Clones

Test 3: 800 VMware View Full Clones on Eight Cisco UCS C240 M3 Servers in a VMware Virtual SAN Cluster

In addition to the linked-clone testing, 800 full clones were tested with higher virtual machine specifications of two vCPUs and 40 GB of disk space to mimic the higher desktop resources typically allocated to full dedicated desktops. Test result highlights include:

● Average of 80 to 85 percent CPU utilization

● Average of up to 84 GB of RAM used out of 256 GB available

● Average of 13.13 MBps of network bandwidth used

● Average of 13.995 ms of I/O latency per host

● Average of 1087.87 IOPS per host


The QoS summary, application response times, and host utilization values are described in Figure 22 and Table 17.

Figure 22. VMware View Planner Score: 800 Full Clones

Table 17. Application Latency Values: 800 Full Clones

| Event | Group | Count | Mean | Median | Coefficient of Variation |
|---|---|---|---|---|---|
| 7zip-Compress | C | 2388 | 3.827199 | 3.525832 | 0.377 |
| AdobeReader-Browse | A | 47760 | 0.243642 | 0.200829 | 0.817 |
| AdobeReader-Close | A | 2388 | 0.76643 | 0.750171 | 0.058 |
| AdobeReader-Maximize | A | 4776 | 0.706106 | 0.766201 | 0.228 |
| AdobeReader-Minimize | A | 2388 | 0.313208 | 0.294657 | 0.211 |
| AdobeReader-Open | B | 2388 | 0.718087 | 0.577403 | 1.042 |
| Excel Sort-Close | A | 2388 | 0.335683 | 0.229137 | 0.927 |
| Excel Sort-Compute | A | 62088 | 0.026431 | 0.02438 | 0.511 |
| Excel Sort-Entry | A | 62088 | 0.184258 | 0.151464 | 0.901 |
| Excel Sort-Maximize | A | 7164 | 0.36963 | 0.330758 | 0.313 |
| Excel Sort-Minimize | A | 2388 | 0.000745 | 0.000662 | 3.522 |
| Excel Sort-Open | B | 2388 | 0.610323 | 0.531417 | 0.636 |
| Excel Sort-Save | B | 2388 | 0.61182 | 0.548862 | 0.380 |
| Firefox-Close | A | 2388 | 0.528206 | 0.514102 | 0.079 |
| Firefox-Open | B | 2388 | 1.070024 | 0.835468 | 0.972 |
| IE ApacheDoc-Browse | A | 131340 | 0.088938 | 0.069274 | 2.385 |
| IE ApacheDoc-Close | A | 2388 | 0.00579 | 0.001658 | 8.314 |
| IE ApacheDoc-Open | B | 2388 | 0.889725 | 0.477459 | 3.388 |
| IE WebAlbum-Browse | A | 35820 | 0.270266 | 0.162112 | 2.474 |
| IE WebAlbum-Close | A | 2388 | 0.007759 | 0.001714 | 10.623 |
| IE WebAlbum-Open | B | 2388 | 0.872419 | 0.484339 | 2.904 |
| Outlook-Attachment-Save | B | 11940 | 0.075901 | 0.057302 | 2.110 |
| Outlook-Close | A | 2388 | 0.685793 | 0.615624 | 0.394 |
| Outlook-Open | B | 2388 | 0.777585 | 0.699325 | 0.409 |
| Outlook-Read | A | 23880 | 0.333419 | 0.216472 | 1.990 |
| Outlook-Restore | C | 26268 | 0.388123 | 0.368232 | 0.655 |
| PPTx-AppendSlides | A | 9552 | 0.085008 | 0.06647 | 0.905 |
| PPTx-Close | A | 2388 | 0.562465 | 0.503776 | 0.535 |
| PPTx-Maximize | A | 9552 | 0.00135 | 0.000718 | 13.038 |
| PPTx-Minimize | A | 4776 | 0.000738 | 0.000613 | 6.706 |
| PPTx-ModifySlides | A | 9552 | 0.308703 | 0.269817 | 0.661 |
| PPTx-Open | B | 2388 | 3.07551 | 3.009046 | 0.147 |
| PPTx-RunSlideShow | A | 16716 | 0.341095 | 0.528568 | 0.484 |
| PPTx-SaveAs | C | 2388 | 4.132029 | 3.131973 | 1.178 |
| Video-Close | A | 2388 | 0.073594 | 0.037658 | 2.037 |
| Video-Open | B | 2388 | 0.15297 | 0.049374 | 6.926 |
| Video-Play | C | 2388 | 50.660026 | 50.456134 | 0.014 |
| Word-Close | A | 2388 | 0.569959 | 0.597739 | 0.319 |
| Word-Maximize | A | 7164 | 0.327242 | 0.265223 | 0.383 |
| Word-Minimize | A | 2388 | 0.000671 | 0.00061 | 1.571 |
| Word-Modify | A | 50148 | 0.057318 | 0.059561 | 0.461 |
| Word-Open | B | 2388 | 4.293781 | 3.73521 | 0.670 |
| Word-Save | B | 2388 | 3.635548 | 3.527526 | 0.238 |

The host CPU, memory, network, and disk I/O utilization metrics obtained while running the test are shown in Figures 23, 24, and 25.

Figure 23. Host CPU Utilization from VMware View Planner: 800 Full Clones, Average CPU Use in Percent


Figure 24. Host Memory Utilization from VMware View Planner: 800 Full Clones, Average Memory Use in GB

Note that the Y-axis memory (average) value in gigabytes ranges up to 84 GB out of the 256 GB available on the host.

Figure 25. Network Utilization from VMware View Planner: 800 Full Clones, Average Network Use

Disk latency values are obtained from VMware Virtual SAN Observer, as shown in Figure 26. The combined average read and write latency is measured as 18 ms on one of the hosts shown here and averages 13.99 ms across all hosts. In these tests, an average of 1087.87 IOPS is generated per host.


Figure 26. Host IOPS and Latency Graph from VMware Virtual SAN Observer: 800 Full Clones

Test 4: Mixed 400 VMware View Linked Clones and 400 Full Clones on Eight Cisco UCS C240 M3 Servers

To simulate a production environment, which would typically have a mix of linked clones and full clones, a test with 400 linked clones and 400 full clones was conducted on eight nodes. For this testing, all eight nodes were made available for provisioning linked clones and full clones. In other words, the linked clones and full clones were distributed across the entire cluster. Test result highlights include:

● Average of 80 to 85 percent CPU utilization
● Average of 80 to 85 GB of RAM used out of 256 GB available
● Average of 11.05 MBps of network bandwidth used
● Average of 7.80 ms of I/O latency
● Average of 1043.37 IOPS


Figure 27 and Table 18 show the values.

Figure 27. VMware View Planner Score: 400 Linked Clones and 400 Full Clones

Table 18. Application Latency Values: 400 Linked Clones and 400 Full Clones

Event | Group | Count | Mean | Median | Coefficient of Variation
7zip-Compress | C | 2391 | 3.837502 | 3.609511 | 0.299
AdobeReader-Browse | A | 47820 | 0.237904 | 0.199534 | 0.776
AdobeReader-Close | A | 2391 | 0.76634 | 0.750187 | 0.054
AdobeReader-Maximize | A | 4782 | 0.705092 | 0.765475 | 0.222
AdobeReader-Minimize | A | 2391 | 0.313736 | 0.298537 | 0.206
AdobeReader-Open | B | 2391 | 0.73929 | 0.601873 | 0.978
Excel Sort-Close | A | 2391 | 0.326697 | 0.220315 | 0.948
Excel Sort-Compute | A | 62166 | 0.026193 | 0.024479 | 0.390
Excel Sort-Entry | A | 62166 | 0.180245 | 0.152002 | 0.756
Excel Sort-Maximize | A | 7173 | 0.369944 | 0.334935 | 0.309
Excel Sort-Minimize | A | 2391 | 0.000716 | 0.000687 | 0.462
Excel Sort-Open | B | 2391 | 0.616223 | 0.543731 | 0.574
Excel Sort-Save | B | 2391 | 0.616912 | 0.544978 | 0.399
Firefox-Close | A | 2391 | 0.526329 | 0.51306 | 0.056
Firefox-Open | B | 2391 | 1.035522 | 0.841609 | 0.820
IE ApacheDoc-Browse | A | 131340 | 0.088842 | 0.069451 | 2.349
IE ApacheDoc-Close | A | 2388 | 0.005519 | 0.001686 | 7.620
IE ApacheDoc-Open | B | 2388 | 0.909516 | 0.496354 | 3.157
IE WebAlbum-Browse | A | 35865 | 0.267615 | 0.162217 | 2.419
IE WebAlbum-Close | A | 2391 | 0.007523 | 0.001767 | 9.844
IE WebAlbum-Open | B | 2391 | 0.889531 | 0.513018 | 2.684
Outlook-Attachment-Save | B | 11955 | 0.07668 | 0.057119 | 2.535
Outlook-Close | A | 2391 | 0.686446 | 0.616239 | 0.381
Outlook-Open | B | 2391 | 0.763189 | 0.69918 | 0.337
Outlook-Read | A | 23910 | 0.334432 | 0.213805 | 2.002
Outlook-Restore | C | 26301 | 0.419161 | 0.404693 | 0.596
PPTx-AppendSlides | A | 9564 | 0.083011 | 0.066234 | 0.775
PPTx-Close | A | 2391 | 0.558762 | 0.507701 | 0.492
PPTx-Maximize | A | 9564 | 0.001278 | 0.000723 | 7.788
PPTx-Minimize | A | 4782 | 0.000684 | 0.000624 | 0.681
PPTx-ModifySlides | A | 9564 | 0.30651 | 0.268399 | 0.658
PPTx-Open | B | 2391 | 3.094825 | 3.05699 | 0.126
PPTx-RunSlideShow | A | 16737 | 0.340805 | 0.528658 | 0.483
PPTx-SaveAs | C | 2391 | 3.937301 | 3.066142 | 1.046
Video-Close | A | 2391 | 0.073392 | 0.038045 | 1.943
Video-Open | B | 2391 | 0.145744 | 0.048822 | 6.829
Video-Play | C | 2391 | 50.537753 | 50.442256 | 0.007
Word-Close | A | 2391 | 0.56236 | 0.591607 | 0.314
Word-Maximize | A | 7173 | 0.325756 | 0.265219 | 0.377
Word-Minimize | A | 2391 | 0.000666 | 0.000622 | 0.956
Word-Modify | A | 50211 | 0.058245 | 0.062483 | 0.431
Word-Open | B | 2391 | 4.33827 | 3.819501 | 0.627
Word-Save | B | 2391 | 3.620773 | 3.54622 | 0.191

The host CPU, memory, network, and disk I/O utilization metrics obtained while running the test are shown in Figures 28, 29, and 30.

Figure 28. Host CPU Utilization from VMware View Planner: 400 Linked Clones and 400 Full Clones, Average CPU Use in Percent


Figure 29. Host Memory Utilization from VMware View Planner: 400 Linked Clones and 400 Full Clones, Average Memory Use in GB

Note that the Y-axis memory (average) value in gigabytes ranges up to 88 GB out of the 256 GB available on the host.

Figure 30. Network Utilization from VMware View Planner: 400 Linked Clones and 400 Full Clones, Average Network Use

Disk latency values are obtained from VMware Virtual SAN Observer, as shown in Figures 31 and 32. The combined average read and write latency is measured as 8 ms on one of the hosts shown here and averages 7.80 ms across all hosts. In these tests, an average of 1043.37 IOPS is generated.


Figure 31. Host IOPS and Latency Graph from VMware Virtual SAN Observer: 400 Linked Clones and 400 Full Clones

Figure 32. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: 400 Linked Clones and 400 Full Clones

VMware View Operations Tests

In addition to running VMware View Planner tests, VMware View operations tests were conducted to measure the effect of these administrative tasks on the environment, as shown in Table 19.

Table 19. VMware View on Cisco UCS C240 M3: Operations Test Results

Details | 400 Linked Clones | 800 Linked Clones | 800 Full Clones | Mixed (400 Linked Clones and 400 Full Clones)
Hosts | 4 | 8 | 8 | 8
VMware Virtual SAN disk groups | Single disk group per host: 1 SSD and 4 HDDs | Single disk group per host: 1 SSD and 4 HDDs | Two disk groups per host: 2 SSDs and 12 HDDs | Single disk group per host: 2 SSDs and 12 HDDs
Provisioning time | 42 minutes | 80 minutes | 9 hours and 29 minutes | 4 hours and 15 minutes
Recompose time | 60 minutes | 121 minutes | | 60 minutes for 400 linked clones
Refresh time | 36 minutes | 72 minutes | | 36 minutes for 400 linked clones
Power-on time | 4 minutes | 8 minutes | 8 minutes | 8 minutes
Delete time | 22 minutes | 44 minutes | 47 minutes | 41 minutes

Times for these VMware View operations are measured through log entries found at C:\Program Data\VMware\VDM\logs\log-YEAR-MONTH-DAY for the VMware vCenter Server. In addition, CPU utilization during these operations is shown in Figures 33 through 39.

Figure 33. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Deployment Operation for 400 Linked Clones and 400 Full Clones

Figure 34. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Deployment Operation for 800 Full Clones


Figure 35. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Power-on Operation for 800 Linked Clones

Figure 36. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Recomposition Operation for 800 Linked Clones

Figure 37. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Desktop Refresh Operation for 800 Linked Clones


Figure 38. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Deployment Operation for 800 Linked Clones

Figure 39. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Deletion Operation for 400 Linked Clones

VMware Virtual SAN Availability and Manageability Tests

VMware Virtual SAN is fully integrated with VMware vSphere advanced features, including VMware vMotion, DRS, and High Availability, to provide the best level of availability for the virtualized environment. For redundancy, VMware Virtual SAN uses a distributed RAID architecture, which enables a VMware vSphere cluster to accommodate the failure of a VMware vSphere host or a component within a host. For example, a VMware cluster can accommodate the failure of magnetic disks, flash memory–based devices, and network interfaces while continuing to provide complete capabilities for all virtual machines. In addition, availability is defined for each virtual machine through virtual machine storage policies. Based on these policies and the VMware Virtual SAN distributed RAID architecture, virtual machines and copies of their contents are distributed across multiple VMware vSphere hosts in the cluster. In the event of a failure, a failed node does not necessarily need to migrate data to a surviving host in the cluster.


The VMware Virtual SAN data store is based on object-oriented storage. In this approach, a virtual machine on the VMware Virtual SAN is made up of these VMware Virtual SAN objects:

● The virtual machine home or namespace directory
● A swap object (if the virtual machine is powered on)
● Virtual disks or virtual machine disks (VMDKs)
● Delta disks created for snapshots (each delta disk is an object)

The virtual machine namespace directory holds all the virtual machine files (.vmx files, log files, and so on). It excludes VMDKs, delta disks, and swap files, which are maintained as separate objects. This approach is important because it determines the way in which objects and components are built and distributed in VMware Virtual SAN. For instance, there are soft limitations, and exceeding those limitations can affect performance. In addition, witnesses are deployed to arbitrate between the remaining copies of data in the event of a failure within the VMware Virtual SAN cluster. The witness component helps ensure that no split-brain scenarios occur. Witness deployment is not predicated on any failures-to-tolerate (FTT) or stripe-width policy settings. Rather, witness components are defined as primary, secondary, and tie-breaker and are deployed based on a defined set of rules, as follows (a short worked example appears after this list):

● Primary witnesses: Primary witnesses require at least (2 x FTT) + 1 nodes in a cluster to tolerate the FTT number of node and disk failures. If the configuration does not have the required number of nodes after all the data components have been placed, the primary witnesses are placed on exclusive nodes until the configuration has (2 x FTT) + 1 nodes.
● Secondary witnesses: Secondary witnesses are created to help ensure that each node has equal voting power in its contribution to a quorum. This capability is important because each node failure needs to affect the quorum equally. Secondary witnesses are added so that each node receives an equal number of components, including the nodes that hold only primary witnesses. The total count of data components plus witnesses on each node is equalized in this step.
● Tie-breaker witnesses: After primary and secondary witnesses have been added, if the configuration has an even number of total components (data and witnesses), one tie-breaker witness is added to make the total component count an odd number.
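As a simple illustration of these placement rules (not a measurement from the tested configuration): with the default policy of FTT=1, each object has two mirrored data replicas, and the cluster needs at least (2 x 1) + 1 = 3 hosts. The two replicas are placed on two hosts, and a witness component on a third host provides the deciding vote, so that the loss of any single host still leaves more than half of the object's components available.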

The following sections describe the VMware Virtual SAN data store scenarios for maintaining resiliency and availability while performing day-to-day operations.

Planned Maintenance

For planned operations, VMware Virtual SAN provides three host maintenance mode options: Ensure Accessibility, Full Data Migration, and No Data Migration. Each is described in the sections that follow.


Ensure Accessibility

The Ensure Accessibility option is the default host maintenance mode. With this option, VMware Virtual SAN helps ensure that all accessible virtual machines on the host remain accessible, either when the host is powered off or when it is removed from the cluster. In this case, VMware Virtual SAN copies just enough data to other hosts in the cluster to help ensure the continued operation of all virtual machines, even if this process results in a violation of the FTT policy. Use this option only when the host will remain in maintenance mode for a short period of time; during this period, the system cannot guarantee resiliency after failures. Typically, this option requires only partial data evacuation. Select Ensure Accessibility to remove the host from the cluster temporarily, such as to install upgrades, and then return the host to the same cluster. Do not use this option to permanently remove the host from the cluster.

Full Data Migration

When Full Data Migration is selected, VMware Virtual SAN moves all of the host's data to other hosts in the cluster and then maintains or fixes availability compliance for the affected components in the cluster. This option results in the largest amount of data transfer, and the migration consumes the most time and resources. Select the Full Data Migration option only when the host is being removed from the cluster permanently. When evacuating data from the last host in the cluster, be sure to migrate the virtual machines to another data store first, and then put the host in maintenance mode. The testing described in this document included a Full Data Migration test. With VMware Virtual SAN, placing a host in maintenance mode with the Full Data Migration option causes the virtual machine objects to be transferred to a different host. This migration is in addition to any virtual machines that were proactively migrated by administrators, because the host may hold disk objects for virtual machines that reside on other hosts. The transfer can be verified by using the vsan.resync_dashboard -r 0 Ruby vSphere Console (RVC) command, which shows the data being migrated, as in the example in Figure 40.
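The following is a minimal command sketch of how such a verification can be run; the vCenter address (vcoa-observer.example.local) and cluster path (~/computers/VSAN-Cluster) are placeholders, not values from the tested environment, and the vsan.resync_dashboard options are the ones cited above. The host is placed in maintenance mode with the Full Data Migration option in the vSphere Web Client, and the resynchronization is watched from RVC:

rvc administrator@vsphere.local@vcoa-observer.example.local
# From the RVC prompt, navigate to the cluster object and monitor the data being moved off the host
vsan.resync_dashboard ~/computers/VSAN-Cluster -r 0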


Figure 40. Host Maintenance Mode: Full Data Migration

No Data Migration

When No Data Migration is selected, VMware Virtual SAN does not evacuate any data from the host. If the host is powered off or removed from the cluster, some virtual machines may become inaccessible.

VMware Virtual SAN Failure Simulations

During ongoing operations in a VMware Virtual SAN environment, either an individual disk failure or a host failure may affect virtual machine availability, depending on the storage policies applied. This section simulates these failure scenarios to demonstrate how VMware Virtual SAN keeps storage data highly available under different conditions.

Magnetic Disk Failure Simulation

In a VMware Virtual SAN environment, if a magnetic disk storing any component of any object fails, it is marked as “Degraded,” and VMware Virtual SAN immediately begins to rebuild components from that disk on other disks. This action is usually triggered when a drive or controller reports some kind of physical hardware failure. However, if a magnetic disk goes offline, it is marked as “Absent.” In this case, VMware Virtual SAN does not immediately rebuild components. Instead, it waits a default time of 60 minutes for the drive to be replaced or restored. This response is usually triggered by pulling a drive from its slot. During this time period, virtual machines continue to run using replicas of their components that exist on other drives. The only virtual machines that cease functioning are those that have a failure policy of FTT=0 and that have the sole copy of their data stored on the offline drive.


If the drive is replaced within 60 minutes, VMware Virtual SAN simply updates the data on that drive to synchronize it with the live data from the rest of the cluster. If the drive has not been replaced after 60 minutes, VMware Virtual SAN changes the state of the drive to “Degraded” and begins to rebuild the data on other drives. Note that the VMware Virtual SAN default 60-minute repair-delay time can be modified. For more information, see Changing the Default Repair-Delay Time for a Host Failure in VMware Virtual SAN. For this simulation, object placements for the replica virtual machine are configured with FTT=1 and the default storage policies. The magnetic disk is removed from the disk group, as indicated by the “Object not found” status in Figure 41. After the default wait time has passed, the state of the drive changes from “Absent” to “Degraded.”
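As a hedged illustration only (based on the VMware article referenced above, not on a setting changed during this testing), the repair-delay time is exposed as an ESXi advanced setting that can be read and adjusted per host, for example to 90 minutes:

esxcli system settings advanced list -o /VSAN/ClomRepairDelay
esxcli system settings advanced set -o /VSAN/ClomRepairDelay -i 90
# Restart the clomd service so the new value takes effect
/etc/init.d/clomd restart

Verify the option name and the restart requirement against the referenced VMware documentation for the vSphere release in use.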

Figure 41. Magnetic Disk Failure Simulation: Degraded Disk

Another way to check the disk object information is by using the RVC command vsan.disk_object_info. In this case, one of the disks is not found, as shown in the example in Figure 42.


Figure 42. Magnetic Disk Failure Simulation: Degraded Disk in VMware Virtual SAN Observer

After the repair-delay time is reached, VMware Virtual SAN rebuilds the disk objects from the replica on a different disk, as shown in Figure 43.

Figure 43. Magnetic Disk Failure Simulation: Repair Delay Time Reached


By using the vsan.disk_object_info RVC command on the new disk, the virtual machine object constructs are found, as shown in Figure 44.

Figure 44. Magnetic Disk Failure Simulation: Repair Delay Time Reached

SSD Failure Simulation

If an SSD in a VMware Virtual SAN disk group fails, the disk group becomes inaccessible, and the magnetic disks in the disk group no longer contribute to VMware Virtual SAN storage. As in the magnetic disk failure simulation, when an SSD fails (for example, in the event of a nontransient failure), VMware Virtual SAN waits through the 60-minute default repair-delay time before it rebuilds the virtual machine objects using a different SSD. The absent SSD makes the entire disk group unavailable, and after the default wait time the individual components are rebuilt across the other available disk groups. In the SSD failure test, an SSD was removed from a disk group, as shown in Figure 45. The SSD state is displayed as “Degraded” because the disk was manually removed from the disk group. For an actual disk failure, the state is displayed as “Missing.”


Figure 45. SSD Failure Simulation: Disk Removed

After the repair-delay time is reached, if the SSD failure persists, VMware Virtual SAN rebuilds the virtual machine layout using a different SSD, as shown in Figure 46.


Figure 46. SSD Failure Simulation: Repair Delay Time Reached

Network Failure Simulation

The VMware Virtual SAN VMkernel network is configured with redundant virtual networks connected to Cisco UCS fabric interconnects A and B. To verify that VMware Virtual SAN traffic is not disrupted, the physical port was disabled from Cisco UCS Manager while a continuous vmkping to the VMware Virtual SAN IP address on the dedicated network was running, as shown in Figure 47.
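As a minimal sketch of this verification step (the VMkernel interface name vmk2 and the target address 10.10.10.11 are placeholders, not values from the tested environment), a continuous ping is run from the ESXi shell against a peer host's VMware Virtual SAN VMkernel address while the uplink port is disabled in Cisco UCS Manager:

# Ping the peer host's Virtual SAN VMkernel IP address through the local Virtual SAN VMkernel interface
vmkping -I vmk2 10.10.10.11

A successful test shows no dropped replies while traffic fails over between fabric interconnects A and B.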


Figure 47. Network Failure Simulation

Similar redundancy is expected for the management network in a VMware Virtual SAN environment.

Test Methodology

The reference architecture for this solution uses VMware View Planner as the benchmarking tool, and it uses VMware Virtual SAN Observer and VMware vCenter Operations Manager for Horizon as the performance monitoring tools.

VMware View Planner 3.5

VMware View Planner is a VDI workload generator that automates and measures a typical office worker's activity: use of Microsoft Office applications, web browsing, reading a PDF file, watching a video, and so on. VMware View Planner runs its operations iteratively; each iteration is a randomly sequenced workload consisting of these applications and operations. The results of a run consist of latency statistics collected for the applications and operations across all iterations. In addition to VMware View Planner scores, VMware Virtual SAN Observer and VMware vCenter Operations Manager for Horizon are used as monitoring tools (Figure 48).


Figure 48. VMware View Planner Components

The standardized VMware View Planner workload consists of nine applications performing a total of 44 user operations (Table 20). These user operations are separated into three groups: interactive operations (Group A), I/O operations (Group B), and background load operations (Group C). The operations in Group A are used to determine quality of service. QoS is determined separately for Group A and Group B user operations and is the 95th percentile of latency for all the operations in a group. The default thresholds are 1.0 second for Group A and 6.0 seconds for Group B. The operations in Group C are used only to generate additional load. A brief worked example of the QoS computation follows Table 20.

Table 20. VMware View Planner Operations

Group A | Group B | Group C
AdobeReader: Browse | AdobeReader: Open | 7zip: Compress
AdobeReader: Close | Excel_Sort: Open | Outlook: Restore
AdobeReader: Maximize | Excel_Sort: Save | PowerPoint: SaveAs
AdobeReader: Minimize | Firefox: Open | Video: Play
Excel_Sort: Close | IE_ApacheDoc: Open |
Excel_Sort: Compute | IE_WebAlbum: Open |
Excel_Sort: Entry | Outlook: Attachment-Save |
Excel_Sort: Maximize | Outlook: Open |
Excel_Sort: Minimize | PowerPoint: Open |
Firefox: Close | Video: Open |
IE_ApacheDoc: Browse | Word: Open |
IE_ApacheDoc: Close | Word: Save |
IE_WebAlbum: Browse | |
IE_WebAlbum: Close | |
Outlook: Close | |
Outlook: Read | |
PowerPoint: AppendSlides | |
PowerPoint: Close | |
PowerPoint: Maximize | |
PowerPoint: Minimize | |
PowerPoint: ModifySlides | |
PowerPoint: RunSlideShow | |
Video: Close | |
Word: Close | |
Word: Maximize | |
Word: Minimize | |
Word: Modify | |
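As a simple hedged illustration of this scoring (the numbers are hypothetical, not measured values from these tests): if 95 percent of all Group A operation latencies in a run fall at or below 0.8 second, Group A passes its 1.0-second threshold; if the Group B 95th-percentile latency is 6.5 seconds, Group B exceeds its 6.0-second threshold and the run fails QoS, regardless of the latencies recorded for the Group C background operations.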

For the testing, VMware View Planner performed a total of five iterations:

● Ramp up (first iteration)
● Steady state (second, third, and fourth iterations)
● Ramp down (fifth iteration)

During each iteration, VMware View Planner reports the latencies for each operation performed in each virtual machine.

VMware Virtual SAN Observer

VMware Virtual SAN Observer is designed to capture performance statistics for a VMware Virtual SAN cluster and provide access to live measurements through a web browser. It can also generate a performance bundle over a specified duration. VMware Virtual SAN Observer is part of the Ruby vSphere Console (RVC), a Linux console user interface for VMware ESXi and vCenter. RVC is installed on VMware vCenter and is required for running VMware Virtual SAN Observer commands. Following best practices, an out-of-band VMware vCenter appliance is used in this reference architecture to run VMware Virtual SAN Observer commands. This setup helps ensure that the production VMware vCenter instance is not affected by the performance measurements.
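A minimal command sketch of this setup follows; the vCenter address and cluster path are placeholders rather than values from the tested environment, and the vsan.observer options shown are the ones listed in Table 21. RVC is started on the out-of-band vCenter appliance and pointed at the vCenter instance that manages the VMware Virtual SAN cluster, and the observer is then launched against the cluster:

rvc administrator@vsphere.local@production-vcenter.example.local
# From the RVC prompt, start the observer web interface and capture an HTML bundle in /tmp,
# sampling at the interval and maximum runtime used for this testing (-i 30 -m 1)
vsan.observer ~/computers/VSAN-Cluster -r -o -g /tmp -i 30 -m 1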


The VMware Virtual SAN Observer commands that were used for this solution are shown in Table 21.

Table 21. VMware Virtual SAN Observer Commands

VMware Virtual SAN Observer Command | Description
vsan.resync_dashboard 10.0.115.72.54 -r 0 | Observe data migration while placing hosts in Full Migration maintenance mode.
vsan.disk_object_info | Verify disk object information.
vsan.vm_object_info | Verify virtual machine object information.
vsan.disks_info hosts/10.0.115.72.54 | Obtain a list of disks on a specific host.
vsan.obj_status_report | Obtain health information about VMware Virtual SAN objects. This command is helpful in identifying orphaned objects.
vsan.reapply_vsan_vmknic_config | Re-enable VMware Virtual SAN on VMkernel ports while troubleshooting the network configuration.
vsan.observer {cluster name} -r -o -g /tmp -i 30 -m 1 | Enable and capture performance statistics used for benchmark testing. For more information, see Enabling or Capturing Performance Statistics Using VMware Virtual SAN Observer.

For a more comprehensive list of VMware Virtual SAN Observer commands, see the VMware Virtual SAN Quick Monitoring and Troubleshooting Reference Guide.


System Sizing

The reference architecture used the sizing specifications described in this section.

Virtual Machine Test Image Builds

Two different virtual machine images were used to provision desktop sessions in the VMware View environment: one for linked clones and one for full clones (Table 22). Both conformed to testing tool standards and were optimized in accordance with the VMware View Optimization Guide for Windows 7 and Windows 8. The VMware OS Optimization Tool was used to make the changes.

Table 22. Virtual Machine Test Image Builds

Attribute | Linked Clones | Full Clones
Desktop operating system | Microsoft Windows 7 Enterprise SP1 (32-bit) | Microsoft Windows 7 Enterprise SP1 (32-bit)
Hardware | VMware Virtual Hardware Version 10 | VMware Virtual Hardware Version 10
CPU | 1 | 2
Memory | 1536 MB | 2048 MB
Memory reserved | 0 MB | 0 MB
Video RAM | 35 MB | 35 MB
3D graphics | Off | Off
NICs | 1 | 1
Virtual network adapter 1 | VMXNet3 adapter | VMXNet3 adapter
Virtual SCSI controller 0 | Paravirtual | Paravirtual
Virtual disk VMDK 1 | 24 GB | 40 GB
Virtual disk VMDK 2 | 1 GB | 1 GB
Virtual floppy drive 1 | Removed | Removed
Virtual CD/DVD drive 1 | Removed | Removed
Applications | Adobe Acrobat 10.1.4; Firefox 7.01; Internet Explorer 10; Microsoft Office 2010; Microsoft Windows Media Player; 7Zip | Adobe Acrobat 10.1.4; Firefox 7.01; Internet Explorer 10; Microsoft Office 2010; Microsoft Windows Media Player; 7Zip
VMware tools | 9.4.10, build-2068191 | 9.4.10, build-2068191
VMware View Agent | 6.0.1-2089044 | 6.0.1-2089044

The Microsoft Windows 7 golden image was modified to meet VMware View Planner 3.5 requirements. See the VMware View Planner Installation and User’s Guide.


Management Blocks

Table 23 shows the sizing of the management blocks.

Table 23. Management Block Sizing

Server Role | vCPU | RAM (GB) | Storage (GB) | Operating System | Software Version
Domain controller | 2 | 6 | 40 | Server 2012 64-bit |
Microsoft SQL Server | 2 | 8 | 140 | Server 2012 64-bit | Microsoft SQL Server 2012 64-bit
VMware vCenter Server | 4 | 10 | 70 | Server 2012 64-bit | VMware vCenter 5.5.0 build 1178595
VMware vCenter appliance for VMware Virtual SAN Observer (out of band) | 4 | 8 | 100 | SUSE Linux Enterprise Server (SLES) 11 64-bit | VMware vCenter 5.5 U2 build 2063318
VMware View Connection Server | 4 | 10 | 60 | Server 2012 64-bit | VMware View Connection Server 6.0.1 build 2088845
VMware View Composer Server | 4 | 10 | 60 | Server 2012 64-bit | VMware View Composer 6.0.1 build 2078421
VMware vCenter Operations Manager Analytics Server | 4 | 9 | 212 | SLES 11 64-bit | 3.5 build 2061132 (beta)
VMware vCenter Operations Manager UI Server | 4 | 7 | 132 | SLES 11 64-bit | 3.5 build 2061132
VMware View Planner Server | 2 | 4 | 60 | Server 2012 64-bit | 3.5 build 2061132

Host Configuration

Table 24 summarizes the host configuration.

Table 24. Host Configuration

Component | Value
CPU | Intel Xeon processor E5-2680 v2 at 2.80 GHz; hyperthreading enabled
RAM | 256 GB (16 x 16 GB)
NICs | Cisco UCS VIC 1225 converged network adapter (2 x 10-Gbps ports); firmware version 2.2(2c); driver version enic-1.4.2.15c
BIOS | C240M3.1.5.7.0.042820140452
Disks | 2 x 400-GB 2.5-inch enterprise performance SAS SSDs (1 SSD for linked clones and 2 SSDs for full clones); 12 x 900-GB 6-Gbps SAS 10,000-rpm drives (4 disks per host used for linked clones, and 12 disks per host used for full clones)
VMware ESXi version | VMware ESXi 5.5.0 build 2068190
Storage adapter | Firmware package version 23.12.0-0021; firmware version 3.240.95-2788; driver version 00.00.05.34-9vmw, build 2068190, interface 9.2


Bill of Materials

Table 25 provides the bill of materials for the reference architecture.

Table 25. Bill of Materials

Area | Component | Quantity
Host hardware | Cisco UCS C240 M3 | 8
Host hardware | Intel Xeon processor E5-2680 v2 at 2.80 GHz | 16
Host hardware | 16-GB DDR3 1600-MHz RDIMM, PC3-12800, dual rank | 128
Host hardware | LSI 9207-8i RAID controller | 8
Host hardware | Cisco VIC 1225 dual-port 10-Gbps SFP+ converged network adapter | 8
Host hardware | 16-GB SD card | 16
Host hardware | 400-GB 2.5-inch enterprise performance SAS SSD | 8 (for linked clones), 16 (for full clones)
Host hardware | 300-GB SAS 15,000-rpm 6-Gbps 2.5-inch drive; 900-GB SAS 10,000-rpm 6-Gbps 2.5-inch drive | 32 (for linked clones), 96 (for full clones)
Network switch | Cisco UCS 6248 Fabric Interconnect | 2
Network switch | Cisco Nexus 5548UP | 2
Software | VMware ESXi 5.5.0 build 2068190 | 8
Software | VMware vCenter Server 5.5.0, build 1623101 | 1
Software | VMware Horizon 6.0.1, build 2088845 | 1
Software | VMware vCenter Operations for View 1.5.1, build 1286478 | 1
Software | Microsoft Windows 2008 R2 | 4
Software | Microsoft SQL Server 2008 R2 | 1
Software | Microsoft SQL Server 2008 R2 | 4

Conclusion

Implementing VMware Horizon 6 with View and VMware Virtual SAN on Cisco UCS provides linear scalability with exceptional end-user performance and a simpler management experience, with Cisco UCS Manager centrally managing the infrastructure and VMware Virtual SAN integrated into VMware vSphere. This solution also provides cost-effective hosting for virtual desktop deployments of all sizes. The reference architecture demonstrates the following main points:

● Linear scalability is achieved with VMware Virtual SAN as the storage solution on Cisco UCS for hosting VMware View virtual desktops. The reference architecture successfully scaled from 400 desktops on four Cisco UCS C240 M3 nodes to 800 desktops on eight nodes, keeping all aspects of end-user performance consistently acceptable, with less than 15 ms of disk latency and 3-ms application response times.
● Optimal performance is achieved while performing all virtual desktop operations, such as refresh, recompose, deploy, power-on, and power-off operations. Times measured for these operations fall within industry-measured benchmarks and demonstrate the joint solution's scalability.
● VMware Virtual SAN provides highly available and resilient storage for hosting VMware View virtual desktops. The multiple maintenance and failure scenarios tested provide confidence in the resiliency of the joint solution.


For More Information

● VMware Virtual SAN Ready Nodes
● What’s New in VMware Virtual SAN
● Cisco FlexFlash: Use and Manage Cisco Flexible Flash Internal SD Card for Cisco UCS C-Series Standalone Rack Servers
● VMware Virtual SAN Compatibility Guide
● LSI
● Changing the Default Repair-Delay Time for a Host Failure in VMware Virtual SAN
● I/O Analyzer
● Ruby vSphere Console (RVC)
● Enabling or Capturing Performance Statistics Using VMware Virtual SAN Observer
● VMware View Optimization Guide for Microsoft Windows 7 and Windows 8
● VMware View Planner Installation and User’s Guide
● VMware Virtual SAN Quick Monitoring and Troubleshooting Reference Guide
● Cisco UCS C240 M3 High-Density Rack Server (SFF Disk-Drive Model) Specification Sheet
● Working with VMware Virtual SAN
● VMware Virtual SAN Ready System Recommended Configurations
● Enabling or Capturing Statistics Using VMware Virtual SAN Observer for VMware Virtual SAN Resources


Acknowledgements

The following individuals contributed to the creation of this paper:

● Balayya Kamanboina, Validation Test Engineer, VMware
● Bhumik Patel, Partner Architect, VMware
● Chris White, End User Computing Architect, VMware
● Hardik Patel, Technical Marketing Engineer, Cisco Systems
● Jim Yanik, End User Computing Architect, VMware
● Mike Brennan, Technical Marketing Manager, Cisco Systems
● Jon Catanzano, Senior Technical Writer/Editor, Consultant, VMware
● Nachiket Karmarkar, Performance Engineer, VMware

Printed in USA

© 2014 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.

C11-733480-00

12/14
