White Paper

Cisco Unified Computing System with VMware Horizon 6 with View and Virtual SAN Reference Architecture December 2014

© 2014 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.

Page 1 of 59

Contents

Executive Summary ......................................................... 3
Solution Overview ......................................................... 3
    Cisco Unified Computing System ........................................ 4
    VMware vSphere ........................................................ 7
    VMware Virtual SAN .................................................... 7
    VMware Horizon 6 with View ............................................ 8
System Configuration (Design) ............................................ 11
    Cisco UCS Configuration .............................................. 13
    VMware Virtual SAN Configuration ..................................... 18
    VMware Horizon with View Configuration ............................... 21
Test Results ............................................................. 23
    Test Summary ......................................................... 23
    Test 1: 400 VMware View Linked Clones on Four Cisco UCS C240 M3 Servers in a VMware Virtual SAN Cluster ... 24
    Test 2: 800 VMware View Linked Clones on Eight Cisco UCS C240 M3 Servers in a VMware Virtual SAN Cluster ... 28
    Test 3: 800 VMware View Full Clones on Eight Cisco UCS C240 M3 Servers on a VMware Virtual SAN Cluster ... 31
    Test 4: Mixed 400 VMware View Linked Clones and 400 Full Clones on Eight Cisco UCS C240 M3 Servers ... 35
    VMware View Operations Tests ......................................... 39
    VMware Virtual SAN Availability and Manageability Tests .............. 42
Test Methodology ......................................................... 51
    VMware View Planner 3.5 .............................................. 51
    VMware Virtual SAN Observer .......................................... 53
System Sizing ............................................................ 55
    Virtual Machine Test Image Builds .................................... 55
    Management Blocks .................................................... 56
    Host Configuration ................................................... 56
Bill of Materials ........................................................ 57
Conclusion ............................................................... 57
For More Information ..................................................... 58
Acknowledgements ......................................................... 59


Executive Summary

The reference architecture described in this document uses VMware Horizon 6 with View hosted on the Cisco Unified Computing System™ (Cisco UCS®) with VMware Virtual SAN as the hypervisor-converged storage solution. The purpose of this reference architecture is to provide guidance about the following aspects of deploying this joint solution:

● Scalability and performance results while hosting 800 VMware Horizon 6 with View virtual desktops using industry-standardized benchmarking of real-world workloads

● Design and implementation best practices covering Cisco UCS configurations, VMware Virtual SAN storage policies, and

6. Reboot the hosts to make the changes effective.

Service Profile Configuration

The main configurable parameters of a Cisco UCS service profile are summarized in Table 3.

Table 3. Service Profile Parameters

Server hardware:
● UUID: obtained from defined UUID pool
● MAC addresses: obtained from defined MAC address pool
● Worldwide port name (WWPN) and worldwide node name (WWNN): obtained from defined WWPN and WWNN pools
● Boot policy: boot path and order
● Disk policy: RAID configuration

Fabric:
● LAN: virtual NICs (vNICs), VLANs, and maximum transmission unit (MTU)
● SAN: virtual host bus adapters (vHBAs) and virtual SANs (VSANs)
● Quality-of-service (QoS) policy: class of service (CoS) for Ethernet uplink traffic

Operation:
● Firmware policy: current and backup versions
● BIOS policy: BIOS version and settings
● Statistics policy: system data collection
● Power-control policy: blade server power allotment

For Cisco UCS service profiles for hosts in a VMware Virtual SAN cluster, the policy configuration shown here is recommended. This configuration does not include all Cisco UCS service profile settings. The settings shown here are specific to an implementation of Cisco UCS with VMware Virtual SAN for VMware Horizon with View.


BIOS Policy

The BIOS policy configured for the VMware Virtual SAN environment is aimed at achieving high performance, as shown in the example in Figure 8 and in Table 4.

Figure 8. BIOS Policy Configuration for the VMware Virtual SAN Environment


Table 4. BIOS Policy Settings for the VMware Virtual SAN Environment

Processor:
● Turbo Boost = Enabled
● Enhanced Intel SpeedStep = Enabled
● Hyperthreading = Enabled
● Virtualization Technology (VT) = Enabled
● Direct Cache Access = Enabled
● CPU Performance = Enterprise
● Power Technology = Performance
● Energy Performance = Enterprise

Intel Directed IO:
● VT for Directed IO = Enabled

Memory:
● Memory RAS Config = Maximum Performance
● Low-Voltage DDR Mode = Performance Mode

Boot Policy

The boot policy is created with a Secure Digital (SD) card as the preferred boot option after the local CD or DVD boot option (Figure 9).

Figure 9. Boot Policy Configuration

Networking

VMware vSphere Distributed Switch (VDS) is configured for all hosts in the cluster. It allows virtual machines to maintain a consistent network configuration as they migrate across multiple hosts. A separate vNIC is created for each traffic type: virtual machine data, VMware Virtual SAN, VMware vMotion, and management. These vNICs are configured as separate vNIC templates in Cisco UCS and applied as part of the service profile (Table 5).


Table 5. vNIC Template Configuration

vNIC Template Name   Fabric ID                               Comments
VM-Data_A            Fabric A                                MTU = 9000; QoS policy VMData
VM-Data_B            Fabric B                                MTU = 9000; QoS policy VMData
Virtual SAN          Fabric A (with Enable Failover option)  MTU = 9000; QoS policy VSAN
vMotion              Fabric A (with Enable Failover option)  MTU = 9000; QoS policy vMotion
MGMT                 Fabric A (with Enable Failover option)  MTU = 9000; QoS policy MGMT

The network control policy is set to Cisco Discovery Protocol Enabled, and the dynamic vNIC connection policy is applied with an adapter policy of "VMware."

QoS Policies

Table 6 and Figure 10 show the QoS policy and QoS system-class mappings in Cisco UCS for the vNICs.

Table 6. QoS Policy Configuration

QoS Policy Name   Priority
VMData            Gold
Virtual SAN       Platinum
vMotion           Silver
MGMT              Bronze

Figure 10. QoS System-Class Configuration

VLANs

A dedicated VLAN is recommended for the VMware Virtual SAN VMkernel NIC, and multicast is required in the Layer 2 domain. This setting is configured as part of the VLAN as a multicast policy with snooping enabled. The following VLANs were created:

● VLAN for virtual desktops: a /22 subnet with 1022 IP addresses to accommodate all 800 virtual desktops

● VLAN for VMware Virtual SAN: a /28 subnet with 14 IP addresses to accommodate 8 hosts

● VLAN for management components: a /24 subnet with 254 IP addresses to accommodate all management components, plus the VMware View Planner desktops for running the test workflows
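The usable host counts quoted for these subnets follow directly from their prefix lengths. A quick sketch (the helper name is illustrative, not from the document):

```python
def usable_hosts(prefix_length: int) -> int:
    """Usable host addresses in an IPv4 subnet: two addresses
    (network and broadcast) are reserved per subnet."""
    return 2 ** (32 - prefix_length) - 2

# The three VLAN subnets used in this design:
print(usable_hosts(22))  # /22 for virtual desktops -> 1022
print(usable_hosts(28))  # /28 for Virtual SAN      -> 14
print(usable_hosts(24))  # /24 for management       -> 254
```

The /22 choice leaves headroom beyond the 800 desktops, while the /28 exactly matches the small, dedicated Virtual SAN VMkernel network.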


VMware Virtual SAN Configuration

VMware Virtual SAN is a VMware ESXi cluster-level feature that is configured using the VMware vSphere Web Client. The first step in enabling VMware Virtual SAN is to select one of the two modes of disk-group creation:

● Automatic: VMware Virtual SAN discovers all the local disks on the hosts and automatically adds them to the VMware Virtual SAN data store.

● Manual: The administrator manually selects the disks to add to the VMware Virtual SAN shared data store.

In this setup, disk groups were created manually, and the storage policies listed in Table 7 were applied based on whether the VMware Virtual SAN configuration is for linked clones or full clones. These storage policies are tied to the storage requirements of each virtual machine and are used to provide different levels of availability and performance for virtual machines. Important: Use different policies for different types of virtual machines in the same cluster to meet application requirements.

Table 7. Storage Policies for VMware View

Policy: Number of disk stripes per object
Definition: Defines the number of magnetic disks across which each replica of a storage object is distributed
Default (Value Applied): 1
Maximum: 12

Policy: Flash-memory read cache reservation
Definition: Defines the flash memory capacity reserved as the read cache for the storage object
Default (Value Applied): 0%
Maximum: 100%

Policy: Number of failures to tolerate
Definition: Defines the number of host, disk, and network failures that a storage object can tolerate. For n failures tolerated, n + 1 copies of the object are created, and 2n + 1 hosts contributing storage are required.
Default (Value Applied): 0 (linked clone); 1 (full clone and replicas)
Maximum: 3 (in 8-host cluster)

Policy: Forced provisioning
Definition: Determines whether the object is provisioned even when currently available resources do not meet the virtual machine storage policy requirements
Default (Value Applied): Disabled
Maximum: Enabled

Policy: Object-space reservation
Definition: Defines the percentage of the logical size of the storage object that is reserved (thick provisioned) upon virtual machine provisioning; the remainder of the storage object is thin provisioned
Default (Value Applied): 0%
Maximum: 100%

Default storage policy values are configured for linked clones, full clones, and replicas.
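The replica and host-count rule for the number-of-failures-to-tolerate policy can be expressed directly (a minimal sketch; the function name is illustrative):

```python
def vsan_ftt_requirements(failures_to_tolerate: int) -> tuple[int, int]:
    """For FTT = n, Virtual SAN creates n + 1 copies of each storage
    object and needs 2n + 1 hosts contributing storage (the extra
    hosts hold witness components for quorum)."""
    n = failures_to_tolerate
    return n + 1, 2 * n + 1

# FTT = 1, as applied to full clones and replicas: 2 copies across 3 hosts
print(vsan_ftt_requirements(1))  # -> (2, 3)
# FTT = 3, the maximum in this 8-host cluster: 4 copies across 7 hosts
print(vsan_ftt_requirements(3))  # -> (4, 7)
```

This is why the table caps FTT at 3 for an 8-host cluster: FTT = 4 would require 9 hosts of contributing storage.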


VMware View Configuration

VMware Virtual SAN integrates with the VMware View pod and block design methodology, which consists of the following components:

● VMware View Connection Server: A VMware View Connection Server supports up to 2000 concurrent connections. The tests used two VMware View Connection Servers operating in active-active mode; both servers actively broker and, if needed, tunnel connections.

● VMware View block: VMware View provisions and manages desktops through VMware vCenter Server. Each VMware vCenter instance supports up to 10,000 virtual desktops. The tests used one VMware vCenter and one VMware Virtual SAN cluster with eight hosts. Note that the maximum number of VMware High Availability protected virtual machines allowed in a VMware vSphere cluster is 2048 per data store.

● VMware View management block: A separate VMware vSphere cluster was used for the management servers to isolate the volatile desktop workload from the static server workload. For larger deployments, a dedicated VMware vCenter Server for the management and VMware View blocks is recommended.
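The 2000-concurrent-connections limit per Connection Server drives the server count. A sizing sketch, assuming that limit and simple N+1 redundancy (the function name and defaults are illustrative):

```python
import math

def connection_servers_needed(users: int, per_server_limit: int = 2000,
                              spares: int = 1) -> int:
    """Minimum View Connection Servers for a concurrent-user count,
    plus N+1 spare capacity for failure tolerance."""
    return math.ceil(users / per_server_limit) + spares

# 800 concurrent desktops with N+1 redundancy -> 2 servers, which
# matches the two active-active Connection Servers used in these tests.
print(connection_servers_needed(800))  # -> 2
```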

VMware vSphere Clusters

Two VMware Virtual SAN clusters were used in the environment:

● An 8-node VMware Virtual SAN cluster was deployed to support 800 virtual desktops, as shown in Figure 11 and Table 8.

● A 4-node VMware Virtual SAN cluster was deployed to support the infrastructure, management, and VMware View Planner virtual machines used for scalability testing.

Figure 11. VMware View Running on VMware Virtual SAN Using Cisco UCS


Table 8. VMware Virtual SAN Cluster Configuration

Cluster feature: VMware vSphere High Availability
● HA: Enabled (revised)
● Host Monitoring Status: Enabled (default)
● Admission Control: Enabled (default)
● Admission Control Policy: Host failures the cluster tolerates = 1 (default)
● Virtual Machine Options > Virtual Machine Restart Priority: Medium (default)
● Virtual Machine Options > Host Isolation Response: Leave powered on (default)
● Virtual Machine Monitoring: Disabled (default)
● Data Store Heartbeating: Select any, taking into account my preferences (no data store preferred) (default)

Cluster feature: VMware vSphere DRS
● DRS: Enabled (revised)
● Automation Level: Fully automated (apply priority 1, 2, and 3 recommendations) (default)
● DRS Groups Manager: not configured
● Rules: not configured
● Virtual Machine Options: not configured
● Power Management: Off (default)
● Host Options: Default (disabled)

Other cluster settings:
● Enhanced vMotion Compatibility: Disabled (default)
● Swap-file location: Store in the same directory as the virtual machine (default)

Properties regarding security, traffic shaping, and NIC teaming can be defined for a port group. The settings used with the port group design are shown in Table 9.

Table 9. Port Group Properties: VMware dvSwitch v5.5

General:
● Port Binding: Static (default)

Policies: Security
● Promiscuous Mode: Reject (default)
● MAC Address Changes: Accept (default); revised to Reject
● Forged Transmits: Accept (default); revised to Reject

Policies: Traffic Shaping
● Status: Disabled (default)

Policies: Teaming and Failover
● Load Balancing: Route based on the originating virtual port ID (default)
● Failover Detection: Link status only (default)
● Notify Switches: Yes (default)

Policies: Resource Allocation
● Network I/O Control: Disabled (default); revised to Enabled

Advanced:
● Maximum MTU: 1500 (default); revised to 9000


VMware Horizon with View Configuration

The VMware Horizon with View installation included the following core systems:

● Two connection servers (N+1 recommended for production)

● One VMware vCenter Server with the following roles:
◦ VMware vCenter
◦ VMware vCenter single sign-on (SSO)
◦ VMware vCenter inventory service

● VMware View Composer

Note that VMware View security servers were not used during this testing.

VMware View Global Policies

The VMware View global policy settings used for all system tests are shown in Table 10.

Table 10. VMware View Global Policies

Policy                         Setting
USB access                     Allow
Multimedia redirection (MMR)   Allow
Remote mode                    Allow
PCoIP hardware acceleration    Allow: Medium priority

VMware View Manager Global Settings

The VMware View Manager global policy settings that were used are shown in Table 11.

Table 11. VMware View Manager Global Settings

Attribute                                                             Specification
Session timeout                                                       600 minutes (10 hours)
VMware View Administrator session timeout                             30 minutes
Auto-update                                                           Enabled
Display prelogin message                                              No
Display warning before logout                                         No
Reauthenticate secure tunnel connections after network interruption   No
Enable IP Security (IPsec) for security server pairing                Yes
Message security mode                                                 Enabled
Disable single sign-on for local-mode operations                      No


VMware vCenter Server Settings

VMware View Connection Server uses VMware vCenter Server to provision and manage VMware View desktops. VMware vCenter Server is configured in VMware View Manager as shown in Table 12.

Table 12. VMware View Manager: VMware vCenter Server Configuration

Attribute                     Setting                                                     Specification
Connect using SSL             vCenter Server Settings > SSL                               Yes
VMware vCenter port           vCenter Server Settings > Port                              443
VMware View Composer port     View Composer Server Settings > Port                        18443
Enable VMware View Composer   View Composer Server Settings > Co-Installed                Yes
Advanced settings             Maximum Concurrent vCenter Provisioning Operations          20
                              Maximum Concurrent Power Operations                         50
                              Maximum Concurrent View Composer Maintenance Operations     12
                              Maximum Concurrent View Composer Provisioning Operations    12
Storage settings              Enable View Storage Accelerator
                              Default Host Cache Size                                     2048 MB

VMware View Manager Pool Settings

The VMware View Manager pool settings were configured as shown in Tables 13 and 14.

Table 13. VMware View Manager: VMware View Manager Pool Configuration

Attribute                                Specification
Pool type                                Automated Pool
User assignment                          Floating
Pool definition: VMware vCenter Server   Linked Clones
Pool ID                                  Desktops
Display name                             Desktops
VMware View folder                       /
Remote desktop power policy              Take no power action
Auto logoff time                         Never
User reset allowed                       False
Multi-session allowed                    False
Delete on logoff                         Never
Display protocol                         PCoIP
Allow protocol override                  False
Maximum number of monitors               1
Max resolution                           1920 x 1200
HTML access                              Not selected
Flash quality level                      Do not control
Flash throttling level                   Disabled
Enable provisioning                      Enabled
Stop provisioning on error               Enabled
Provision all desktops upfront           Enabled


Table 14. VMware View Manager: Test Pool Configuration

Attribute                                         Specification
Disposable file redirection                       Do not redirect
Select separate data stores for replica and OS    Not selected
Data stores: Storage overcommit                   Conservative
Use VMware View storage accelerator               Selected
Reclaim virtual machine disk space*
Disk types                                        OS disks
Regenerate storage accelerator after              7 days
Reclaim virtual machine disk space
Use Quickprep                                     Enabled

* VMware Virtual SAN does not support the space-efficient (SE) sparse disk format.

Test Results

VMware View running on VMware Virtual SAN on the Cisco UCS reference architecture was tested based on real-world test scenarios, user workloads, and infrastructure system configurations. The tests performed included the following configurations:

● Test 1: 400 VMware View linked clones on four Cisco UCS C240 M3 servers in a VMware Virtual SAN cluster

● Test 2: 800 VMware View linked clones on eight Cisco UCS C240 M3 servers in a VMware Virtual SAN cluster

● Test 3: 800 VMware View full clones on eight Cisco UCS C240 M3 servers in a VMware Virtual SAN cluster

● Test 4: Mixed 400 VMware View linked clones and 400 full clones on eight Cisco UCS C240 M3 servers

● VMware View operations tests

● VMware Virtual SAN availability and manageability tests

All of these tests and their results are summarized in the sections that follow.

Test Summary

VMware View Planner is a VDI workload generator that automates and measures a typical office worker's activity: use of Microsoft Office applications, web browsing, reading a PDF file, watching a video, and so on. The operations generated include opening a file, browsing the web, modifying files, saving files, closing files, and more. Each VMware View Planner operation runs iteratively, and each iteration is a randomly sequenced workload consisting of these applications and operations. The results of a test run consist of latency statistics collected for these applications and operations across all iterations. In addition to VMware View Planner scores, VMware Virtual SAN Observer and VMware vCenter Operations Manager for Horizon are used as monitoring tools. For more information about the applications used for this testing, see the Test Methodology section later in this document.


Test 1: 400 VMware View Linked Clones on Four Cisco UCS C240 M3 Servers in a VMware Virtual SAN Cluster

VMware View Planner tests were run on 400 linked clones on four hosts with exceptional user performance, as represented by the VMware View Planner score and latency values. In the VMware View Planner results, QoS is determined for multiple types of applications categorized as Group A, Group B, and Group C user operations:

● Group A applications are interactive, fast-running operations that are CPU bound: browsing a PDF file, modifying a Microsoft Word document, and so on.

● Group B applications are long-running, slow operations that are I/O bound: opening a large document, saving a Microsoft PowerPoint file, and so on.

● Group C consists of background load operations used to generate additional load during testing. These operations are not used to determine QoS and therefore have no latency thresholds.

The default thresholds are 1.0 second for Group A and 6.0 seconds for Group B. The test results in Figure 12 show that the latency values at the 95th percentile for the applications in each group are lower than the required threshold. These results correspond to expected end-user performance while 400 linked clones run on four hosts.

Figure 12. VMware View Planner Score: 400 Linked Clones
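The 95th-percentile pass/fail check described above can be sketched as follows. This is a simplified illustration using the nearest-rank percentile; View Planner's exact percentile method is not documented here, and the sample data is hypothetical:

```python
import math

def percentile_95(latencies):
    """95th-percentile latency using the nearest-rank method."""
    ordered = sorted(latencies)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

def meets_qos(latencies, threshold_seconds):
    """True when the 95th-percentile latency stays within the threshold."""
    return percentile_95(latencies) <= threshold_seconds

# Hypothetical Group A sample checked against the 1.0-second threshold:
sample = [0.2, 0.3, 0.25, 0.4, 0.35, 0.5, 0.45, 0.3, 0.6, 0.9]
print(meets_qos(sample, 1.0))  # -> True
```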

Test result highlights include:

● Average of 85 percent CPU utilization
● Average of up to 85 GB of RAM used out of 256 GB available
● Average of 16.02 MBps of network bandwidth used
● Average of 13.447 ms of I/O latency per host
● Average of 1983 I/O operations per second (IOPS) per host

The specific latency values for all the applications are shown in Table 15.

Table 15. Application Latency Values: 400 Linked Clones

Event                     Group   Count   Mean (s)    Median (s)   Coefficient of Variation
7zip-Compress             C        1197   3.822986    3.530973     0.313
AdobeReader-Browse        A       23940   0.238951    0.196896     0.813
AdobeReader-Close         A        1197   0.766411    0.750155     0.053
AdobeReader-Maximize      A        2394   0.699528    0.766001     0.219
AdobeReader-Minimize      A        1197   0.312196    0.296619     0.204
AdobeReader-Open          B        1197   0.712551    0.582646     1.001
Excel Sort-Close          A        1197   0.307458    0.192492     1.048
Excel Sort-Compute        A       31122   0.025334    0.023062     0.426
Excel Sort-Entry          A       31122   0.179093    0.147062     0.860
Excel Sort-Maximize       A        3591   0.365166    0.323991     0.316
Excel Sort-Minimize       A        1197   0.000692    0.000657     0.678
Excel Sort-Open           B        1197   0.593777    0.515999     0.624
Excel Sort-Save           B        1197   0.578326    0.513369     0.394
Firefox-Close             A        1197   0.52622     0.513906     0.052
Firefox-Open              B        1197   1.037588    0.84357      0.805
IE ApacheDoc-Browse       A       65835   0.085855    0.068178     2.397
IE ApacheDoc-Close        A        1197   0.005479    0.001636     8.362
IE ApacheDoc-Open         B        1197   0.882902    0.468084     3.336
IE WebAlbum-Browse        A       17955   0.26255     0.159749     2.395
IE WebAlbum-Close         A        1197   0.007337    0.001726     9.868
IE WebAlbum-Open          B        1197   0.870285    0.480918     3.008
Outlook-Attachment-Save   B        5985   0.076468    0.056133     2.510
Outlook-Close             A        1197   0.619196    0.554815     0.403
Outlook-Open              B        1197   0.777402    0.703031     0.385
Outlook-Read              A       11970   0.323953    0.209812     1.951
Outlook-Restore           C       13167   0.386632    0.375205     0.594
PPTx-AppendSlides         A        4788   0.083413    0.064426     0.823
PPTx-Close                A        1197   0.548461    0.492398     0.547
PPTx-Maximize             A        4788   0.00122     0.000728     7.175
PPTx-Minimize             A        2394   0.000684    0.000616     1.263
PPTx-ModifySlides         A        4788   0.304398    0.268314     0.661
PPTx-Open                 B        1197   3.062735    3.031899     0.117
PPTx-RunSlideShow         A        8379   0.341099    0.528672     0.484
PPTx-SaveAs               C        1197   3.818085    2.91416      1.148
Video-Close               A        1197   0.069317    0.038364     1.822
Video-Open                B        1197   0.155579    0.048608     7.257
Video-Play                C        1197   50.511642   50.434445    0.005
Word-Close                A        1197   0.572719    0.602094     0.307
Word-Maximize             A        3591   0.323592    0.263979     0.378
Word-Minimize             A        1197   0.000679    0.000621     2.133
Word-Modify               A       25137   0.056807    0.059311     0.434
Word-Open                 B        1197   4.213084    3.775295     0.608
Word-Save                 B        1197   3.44489     3.354615     0.215
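The summary statistics in these latency tables can be reproduced from raw per-operation samples. A sketch using the standard library (whether View Planner uses the population or sample standard deviation is not stated, so population is assumed here; the sample values are hypothetical):

```python
import statistics

def summarize(samples):
    """Mean, median, and coefficient of variation (stdev / mean),
    the three statistics reported per event in the latency tables."""
    mean = statistics.fmean(samples)
    return {
        "mean": mean,
        "median": statistics.median(samples),
        "cov": statistics.pstdev(samples) / mean,
    }

stats = summarize([2.0, 4.0, 4.0, 6.0])
print(stats["mean"])   # -> 4.0
print(stats["cov"])    # -> ~0.354 (population stdev ~1.414 over mean 4.0)
```

A high coefficient of variation (such as the 8 to 11 seen for the IE close events) signals occasional outliers rather than uniformly slow operations, since the medians for those events remain in the low milliseconds.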

The host utilization metrics for CPU, memory, network, and disk I/O values that were obtained while running the test are shown in Figures 13, 14, and 15. All hosts had similar utilization on average while hosting 100 virtual desktops each.


Figure 13. Host CPU Utilization from VMware View Planner: 400 Linked Clones, Average CPU Use in Percent

Figure 14. Host Memory Utilization from VMware View Planner: 400 Linked Clones, Average Memory Use in GB

Note that the Y-axis memory (average) value in gigabytes ranges up to 90 GB out of the 256 GB available on the host.


Figure 15. Network Utilization from VMware View Planner: 400 Linked Clones, Average Network Use

Disk latency values shown in Figure 16 are obtained from VMware Virtual SAN Observer. Average read and write latency is 14 ms for the host shown in the figure and averages 13.44 ms across all hosts. These values, below the target threshold of 20 ms, correlate with the low application response times measured by VMware View Planner and with the overall positive end-user experience. In these tests, an average of 1983 IOPS is generated per host. This value is well below the maximum IOPS capacity of similar VMware Virtual SAN systems based on the Cisco UCS C240 M3, as detailed in the document VMware Virtual SAN with Cisco Unified Computing System Reference Architecture.

Figure 16. Host IOPS and Latency Graph from VMware Virtual SAN Observer: 400 Linked Clones


Test 2: 800 VMware View Linked Clones on Eight Cisco UCS C240 M3 Servers in a VMware Virtual SAN Cluster

VMware View Planner tests were run on 800 linked clones on eight hosts with exceptional user performance. The tests demonstrated linear scalability from 400 desktops on four nodes to 800 desktops on eight nodes. Test result highlights include:

● Average of 80 to 85 percent CPU utilization
● Average of up to 82 GB of RAM used out of 256 GB available
● Average of 17.12 MBps of network bandwidth used
● Average of 14.966 ms of I/O latency per host
● Average of 1616 IOPS per host

The QoS summary, application response times, and host utilization values are shown in Figure 17 and Table 16.

Figure 17. VMware View Planner Score: 800 Linked Clones

Table 16. Application Latency Values: 800 Linked Clones

Event                     Group   Count    Mean (s)    Median (s)   Coefficient of Variation
7zip-Compress             C         2391   3.354891    3.158507     0.303
AdobeReader-Browse        A        47820   0.223914    0.183528     0.822
AdobeReader-Close         A         2391   0.763632    0.750128     0.048
AdobeReader-Maximize      A         4782   0.697711    0.762164     0.218
AdobeReader-Minimize      A         2391   0.313204    0.301153     0.198
AdobeReader-Open          B         2391   0.665166    0.548225     1.013
Excel Sort-Close          A         2391   0.290767    0.186278     1.020
Excel Sort-Compute        A        62166   0.024431    0.022728     0.386
Excel Sort-Entry          A        62166   0.165498    0.140708     0.777
Excel Sort-Maximize       A         7173   0.364276    0.320248     0.321
Excel Sort-Minimize       A         2391   0.000661    0.000627     0.416
Excel Sort-Open           B         2391   0.548292    0.489547     0.588
Excel Sort-Save           B         2391   0.543247    0.484791     0.392
Firefox-Close             A         2391   0.526084    0.514294     0.052
Firefox-Open              B         2391   0.973367    0.785845     0.872
IE ApacheDoc-Browse       A       131505   0.082216    0.063011     2.456
IE ApacheDoc-Close        A         2391   0.005364    0.001548     8.607
IE ApacheDoc-Open         B         2391   0.782738    0.431805     3.245
IE WebAlbum-Browse        A        35865   0.250286    0.152367     2.460
IE WebAlbum-Close         A         2391   0.007503    0.001622     11.315
IE WebAlbum-Open          B         2391   0.805998    0.446739     2.963
Outlook-Attachment-Save   B        11955   0.068486    0.053753     2.305
Outlook-Close             A         2391   0.616925    0.554705     0.396
Outlook-Open              B         2391   0.735236    0.676026     0.336
Outlook-Read              A        23910   0.297843    0.199803     1.739
Outlook-Restore           C        26301   0.346654    0.340861     0.590
PPTx-AppendSlides         A         9564   0.078069    0.062656     0.763
PPTx-Close                A         2391   0.518743    0.461373     0.530
PPTx-Maximize             A         9564   0.001144    0.000679     7.695
PPTx-Minimize             A         4782   0.00062     0.000579     0.796
PPTx-ModifySlides         A         9564   0.291094    0.255203     0.686
PPTx-Open                 B         2391   2.813034    2.8045       0.135
PPTx-RunSlideShow         A        16737   0.337466    0.527942     0.484
PPTx-SaveAs               C         2391   3.567793    2.791217     1.084
Video-Close               A         2391   0.067433    0.03201      2.166
Video-Open                B         2391   0.145677    0.045696     7.455
Video-Play                C         2391   50.486084   50.421127    0.005
Word-Close                A         2391   0.551263    0.585316     0.312
Word-Maximize             A         7173   0.321871    0.261876     0.379
Word-Minimize             A         2391   0.000609    0.000584     0.363
Word-Modify               A        50211   0.05865     0.065478     0.398
Word-Open                 B         2391   3.889717    3.485008     0.578
Word-Save                 B         2391   3.198789    3.167595     0.194

The host utilization metrics for CPU, memory, network, and disk I/O obtained while running the test are shown in Figures 18, 19, and 20.

Figure 18. Host CPU Utilization from VMware View Planner: 800 Linked Clones, Average CPU Use in Percent


Figure 19. Host Memory Utilization from VMware View Planner: 800 Linked Clones, Average Memory Use in GB

Note that the Y-axis memory (average) value in gigabytes ranges up to 82 GB out of 256 GB available on the host.

Figure 20. Network Utilization from VMware View Planner: 800 Linked Clones, Average Network Use

Disk latency values are obtained from VMware Virtual SAN Observer, as shown in Figure 21. Combined average read and write latency is measured as 16 ms on one of the hosts shown here, and the average across all hosts is 14.96 ms. In these tests, an average of 1616 IOPS is generated per host.


Figure 21. Host IOPS and Latency Graph from VMware Virtual SAN Observer: 800 Linked Clones

Test 3: 800 VMware View Full Clones on Eight Cisco UCS C240 M3 Servers on a VMware Virtual SAN Cluster

In addition to the testing for linked clones, 800 full clones were tested with higher virtual machine specifications of two vCPUs and 40 GB of disk space to mimic higher desktop resources allocated to full dedicated desktops. The results show the QoS summary, application response times, and host utilization values. Test result highlights include:

● Average of 80 to 85 percent CPU utilization
● Average of up to 84 GB of RAM used out of 256 GB available
● Average of 13.13 MBps of network bandwidth used
● Average of 13.995 ms of I/O latency per host
● Average of 1087.87 IOPS


The QoS summary, application response times, and host utilization values are described in Figure 22 and Table 17.

Figure 22. VMware View Planner Score: 800 Full Clones

Table 17. Application Latency Values: 800 Full Clones

Event | Group | Count | Mean | Median | Coefficient of Variation
7zip-Compress | C | 2388 | 3.827199 | 3.525832 | 0.377
AdobeReader-Browse | A | 47760 | 0.243642 | 0.200829 | 0.817
AdobeReader-Close | A | 2388 | 0.76643 | 0.750171 | 0.058
AdobeReader-Maximize | A | 4776 | 0.706106 | 0.766201 | 0.228
AdobeReader-Minimize | A | 2388 | 0.313208 | 0.294657 | 0.211
AdobeReader-Open | B | 2388 | 0.718087 | 0.577403 | 1.042
Excel Sort-Close | A | 2388 | 0.335683 | 0.229137 | 0.927
Excel Sort-Compute | A | 62088 | 0.026431 | 0.02438 | 0.511
Excel Sort-Entry | A | 62088 | 0.184258 | 0.151464 | 0.901
Excel Sort-Maximize | A | 7164 | 0.36963 | 0.330758 | 0.313
Excel Sort-Minimize | A | 2388 | 0.000745 | 0.000662 | 3.522
Excel Sort-Open | B | 2388 | 0.610323 | 0.531417 | 0.636
Excel Sort-Save | B | 2388 | 0.61182 | 0.548862 | 0.380
Firefox-Close | A | 2388 | 0.528206 | 0.514102 | 0.079
Firefox-Open | B | 2388 | 1.070024 | 0.835468 | 0.972
IE ApacheDoc-Browse | A | 131340 | 0.088938 | 0.069274 | 2.385
IE ApacheDoc-Close | A | 2388 | 0.00579 | 0.001658 | 8.314
IE ApacheDoc-Open | B | 2388 | 0.889725 | 0.477459 | 3.388
IE WebAlbum-Browse | A | 35820 | 0.270266 | 0.162112 | 2.474
IE WebAlbum-Close | A | 2388 | 0.007759 | 0.001714 | 10.623
IE WebAlbum-Open | B | 2388 | 0.872419 | 0.484339 | 2.904
Outlook-Attachment-Save | B | 11940 | 0.075901 | 0.057302 | 2.110
Outlook-Close | A | 2388 | 0.685793 | 0.615624 | 0.394
Outlook-Open | B | 2388 | 0.777585 | 0.699325 | 0.409
Outlook-Read | A | 23880 | 0.333419 | 0.216472 | 1.990
Outlook-Restore | C | 26268 | 0.388123 | 0.368232 | 0.655
PPTx-AppendSlides | A | 9552 | 0.085008 | 0.06647 | 0.905
PPTx-Close | A | 2388 | 0.562465 | 0.503776 | 0.535
PPTx-Maximize | A | 9552 | 0.00135 | 0.000718 | 13.038
PPTx-Minimize | A | 4776 | 0.000738 | 0.000613 | 6.706
PPTx-ModifySlides | A | 9552 | 0.308703 | 0.269817 | 0.661
PPTx-Open | B | 2388 | 3.07551 | 3.009046 | 0.147
PPTx-RunSlideShow | A | 16716 | 0.341095 | 0.528568 | 0.484
PPTx-SaveAs | C | 2388 | 4.132029 | 3.131973 | 1.178
Video-Close | A | 2388 | 0.073594 | 0.037658 | 2.037
Video-Open | B | 2388 | 0.15297 | 0.049374 | 6.926
Video-Play | C | 2388 | 50.660026 | 50.456134 | 0.014
Word-Close | A | 2388 | 0.569959 | 0.597739 | 0.319
Word-Maximize | A | 7164 | 0.327242 | 0.265223 | 0.383
Word-Minimize | A | 2388 | 0.000671 | 0.00061 | 1.571
Word-Modify | A | 50148 | 0.057318 | 0.059561 | 0.461
Word-Open | B | 2388 | 4.293781 | 3.73521 | 0.670
Word-Save | B | 2388 | 3.635548 | 3.527526 | 0.238

The host utilization metrics for CPU, memory, network, and disk I/O values that were obtained while running the test are shown in Figures 23, 24, and 25.

Figure 23. Host CPU Utilization from VMware View Planner: 800 Full Clones, Average CPU Use in Percent


Figure 24. Host Memory Utilization from VMware View Planner: 800 Full Clones, Average Memory Use in GB

Note that the Y-axis memory (average) value in gigabytes ranges up to 84 GB out of 256 GB available on the host.

Figure 25. Network Utilization from VMware View Planner: 800 Full Clones, Average Network Use

Disk latency values are obtained from VMware Virtual SAN Observer, as shown in Figure 26. Combined average read and write latency is measured as 18 ms on one of the hosts shown here, and the average across all hosts is 13.99 ms. In these tests, an average of 1087.87 IOPS is generated per host.


Figure 26. Host IOPS and Latency Graph from VMware Virtual SAN Observer: 800 Full Clones

Test 4: Mixed 400 VMware View Linked Clones and 400 Full Clones on Eight Cisco UCS C240 M3 Servers

To simulate a production environment, which would typically have a mix of linked clones and full clones, a test with 400 linked clones and 400 full clones was conducted on eight nodes. For this testing, all eight nodes were made available for provisioning linked clones and full clones. In other words, the linked clones and full clones were distributed across the entire cluster. Test result highlights include:

● Average of 80 to 85 percent CPU utilization
● Average of 80 to 85 GB of RAM used out of 256 GB available
● Average of 11.05 MBps of network bandwidth used
● Average of 7.80 ms of I/O latency
● Average of 1043.37 IOPS


Figure 27 and Table 18 show the values.

Figure 27. VMware View Planner Score: 400 Linked Clones and 400 Full Clones

Table 18. Application Latency Values: 400 Linked Clones and 400 Full Clones

Event | Group | Count | Mean | Median | Coefficient of Variation
7zip-Compress | C | 2391 | 3.837502 | 3.609511 | 0.299
AdobeReader-Browse | A | 47820 | 0.237904 | 0.199534 | 0.776
AdobeReader-Close | A | 2391 | 0.76634 | 0.750187 | 0.054
AdobeReader-Maximize | A | 4782 | 0.705092 | 0.765475 | 0.222
AdobeReader-Minimize | A | 2391 | 0.313736 | 0.298537 | 0.206
AdobeReader-Open | B | 2391 | 0.73929 | 0.601873 | 0.978
Excel Sort-Close | A | 2391 | 0.326697 | 0.220315 | 0.948
Excel Sort-Compute | A | 62166 | 0.026193 | 0.024479 | 0.390
Excel Sort-Entry | A | 62166 | 0.180245 | 0.152002 | 0.756
Excel Sort-Maximize | A | 7173 | 0.369944 | 0.334935 | 0.309
Excel Sort-Minimize | A | 2391 | 0.000716 | 0.000687 | 0.462
Excel Sort-Open | B | 2391 | 0.616223 | 0.543731 | 0.574
Excel Sort-Save | B | 2391 | 0.616912 | 0.544978 | 0.399
Firefox-Close | A | 2391 | 0.526329 | 0.51306 | 0.056
Firefox-Open | B | 2391 | 1.035522 | 0.841609 | 0.820
IE ApacheDoc-Browse | A | 131340 | 0.088842 | 0.069451 | 2.349
IE ApacheDoc-Close | A | 2388 | 0.005519 | 0.001686 | 7.620
IE ApacheDoc-Open | B | 2388 | 0.909516 | 0.496354 | 3.157
IE WebAlbum-Browse | A | 35865 | 0.267615 | 0.162217 | 2.419
IE WebAlbum-Close | A | 2391 | 0.007523 | 0.001767 | 9.844
IE WebAlbum-Open | B | 2391 | 0.889531 | 0.513018 | 2.684
Outlook-Attachment-Save | B | 11955 | 0.07668 | 0.057119 | 2.535
Outlook-Close | A | 2391 | 0.686446 | 0.616239 | 0.381
Outlook-Open | B | 2391 | 0.763189 | 0.69918 | 0.337
Outlook-Read | A | 23910 | 0.334432 | 0.213805 | 2.002
Outlook-Restore | C | 26301 | 0.419161 | 0.404693 | 0.596
PPTx-AppendSlides | A | 9564 | 0.083011 | 0.066234 | 0.775
PPTx-Close | A | 2391 | 0.558762 | 0.507701 | 0.492
PPTx-Maximize | A | 9564 | 0.001278 | 0.000723 | 7.788
PPTx-Minimize | A | 4782 | 0.000684 | 0.000624 | 0.681
PPTx-ModifySlides | A | 9564 | 0.30651 | 0.268399 | 0.658
PPTx-Open | B | 2391 | 3.094825 | 3.05699 | 0.126
PPTx-RunSlideShow | A | 16737 | 0.340805 | 0.528658 | 0.483
PPTx-SaveAs | C | 2391 | 3.937301 | 3.066142 | 1.046
Video-Close | A | 2391 | 0.073392 | 0.038045 | 1.943
Video-Open | B | 2391 | 0.145744 | 0.048822 | 6.829
Video-Play | C | 2391 | 50.537753 | 50.442256 | 0.007
Word-Close | A | 2391 | 0.56236 | 0.591607 | 0.314
Word-Maximize | A | 7173 | 0.325756 | 0.265219 | 0.377
Word-Minimize | A | 2391 | 0.000666 | 0.000622 | 0.956
Word-Modify | A | 50211 | 0.058245 | 0.062483 | 0.431
Word-Open | B | 2391 | 4.33827 | 3.819501 | 0.627
Word-Save | B | 2391 | 3.620773 | 3.54622 | 0.191

The host utilization metrics for CPU, memory, network, and disk I/O values obtained while running the test are shown in Figures 28, 29, and 30.

Figure 28. Host CPU Utilization from VMware View Planner: 400 Linked Clones and 400 Full Clones, Average CPU Use in Percent


Figure 29. Host Memory Utilization from VMware View Planner: 400 Linked Clones and 400 Full Clones, Average Memory Use in GB

Note that the Y-axis memory (average) value in gigabytes ranges up to 88 GB out of 256 GB available on the host.

Figure 30. Network Utilization from VMware View Planner: 400 Linked Clones and 400 Full Clones, Average Network Use

Disk latency values are obtained from VMware Virtual SAN Observer, as shown in Figures 31 and 32. Combined average read and write latency is measured as 8 ms on one of the hosts shown here, and the average across all hosts is 7.80 ms. In these tests, an average of 1043.37 IOPS is generated.


Figure 31. Host IOPS and Latency Graph from VMware Virtual SAN Observer: 400 Linked Clones and 400 Full Clones

Figure 32. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: 400 Linked Clones and 400 Full Clones

VMware View Operations Tests

In addition to running VMware View Planner tests, VMware View operations tests were conducted to measure the effect of these administrative tasks on the environment, as shown in Table 19.

Table 19. VMware View on Cisco UCS C240 M3: Operations Test Results

Details | 400 Linked Clones | 800 Linked Clones | 800 Full Clones | Mixed (400 Linked Clones and 400 Full Clones)
Hosts | 4 | 8 | 8 | 8
VMware Virtual SAN disk groups | Single disk group per host: 1 SSD and 4 HDDs | Single disk group per host: 1 SSD and 4 HDDs | Two disk groups per host: 2 SSDs and 12 HDDs | Two disk groups per host: 2 SSDs and 12 HDDs
Provisioning time | 42 minutes | 80 minutes | 9 hours and 29 minutes | 4 hours and 15 minutes
Recompose time | 60 minutes | 121 minutes | N/A | 60 minutes for 400 linked clones
Refresh time | 36 minutes | 72 minutes | N/A | 36 minutes for 400 linked clones
Power-on time | 4 minutes | 8 minutes | 8 minutes | 8 minutes
Delete time | 22 minutes | 44 minutes | 47 minutes | 41 minutes

Times for these VMware View operations are measured through log entries found at C:\ProgramData\VMware\VDM\logs\log-YEAR-MONTH-DAY for the VMware vCenter Server. In addition, CPU utilization during these operations is shown in Figures 33 through 39.

Figure 33. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Deployment Operation for 400 Linked Clones and 400 Full Clones
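The operation times reported in Table 19 come from comparing timestamps found in these logs. A minimal sketch of that elapsed-time arithmetic (the timestamp format and the sample values below are illustrative assumptions, not the actual VDM log format):

```python
from datetime import datetime

# Illustrative timestamp format; the real VDM log format may differ.
FMT = "%Y-%m-%d %H:%M:%S"

def elapsed_minutes(start: str, end: str) -> float:
    """Elapsed wall-clock minutes between two log timestamps."""
    t0 = datetime.strptime(start, FMT)
    t1 = datetime.strptime(end, FMT)
    return (t1 - t0).total_seconds() / 60

# For example, an 80-minute provisioning window:
print(elapsed_minutes("2014-10-01 09:00:00", "2014-10-01 10:20:00"))
```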

Figure 34. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Deployment Operation for 800 Full Clones


Figure 35. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Power-on Operation for 800 Linked Clones

Figure 36. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Recomposition Operation for 800 Linked Clones

Figure 37. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Desktop Refresh Operation for 800 Linked Clones


Figure 38. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Deployment Operation for 800 Linked Clones

Figure 39. VMware vCenter Operations Manager for Horizon CPU Utilization Graph: Deletion Operation for 400 Linked Clones

VMware Virtual SAN Availability and Manageability Tests

VMware Virtual SAN is fully integrated with VMware vSphere advanced features, including VMware vMotion, DRS, and High Availability, to provide the best level of availability for the virtualized environment. For redundancy, VMware Virtual SAN uses a distributed RAID architecture, which enables a VMware vSphere cluster to accommodate the failure of a VMware vSphere host or a component within a host. For example, a VMware cluster can accommodate the failure of magnetic disks, flash memory–based devices, and network interfaces, while continuing to provide complete capabilities for all virtual machines. In addition, availability is defined for each virtual machine through the use of virtual machine storage policies. Through these policies and the VMware Virtual SAN distributed RAID architecture, virtual machines and copies of their contents are distributed across multiple VMware vSphere hosts in the cluster. In the event of a failure, a failed node does not necessarily need to migrate data to a surviving host in the cluster.


The VMware Virtual SAN data store is based on object-oriented storage. In this approach, a virtual machine on the VMware Virtual SAN is made up of these VMware Virtual SAN objects:

● The virtual machine home or namespace directory
● A swap object (if the virtual machine is powered on)
● Virtual disks or virtual machine disks (VMDKs)
● Delta disks created for snapshots (each delta disk is an object)

The virtual machine namespace directory holds all the virtual machine files (.vmx files, log files, and so on). It excludes VMDKs, delta disks, and swap files, which are maintained as separate objects. This approach is important because it determines the way in which objects and components are built and distributed in VMware Virtual SAN. For instance, there are soft limitations, and exceeding those limitations can affect performance.

In addition, witnesses are deployed to arbitrate between the remaining copies of data in the event of a failure within the VMware Virtual SAN cluster. The witness component helps ensure that no split-brain scenarios occur. Witness deployment is not predicated on any failures-to-tolerate (FTT) or stripe-width policy settings. Rather, witness components are defined as primary, secondary, and tie-breaker and are deployed based on a defined set of rules, as follows:

● Primary witnesses: Primary witnesses require at least (2 x FTT) + 1 nodes in a cluster to tolerate the FTT number of node and disk failures. If the configuration does not have the required number of nodes after all the data components have been placed, the primary witnesses are placed on exclusive nodes until the configuration has (2 x FTT) + 1 nodes.
● Secondary witnesses: Secondary witnesses are created to help ensure that each node has equal voting power in its contribution to a quorum. This capability is important because each node failure needs to affect the quorum equally. Secondary witnesses are added to allow each node to receive an equal number of components, including the nodes that hold only primary witnesses. The total count of data components, plus witnesses on each node, is equalized in this step.
● Tie-breaker witnesses: After primary witnesses and secondary witnesses have been added, if the configuration has an even number of total components (data and witnesses), then one tie-breaker witness is added to make the total component count an odd number.
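The (2 x FTT) + 1 primary-witness rule above translates directly into a minimum cluster size. A small sketch of that arithmetic (the function name is ours for illustration, not part of any VMware tool):

```python
def min_hosts_for_ftt(ftt: int) -> int:
    """Minimum number of VMware Virtual SAN hosts needed to tolerate
    `ftt` host or disk failures, per the (2 x FTT) + 1 rule."""
    if ftt < 0:
        raise ValueError("FTT cannot be negative")
    return 2 * ftt + 1

# FTT=1 (the default storage policy used in these tests) needs
# at least 3 hosts; FTT=2 needs at least 5, and so on.
for ftt in range(4):
    print(f"FTT={ftt}: at least {min_hosts_for_ftt(ftt)} hosts")
```

This is why the FTT=1 policy used throughout the failure simulations requires a cluster of at least three hosts.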

The following sections describe the VMware Virtual SAN data store scenarios for maintaining resiliency and availability while performing day-to-day operations.

Planned Maintenance

For planned operations, VMware Virtual SAN provides three host maintenance mode options: Ensure Accessibility, Full Data Migration, and No Data Migration. Each is described in the sections that follow.


Ensure Accessibility

The Ensure Accessibility option is the default host maintenance mode. With this option, VMware Virtual SAN helps ensure that all accessible virtual machines on the host remain accessible, either when the host is powered off or when it is removed from the cluster. In this case, VMware Virtual SAN copies just enough data to other hosts in the cluster to help ensure the continued operation of all virtual machines, even if this process results in a violation of the FTT policy. Use this option when the host will remain in maintenance mode for only a short period of time; during this period, the system cannot guarantee resiliency after failures. Typically, this option requires only partial data evacuation. Select Ensure Accessibility to remove the host from the cluster temporarily, such as to install upgrades, and then return the host to the same cluster. Do not use this option to permanently remove the host from the cluster.

Full Data Migration

When Full Data Migration is selected, VMware Virtual SAN moves all of its data to other hosts in the cluster and then maintains or fixes availability compliance for the affected components in the cluster. This option results in the largest amount of data transfer, and the migration consumes the most time and resources. Select the Full Data Migration option only when the host needs to be removed permanently. When evacuating data from the last host in the cluster, be sure to migrate the virtual machines to another data store first, and then put the host in maintenance mode.

The testing described in this document included a Full Data Migration test. With VMware Virtual SAN, placing a host in maintenance mode with the Full Data Migration option causes the virtual machine objects to be transferred to a different host. This migration is in addition to any virtual machines that were proactively migrated by administrators, because the host may hold disk objects for virtual machines that reside on other hosts. This transfer can be verified by using the vsan.resync_dashboard -r 0 Ruby vSphere Console (RVC) command, which shows the data being migrated, as in the example in Figure 40.


Figure 40. Host Maintenance Mode: Full Data Migration

No Data Migration

When No Data Migration is selected, VMware Virtual SAN does not evacuate any data from this host. If the host is powered off or removed from the cluster, some virtual machines may become inaccessible.

VMware Virtual SAN Failure Simulations

In some cases, during ongoing operations in a VMware Virtual SAN environment, either an individual disk failure or a host failure may affect virtual machine availability based on the storage policies applied. This section simulates these failure scenarios to demonstrate how VMware Virtual SAN keeps storage data highly available under different conditions.

Magnetic Disk Failure Simulation

In a VMware Virtual SAN environment, if a magnetic disk storing any component of any object fails, it is marked as “Degraded,” and VMware Virtual SAN immediately begins to rebuild components from that disk on other disks. This action is usually triggered when a drive or controller reports some kind of physical hardware failure. However, if a magnetic disk goes offline, it is marked as “Absent.” In this case, VMware Virtual SAN does not immediately rebuild components. Instead, it waits a default time of 60 minutes for the drive to be replaced or restored. This response is usually triggered by pulling a drive from its slot. During this time period, virtual machines continue to run using replicas of their components that exist on other drives. The only virtual machines that cease functioning are those that have a failure policy of FTT=0 and that have the sole copy of their data stored on the offline drive.


If the drive is replaced within 60 minutes, VMware Virtual SAN simply updates the data on that drive to synchronize it with the live data from the rest of the cluster. If the drive has not been replaced after 60 minutes, VMware Virtual SAN changes the state of the drive to “Degraded” and then begins to rebuild the data on other drives. Note that the VMware Virtual SAN default 60-minute repair-delay time can be modified. For more information, see Changing the Default Repair-Delay Time for a Host Failure in VMware Virtual SAN.

For this simulation, object placements for the replica virtual machine are configured with FTT=1 and use the default storage policies. The magnetic disk is removed from the disk group, as indicated by the “Object not found” status in Figure 41. After the default wait time has passed, the state of the drive changes from “Absent” to “Degraded.”

Figure 41. Magnetic Disk Failure Simulation: Degraded Disk

Another way to check the disk object information is by using the RVC command vsan.disk_object_info. In this case, one of the disks is not found, as shown in the example in Figure 42.


Figure 42. Magnetic Disk Failure Simulation: Degraded Disk in VMware Virtual SAN Observer

After the repair-delay time is reached, VMware Virtual SAN rebuilds the disk objects from the replica and then uses a different disk, as shown in Figure 43.

Figure 43. Magnetic Disk Failure Simulation: Repair Delay Time Reached


By using the vsan.disk_object_info RVC command on the new disk, the virtual machine object constructs are found, as shown in Figure 44.

Figure 44. Magnetic Disk Failure Simulation: Repair Delay Time Reached

SSD Failure Simulation

If an SSD in a VMware Virtual SAN disk group fails, the disk group becomes inaccessible, and the magnetic disks in the disk group no longer contribute to the VMware Virtual SAN storage. As in the magnetic disk failure simulation, when an SSD fails, VMware Virtual SAN waits through a 60-minute default repair-delay time before it rebuilds the virtual machine objects using a different SSD in the event of a nontransient failure. The absent SSD makes the entire disk group unavailable, and after the default wait time the individual components are rebuilt across the other available disk groups.

In the SSD failure test, an SSD was removed from a disk group, as shown in Figure 45. The SSD state is displayed as “Degraded” because the disk was manually removed from a disk group. For an actual disk failure, the state is displayed as “Missing.”


Figure 45. SSD Failure Simulation: Disk Removed

After the repair-delay time is reached, if the SSD failure persists, VMware Virtual SAN rebuilds the virtual machine layout using a different SSD, as shown in Figure 46.


Figure 46. SSD Failure Simulation: Repair Delay Time Reached

Network Failure Simulation

The VMware Virtual SAN VMkernel network is configured with redundant virtual networks connected to Cisco UCS fabric interconnects A and B. To verify that VMware Virtual SAN traffic is not disrupted, the physical port was disabled from Cisco UCS Manager while a continuous vmkping to the VMware Virtual SAN IP address ran on the dedicated network, as shown in Figure 47.


Figure 47. Network Failure Simulation

The management network in a VMware Virtual SAN environment is expected to be configured with similar redundancy.

Test Methodology

The reference architecture for this solution uses VMware View Planner as the benchmarking tool, and it uses VMware Virtual SAN Observer and vCenter Operations Manager for Horizon as the performance monitoring tools.

VMware View Planner 3.5

VMware View Planner is a VDI workload generator that automates and measures a typical office worker’s activity: use of Microsoft Office applications, web browsing, reading a PDF file, watching a video, and so on. A VMware View Planner run executes several iterations, each of which is a randomly sequenced workload consisting of these applications and operations. The results of a run consist of latency statistics collected for the applications and operations across all iterations. In addition to VMware View Planner scores, VMware Virtual SAN Observer and VMware vCenter Operations Manager for Horizon are used as monitoring tools (Figure 48).


Figure 48. VMware View Planner Components

The standardized VMware View Planner workload consists of nine applications performing a total of 44 user operations (Table 20). These user operations are separated into three groups: interactive operations (Group A), I/O operations (Group B), and background load operations (Group C). The operations in Group A are used to determine quality of service. QoS is determined separately for Group A user operations and Group B user operations and is the 95th percentile of latency for all the operations in a group. The default thresholds are 1.0 second for Group A and 6.0 seconds for Group B. The operations in Group C are used to generate additional load.

Table 20. VMware View Planner Operations

Group A | Group B | Group C
AdobeReader: Browse | AdobeReader: Open | 7zip: Compress
AdobeReader: Close | Excel_Sort: Open | Outlook: Restore
AdobeReader: Maximize | Excel_Sort: Save | PowerPoint: SaveAs
AdobeReader: Minimize | Firefox: Open | Video: Play
Excel_Sort: Close | IE_ApacheDoc: Open |
Excel_Sort: Compute | IE_WebAlbum: Open |
Excel_Sort: Entry | Outlook: Attachment-Save |
Excel_Sort: Maximize | Outlook: Open |
Excel_Sort: Minimize | PowerPoint: Open |
Firefox: Close | Video: Open |
IE_ApacheDoc: Browse | Word: Open |
IE_ApacheDoc: Close | Word: Save |
IE_WebAlbum: Browse | |
IE_WebAlbum: Close | |
Outlook: Close | |
Outlook: Read | |
PowerPoint: AppendSlides | |
PowerPoint: Close | |
PowerPoint: Maximize | |
PowerPoint: Minimize | |
PowerPoint: ModifySlides | |
PowerPoint: RunSlideShow | |
Video: Close | |
Word: Close | |
Word: Maximize | |
Word: Minimize | |
Word: Modify | |
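The group QoS rule described above, the 95th percentile of a group's operation latencies compared against a fixed threshold, can be sketched in a few lines. This is a minimal illustration: the sample latencies and the nearest-rank percentile method are our assumptions, not View Planner internals.

```python
def percentile_95(latencies):
    """95th percentile using nearest-rank (a simplifying assumption;
    View Planner's exact percentile method may differ)."""
    ordered = sorted(latencies)
    rank = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[rank]

def passes_qos(latencies, threshold):
    """True if the group's 95th-percentile latency is within its
    threshold (1.0 s for Group A, 6.0 s for Group B)."""
    return percentile_95(latencies) <= threshold

# Illustrative Group A latencies (seconds) against the 1.0 s threshold.
group_a = [0.24, 0.31, 0.52, 0.61, 0.18, 0.44, 0.95, 0.33, 0.27, 0.40]
print(passes_qos(group_a, 1.0))
```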

For the testing, VMware View Planner performed a total of five iterations:

● Ramp up (first iteration)
● Steady state (second, third, and fourth iterations)
● Ramp down (fifth iteration)

During each iteration, VMware View Planner reports the latencies for each operation performed in each virtual machine.
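Each row of the application latency tables earlier in this document reduces one operation's latency samples to a mean, a median, and a coefficient of variation. A minimal sketch of that reduction (the sample values are illustrative, and the use of population standard deviation is an assumption; View Planner's exact method is not documented here):

```python
import statistics

def summarize(latencies):
    """Mean, median, and coefficient of variation (population
    standard deviation divided by mean), as reported per event
    in the application latency tables."""
    mean = statistics.fmean(latencies)
    median = statistics.median(latencies)
    cov = statistics.pstdev(latencies) / mean
    return mean, median, cov

# A high CoV relative to the mean flags an operation whose latency
# varied widely across iterations (compare, e.g., Video-Play's 0.005
# with IE WebAlbum-Close's 11.315 in Table 16).
mean, median, cov = summarize([0.2, 0.3, 0.4, 0.5, 1.6])
print(f"mean={mean:.3f} median={median:.3f} CoV={cov:.3f}")
```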

VMware Virtual SAN Observer

VMware Virtual SAN Observer is designed to capture performance statistics for a VMware Virtual SAN cluster and to provide live measurements through a web browser. It can also generate a performance bundle over a specified duration. VMware Virtual SAN Observer is part of Ruby vSphere Console (RVC), a Linux console user interface for VMware ESXi and vCenter. RVC is installed on VMware vCenter and is required for running VMware Virtual SAN Observer commands. Following best practices, an out-of-band VMware vCenter appliance is used in this reference architecture to run VMware Virtual SAN Observer commands. This setup helps ensure that the production VMware vCenter instance is not affected by the performance measurements.


The VMware Virtual SAN Observer commands that were used for this solution are shown in Table 21.

Table 21. VMware Virtual SAN Observer Commands

VMware Virtual SAN Observer Command | Description
vsan.resync_dashboard 10.0.115.72.54 -r 0 | Observe data migration while placing hosts in Full Data Migration maintenance mode.
vsan.disk_object_info | Verify disk object information.
vsan.vm_object_info | Verify virtual machine object information.
vsan.disks_info hosts/10.0.115.72.54 | Obtain a list of disks on a specific host.
vsan.obj_status_report | Obtain health information about VMware Virtual SAN objects. This command is helpful in identifying orphaned objects.
vsan.reapply_vsan_vmknic_config | Re-enable VMware Virtual SAN on VMkernel ports while troubleshooting the network configuration.
vsan.observer {cluster name} -r -o -g /tmp -i 30 -m 1 | Enable and capture performance statistics used for benchmark testing. For more information, see Enabling or Capturing Performance Statistics Using VMware Virtual SAN Observer.
For a more comprehensive list of VMware Virtual SAN Observer commands, see the VMware Virtual SAN Quick Monitoring and Troubleshooting Reference Guide.


System Sizing

The reference architecture used the sizing specifications described in this section.

Virtual Machine Test Image Builds

Two different virtual machine images were used to provision desktop sessions in the VMware View environment: one for linked clones and one for full clones (Table 22). Both conformed to testing tool standards and were optimized in accordance with the VMware View Optimization Guide for Windows 7 and Windows 8. The VMware OS Optimization Tool was used to make the changes.

Table 22. Virtual Machine Test Image Builds

Attribute | Linked Clones | Full Clones
Desktop operating system | Microsoft Windows 7 Enterprise SP1 (32-bit) | Microsoft Windows 7 Enterprise SP1 (32-bit)
Hardware | VMware Virtual Hardware Version 10 | VMware Virtual Hardware Version 10
CPU | 1 | 2
Memory | 1536 MB | 2048 MB
Memory reserved | 0 MB | 0 MB
Video RAM | 35 MB | 35 MB
3D graphics | Off | Off
NICs | 1 | 1
Virtual network adapter 1 | VMXNet3 adapter | VMXNet3 adapter
Virtual SCSI controller 0 | Paravirtual | Paravirtual
Virtual disk VMDK 1 | 24 GB | 40 GB
Virtual disk VMDK 2 | 1 GB | 1 GB
Virtual floppy drive 1 | Removed | Removed
Virtual CD/DVD drive 1 | Removed | Removed
Applications | Adobe Acrobat 10.1.4, Firefox 7.01, Internet Explorer 10, Microsoft Office 2010, Microsoft Windows Media Player, 7Zip | Adobe Acrobat 10.1.4, Firefox 7.01, Internet Explorer 10, Microsoft Office 2010, Microsoft Windows Media Player, 7Zip
VMware tools | 9.4.10, build-2068191 | 9.4.10, build-2068191
VMware View Agent | 6.0.1-2089044 | 6.0.1-2089044

The Microsoft Windows 7 golden image was modified to meet VMware View Planner 3.5 requirements. See the VMware View Planner Installation and User’s Guide.


Management Blocks

Table 23 shows the sizing of the management blocks.

Table 23. Management Block Sizing

Server Role

VCPU

RAM (GB)

Storage (GB) Operating System

Software Version

Domain controller

2

6

40

Server 2012 64-bit

Microsoft SQL Server

2

8

140

Server 2012 64-bit

Microsoft SQL Server 2012 64-bit

VMware vCenter Server

4

10

70

Server 2012 64-bit

VMware vCenter 5.5.0 build 1178595

VMware vCenter appliance for VMware Virtual SAN Observer (out of band)

4

8

100

SUSE Linux Enterprise Server (SLES) 11 64-bit

VMware vCenter 5.5 U2 build 2063318

VMware View Connection Server

4

10

60

Server 2012 64-bit

VMware View Connection Server 6.0.1 build 2088845

VMware View Composer Server

4

10

60

Server 2012 64-bit

VMware View Composer 6.0.1 build 2078421

VMware vCenter Operations Manager Analytics Server

4

9

212

SLES 11 64-bit

3.5 build 2061132 (beta)

VMware vCenter Operations Manager UI Server

4

7

132

SLES 11 64-bit

3.5 build 2061132

VMware View Planner Server

2

4

60

Server 2012 64-bit

3.5 build 2061132
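Summing the Table 23 rows gives the aggregate footprint that the management block adds on top of the desktop workload; the short sketch below does the arithmetic (the role names are abbreviated for readability and are not identifiers from the paper).

```python
# Aggregate resource demand of the management block in Table 23.
# Each role maps to a (vCPU, RAM GB, storage GB) tuple.
roles = {
    "Domain controller":              (2, 6, 40),
    "Microsoft SQL Server":           (2, 8, 140),
    "vCenter Server":                 (4, 10, 70),
    "vCenter appliance (Observer)":   (4, 8, 100),
    "View Connection Server":         (4, 10, 60),
    "View Composer Server":           (4, 10, 60),
    "vC Ops Analytics Server":        (4, 9, 212),
    "vC Ops UI Server":               (4, 7, 132),
    "View Planner Server":            (2, 4, 60),
}
total_vcpu = sum(v[0] for v in roles.values())
total_ram_gb = sum(v[1] for v in roles.values())
total_storage_gb = sum(v[2] for v in roles.values())
print(total_vcpu, total_ram_gb, total_storage_gb)  # 30 vCPU, 72 GB RAM, 874 GB storage
```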

Host Configuration

Table 24 summarizes the host configuration.

Table 24. Host Configuration

Component | Value
CPU | Intel Xeon processor E5-2680 v2 at 2.80 GHz; hyperthreading enabled
RAM | 256 GB (16 x 16 GB)
NICs | Cisco UCS VIC 1225 converged network adapter (2 x 10-Gbps ports); firmware version 2.2(2c); driver version enic-1.4.2.15c
BIOS | C240M3.1.5.7.0.042820140452
Disks | 2 x 400-GB 2.5-inch enterprise performance SAS SSDs (1 SSD for linked clones and 2 SSDs for full clones); 12 x 900-GB 6-Gbps SAS 10,000-rpm drives (4 disks per host used for linked clones, and 12 disks per host used for full clones)
VMware ESXi version | VMware ESXi 5.5.0 build 2068190
Storage adapter | Firmware package version 23.12.0-0021; firmware version 3.240.95-2788; driver version 00.00.05.34-9vmw, build 2068190, interface 9.2
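The disk configuration above determines the raw Virtual SAN datastore capacity of the cluster. The sketch below derives it for the eight-node cluster used in this architecture; the halving for usable capacity assumes the default FailuresToTolerate=1 mirroring and ignores metadata overhead, so treat it as an approximation.

```python
# Raw Virtual SAN capacity implied by the Table 24 disk layout.
# SSDs contribute cache, not capacity, so only HDDs are counted.
HOSTS = 8
HDD_GB = 900
FULL_CLONE_HDDS_PER_HOST = 12    # 12 x 900-GB drives for full clones
LINKED_CLONE_HDDS_PER_HOST = 4   # 4 x 900-GB drives for linked clones

raw_full_gb = HOSTS * FULL_CLONE_HDDS_PER_HOST * HDD_GB
raw_linked_gb = HOSTS * LINKED_CLONE_HDDS_PER_HOST * HDD_GB
usable_full_gb = raw_full_gb // 2   # mirrored objects at FTT=1 (approximate)
print(raw_full_gb, raw_linked_gb, usable_full_gb)  # 86400, 28800, 43200
```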

© 2014 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.

Page 56 of 59

Bill of Materials

Table 25 provides the bill of materials for the reference architecture.

Table 25. Bill of Materials

Area | Component | Quantity
Host hardware | Cisco UCS C240 M3 | 8
Host hardware | Intel Xeon processor E5-2680 v2 at 2.80 GHz | 16
Host hardware | 16-GB DDR3 1600-MHz RDIMM, PC3-12800, dual rank | 128
Host hardware | LSI 9207-8i RAID controller | 8
Host hardware | Cisco VIC 1225 dual-port 10-Gbps SFP+ converged network adapter | 8
Host hardware | 16-GB SD card | 16
Host hardware | 400-GB 2.5-inch enterprise performance SAS SSD | 8 (for linked clones); 16 (for full clones)
Host hardware | 300-GB SAS 15,000-rpm 6-Gbps 2.5-inch drive; 900-GB SAS 10,000-rpm 6-Gbps 2.5-inch drive | 32 (for linked clones); 96 (for full clones)
Network switch | Cisco UCS 6248 Fabric Interconnect | 2
Network switch | Cisco Nexus 5548UP | 2
Software | VMware ESXi 5.5.0 build 2068190 | 8
Software | VMware vCenter Server 5.5.0, build 1623101 | 1
Software | VMware Horizon 6.0.1, build 2088845 | 1
Software | VMware vCenter Operations for View 1.5.1, build 1286478 | 1
Software | Microsoft Windows 2008 R2 | 4
Software | Microsoft SQL Server 2008 R2 | 1
Software | Microsoft SQL Server 2008 R2 | 4

Conclusion

Implementing VMware Horizon 6 with View and VMware Virtual SAN on Cisco UCS provides linear scalability with exceptional end-user performance and a simpler management experience, with Cisco UCS Manager centrally managing the infrastructure and VMware Virtual SAN integrated into VMware vSphere. The solution also provides cost-effective hosting for virtual desktop deployments of all sizes. The reference architecture demonstrates the following main points:

● Linear scalability is achieved with VMware Virtual SAN as the storage solution on Cisco UCS for hosting VMware View virtual desktops. The reference architecture successfully scaled from 400 desktops on four Cisco UCS C240 M3 nodes to 800 desktops on eight nodes while keeping end-user performance consistently acceptable, with less than 15 ms of disk latency and application response times of about 3 ms.

● Optimal performance is maintained during all virtual desktop operations, including refresh, recompose, deploy, power-on, and power-off operations. The times measured for these operations fall within industry benchmarks and demonstrate the joint solution's scalability.

● VMware Virtual SAN provides highly available and resilient storage for hosting VMware View virtual desktops. The multiple maintenance and failure scenarios tested provide confidence in the resiliency of the joint solution.
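The scaling result above can be sanity-checked with simple arithmetic from the figures in this paper (linked-clone memory from Table 22, host RAM from Table 24). Because no memory reservation was set, the figure below represents the demand if every desktop's RAM were fully backed by physical memory.

```python
# Desktops per host and memory headroom for the 800-desktop,
# eight-node linked-clone configuration described in this paper.
desktops = 800
hosts = 8
desktop_ram_gb = 1.5     # 1536 MB per linked-clone desktop (Table 22)
host_ram_gb = 256        # per-host RAM (Table 24)

per_host = desktops // hosts                 # desktops hosted per node
ram_demand_gb = per_host * desktop_ram_gb    # worst-case desktop RAM per node
headroom_gb = host_ram_gb - ram_demand_gb    # left for ESXi and VM overhead
print(per_host, ram_demand_gb, headroom_gb)  # 100 desktops, 150.0 GB, 106.0 GB
```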


For More Information

● VMware Virtual SAN Ready Nodes
● What’s New in VMware Virtual SAN
● Cisco FlexFlash: Use and Manage Cisco Flexible Flash Internal SD Card for Cisco UCS C-Series Standalone Rack Servers
● VMware Virtual SAN Compatibility Guide
● LSI
● Changing the Default Repair-Delay Time for a Host Failure in VMware Virtual SAN
● I/O Analyzer
● Ruby vSphere Console (RVC)
● Enabling or Capturing Performance Statistics Using VMware Virtual SAN Observer
● VMware View Optimization Guide for Microsoft Windows 7 and Windows 8
● VMware View Planner Installation and User’s Guide
● VMware Virtual SAN Quick Monitoring and Troubleshooting Reference Guide
● Cisco UCS C240 M3 High-Density Rack Server (SFF Disk-Drive Model) Specification Sheet
● Working with VMware Virtual SAN
● VMware Virtual SAN Ready System Recommended Configurations
● Enabling or Capturing Statistics Using VMware Virtual SAN Observer for VMware Virtual SAN Resources


Acknowledgements

The following individuals contributed to the creation of this paper:

● Balayya Kamanboina, Validation Test Engineer, VMware
● Bhumik Patel, Partner Architect, VMware
● Chris White, End User Computing Architect, VMware
● Hardik Patel, Technical Marketing Engineer, Cisco Systems
● Jim Yanik, End User Computing Architect, VMware
● Mike Brennan, Technical Marketing Manager, Cisco Systems
● Jon Catanzano, Senior Technical Writer/Editor, Consultant, VMware
● Nachiket Karmarkar, Performance Engineer, VMware

Printed in USA


C11-733480-00

12/14

