Automated Provisioning with the VMware® Software-Defined Data Center
Reference Architecture, Version 1.0
Table of Contents

Overview
Audience
VMware Software Components
Architectural Overview
    Virtual Management Networks
    Management Cluster
    Edge Cluster
    Compute Clusters
Physical Component Details
    Compute
    Storage
    Network
Software-Defined Data Center Component Details
    vSphere Data Center Design
    vRealize Orchestrator
    vSphere Data Protection
    NSX for vSphere
    vRealize Automation
        Load-Balanced vRealize Automation Configuration
        vRealize Automation Appliances
        vRealize Automation Infrastructure as a Service Web Servers
        vRealize Automation Managers
        Distributed Execution Managers
        vSphere Agent
    Monitoring
        vRealize Operations
        vRealize Log Insight
SDDC Operational Configuration
    NSX for vSphere Configuration
    Tenants
    Endpoints
    Fabric Groups
    Business Groups
    Network Profiles
    Reservation Policies
    Reservations
    Blueprints
Conclusion
The VMware Validated Design Team
About the Author
Overview
IT operations teams are being driven to be more efficient and more responsive to the needs of their business users. It is no longer acceptable for IT provisioning requests to take days or weeks to fulfill. Demand is always increasing, as is the expectation of immediate, on-demand delivery for repeatable "commodity-type" requests. An automation cloud based on VMware® technology enables an IT operations team to automate the provisioning of common repeatable requests and to respond to business needs with more agility and predictability.

This reference architecture describes the implementation of a software-defined data center (SDDC) that leverages the latest VMware components to provide automated provisioning. The architecture is built on the VMware Validated Design for a single-site IT automation cloud. The VMware Validated Design process gathers data from customer support, VMware IT, and VMware and partner professional services to create a standardized configuration that meets the majority of customer requirements. Internally, VMware engineering teams test new product capabilities, installations, upgrades, and more against this standardized configuration. VMware and partner professional services teams build delivery kits based on this design, knowing that they are deploying the best possible configuration. Customers planning "do it yourself" deployments also benefit from following this architecture, confident that future product upgrades, patches, and so on have already been tested against a configuration identical to theirs.
Audience
This document will assist those who are responsible for infrastructure services, including enterprise architects, solution architects, sales engineers, field consultants, advanced services specialists, and customers. This guide provides an example of a successful deployment of an SDDC.
VMware Software Components
This architecture uses the following VMware software components.

| Product | Version | Description |
|---------|---------|-------------|
| VMware vCloud Suite® Enterprise Edition | 6.0 | A comprehensive suite of products used to deliver the SDDC. In this architecture, users leverage the following components of vCloud Suite Enterprise Edition 6.0: VMware vSphere® Enterprise Plus Edition™ 6.0 U1, VMware vRealize™ Automation™ 6.2.2, VMware vRealize Orchestrator™ 6.0.3, VMware vSphere Data Protection™ 6.1, and VMware vRealize Operations Manager™ 6.1. |
| VMware vCenter Server™ | 6.0 U1 | A central platform for managing and configuring the VMware ESXi™ hypervisor. VMware vSphere Web Client is the centralized point of administration for compute clusters and all networking services provided by VMware NSX™ for vSphere®. |
| VMware Virtual SAN™ | 6.1 | Radically simple, hypervisor-converged storage for virtual machines. It delivers enterprise-class, high-performance storage for virtualized applications, including business-critical applications. |
| NSX for vSphere | 6.2 | Exposes a complete suite of simplified logical networking elements and services, including virtual switches, routers, firewalls, load balancers, virtual private network (VPN), QoS, monitoring, and security. |
| VMware vRealize Log Insight™ | 2.6 | Real-time log management and log analysis with machine learning–based intelligent grouping, high-performance search, and better troubleshooting across physical, virtual, and cloud environments. |

Table 1. Components
Architectural Overview
This design uses three cluster types, each with its own distinct function. It provides a management plane that is separate from the user-workload (compute) virtual machines. It also leverages an edge cluster, which provides dedicated resources for network services such as VMware NSX Edge™ devices, which provide access to the corporate network and the Internet.
Virtual Management Networks
Table 2 provides a quick introductory overview of VMware NSX and networking terms and acronyms.

| Term | Definition |
|------|------------|
| VLAN | A VLAN is used to partition a physical network into multiple distinct broadcast domains. |
| NSX for vSphere | NSX for vSphere is virtual networking and security software that enables the software-defined network. |
| Virtual switch | The NSX virtual switch abstracts the physical network and provides access-level switching in the hypervisor. It is central to network virtualization because it enables logical networks that are independent of physical constructs such as VLANs. |
| VPN | A VPN can be used to connect networks to each other or a single machine to a network. |
| MPLS | Multiprotocol Label Switching (MPLS) is a scalable, protocol-independent transport mechanism. In an MPLS network, data packets are assigned labels. Packet-forwarding decisions are made solely on the contents of this label, without the need to examine the packet itself. This enables users to create end-to-end circuits across any type of transport medium, using any protocol. |

Table 2. VMware NSX Overview
This design utilizes NSX for vSphere virtual switches in the management cluster for the automation and monitoring components. Each management solution (vRealize Automation, vRealize Operations, and vRealize Log Insight) has its own NSX virtual switch and corresponding NSX Edge device. The grouping of these virtual machines onto a virtual switch is referred to as a virtual management network. Virtual management networks are connected to both the vSphere management VLAN and the external VLAN via an NSX Edge device. Routing is configured between the virtual management network and the vSphere management VLAN. This enables the solution virtual machines and the virtual machines on the vSphere management VLAN, such as the vCenter Server instance, to communicate without exposing the management servers directly to end users. External connectivity to the business function of the virtual management network is provided by a load balancer virtual IP on the external VLAN. External connectivity to the management function of the virtual management network is provided, as needed, by whatever method best fits the customer's environment: routing to the virtual management network using Ethernet, MPLS, VPN, jump hosts, or other means. Another option is to deploy the VMware NSX distributed logical router and to enable a dynamic routing protocol, such as OSPF, between the router and the top-of-rack switches. This enables access to all virtual machines on the virtual switches by advertising their IP addresses to the rest of the network. Virtual management networks still require NSX Edge devices to provide load balancer functionality.
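The static-route variant in Figure 1 requires matching configuration on both sides: the top-of-rack switches point the 192.168.x.0/24 prefixes at the NSX Edge vSphere-facing addresses (switch-vendor CLI, not shown), and each edge needs a route back out. The following is a minimal sketch against the NSX for vSphere static-routing REST endpoint that sets an edge's default route; the manager address, credentials, edge ID, and gateway value are illustrative assumptions, not a turnkey script.

```python
import requests

NSX_MANAGER = "https://nsxmgr.example.local"   # assumed manager address
EDGE_ID = "edge-1"                             # assumed edge ID

# Point the edge's default route at the external VLAN gateway
# (10.155.170.1 in Figure 1), via the NSX-v static-routing endpoint.
body = """
<staticRouting>
    <defaultRoute>
        <gatewayAddress>10.155.170.1</gatewayAddress>
    </defaultRoute>
    <staticRoutes/>
</staticRouting>
"""

resp = requests.put(
    f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/routing/config/static",
    data=body,
    headers={"Content-Type": "application/xml"},
    auth=("admin", "password"),   # placeholder credentials
    verify=False,                 # lab only; use CA-signed certificates in production
)
resp.raise_for_status()
```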
Virtual management networking isolates management solutions from each other and from compute workloads. More importantly, it enables disaster recovery of the automation and monitoring stacks without changing the IP addresses of the virtual machines. The virtual machines can be moved to precreated virtual switches in another site that is configured identically to the primary site, enabling quick recovery of the solutions.
Figure 1. Example Using Static Routing. The top-of-rack switches carry the vSphere management VLAN (1680, 10.155.168.0/24, SVI 10.155.168.1) and the external VLAN (1701, 10.155.170.0/24, SVI 10.155.170.1), with static routes pointing 192.168.20.0/24 at 10.155.168.75, 192.168.21.0/24 at 10.155.168.76, and 192.168.22.0/24 at 10.155.168.77. Each solution's virtual switch (vRealize Operations 192.168.20.0/24, vRealize Automation 192.168.21.0/24, vRealize Log Insight 192.168.22.0/24) sits behind its own NSX Edge with an internal interface on that subnet, a vSphere interface (10.155.168.75, .76, and .77, respectively), and an external interface (10.155.170.150, .152, and .151, respectively).
Figure 2. Example Using Dynamic Routing. A VMware NSX distributed logical router peers with the top-of-rack switches over OSPF and connects the three virtual management networks (vRealize Operations 192.168.20.0/24, vRealize Automation 192.168.21.0/24, vRealize Log Insight 192.168.22.0/24); each network retains an NSX Edge with only an internal interface (192.168.20.1, 192.168.21.1, and 192.168.22.1, respectively) for load balancer services.
Management Cluster
The management cluster contains the management and monitoring solutions for the entire design. A single management cluster can support multiple pods of edge and compute clusters. The minimum number of hosts required in the management cluster is three, although four hosts are recommended for availability and performance. The management cluster can scale out as the number of edge and compute pods increases. With the exception of vSphere Data Protection, which stores all backups on NFS storage, all virtual machines utilize Virtual SAN storage.

A single vCenter Server instance manages the resources in the management cluster. Additional vCenter Server instances manage edge and compute clusters. A Platform Services Controller™ is deployed for each vCenter Server instance. The Platform Services Controllers are joined to the same VMware vCenter Single Sign-On domain, enabling features such as Enhanced Linked Mode and cross-vCenter Server VMware vSphere vMotion®. For more information on the Platform Services Controller and new enhancements in vSphere 6.0, see the VMware vCenter Server 6.0 Deployment Guide and the What's New in vSphere 6.0 white papers.
The management cluster also contains common core infrastructure, including vRealize Operations Manager and vRealize Log Insight. VMware NSX Manager™ instances, one for each vCenter Server instance, are deployed into the management cluster. The VMware NSX Controller™ instances for the management cluster are deployed into the management cluster itself; the controller instances for the edge and compute clusters are deployed into the edge cluster. All vRealize Automation components are also deployed in the management cluster.
Figure 3. Management Cluster. The management hosts connect to the leaf-spine fabric for internal and external access and consume both Virtual SAN and external storage.
Edge Cluster
The edge cluster simplifies the physical network configuration. It is used to deliver networking services to the compute cluster virtual machines. All external networking for user-workload virtual machines, including corporate and Internet traffic, is accessed via the edge cluster. The minimum edge cluster size is three hosts, but it can scale depending on the volume of services required by the compute cluster virtual machines.
Figure 4. Edge Cluster. The edge hosts connect to the leaf-spine fabric for internal and external access and are backed by Virtual SAN.
Compute Clusters
The compute clusters are the simplest of the three types; they run user-workload virtual machines. Compute cluster networking is completely virtualized using NSX for vSphere. A single transport zone exists between all compute clusters and the edge cluster. Virtual switches are created for user-workload virtual machines. The minimum compute cluster size is 4 hosts; the maximum is 64 hosts. Additional compute clusters can be created until the vCenter Server maximum of 1,000 hosts or 10,000 virtual machines is reached. Additional vCenter Server instances can be provisioned in the management cluster to facilitate more compute clusters.
Figure 5. Compute Clusters. The compute hosts connect to the leaf-spine fabric and consume Virtual SAN and external storage.
Physical Component Details

Compute
Table 3 lists the recommended minimum physical server configuration.

| Component | Specification |
|-----------|---------------|
| CPU | 24GHz total – two 2GHz six-core CPUs (12 cores) |
| Memory | 256GB ECC RAM |
| SD | 6GB SD card boot device |
| HDD controller | Virtual SAN certified controller* |
| Flash | 500GB Virtual SAN certified flash device* |
| HDD | Two 1TB Virtual SAN certified HDDs* |
| Network interface cards | Two 10Gb network adapters |
| Power supplies | Redundant |
| Fans | Redundant |

Table 3. Minimum Physical Server Configuration
* Virtual SAN certified devices can be found at http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan. VMware recommends the use of Virtual SAN Ready Nodes for maximum compatibility and supportability.

For ease of management, and to guarantee resource availability as the solution grows, all physical server hardware, regardless of cluster, utilizes the same configuration.
Storage
The management cluster utilizes Virtual SAN in addition to NFS datastores. Virtual machines reside on Virtual SAN; vSphere Data Protection backups reside on NFS datastores. The edge cluster utilizes Virtual SAN, which serves the VMware NSX Controller instances for the edge and compute clusters as well as NSX Edge devices. The compute clusters utilize Virtual SAN, NFS, and VMware vSphere VMFS datastores. The size and number, if any, of datastores other than Virtual SAN depend on available capacity, redundancy requirements, and application I/O needs. Table 4 presents some guidelines for sizing storage.

| Storage Class | IOPS (per 100GB) | MB/sec (per 1TB) | Replication | Deduplication |
|---------------|------------------|------------------|-------------|---------------|
| Gold | 400 | 32 | Yes | Yes* |
| Silver | 400 | 32 | No | Yes* |
| Bronze | 25 | 2 | No | Yes* |

Table 4. Storage Sizing Guidelines
* Deduplication is enabled only on storage systems that support this feature.

Performance values are based on 100 percent random I/O with a 70 percent read and 30 percent write mix at an 8KB block size.
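To apply Table 4 when sizing a datastore, scale the per-capacity targets linearly. The helper below is a minimal sketch of that arithmetic; the class names and the 1TB = 1,024GB convention are assumptions.

```python
# Storage-class performance targets, transcribed from Table 4.
CLASS_SPECS = {
    "gold":   {"iops_per_100gb": 400, "mbps_per_tb": 32},
    "silver": {"iops_per_100gb": 400, "mbps_per_tb": 32},
    "bronze": {"iops_per_100gb": 25,  "mbps_per_tb": 2},
}

def required_performance(storage_class: str, capacity_gb: float) -> dict:
    """Return the IOPS and MB/sec a datastore of this size must sustain."""
    spec = CLASS_SPECS[storage_class.lower()]
    return {
        "iops": capacity_gb / 100 * spec["iops_per_100gb"],
        "mbps": capacity_gb / 1024 * spec["mbps_per_tb"],
    }

# A 2TB gold datastore must sustain 8,192 IOPS and 64 MB/sec.
print(required_performance("gold", 2048))
```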
Network
Each rack contains a pair of multichassis link aggregation–capable 10 Gigabit Ethernet (10GbE) top-of-rack switches. Each host has one 10GbE network adapter connected to each top-of-rack switch. The vSphere hosts utilize the VMware vSphere Distributed Switch™ configured with an 802.3ad Link Aggregation Control Protocol (LACP) group that services all port groups. 802.1Q trunks are used for carrying a small number of VLANs, for example, NSX for vSphere, management, storage, and vSphere vMotion traffic. The switch terminates and provides default gateway functionality for each respective VLAN; that is, it has a switch virtual interface (SVI) for each VLAN.

Uplinks from the top-of-rack (leaf) layer to the spine layer are routed point-to-point links. VLAN trunking on the uplinks, even for a single VLAN, is not allowed. A dynamic routing protocol (OSPF, IS-IS, or BGP) is configured between the top-of-rack and spine layer switches. Each top-of-rack switch advertises a small set of prefixes, typically one per VLAN or subnet present. In turn, it calculates equal-cost paths to the prefixes received from other top-of-rack switches.
Figure 6. ESXi Host Network Connectivity. Each ESXi host connects to the pair of leaf (top-of-rack) switches through an LACP group; the leaf switches uplink to the spine layer over OSPF.
Software-Defined Data Center Component Details
This section defines the VMware software components and their configuration in this solution.
| Component | Number Deployed | Deployed Location | Connected Network |
|-----------|-----------------|-------------------|-------------------|
| Platform Services Controller | 2 | Management cluster | vSphere management VLAN |
| vCenter Server | 2 | Management cluster | vSphere management VLAN |
| ESXi hosts | Minimum of 11 (varies based on compute cluster requirements) | 4 – management cluster; 3 – edge cluster; 4 – compute cluster | ESXi management VLAN |
| vRealize Automation | 1 (in a redundant distributed configuration comprising 10 virtual machines) | Management cluster | vRA |
| vRealize Orchestrator | 2 (clustered) | Management cluster | vRA |
| Microsoft SQL Server 2012 | 1 | Management cluster | vRA |
| vSphere Data Protection | 1 | Management cluster | vSphere management VLAN |
| NSX Manager | 2 | Management cluster | vSphere management VLAN |
| VMware NSX Controller™ | 6 | 3 – management cluster; 3 – edge cluster | vSphere management VLAN |
| vRealize Operations Manager | 4 (1 master, 1 master replica, 2 data nodes) | Management cluster | vROps |
| vRealize Log Insight | 3 (1 master, 2 workers) | Management cluster | vLI |

Table 5. SDDC Component Details
vSphere Data Center Design
vSphere Enterprise Plus Edition is the core that enables the SDDC. All vSphere hosts use stateful installations; the ESXi hypervisor is installed on a local SD card.

| Attribute | Specification |
|-----------|---------------|
| ESXi version | 6.0 U1 |
| Number of hosts | 11 |
| Number of CPUs per host | 2 |
| Number of cores per CPU | 6 |
| Core speed | 2GHz |
| Memory | 256GB |
| Number of network adapters | 2 x 10Gb |

Table 6. ESXi Host Details
The solution uses VMware vSphere High Availability (vSphere HA) and VMware vSphere Distributed Resource Scheduler™ (vSphere DRS). vSphere HA is set to monitor both hosts and virtual machines. Its admission control policy reserves a percentage of cluster resources, 25 percent in a four-node cluster, guaranteeing resources for the virtual machines if one node fails. To calculate the percentage, divide 100 by the number of hosts in the cluster, then multiply the result by the number of host failures to tolerate while still guaranteeing resources for the virtual machines in the cluster. For example, in an eight-host cluster designed to sustain a two-host failure, 100 ÷ 8 = 12.5, and 12.5 × 2 = 25 percent. vSphere DRS is set to fully automated mode.
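The admission control formula is easy to script when planning clusters of different sizes. A minimal sketch of the calculation described above:

```python
def admission_control_percentage(hosts: int, failures_to_tolerate: int) -> float:
    """Percentage of cluster resources to reserve: 100 divided by the
    host count, multiplied by the number of host failures to tolerate."""
    return 100 / hosts * failures_to_tolerate

print(admission_control_percentage(4, 1))  # 25.0 - the four-node management cluster
print(admission_control_percentage(8, 2))  # 25.0 - the eight-host, two-failure example
```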
| VLAN ID | Function |
|---------|----------|
| 14 | VXLAN (management cluster) |
| 24 | VXLAN (edge cluster) |
| 34 | VXLAN (compute clusters) |
| 970 | ESXi management |
| 980 | vSphere vMotion |
| 1020 | IP storage (NFS) |
| 1680 | vSphere management |
| 1701 | External |
| 3002 | Virtual SAN |

Table 7. VLAN IDs and Functions
| Port Group | VDS | VLAN ID |
|------------|-----|---------|
| vDS-Mgmt-ESXi | vDS-Mgmt | 970 |
| vDS-Mgmt-Ext | vDS-Mgmt | 1701 |
| vDS-Mgmt-NFS | vDS-Mgmt | 1020 |
| vDS-Mgmt-vMotion | vDS-Mgmt | 980 |
| vDS-Mgmt-VSAN | vDS-Mgmt | 3002 |
| vDS-Mgmt-vSphere-Management | vDS-Mgmt | 1680 |
| VXLAN (VMware NSX autocreated) | vDS-Mgmt | 14 |
| vDS-Edge-ESXi | vDS-Edge | 970 |
| vDS-Edge-Ext | vDS-Edge | 1701 |
| vDS-Edge-vMotion | vDS-Edge | 980 |
| vDS-Edge-VSAN | vDS-Edge | 3002 |
| vDS-Edge-vSphere-Management | vDS-Edge | 1680 |
| VXLAN (VMware NSX autocreated) | vDS-Edge | 24 |
| vDS-Comp-ESXi | vDS-Comp | 970 |
| vDS-Comp-NFS | vDS-Comp | 1020 |
| vDS-Comp-vMotion | vDS-Comp | 980 |
| vDS-Comp-VSAN | vDS-Comp | 3002 |
| VXLAN (VMware NSX autocreated) | vDS-Comp | 34 |

Table 8. Port Groups

| Datastore | Type | Function |
|-----------|------|----------|
| DS-VSAN-MGMT01 | Virtual SAN | Management cluster virtual machine datastore |
| DS-NFS-MGMT01 | NFS | vSphere Data Protection backups |
| DS-VSAN-EDGE01 | Virtual SAN | Edge cluster virtual machine datastore |
| DS-VSAN-COMP01 | Virtual SAN | Gold-tier storage |
| DS-NFS-COMP01 | NFS | Silver-tier storage |
| DS-NFS-COMP02 | NFS | Silver-tier storage |

Table 9. Datastores
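Because every cluster's port groups follow the same VDS-and-VLAN pattern, they are good candidates for scripted creation. The following pyVmomi sketch creates the management VDS port groups from Table 8; connection handling and the VDS lookup are omitted, and the port count is an assumption.

```python
from pyVmomi import vim

# Management VDS port groups and VLAN IDs, from Table 8.
MGMT_PORT_GROUPS = {
    "vDS-Mgmt-ESXi": 970,
    "vDS-Mgmt-Ext": 1701,
    "vDS-Mgmt-NFS": 1020,
    "vDS-Mgmt-vMotion": 980,
    "vDS-Mgmt-VSAN": 3002,
    "vDS-Mgmt-vSphere-Management": 1680,
}

def create_mgmt_port_groups(dvs: vim.DistributedVirtualSwitch):
    """Create one VLAN-backed distributed port group per Table 8 row."""
    specs = []
    for name, vlan_id in MGMT_PORT_GROUPS.items():
        vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
            vlanId=vlan_id, inherited=False)
        port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            vlan=vlan)
        specs.append(vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
            name=name, type="earlyBinding", numPorts=64,
            defaultPortConfig=port_config))
    return dvs.AddDVPortgroup_Task(specs)  # a single task creates them all
```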
| Attribute | Specification |
|-----------|---------------|
| Number of CPUs | Four |
| Processor type | VMware virtual CPU |
| Memory | 16GB |
| Number of network adapters | One |
| Network adapter type | VMXNET3 |
| Number of disks | One 100GB (C:\) – VMDK |
| SQL server | Microsoft SQL Server 2012 SP2 |
| Operating system | Windows Server 2012 R2 |

Table 10. SQL Server Configuration

| Attribute | Specification |
|-----------|---------------|
| vCenter version | 6.0 U1 (appliance) |
| Quantity | Four (two Platform Services Controllers, two vCenter Server instances) |
| Appliance size | Small for the management vCenter Server, large for the compute vCenter Server |

Table 11. VMware vCenter Configuration

| Attribute | Specification |
|-----------|---------------|
| Data center object | WDC01 |
| Enhanced Linked Mode | Automatic by joining the same vCenter Single Sign-On domain |

Table 12. VMware vCenter Data Center Configuration
vRealize Orchestrator
vRealize Orchestrator is deployed using the vRealize Orchestrator appliance. For resiliency, it is set up as a cluster whose database resides on the SQL Server. The NSX for vSphere and vRealize Automation plug-ins are installed in both instances. vRealize Orchestrator is a critical component in the SDDC: all communication from vRealize Automation to NSX for vSphere is handled via the vRealize Orchestrator NSX plug-in and its workflows.
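Because vRealize Orchestrator fronts all NSX operations, its REST interface is also the natural integration point for external tooling. A hedged sketch of starting a workflow through the vRO REST API follows; the appliance address, credentials, workflow ID, and parameter name are placeholders.

```python
import requests

VRO = "https://vro.example.local:8281"                 # assumed appliance address
WORKFLOW_ID = "00000000-0000-0000-0000-000000000000"   # placeholder workflow ID

# vRO workflow inputs use a typed-parameter envelope.
payload = {
    "parameters": [
        {"name": "vmName", "type": "string",
         "value": {"string": {"value": "web-01"}}},
    ]
}

resp = requests.post(
    f"{VRO}/vco/api/workflows/{WORKFLOW_ID}/executions",
    json=payload,
    auth=("vcoadmin", "password"),  # placeholder credentials
    verify=False,
)
resp.raise_for_status()
print(resp.headers["Location"])  # URL of the new workflow execution
```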
vSphere Data Protection
vSphere Data Protection is deployed in the management cluster and is responsible for backups and restores of the virtual machines residing in the management cluster. A backup policy is created for each management application, such as vCenter Server, vRealize Automation, and so on. In addition to the full virtual machine backups, the vSphere Data Protection agent for SQL Server is installed on the SQL Server, and all databases are backed up as well. Backup frequency and retention periods vary depending on organizational requirements. A nightly backup of all virtual machines and databases is recommended.
Figure 7. vSphere Data Protection
NSX for vSphere
NSX for vSphere provides the virtual switches, routing, and load balancer services used to create the SDDC. All virtual machine traffic, excluding that on the vSphere management VLAN, is encapsulated using NSX for vSphere. All virtual machine–to–virtual machine (east–west) traffic is encapsulated, routed between the virtual tunnel endpoints (VTEPs) of the hosts, then decapsulated and delivered to the destination virtual machine. Requests to or from the external network travel through the NSX Edge device in the edge cluster, which provides all north–south routing, that is, routing to and from external networks.

NSX for vSphere has a one-to-one relationship with vCenter Server, so two NSX Manager instances are deployed: one for the management cluster vCenter Server instance and one for the edge and compute cluster vCenter Server instance. Both are deployed in the management cluster. NSX for vSphere utilizes controller virtual machines to implement the network control plane. NSX Controller instances must be deployed in odd numbers to avoid a split-brain scenario; as such, three controllers are deployed per NSX for vSphere instance. The NSX Controller instances for the management cluster are deployed into the management cluster itself; the instances for the edge and compute clusters are deployed into the edge cluster.
The ESXi hosts must be prepared for NSX for vSphere. The following values are used:

| Specification | Value |
|---------------|-------|
| MTU | 9000 |
| Teaming mode | LACP V2 |
| VLAN | 14 (management), 24 (edge), 34 (compute) |
| Segment IDs | 5000–5200 |
| Transport zones | Management (management cluster); Compute (edge and compute clusters) |

Table 13. NSX for vSphere Values
To enable external (north–south) connectivity for the compute workloads, an NSX Edge router is deployed in HA mode. This NSX Edge instance is referred to as the provider edge. One interface is connected to the external network; another is connected to an NSX virtual switch, which is also connected to an NSX Edge router. The NSX Edge routers are configured with OSPF to facilitate the exchange of routing information. This enables the virtual machines on the NSX virtual switch to communicate with the external network and vice versa as long as firewall rules permit the communication.
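For reference, the provider edge's OSPF configuration can also be driven through the NSX for vSphere REST API rather than the UI. The sketch below is a minimal, hedged example; the area number, vNIC index, manager address, edge ID, and credentials are assumptions, and the full schema is in the NSX API guide.

```python
import requests

NSX_MANAGER = "https://nsxmgr.example.local"  # assumed manager address
EDGE_ID = "edge-2"                            # assumed provider edge ID

# Enable OSPF and attach the uplink vNIC to area 0.
body = """
<ospf>
    <enabled>true</enabled>
    <ospfAreas>
        <ospfArea><areaId>0</areaId></ospfArea>
    </ospfAreas>
    <ospfInterfaces>
        <ospfInterface><vnic>0</vnic><areaId>0</areaId></ospfInterface>
    </ospfInterfaces>
</ospf>
"""

resp = requests.put(
    f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/routing/config/ospf",
    data=body,
    headers={"Content-Type": "application/xml"},
    auth=("admin", "password"),  # placeholder credentials
    verify=False,
)
resp.raise_for_status()
```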
Figure 8. NSX for vSphere North–South Routing. User workloads on VXLAN 5001 in the compute transport zone, which spans the compute and edge clusters, reach the external network/Internet through NSX Edge devices in the edge cluster, which connect to the external VLAN via the leaf-spine fabric.
vRealize Automation
vRealize Automation empowers IT to accelerate the delivery and ongoing management of personalized, business-relevant infrastructure, application, and custom services while improving overall IT efficiency. Policy-based governance and logical application modeling ensure that multivendor, multicloud services are delivered at the right size and service level for the business need. Full life cycle management ensures that resources are maintained at peak operating efficiency. And release automation enables multiple application deployments to be kept in sync through the development and deployment process.

vRealize Automation provides the portal to request services such as virtual machines. This configuration utilizes the distributed architecture of vRealize Automation and uses NSX for vSphere to create a highly available environment by load-balancing multiple instances of the vRealize Automation components.

Load-Balanced vRealize Automation Configuration
Figure 9. NSX for vSphere Load Balancer Configuration for vRealize Automation. A single NSX Edge (internal 192.168.21.1, vSphere 10.155.168.76, external 10.155.170.152) bridges the external and vSphere VLANs and load-balances the distributed components: the vRealize Automation appliances on 10.155.170.10, the IaaS Web servers on 10.155.170.11, the vRealize Automation managers on 10.155.170.12, and the vRealize Orchestrator appliances on 10.155.170.13. The vSphere agents, DEMs, and Microsoft SQL server sit behind the same edge without load-balanced addresses.
To achieve the architecture shown in Figure 9, we deploy a single NSX Edge device in HA mode. It is configured to load-balance vRealize Automation appliance, vRealize Automation manager, vRealize Orchestrator, and Web traffic. Because we are load-balancing Web servers that make requests routed back to themselves, a registry setting that disables Windows loopback checking is created; see VMware Knowledge Base article 2053365 for more information, and the sketch below for the change itself.

vRealize Automation Appliances
vRealize Automation is distributed as a prepackaged appliance in OVA format. For increased redundancy, two of these appliances are deployed and configured for clustering; the internal PostgreSQL database is also clustered. The NSX Edge device shown in Figure 9 is configured to load-balance the traffic to the vRealize Automation appliances. vSphere DRS rules ensure that the vRealize Automation appliances run on different hosts.

vRealize Automation Infrastructure as a Service Web Servers
The vRealize Automation infrastructure as a service (IaaS) Web servers utilize Microsoft Internet Information Services (IIS) on Windows. For redundancy, two IaaS Web servers are deployed. Both are active and are load balanced by the NSX Edge device shown in Figure 9. The model manager data is deployed to the first IaaS Web server only.
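A sketch of the loopback-check registry change mentioned above, applied on each IaaS Web server. The key and value names are the standard Windows setting; see KB 2053365 for the authoritative procedure.

```python
import winreg  # Windows only; run on each IaaS Web server

# Disable loopback checking so load-balanced IIS sites can call
# themselves through the virtual IP (per VMware KB 2053365).
key = winreg.CreateKey(
    winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\CurrentControlSet\Control\Lsa")
winreg.SetValueEx(key, "DisableLoopbackCheck", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)
```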
vRealize Automation Managers
The vRealize Automation managers utilize IIS on Windows. For redundancy, two managers are deployed: one active and one passive. The NSX Edge device shown in Figure 9 is configured to load-balance the traffic, but only the currently active manager is enabled in the load balancer pool. During an outage, manual steps are taken to make the passive server active and to update the load balancer configuration to use the now-active server.

Distributed Execution Managers
The distributed execution manager (DEM) runs on Windows. For redundancy, two virtual machines are deployed, each configured with three execution managers. The DEM does not support a load-balanced configuration, but deploying two virtual machines with three DEM instances each provides redundancy of the service.

vSphere Agent
The vSphere agent runs on Windows. For redundancy, two virtual machines are deployed, each configured with the same vSphere agent name. The vSphere agent does not support a load-balanced configuration, but deploying two virtual machines configured with the same agent name provides redundancy.

| Attribute | Specification |
|-----------|---------------|
| Number of CPUs | Two |
| Processor type | VMware virtual CPU |
| Memory | 4GB (6GB for DEMs) |
| Number of network adapters | One |
| Network adapter type | VMXNET3 |
| Number of disks | One 50GB (C:\) – VMDK |
| Operating system | Windows Server 2012 R2 |

Table 14. Recommended Minimum Windows Server Configuration
Monitoring
Monitoring the performance, capacity, health, and logs of any environment is critical. vRealize Operations together with vRealize Log Insight provides a unified management solution for performance management, capacity optimization, and real-time log analytics. Predictive analytics leverages both structured and unstructured data to enable proactive issue avoidance and faster problem resolution. The solution extends intelligent operations management beyond vSphere to include operating systems, physical servers, and storage and networking hardware, and it is supported by a broad marketplace of extensions for third-party tools.

vRealize Operations
vRealize Operations provides the operations dashboards, performance analytics, and capacity optimization capabilities needed to gain comprehensive visibility, proactively ensure service levels, and manage capacity in dynamic virtual and cloud environments. vRealize Operations is deployed as a virtual appliance distributed in OVA format. In this architecture, vRealize Operations is deployed on an NSX virtual switch. Four vRealize Operations appliances are deployed: the first is configured as the master node, the second as the master replica, and the last two as data nodes. The four appliances access the vSphere management VLAN via the NSX Edge device configured with either static or dynamic routing. The NSX Edge device also load-balances the four virtual appliances on port 443, providing access to the vRealize Operations cluster via a single FQDN.
Figure 10. vRealize Operations. The four appliances reside on an NSX virtual switch behind an NSX Edge that connects to the vSphere and external VLANs and presents a single load-balanced IP for the cluster.
To ensure a complete picture of how the environment is running, vRealize Operations is configured to monitor the management, edge, and compute vCenter Server instances. Additionally, the NSX for vSphere and vRealize Automation management packs are installed and configured to provide insight into the virtualized networking and automated provisioning environments.
Figure 11. vRealize Operations Dashboard
vRealize Operations requires updates to the default monitoring settings for most organizations. For more information on how to customize vRealize Operations for a specific environment, see the vRealize Operations documentation.

vRealize Log Insight
vRealize Log Insight provides in-depth log analysis in an easy-to-query Web interface. It collects syslog data from ESXi hosts or any other server or device that supports syslog. There is also an installable agent for Windows and Linux that enables the collection of event logs and custom logs, such as vCenter Server and vRealize Automation server log files. vRealize Log Insight is deployed as a virtual appliance distributed in OVA format. In this architecture, vRealize Log Insight is deployed on an NSX virtual switch. Three vRealize Log Insight appliances are deployed: the first is configured as the master node, and the other two are configured as worker nodes. The appliances access the vSphere management VLAN via the NSX Edge device configured with either static or dynamic routing. vRealize Log Insight includes its own load balancer, which is configured to provide a single IP address for access.
Figure 12. vRealize Log Insight. The three appliances reside on an NSX virtual switch behind an NSX Edge that connects to the vSphere and external VLANs; the cluster is reached through the load-balanced IP 10.155.170.15.
All appliances and ESXi hosts are configured to send their syslog data to the vRealize Log Insight load-balanced IP address. Additionally, the vRealize Automation, NSX for vSphere, and vRealize Operations content packs are installed and configured to create dashboards that make it easy to monitor the entire environment.
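Pointing every ESXi host at the load-balanced syslog address is easily scripted. A pyVmomi sketch follows; the protocol, port, and address are assumptions (the IP matches Figure 12), and host lookup and connection handling are omitted.

```python
from pyVmomi import vim

# Assumed vRealize Log Insight load-balanced syslog target.
LOG_INSIGHT_TARGET = "udp://10.155.170.15:514"

def point_syslog_at_log_insight(host: vim.HostSystem):
    """Set the ESXi Syslog.global.logHost advanced option on one host."""
    opt_mgr = host.configManager.advancedOption
    opt_mgr.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="Syslog.global.logHost",
                               value=LOG_INSIGHT_TARGET)
    ])
```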
Figure 13. vRealize Log Insight Dashboard
SDDC Operational Configuration
When all components have been installed, they must be brought together to enable the creation of blueprints and the provisioning of services by authorized users. Unifying the components to operate as a single solution requires the following configuration steps.
NSX for vSphere Configuration
First, we provision the common network resources for use within vRealize Automation. An NSX Edge device is created in the edge cluster for north–south routing, and the OSPF dynamic routing protocol is configured between the NSX Edge device and the external physical switches. NSX virtual switches are precreated for use in vRealize Automation single-machine blueprints; they are connected to NSX Edge devices to provide routing and load-balancing services. vRealize Automation can dynamically create NSX virtual switches in multimachine blueprints.

Tenants
A tenant is an organizational unit in a vRealize Automation deployment. A tenant can represent the entire organization or specific business units. A default vsphere.local tenant is created during the installation; this tenant is used for administration purposes. A tenant for the IT administrators is created and pointed to the Microsoft Active Directory environment for authentication.
Figure 14. vRealize Automation Tenants
Endpoints
Endpoints are the infrastructure sources that vRealize Automation consumes. In vCloud Suite, and in this architecture, the endpoint is vCenter Server: specifically, the vCenter Server instance that manages the edge and compute clusters.
Figure 15. vRealize Automation Endpoints
Fabric Groups
Fabric groups are groups of compute resources that the endpoints discover; they define the organization of virtualized compute resources. In most single-site environments, a single fabric group is created that contains all nonmanagement clusters.
Figure 16. vRealize Automation Fabric Groups
Business Groups
Business groups define users and machine prefixes and are later used to grant access to a share of resources. Users assigned the group manager role can create blueprints and see all machines created in the group, support users can request and manage machines on behalf of other users in the group, and users can be entitled to request blueprints from the catalog. In most environments, business groups are established for the departments or business units of an organization.
Figure 17. vRealize Automation Business Groups
Network Profiles
Network profiles define the type of connection (external, private, NAT, or routed) that a resource has. NAT and routed profiles require an external profile. External profiles connect resources to an existing network.

Reservation Policies
Reservation policies enable a user to group one or more reservations into a policy that can be applied to a blueprint. Multiple reservations can be added to a reservation policy, but a reservation can belong to only one policy. A single reservation policy can be assigned to more than one blueprint, but a blueprint can have only one reservation policy.

Reservations
A virtual reservation is a share of the memory, CPU, networking, and storage resources of one compute resource, allocated to a particular business group. To provision virtual machines, a business group must have at least one reservation on a virtual compute resource. Each reservation is for one business group only, but a business group can have multiple reservations on a single compute resource or on compute resources of different types.
Figure 18. vRealize Automation Reservations
Blueprints
A machine blueprint is the complete specification for a virtual, cloud, or physical machine. Blueprints determine a machine's attributes, the manner in which it is provisioned, and its policy and management settings. In this architecture, blueprints are either vSphere based (single machine) or multimachine; a multimachine blueprint comprises one or more vSphere blueprints and provisions and manages them together as a single entity. Multimachine blueprints also enable the dynamic provisioning of networks through the use of network profiles.
Figure 19. vRealize Automation Blueprints
After blueprints have been created and published to the catalog, authorized users can log in to the vRealize Automation portal and request these resources.
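Requests do not have to come through the portal UI; the vRealize Automation catalog service also exposes a REST interface. A hedged sketch of listing the catalog items a user can see follows; the portal address, tenant name, and credentials are placeholders, and the endpoint paths should be checked against the API reference for the deployed release.

```python
import requests

VRA = "https://vra.example.local"   # assumed portal address
TENANT = "it-tenant"                # assumed tenant name

# 1. Obtain a bearer token from the identity service.
token = requests.post(f"{VRA}/identity/api/tokens", json={
    "username": "user@example.local", "password": "secret", "tenant": TENANT,
}, verify=False).json()["id"]

headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}

# 2. List catalog items visible to the user.
items = requests.get(f"{VRA}/catalog-service/api/consumer/catalogItems",
                     headers=headers, verify=False).json()["content"]
for item in items:
    print(item["id"], item["name"])

# A POST to /catalog-service/api/consumer/requests with the chosen item's
# ID then submits the request (payload schema per the vRA API reference).
```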
Figure 20. vRealize Automation Service Catalog
Conclusion
An automation cloud based on VMware technology enables a more agile and predictable IT response to business needs. The reference architecture presented in this paper describes the implementation of a software-defined data center that uses the latest VMware components to create a single-site IT automation cloud. Customers following this architecture can be confident that they have the best possible supported configuration, one that is fully backed by the VMware Validated Design process. For a guided tutorial with step-by-step instructions for deploying this configuration, see http://featurewalkthrough.vmware.com/#!/defining-the-sddc/itac.
The VMware Validated Design Team
Blaine Christian, Scott Faulkner, Phil Weiss, Christian Elsen, Nik Gibson, Randy Jones, William Lam, Nick Marshall, Paudie O'Riordan, Kamu Wanguhu, Steven Ching, Michelle Gress, Christine Zak, Yu-Shen Ng, Bob Perugini, Justin King, Karthik Narayan, Sunny Bhatia, Mandar Dhamankar, Olga Efremov, David Gress, Kristopher Inglis, Rama Maram, Hari Krishna Meka, Arvind Patil, Venkat Rangarajan, Lakshmanan Shanmugam, Todor Spasov, Georgi Staykov, Antony Stefanov, Kevin Teng, Todor Todorov, Tuan Truong, Randy Tung, Shivaprasad Adampalli Venkateshappa, Lap Vong, Zhuangqian Zhang, and Mike Brown
About the Author
Mike Brown is a senior technical marketing architect in the Integrated Systems Business Unit. Mike's focus is on reference architectures for VMware vCloud Suite and the software-defined data center. He has multiple industry certifications, including VMware Certified Design Expert (VCDX), VMware Certified Advanced Professional – Cloud, and VMware Certified Professional – Network Virtualization. Follow Mike on the vSphere Blog and on Twitter @vMikeBrown.
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com Copyright © 2015 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW-RA-Sngl-Ste-Auto-Cloud-USLET-101 Docsource: OIC-FP-1378