Reference architecture on 500 server data center with HPE FlexFabric

Technical white paper


Contents
Introduction
Design guidelines
  1-tier design for blade enclosure
  Spine and leaf rack mount servers design details
Key considerations for current scenario
Details about the network
  Internal server zone
  Demilitarized zone
Security infrastructure
Proposed product details
  Simplifying the data center network core
  Access layer/edge switches
    HPE 5900AF and 5920AF Switch Series
  Physical security devices
    Internal firewall
  Network virtualization and F5
    Business continuity and disaster recovery
    Enhancing application security with F5
Indicative bill of quantity of suggested architecture
  Network infra
  Security infra
Additional links


Introduction
This document is intended for technology decision-makers, solution architects, and other experts tasked with improving data center networking. It can serve as a baseline for network planning and design projects where the server count is around 500 physical servers, distributed among blade and rack form factors. This document frequently references technology trends in the data center that have been, and are being, driven by virtualization standards. It also introduces issues that confront data center architects in this fast-paced, results-driven, and security-minded industry.

This reference architecture on a medium-sized data center with HPE FlexFabric is complemented by other documents which, when referenced together, provide a clear view of today's data centers and HPE solutions within the data center:
• HPE FlexFabric Reference Architecture Guide: Focuses on the current technologies, topologies, and architectures used in today's data centers
• HPE FlexFabric Reference Architecture-Data Center Trends: Describes the trends and business technology drivers that are changing the shape of data centers
• HPE FlexFabric Reference Architecture-Building data center networks using HPE and F5: Incorporates the dual-vendor strategy, specifically with F5
• HPE FlexFabric Reference Architecture-500 Server: Provides reference architecture design examples for a 500-physical-server data center

The primary driver for today's enterprise data center is efficiently deploying resources to support business applications. Consolidation and distribution are major factors used to address this goal. In fact, enterprise data center network architects and managers are now expected to build networks that can concurrently consolidate and geographically distribute resources. These include physical and virtual application servers, data storage, management platforms, and knowledge workers. This evolution did not happen overnight. It has been fueled by the accelerating needs of businesses to be more agile, to do more with less, and to increase their IT efficiency. With HPE's approach you can reduce the complexity of data center networks, especially now that the primary requirement is virtual networking instances.

Design guidelines
Flexible network design is at the core of building data center networking solutions. HPE Networking platforms are built using Open Standards technologies. They are built to interoperate with the entire range of third-party server interfaces and standards-based switches and routers across Layer 2 (L2), Layer 3 (L3), IPv4, IPv6, MPLS, and VPLS protocol deployments. This compatibility provides cohesion with existing network infrastructure and the flexibility to integrate third-party capabilities. Our flexible solutions follow multiple designs to deliver the right performance and the maximum cost-benefit ratio. They also deliver high-performance, server-to-server connectivity at the server edge. These solutions can directly interconnect hundreds of virtual machines at the edge of the network, eliminating unnecessary network hops, reducing latency, and optimizing performance for high-volume, server-to-server traffic flows.

For traditional ToR server-edge installations, HPE leaf switches can be deployed with IRF virtualization technology to provide high-throughput, low-latency server-to-server connectivity at the server edge. With IRF, multiple switches can be virtualized and logically combined to enable low-latency, ultra-resilient, virtual switching fabrics comprising hundreds of 10GbE or 40GbE switch ports—all managed via a single IP address. For blade chassis, and in the core of the network, HPE spine switches can be deployed in conjunction with IRF to completely eliminate the aggregation layer found in conventional three-tier data center networks. IRF can provide rapid failover to dramatically improve network utilization and performance in the network spine layer.


A collapsed, 1-tier/2-tier (spine and leaf model) data center network architecture enables optimized server-to-server performance; requires significantly fewer connections and port counts (no aggregation layer); streamlines provisioning and network management; and reduces capital expense and energy consumption. In the example scenario described in this document, we take both approaches, using a 1-tier blade server solution combined with a 2-tier rack server solution.
• All blade servers, housed in HPE c7000 blade enclosures, connect to HPE FlexFabric 7900 switches acting as the spine layer. This creates very low-latency connectivity between the blade enclosure and the network. These 1-tier solutions address the dramatically increasing network performance requirements for high levels of East-West traffic, whether for virtualization and vMotion/Live Migration, for moving virtual servers, or for horizontal server-to-server traffic flows that demand high-volume, machine-to-machine traffic.
• The rack mount servers connect to HPE FlexFabric 5900 Switch Series top-of-rack (ToR) leaf switches with 10GbE connectivity. Intra-rack traffic is switched locally at the leaf switch level; for inter-rack traffic, the leaf switch connects to the spine through multiple 40GbE links.

Proper buffering design of a switch can impact performance significantly. In this reference architecture, we utilize a deep-buffered spine layer and a light-buffered leaf layer. The light-buffer, cut-through leaf switch provides wire-speed East-West traffic between rack servers, which gives a significant performance boost for applications. In the event of a buffer overrun caused by misbehaving applications, the effect is confined to the local leaf switch and does not impact the spine layer. On the other hand, the deep-buffered spine switches can absorb bursts on uplink ports and provide good performance for North-South traffic.

1-tier design for blade enclosure
When looking at flattening a network and providing substantial support for virtualization and East-West traffic flows, blade server 1-tier topologies provide many advantages. Server blade enclosures allow for substantial compute density per rack and row, and HPE has optimized the HPE c-Class BladeSystem server portfolio to support the vision and reality of virtualization. This network topology combines high-performance networking with simplicity. It allows flexibility in networking by supporting Intelligent Resilient Fabric (IRF) and a variety of interconnect options; this specific solution utilizes the 40GbE performance benefit of HPE Virtual Connect modules (FlexFabric 20/40-F8 Module), which requires less cabling overhead. Also, by creating a single-tier design with multiple links, we can achieve a wider failure domain along with very low network latency. From the platform virtualization angle, L2 deployments allow VMs in the same L2 network to freely migrate without having to change IP addresses. This solution is also capable of using L3 overlay solutions such as Virtual Extensible LAN (VXLAN), Network Virtualization using Generic Routing Encapsulation (NVGRE), or Shortest Path Bridging (SPB). Additionally, VMware® now officially supports VM migration over L3 routed networks in vSphere 6.0.

The specific solution detailed here consists of two HPE FlexFabric 7910 Switches at the core. These switches are IRF'ed together to provide a highly resilient core that allows LAGs from the servers to different modules in different chassis. The 7910 switches are low-profile modular switches taking up only 5RU each. Each 7910 should be loaded with redundant management/fabric modules, redundant power supplies, and redundant fan trays to achieve proper device-level redundancy. The configuration below utilizes 22 c7000 blade enclosures, each with 16 blade servers, for a total of 352 physical servers. A deployment of this size using 7910 switches will use 44 40GbE server uplink ports (one from each Virtual Connect module mounted in an enclosure, two from each enclosure) to the spine layer, which provides for a 2:1 oversubscription ratio. In this specific deployment, each 7910 would be equipped with 36 40GbE ports.


Figure 1. Physical rack layout

Spine and leaf rack mount servers design details
Similar to the 1-tier design, 2-tier topologies, commonly referred to as spine and leaf architectures, can provide a balanced network that can be optimized for virtualization while still providing great scale and flexibility, so the data center can adapt to changing application requirements. Spine and leaf designs are also used when the physical cabling plant cannot support a 1-tier design. A leaf and spine design can provide ToR redundancy and resiliency using IRF. This design also uses 10GbE from the ToR to the servers and aggregated 40GbE links to the spine. A 2-tier design allows growth well beyond what a single chassis can support. It can also reduce the total port count in the chassis, provided it meets the customer's oversubscription ratio. Cable complexity is minimized since all the servers terminate at local leaf switches.

The solution here uses the HPE 5900AF-48XG-4QSFP+ leaf switch as the HPE DC ToR switch, providing 10GbE and 40GbE interface flexibility. This configuration positions 20 rack servers in six separate racks for a total of 120 rack servers. Each rack would be paired with another rack to make three rack pairs. The 40 servers in each pair of racks would be connected to a pair of ToR 5900 leaf switches, combined using IRF. These IRF'ed ToR switches would then be directly connected to the core over 40GbE connectivity. This design provides added resiliency and bandwidth to each server. Deployments will vary depending on oversubscription requirements; however, this deployment typically would utilize two 40GbE uplinks from each pair of leaf switches (spread across two racks) to the spine layer, providing a 5:1 oversubscription ratio with active/active uplinks to the spine layer.
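The 5:1 figure can be sanity-checked with a small calculation. The sketch below is illustrative only: it assumes each of the 40 servers in a rack pair contributes 10GbE of active bandwidth toward the pair's two 40GbE spine uplinks, as described above; the helper function is hypothetical and not part of any HPE tooling.

```python
# Minimal sketch: uplink oversubscription for one IRF'ed leaf pair.
# Assumes 40 rack servers at 10GbE of active bandwidth each and two
# 40GbE uplinks to the spine, per the design text above.

def oversubscription(servers: int, server_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Ratio of server-facing bandwidth to spine-facing bandwidth."""
    downlink = servers * server_gbps
    uplink = uplinks * uplink_gbps
    return downlink / uplink

ratio = oversubscription(servers=40, server_gbps=10, uplinks=2, uplink_gbps=40)
print(f"Leaf pair oversubscription: {ratio:.0f}:1")  # -> 5:1
```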

Key considerations for current scenario
As described earlier, this document details a mixed 1-tier and 2-tier solution designed for a realistic data center scenario consisting of a variety of blade and rack servers. The following parameters were applied when designing the network:
• At the data center: 500 physical production servers in a mixed environment, along with an additional 50 (10%) demilitarized zone (DMZ) servers. This solution uses the distribution shown in the following table for compute deployment.

Server distribution matrix

Description | Percentage | Count
Total servers for solution | 100% | 500
Total server count at DMZ (Blade) | ~10% | 48
Total servers at DC zone (Rack+Blade) | ~90% | 452
Blade server count at DC zone | ~75% | 352
Rack server count at DC zone | ~25% | 120

• All rack mount servers use dual 10GbE connectivity (one link each to separate leaf switches).
• All c7000 blade enclosures connect to the network through a number of 40GbE ports (two links to each spine switch from each rack). Multiple 40GbE links would be concentrated at the spine layer.
• This specific solution does not include storage options; hence all Fibre Channel connectivity and associated details are not covered in this document.
• For the security infrastructure we are utilizing 2X HPE TippingPoint S8005 firewalls in high-availability mode as the internal firewall, with two 10GbE links connecting to the spine switches and 2X 1GbE ports for the internal router interface. This solution utilizes the internal firewall to protect the spine switches from MPLS and internal access traffic.
• A perimeter firewall is not included in this solution; a third-party solution is assumed. By using different firewalls based on different software images (described as a component of a "defense in depth" security strategy), the proposed solution can achieve a higher level of security. This solution also does not consider virtual security and multitenancy and assumes they would be controlled at the virtualization layer (NSX/DCN, etc.).
• At the DMZ, there would be a pair of HPE 5920 Switches with 2X 10GbE SR uplinks per switch connecting into the perimeter firewall. This solution requires sufficient ports at the perimeter firewall to terminate the DMZ switches.
• This solution uses standard power consumption ratings; each rack would carry a maximum 12 kW load, and based on this assumption the maximum equipment per rack would be as below (a rough check of this budget is sketched after this list):
– Fully populated c7000 blade enclosures per rack (16 blades per enclosure)—2
– Rack servers per rack—20
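As a rough illustration of the 12 kW rack budget above, the sketch below totals an assumed per-device power draw for the two rack types. The wattage figures are placeholder assumptions for the example, not HPE specifications; substitute data-sheet or measured values.

```python
# Illustrative rack power check against the 12 kW per-rack budget above.
# Per-device wattages below are assumptions for the sketch, not HPE specs.

RACK_BUDGET_W = 12_000

racks = {
    "blade rack (2x c7000, fully populated)": 2 * 5_500,            # assumed ~5.5 kW per enclosure
    "rack-server rack (20 servers + 2 ToR)":  20 * 500 + 2 * 350,   # assumed per-device draw
}

for name, load_w in racks.items():
    status = "within" if load_w <= RACK_BUDGET_W else "OVER"
    print(f"{name}: {load_w / 1000:.1f} kW ({status} the 12 kW budget)")
```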


Details about the network

Internal server zone

Rack mount server
This solution utilizes 20 rack servers per rack in six separate racks for a total of 120 rack servers. Each rack would be paired with another rack to form three rack pairs. The 40 servers in each pair of racks would be connected to a pair of ToR 5900 leaf switches, combined using IRF. These IRF'ed ToR switches would then be directly connected to the core over 40GbE connectivity. The 10GbE port count calculation is based on the following assumptions:
• 120 rack servers, each with dual 10GbE uplinks in redundant mode
• Six full racks for hosting the rack servers
• IRF between the paired ToR switches over multimode fiber
• Connectivity between leaf switch and server over multimode fiber

Leaf switch port calculation for the internal zone

Rack # | Leaf switch (ToR) (5900AF-48XG-4QSFP+) | Multimode transceivers for IRF (2X 10GbE) | 40GbE SR4 uplink to spine (multimode) | 10GbE multimode downlink transceivers (20 servers per rack)
1 | 1 | 2 | 1 | 40
2 | 1 | 2 | 1 | 40
3 | 1 | 2 | 1 | 40
4 | 1 | 2 | 1 | 40
5 | 1 | 2 | 1 | 40
6 | 1 | 2 | 1 | 40
Total | 6 | 12 | 6 | 240
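For reference, a short sketch that reproduces the transceiver totals in the table above from its stated assumptions (six racks, 20 dual-homed 10GbE servers per rack, 2X 10GbE IRF transceivers and one 40GbE spine uplink per leaf switch). It is a check on the arithmetic only, not a sizing tool.

```python
# Sketch reproducing the leaf-switch transceiver totals from the table above.
RACKS = 6
SERVERS_PER_RACK = 20
NICS_PER_SERVER = 2           # dual 10GbE per rack server
IRF_XCVR_PER_SWITCH = 2       # 2x 10GbE multimode for IRF
SPINE_UPLINKS_PER_SWITCH = 1  # 40GbE SR4

totals = {
    "leaf switches":          RACKS,
    "10GbE IRF transceivers": RACKS * IRF_XCVR_PER_SWITCH,
    "40GbE spine uplinks":    RACKS * SPINE_UPLINKS_PER_SWITCH,
    "10GbE server downlinks": RACKS * SERVERS_PER_RACK * NICS_PER_SERVER,
}

for item, count in totals.items():
    print(f"{item}: {count}")
# Expected: 6, 12, 6, 240 -- matching the table totals.
```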

c7000 blade enclosure
In the blade server placement scenario, intra-enclosure server communication is handled through the enclosure midplane. We have removed the ToR layer and consider direct connectivity between the FlexFabric 20/40 modules and the data center spine switches. Each enclosure connects directly to the spine switches through 1X 40GbE link per module (2X 40GbE links per enclosure). The 40GbE port count calculation is based on the following assumptions:
• Fully populated enclosures with 16 blades each
• 11 fully populated racks for hosting c7000 enclosures
• 22 blade enclosures in total, hosting 352 servers
• At the blade enclosure, the FlexConnect/blade switch would be part of the server BoM and is not considered in this scope

Internal server zone

#Rack | c7000 enclosures | Half-height servers | 40GbE to spine switches 1 & 2 (per rack)
Rack-1 | 2 | 32 | 4
Rack-2 | 2 | 32 | 4
... | ... | ... | ...
Rack-11 | 2 | 32 | 4


At the spine switch level, we are considering some additional ports for connecting firewalls.

Spine switch port calculation
Assumptions: 16 half-height servers per c7000 enclosure and two enclosures per rack for internal servers; 2X 40GbE uplinks per enclosure (one to each spine switch); and 1X 40GbE uplink from each leaf switch in the rack mount server racks to each spine switch.

40GbE ports | Count
40GbE from blade enclosures to spine switches 1 & 2 | 44
40GbE links from ToR switches (rack servers) to spine switches 1 & 2 | 6
IRF 40GbE between the two spine switches | 4
Total | 54
Ports to be utilized per spine switch | 27

10GbE ports | Count
Uplink to perimeter firewall (10GbE) | 4
10GbE from internal firewall (provisioned) | 4
10GbE from external firewall | 2
Total | 10
Ports to be utilized per spine switch | 5

Figure 2. Architecture diagram of mid-sized data center
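The 40GbE spine totals above can be re-derived in a few lines. The sketch below simply recomputes the 54 ports (27 per spine switch) from the enclosure, leaf, and IRF counts stated in this section; it is a consistency check, not a design tool.

```python
# Sketch reproducing the spine 40GbE port totals from the table above.
ENCLOSURES = 22
UPLINKS_PER_ENCLOSURE = 2  # one 40GbE per Virtual Connect module
LEAF_SWITCHES = 6
UPLINKS_PER_LEAF = 1       # one 40GbE per leaf switch to each spine
IRF_LINKS = 4              # 40GbE IRF links between the two spine switches

total_40g = (ENCLOSURES * UPLINKS_PER_ENCLOSURE
             + LEAF_SWITCHES * UPLINKS_PER_LEAF
             + IRF_LINKS)
print(total_40g, total_40g // 2)  # -> 54 total, 27 per spine switch
```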


Figure 3. Architectural view of mid-sized data center


Demilitarized zone

c7000 blade enclosure
In the DMZ, server connectivity is 10GbE. Uplinks connect to the perimeter firewall over 10GbE fiber optic connectivity. The 10GbE port count calculation is based on the following assumptions:
• Fully populated enclosures with 16 blades each
• Two fully populated racks for the DMZ
• Three blade enclosures in total, hosting 48 blades
• At the blade enclosure, the FlexConnect/blade switch would be part of the server BoM and is not considered in this scope

DMZ server zone

#Rack | c7000 enclosures | Half-height servers | 10GbE to DMZ switches 1 & 2 (per rack)
Rack-1 | 2 | 32 | 8
Rack-2 | 1 | 16 | 4

DMZ switch port calculation

# | DMZ switch (ToR) (HPE 5920AF-24XG Switch) | Multimode transceivers for IRF (2X 10GbE) | 10GbE SR uplinks to perimeter firewall | 10GbE multimode downlink transceivers (4 per enclosure)
1 | 1 | 2 | 2 | 6
2 | 1 | 2 | 2 | 6
Total | 2 | 4 | 4 | 12

Details about the network
The suggested infrastructure has a virtualization layer that provides the ability to operate multiple servers concurrently on top of a single server hardware platform, sharing CPU, disk, interface, and network services. In this case, each virtual server operates as an independent entity on a single physical server and requires a low-latency network. In order to support high-performance applications (such as ERP or SRM) on virtual servers, the solution should operate on collapsed L2 networks with L3 routing in the core or aggregation layers. In addition, maintaining a larger L2 domain provides the maximum flexibility to allow VM mobility with technologies like vMotion or live migration. To support the virtualized environment, the key considerations for the network architecture are:
• 2-tier designs should use leaf switches, which connect directly to each core switch
• All connectivity between blade enclosures and leaf switches to the spine switches should be through redundant 40GbE or 100GbE connectivity


These considerations help to provide redundancy and efficient East-West traffic flows.

• Router for data center exterior layer
This layer consists of redundant gateway routers for terminating the Internet. The routers can be configured as a single logical router with redundancy through VRRP/HSRP. The HPE MSR3000 Router Series is made up of high-performance services WAN routers that are ideal for the small to medium-sized Internet edge. The MSR3000 routers use the latest multicore CPUs, offer GbE switching, provide an enhanced PCI bus, and ship with the latest version of HPE Comware software to help ensure high performance with concurrent services. The MSR3000 series provides a full-featured, resilient routing platform, including IPv6 and MPLS, with up to 5 Mpps forwarding capacity and 3.3 Gbps of IPsec VPN encrypted throughput.

• Router for data center internal layer (for MPLS WAN interface)
This layer consists of redundant WAN routers for terminating the MPLS network for internal communication. The routers can be configured as a single logical router with redundancy through VRRP/HSRP. The HPE HSR6600 Router Series is made up of high-performance services WAN routers that are ideal for small to medium-sized campus WAN edge and aggregation deployments. These routers are built with a multicore distributed processing architecture that scales up to 420 Mpps forwarding and up to 2 Tbps switch capacity. They deliver robust routing (MPLS, IPv4, IPv6, dynamic routing, nested QoS), security (stateful firewall, IPsec/Dynamic VPN, DoS protection, NAT), full L2 switching, traffic analysis capabilities, and high-density 10GbE (and 40/100GbE-ready) WAN interface options, all integrated in a single powerful routing platform. In addition, the HSR6800 Router Series is the first service aggregation router in the industry to support system virtualization by taking advantage of innovative IRF technology from HPE.

Security infrastructure
• Data center exterior layer
At the DC exterior layer, we are proposing a third-party firewall (Check Point, Fortinet, Palo Alto, etc.) for protecting the internal zone as well as the demilitarized zone.
• Data center internal firewall
For securing data center servers from the MPLS WAN, we are proposing 2X S8005 firewalls in HA mode. HPE TippingPoint Next-Generation Firewall (NGFW) enables you to take back control and proactively improve your organization's security posture, without compromising network availability. This product line is developed on the proven HPE TippingPoint NGIPS engine and delivers reliability, security effectiveness, ease of use, and application visibility and control to address today's advanced threats.
• Management platform
For managing the entire infrastructure through an element manager, we are proposing IMC with all required modules. The HPE Intelligent Management Center (IMC) Enterprise Software Platform is a standalone, comprehensive management solution that delivers next-generation, integrated, modular network management capabilities that efficiently meet the needs of advanced heterogeneous enterprise networks. It has a highly flexible and scalable deployment model, is modular in nature, and provides an eAPI library and third-party application interfaces. HPE IMC software is one of the first management tools to integrate management and monitoring of both virtual and physical networks, and it supports VMware, Hyper-V, and KVM. It also supports automatic tracking of the network access port of VMs.


Proposed product details

Simplifying the data center network core
These data center core/distribution switches provide unprecedented levels of performance, scalability, HA, density, and flexible deployment options validated by independent testing. They drive down data center operation costs while enabling new service levels and delivering the resiliency and low latency required for mission-critical networking. The HPE FlexFabric 7900 is HPE's small form factor, low-latency switching platform for next-generation software-defined data centers (SDDC). The 7900 delivers unprecedented levels of performance, buffering, scale, and availability with high-density 10GbE, 40GbE, and 100GbE interfaces. The 7900 features a next-generation CLOS-based architecture. This distributed architecture is perfectly suited for the data center, where large buffers and consistent performance are standard requirements. At the core of the platform is a three-stage CLOS fabric with several benefits. The CLOS architecture provides an active midplane, which delivers an N+1 redundant switch fabric and supports hot-swapping of failed modules. It also provides N*N paths for traffic to cross the fabric, delivering faster performance and reducing the chance of hash collisions. The CLOS fabric is cell based, which gives a better distribution of load across the fabric, and Virtual Output Queue technology removes the risk of Head of Line blocking (HoLB) causing congestion across the fabric.

Figure 4. Front view of 7910 switch

The switch supports full L2 and L3 features along with advanced data center features including TRILL, IRF, VXLAN, and Open Standards based programmability with OpenFlow support.

Key features and benefits:
• Nonblocking and lossless CLOS architecture
• Large L2 scaling with TRILL and HPE IRF
• VXLAN support for virtualized and cloud deployments
• SDN enabled with OpenFlow 1.3 support
• High 10GbE, 40GbE, and 100GbE density across a 9.6 Tbps switch fabric

For more information regarding the HPE FF 7900 Switch Series, visit the HPE 7900 product page.


Access layer/edge switches

HPE 5900AF and 5920AF Switch Series
With the increase in virtualized applications and server-to-server traffic, customers now require ToR switch innovations that will meet their needs for higher-performance server connectivity, convergence of Ethernet and storage traffic, capability to handle virtual environments, and ultra-low latency, all in a single device. The two models, the 5900AF and 5920AF switches, are ideally suited for deployment at the server access layer in large virtualized enterprise data centers. They are also designed for deployment at the core layer of data centers at medium-sized enterprises.

Figure 5. Front View of 5900 switch

The HPE 5900AF and 5920AF series usher in the arrival of the world's first virtualized server access network. These 10GbE ToR switches are built on open industry standards and set new benchmarks for performance, low latency, reliability, scalability, and greener data centers with a simpler network architecture. These switches deliver high 10GbE port density, deep packet buffers, and ultra-low latency (~1 microsecond) performance, with a choice of front-to-back (port side to power side) or back-to-front (power side to port side) airflow. These switches also future-proof network investments with full L2 and L3, IPv4 and IPv6 dual-stack support. The HPE 5900AF and 5920AF ToR series are fully ready for virtualized data center and cloud computing environments.

Key features and benefits
• Industry-leading HPE IRF technology radically simplifies the architecture of server access networks and enables massive scalability—up to 300 percent higher scalability compared to other ToR products in the market
• Industry's only support for multiple use cases in a single ToR switch—up to 50 percent device reduction; ultra-low-latency (~1 microsecond) IP switching with all features enabled; server-edge storage and Ethernet convergence with DCB today and FCoE in the future
• Industry's only ToR switch in its class with IPv6 routing and IPv4/IPv6 dual-stack support for advanced future networks and future-proof investment
• TRILL support—combines the simplicity and flexibility of L2 switching with the stability, scalability, and rapid convergence capability of L3 routing
• 48-port 10GbE/1Gbps ToR options with 40GbE uplinks for high-performance and scalable networking with full L2/L3 features
• Lower OPEX and greener data centers with reversible airflow and advanced chassis power management
• Full L2 and L3, IPv4 and IPv6 dual-stack support
• Choice of front-to-back (port side to power side) or back-to-front (power side to port side) airflow with dual fan trays and redundant internal power supplies

For more information regarding the HPE 5900AF and 5920AF Switch Series, visit the HPE 5900, 5920, and 5930 product pages.

HPE data center routing
For more information regarding the HPE HSR6600 routers, visit the HPE HSR6600 router product page; for the MSR3000 routers, visit the MSR3000 product page.

Virtual Connect—the best of both worlds (for blade servers)
HPE Virtual Connect (VC) technology is a hardware-based solution that enables users to partition a 10GbE/40GbE (with HPE FlexFabric Virtual Switch) connection and control the bandwidth of each partition in increments of 100 Mb.


Table 1. Probable module configuration at c7000 blade enclosure

[Bay 1] | [Bay 2] | [Bay 3] | [Bay 4] | [Bay 5] | [Bay 6] | [Bay 7] | [Bay 8]
VC FlexFabric-20/40 F8 | VC FlexFabric-20/40 F8 | VC-FC | VC-FC | Empty | Empty | Empty | Empty

Administrators can configure a single BladeSystem 20/40GbE network port to represent multiple physical network interface controllers (NICs), also called FlexNICs, with a total bandwidth of 10/40 Gbps. These FlexNICs appear to the OS as discrete NICs, each with its own driver. While the FlexNICs share the same physical port, traffic flow for each one is isolated with its own MAC address and VLAN tags between the FlexNIC and the VC interconnect module. Using the VC interface, an administrator can set and control the transmit bandwidth available to each FlexNIC. Data center administrators can use HPE VC technology to aggregate multiple low-bandwidth connections into a single high-bandwidth connection. For example, VC allows partitioning of a single 20/40GbE connection into up to four lower-bandwidth connections, as required on the blade (a small validation sketch follows Figure 7 below).

Figure 6. Flex-20-40 logical

Figure 7. Connectivity diagram of a c7000 blade enclosure and server blade
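To make the FlexNIC partitioning described above concrete, the following hypothetical helper checks a bandwidth plan for a single VC physical port. The four-FlexNIC limit and the 100 Mb allocation increment come from the text; the function itself, its defaults, and the sample values are illustrative assumptions, not VC behavior.

```python
# Hypothetical helper: validate a FlexNIC bandwidth plan for one VC physical port.
# Assumes at most four FlexNICs per port and allocations in 100 Mb increments,
# per the text above; port capacity and sample values are illustrative only.

def validate_flexnic_plan(allocations_mbps, port_capacity_mbps=10_000):
    if len(allocations_mbps) > 4:
        raise ValueError("a VC port exposes at most four FlexNICs")
    for bw in allocations_mbps:
        if bw % 100 != 0:
            raise ValueError(f"{bw} Mb is not a 100 Mb increment")
    if sum(allocations_mbps) > port_capacity_mbps:
        raise ValueError("allocations exceed the physical port capacity")
    return True

# Example: a 2 Gbps / 8 Gbps split of a 10GbE port passes the checks.
print(validate_flexnic_plan([2_000, 8_000]))  # -> True
```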

VC management tools allow the administrator to assign unique MAC addresses to each of the FlexNICs. All network and storage connections can be defined at deployment and do not need to change if the servers are changed. The MAC address is assigned to a particular enclosure bay in the blade chassis. Once the server blade powers on, the connection profile for that bay is transferred to the server. If a physical server is changed, the MAC assignment for the bay remains unchanged, and a replacement server blade assumes the same connection characteristics as the original one.

Figure 8. Sample connectivity schema of blade enclosure

Physical security devices

Internal firewall
HPE TippingPoint Next-Generation Firewall (NGFW) enables you to take back control and proactively improve your organization's security posture, without compromising network availability. This product line is developed on the proven HPE TippingPoint Next-Generation Intrusion Prevention System (NGIPS) engine and delivers reliability, security effectiveness, ease of use, and application visibility and control to address today's advanced threats.

Key features and benefits
• Advanced security protection—with over 7,400 filters—blocks vulnerabilities without impacting network performance
• Gain broader visibility and granular control over authorized and unauthorized applications running on your network
• Enhanced and flexible policy setting reduces operational expenses and enables a consistent security posture across your HPE network security devices
• Industry-leading weekly DVLabs threat protection services offer protection from the latest threats and attacks (requires an additional subscription service; not part of this solution)
• IPv6-ready networking features including link aggregation, OSPF, RIP, BGP and multicast dynamic routing, and VLAN support
• Easy deployment in transparent or routed modes
• 2-node active-passive HA with state synchronization
• IPsec site-to-site and client-to-site VPN connectivity
• Role-based access control (RBAC) gives control over who can affect which parts of the device configuration


HPE Intelligent Management Center
HPE IMC unifies physical and virtual network management and helps IT overcome the challenges of administering the new virtual server edge. The solution provides a unified view into the virtual and physical network infrastructure that accelerates application and service delivery, simplifies operations and management, and boosts network availability. Capabilities include the following:
• Automatic discovery of VMs, virtual switches, and their relationships with the physical network
• VM and virtual switch resource management, including creation of virtual switches and port groups
• Automatic and transparent configuration of virtual and physical network infrastructure
• Unified performance and alarm monitoring of hosts, workloads, and virtual switches
• Topology views and status indicators for networks, workloads, and virtual switches
• Automatic reconfiguration of network policies as workloads migrate across the data center

IMC offers a proactive, dynamic, application-aware provisioning model with comprehensive solution integration to align with end-to-end IT operations. This paradigm shifts businesses to a more agile model by eliminating unnecessary steps in virtualizing business environments. Time to deployment is accelerated through the upfront definition of profiles with VM connectivity characteristics, which are filed in a library, rather than through an iterative manual process for defining network connectivity characteristics that cannot be leveraged and repurposed. These ready-to-use profiles allow for rapid deployment and follow the workload if it is moved, paused, and/or resumed.


Figure 9. HPE Intelligent Management Center

HPE IMC is next-generation management software that provides the data center operations team with a comprehensive platform integrating network technologies and delivering full fault, configuration, accounting, performance, and security management functionality. Built from the ground up to support the ITIL® operational center of excellence IT practices model, IMC's single-pane-of-glass management paradigm enables efficient end-to-end business management to address the stringent demands of today's mission-critical enterprise IT operations.

Configuration management—backup
Configuration management can be defined as the ability to control changes to the operating status of a managed device, as well as the ability to detect and/or prevent unauthorized changes in the operational status of a device. Maintaining an accurate inventory of last known hardware, software, and configuration information enhances this function.

Traffic analysis and capacity planning
HPE IMC NTA is a graphical network monitoring tool that utilizes industry-supported flow standards to provide real-time information about the top users and applications consuming network bandwidth. IMC NTA statistics help network administrators better understand how network bandwidth and resources are being used, as well as which source hosts carry the heaviest traffic. This information is invaluable in network planning, monitoring, optimizing, and troubleshooting—IMC NTA identifies network bottlenecks and applies corrective measures to help ensure efficient throughput.
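The kind of top-talker roll-up NTA performs can be illustrated with a toy example. The flow records below are invented for the sketch; a real deployment would receive NetFlow or sFlow data from the switches and routers described earlier.

```python
# Illustrative sketch of a "top talker" roll-up similar to what NTA reports.
# Flow records (source host, application, bytes) are hypothetical sample data.
from collections import Counter

flows = [
    ("10.1.1.20", "https",  840_000_000),
    ("10.1.1.20", "https",  120_000_000),
    ("10.1.2.31", "backup", 2_300_000_000),
    ("10.1.3.44", "sql",    560_000_000),
]

by_host, by_app = Counter(), Counter()
for host, app, octets in flows:
    by_host[host] += octets
    by_app[app] += octets

print("Top hosts:", by_host.most_common(2))
print("Top apps: ", by_app.most_common(2))
```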


HPE IMC base platform features
HPE IMC consists of a base platform and service modules that offer additional functionality. The base platform provides administrators and operators with the basic and advanced functionality needed to manage IMC and the devices managed by IMC. The IMC base platform provides the following functions:
• Administrative controls for managing IMC and access to it. This includes granting or restricting operator access to IMC features through operator and operator group management. The base platform also includes features for the system-wide management of device data collection and information shared by all IMC modules, including the creation and maintenance of device, user, and service groups, and device vendor, series, and device model information. It also includes SNMP Management Information Base (MIB) management and other system-wide settings and functions.
• A broad feature set for network device management, from the ability to manage SNMP, Telnet, and SSH configurations on a device to configuring Spanning Tree and PoE energy management for managed switches, and much more.
• Management of the configuration and system software files on devices managed by IMC. This includes storing, backing up, baselining, comparing, and deploying configuration and software files.
• Real-time management of events and the translation of events into faults and alarms in IMC. This includes creating, managing, and maintaining alarm lists, trap and syslog filters and definitions, and configurations for notifications of alarms.
• Monitoring, reporting, and alarming on the performance of the network and the devices that comprise it. This includes managing global and device-specific monitors and thresholds, as well as creating views and reports for displaying performance information.
• ACL management, including creating and maintaining ACL templates, resources, and rule sets, and deploying ACL rule sets to devices managed by IMC. It also includes monitoring and leveraging ACLs that exist on devices for deployment to other network devices.
• Monitoring and managing security attacks and the alarms they generate.
• Global management of VLANs on all IMC-managed devices that support VLANs.

HPE IMC service modules
HPE IMC's modular and scalable SOA architecture supports extension of IMC's scope of coverage beyond the functionality of the base platform. The following optional service modules are available:
• Extended API software (eAPIs): The eAPIs are an extension of IMC's open and extensible platform and can be used to share this SOA platform with an organization's homegrown and in-house applications. By integrating with IMC, developers can ensure their applications will work with all the aggregated network data collected by IMC. Developers can write their programs only once to interface with IMC, instead of many times to integrate with the OS of each third-party device on their network. IMC is built upon an open and extensible architectural platform that leverages representational state transfer (REST-style) Web services as one of the three Web services in IMC. These REST-style Web services enable third-party developers to create applications that interface with and leverage IMC services. The IMC extended APIs include over 200 APIs that provide access to core platform services. The extended APIs are included with the enterprise platform and are an optional license upgrade for the standard platform (a usage sketch appears after this list).
• Application performance manager (APM): APM is an IMC module that allows administrators to visualize and measure the health of critical business applications and their impact on network performance. With the available data, you can easily determine which business process is affected and which application issues to prioritize, all leading to quick and effective troubleshooting. The comprehensive monitoring and management that APM provides includes fault management and performance monitoring of application servers, servers, and databases. Applications can easily be discovered by APM, and administrators can be informed of application issues through generated alarms. As with many IMC modules, APM provides comprehensive reporting features.


• Service health manager (SHM): IMC SHM is an IMC module that provides end-to-end service monitoring and service assurance through the visualization of infrastructure or network variances and factors that are in the service path. SHM leverages data derived from other IMC components to yield critical performance metrics. SHM then aggregates key performance indicators (KPIs) to generate key quality indicator (KQI) metrics. KQIs can be modeled to provide a visual representation of service-level agreement (SLA) obligations. With SHM, administrators can visually determine the level of quality for defined services and take proactive measures to maintain SLAs.
• Intelligent analysis reporter (IAR): IAR extends the reporting capabilities within IMC to include customized reporting. These extended reporting capabilities enable network administrators to perform proper analysis and make informed decisions. IAR makes customized reporting easy by including a report designer, which can save designs into report templates. Report outputs include a variety of formats, including charts. Reports can be automatically generated at specified intervals and distributed to key stakeholders.
• QoS Manager (QoSM): QoSM is the core component of IMC's QoS solution. QoSM provides operators with a common set of QoS device and configuration management features for easily managing QoS across different device types. IMC's straightforward implementation of QoS management enables operators to focus on the most critical aspect of QoS management, that is, service planning.
• Network traffic analyzer (NTA): NTA provides operators with real-time traffic analysis. NTA is a graphical network monitoring tool that leverages industry-standard sources of network traffic data to generate real-time displays of top N users and applications. Routers and switches that support NetFlow provide the data that feeds NTA reports. NTA analysis and reports help operators understand how network bandwidth and resources are being used, as well as which hosts and users are consuming network resources. NTA also supports operators in identifying network bottlenecks and taking corrective measures. The information provided by NTA supports mission-critical network management activities such as network planning, monitoring, optimization, and troubleshooting.
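Following the eAPI description above, the sketch below queries the IMC RESTful interface for managed devices. The endpoint path, query parameters, and digest authentication shown here are assumptions for illustration only; confirm them against the IMC eAPI reference for your release.

```python
# Hedged sketch of calling the IMC eAPI (RESTful) to list managed devices.
# Endpoint path, parameters, and auth scheme are assumptions; verify against
# the IMC eAPI documentation before use.
import requests
from requests.auth import HTTPDigestAuth

IMC = "http://imc.example.local:8080"             # assumed IMC server URL
AUTH = HTTPDigestAuth("admin", "admin-password")  # assumed operator credentials

resp = requests.get(
    f"{IMC}/imcrs/plat/res/device",               # assumed device-list resource
    params={"start": 0, "size": 10},
    headers={"Accept": "application/json"},
    auth=AUTH,
    timeout=10,
)
resp.raise_for_status()
for dev in resp.json().get("device", []):
    print(dev.get("label"), dev.get("ip"))
```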

Network virtualization and F5
This solution does not include F5 in the base configuration; it is an optional addition. The benefits of virtualization with VMware are clear. F5, an HPE AllianceONE partner, provides virtualization-optimized solutions that can be placed at various locations in the HPE FFRA data center architecture to help support virtualization scale-out requirements. Specifically, F5 has worked closely with VMware to integrate solutions such as:

VMware component | F5 solution
vSphere | Local Traffic Manager (LTM)
vCenter Server | Local Traffic Manager Virtual Edition (LTM VE)
vCloud Director | Global Traffic Manager (GTM)
vCenter Site Recovery Manager | WAN Optimization Manager
VMware View | WAN Optimization Manager

Application/VM and server optimization for virtualized/cloud data centers reduces the CPU load on Web application servers in virtualized environments through a combination of intelligent caching, connection pipelining, and exploitation of browser behavior. The F5 Management Plug-In for VMware vSphere allows virtualization administrators to more easily manage their BIG-IP Application Delivery Networking policies as they relate to VMware-virtualized applications. The plug-in eliminates manual synchronization of information between BIG-IP devices and the vSphere consoles. It also helps automate common networking tasks involved in routine VM maintenance and administration. Finally, it can automatically apply Application Delivery Networking policies to newly provisioned VMs and ease the process of deprovisioning VMs. Overall, these features simplify and automate many of the networking tasks common to VMs, thereby improving the agility of the overall infrastructure.


Business continuity and disaster recovery
F5 and VMware have developed a complete solution for running vMotion and Storage vMotion events together, between vSphere environments and over long distances. The solution components enable vMotion migration between data centers without downtime or user disruption. Key solution components include:
• Encryption and compression of vMotion traffic between sites using the BIG-IP LTM Sessions feature
• Byte-level data deduplication of vMotion traffic between sites using BIG-IP WAN Optimization Manager
• Client traffic management with BIG-IP LTM to direct user traffic to the correct VM
• Data center traffic management with BIG-IP GTM

One example is a Windows Server® guest vMotion event across a 622 Mbps link with 40 ms of round-trip time and zero packet loss, which would normally take more than five minutes to complete. With BIG-IP WAN Optimization Manager, it takes less than 30 seconds. The worse the WAN conditions, the greater the potential for improvement. When the vMotion event acceleration is combined with dynamic global traffic management, newly migrated VMs are recognized quickly, without disrupting existing user sessions.

The integration of BIG-IP GTM and VMware Site Recovery Manager (SRM) provides a complete solution for automated DR between two data centers, or to the cloud. In the event of a disaster, SRM automatically orchestrates the failover of VM guests and virtual infrastructure between the two sites, while BIG-IP GTM redirects all incoming client application traffic to the secondary site. BIG-IP GTM and SRM are easily integrated via the F5 iControl API. In addition, F5 BIG-IP WAN Optimization Manager improves the transfer of data over the WAN during a failover. This module enables large volumes of data to be transferred from a source to a target data center quickly using compression and deduplication. BIG-IP WAN Optimization Manager encrypts traffic before transmission and decreases bandwidth requirements.

F5 also provides a number of solutions that enable organizations to leverage public or private cloud solutions from VMware easily, securely, and with maximum application performance and availability.
• BIG-IP GTM is used to direct traffic between multiple data centers in cases where the application may be running in more than one location at times (for example, cloudbursting).
• BIG-IP LTM enables organizations to retain authentication and authorization locally when running applications in the cloud, by redirecting incoming authentication requests to the home data center.
• BIG-IP LTM Virtual Edition enables clouds to provide full BIG-IP LTM services as VMs, which can be provisioned and configured on demand.
• BIG-IP Application Security Manager can provide application firewall security to a wide variety of applications running in the cloud.
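Since the text above notes that BIG-IP GTM and SRM integrate via the F5 iControl API, a minimal sketch of an iControl REST call is shown below. The host, credentials, and certificate handling are illustrative assumptions, and the endpoint should be verified against the iControl REST documentation for your BIG-IP version.

```python
# Hedged sketch of an F5 iControl REST call: list LTM virtual servers on a BIG-IP.
# Host name, credentials, and TLS handling are illustrative assumptions only.
import requests

BIGIP = "https://bigip.example.local"  # assumed management address
AUTH = ("admin", "admin-password")     # assumed credentials

resp = requests.get(
    f"{BIGIP}/mgmt/tm/ltm/virtual",
    auth=AUTH,
    verify=False,  # lab-only: skip TLS verification for a self-signed cert
    timeout=10,
)
resp.raise_for_status()
for vs in resp.json().get("items", []):
    print(vs["name"], vs.get("destination"))
```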


Figure 10. F5 in the data center

Enhancing application security with F5
Providing security specific to an application deployment is fast becoming an essential component of launching and maintaining a new application. Security personnel must work closely with the network and application teams to help ensure the successful and secure deployment of an application, especially one like Microsoft® Exchange, which is often used by all employees, every day. F5 has a number of ways to help increase the security of Exchange 2010 deployments.

F5's message security offering provides an additional layer of protection for Exchange 2010 deployments. Spam email can contain virus attachments and other malicious content, like phishing attempts and trojan attacks. The F5 solution leverages reputation data from the McAfee® TrustedSource multi-identity reputation engine to accurately filter email. By eliminating 70 percent of unwanted email before it even reaches the Exchange servers, F5 greatly reduces the chance that an unwanted and potentially dangerous email gets through to the Exchange 2010 servers.

All data can now be symmetrically encrypted between local and remote F5 devices, providing a new way to help ensure site-to-site data security by preventing clear text from being passed on the wire. This secure connection, or tunnel, also improves transfer rates, reduces bandwidth, and offloads applications for more efficient WAN communication. As mentioned previously, F5 can perform DAG replication across data centers inside this encrypted tunnel for secure mailbox replication for the entire mailbox store.

For remote users who might be trying to access Microsoft Office Outlook or Outlook Web App from an airport kiosk or other unknown device, F5's comprehensive Endpoint Security provides the best possible protection. F5 technology prevents infected PCs, hosts, or users from connecting to your network and the applications inside, and delivers a secure virtual workspace, pre-login endpoint integrity checks, and endpoint trust management. When the remote user has finished their session with Outlook or Outlook Web App, F5's post-logon security protects against sensitive information being left on the client. F5 can impose a cache cleaner to eliminate any user residue such as browser history, forms, cookies, autocomplete information, and more. Post-logon security can also be configured to close desktop search applications so nothing is indexed during the session. Post-logon actions are especially important when allowing non-trusted machines access without wanting them to take any data with them after the session.


F5 security devices report previously unknown threats—such as brute force attacks and zero-day Web application attacks—and mitigate Web application threats, shielding the organization from data breaches. Full inspection and event-based policies deliver a greatly enhanced ability to search for, detect, and apply numerous rules to block known L7 attacks. F5 makes security compliance easy and saves valuable IT time by enabling the export of policies for use by offsite auditors. Auditors working remotely can view, select, review, and test policies without requiring critical time and support from the Web application security administrator. Not only does F5 provide comprehensive application security, it also produces extremely secure devices. This helps make sure your Microsoft Exchange Server deployment, and the information it contains, remains secure.

Indicative bill of quantity of suggested architecture

Network infra

Internet Router
Part code | Description | Quantity
JG405A | HPE MSR3044 Router | 2
JG527A | HPE X351 300W AC Power Supply | 4
JG527A ABA | Included: HPE X351 300W AC Power Supply U.S. - English localization | 4

Out of Band Management Switch
Part code | Description | Quantity
JE066A | HPE 5120-24G EI Switch | 2
JE066A ABA | Included: HPE 5120-24G EI Switch U.S. - English localization | 2

DMZ switches
Part code | Description | Quantity
JG296A | HPE 5920AF-24XG Switch | 2
JC680A ABA | Included: HPE 58x0AF 650W AC Power Supply U.S. - English localization | 4
JC680A | HPE 58x0AF 650W AC Power Supply | 4
JG298A | HPE 5920AF-24XG Frt(prt)-Bk(pwr) Fn Tray | 4
JD092B | HPE X130 10G SFP+ LC SR Transceiver | 20


Spine switch
Part code | Description | Quantity
JG841A | HPE FF 7910 Switch Chassis | 2
JC665A | HPE X421 Chassis Universal Rck Mntg Kit | 2
JH041A | HPE FF 7910 Cable Management Frame | 2
JH042A | HPE FF 7910 Bottom-Support Rails | 2
JG840A | HPE FF 7900 1800w AC F-B PSU | 8
JG840A ABA | Included: HPE FF 7900 1800w AC F-B PSU U.S. - English localization | 8
JG843A | HPE FF 7910 Frt(Prt)-Bck(Pwr) Fan Tray | 4
JG683B | HPE FF 7900 12p 40GbE QSFP+ FX Mod | 6
JG845A | HPE FF 7900 24p 1/10GbE SFP+ FX Mod | 2
JG842A | HPE FF 7910 7.2Tbps Fabric/MPU | 4
JG325B | HPE X140 40G QSFP+ MPO SR4 Transceiver | 54
JD092B | HPE X130 10G SFP+ LC SR Transceiver | 10

Leaf switch
Part code | Description | Quantity
JC772A | HPE 5900AF-48XG-4QSFP+ Switch | 6
JC680A | HPE 58x0AF 650W AC Power Supply | 12
JC680A ABA | Included: HPE 58x0AF 650W AC Power Supply U.S. - English localization | 12
JC683A | HPE 58x0AF Frt(ports)-Bck(pwr) Fan Tray | 12
JG325B | HPE X140 40G QSFP+ MPO SR4 Transceiver | 6
JD092B | HPE X130 10G SFP+ LC SR Transceiver | 252

MPLS core routers
Part code | Description | Quantity
JG354A | HPE HSR6602-XG Router | 2
JC087A | HPE 5800 300W AC Power Supply | 4
JC087A ABA | Included: HPE 5800 300W AC Power Supply U.S. - English localization | 4
JC665A | HPE X421 Chassis Universal Rck Mntg Kit | 2
JD092B | HPE X130 10G SFP+ LC SR Transceiver | 4

Global/Local Load Balancer
Part code | Description | Quantity
F5-BIG-LTM-3600-4G-R + Optional (BIG-IP GTM VE) | Not in scope | 2


Local load balancer
Part code | Description | Quantity
F5-BIG-LTM-3600/3000 | Not in scope | 2

OEM NMS
Part code | Description | Quantity
JG748AAE | HPE IMC Ent SW Plat w/50 Nodes E-LTU | 1
JG750AAE | HPE IMC NTA SW Mod w/5-node E-LTU | 1
JF408AAE | HPE IMC QoS Manager Software Module E-LTU | 1
JG398AAE | HPE IMC SHM Software Module E-LTU | 1
JG138AAE | HPE IMC IAR Software Module E-LTU | 1
JG764AAE | HPE IMC TAM SW Mod w/50-node E-LTU | 1
JG489AAE | HPE IMC APM S/W Module w/25-monitor E-LTU | 1
JG771AAE | HPE Intelligent Management Center High Availability Software E-LTU | 1

Security infra

Internal firewall
Part code | Description | Quantity
JC885A | HPE S8005F NGFW Appliance | 1
JC885A ABA | Included: HPE S8005F NGFW Appliance U.S. - English localization | 1
JC893AAE | HPE TP DVLabs DV for S8005F 1-yr E-LTU | 4
JD092B | HPE X130 10G SFP+ LC SR Transceiver | 6

Perimeter Firewall
Part code | Description | Quantity
Not in scope (NIL) | Check Point/Palo Alto firewall with 6X 10GbE and 4X 1GbE ports | 1
- | 3-year IPS signature subscription | 2


Additional links

HPE Converged Infrastructure
HPE Converged Infrastructure white papers and videos: http://www8.hp.com/us/en/business-solutions/infrastructure/overview.html
HPE Converged Infrastructure Reference Architecture Guide: http://h18004.www1.hp.com/products/solutions/4AA2-6453ENW.pdf

HPE FlexFabric
HPE FlexFabric white papers and videos: hp.com/go/flexfabric

HPE Intelligent Resilient Framework
HPE IRF white paper—reducing network complexity, boosting performance with HPE IRF technology: http://h10144.www1.hp.com/docs/irf/irf.pdf

HPE Virtual Connect
HPE Virtual Connect data sheets and videos: hp.com/go/virtualconnect

HPE Intelligent Management Center
HPE IMC data sheets and product details: http://h17007.www1.hp.com/us/en/products/network-management/index.aspx

HPE Networking Services
HPE Networking Services brochures and videos: http://www8.hp.com/us/en/business-services/index.html#/page=1&/sort=csdateweblaunch|DESC&/cc=us&/lang=en

Learn more at hpe.com/networking


© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HPE shall not be liable for technical or editorial errors or omissions contained herein. ITIL is a registered trademark of the Cabinet Office. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft and Windows Server are trademarks of the Microsoft group of companies. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. 4AA6-3421ENW, December 2015
