OPEN DATA CENTER ALLIANCE Proof of Concept: Enterprise Cloud Service Quality Rev. 1.1

Abstract
This PoC provides a method for evaluating different compute infrastructure as a service (CIaaS) providers—including internal enterprise operations. The proposed standardized framework illustrates how enterprises can measure end-user experience and cloud application stack key performance indicators (KPIs) across multiple providers to make better buying decisions, evaluate and help ensure quality of service more effectively, deliver a better end-user experience, and enhance overall cloud operations and ROI for the enterprise. The content of this PoC is also designed to show service providers how they can better serve enterprise buyers and accelerate enterprise cloud adoption and market growth. This PoC also proposes the adoption of an industry-wide “Cloud Facts” label, similar in concept to the Nutrition Facts label in the food industry.


Contributors
John Karsner – Appnomic
Paul Muther – Appnomic
Ray Solnik – Appnomic
Jeff Sedayao – Intel
Dave Casper – Moogsoft
Ed Simmons – UBS
Matt Estes – The Walt Disney Company
Erick Wipprecht – The Walt Disney Company

Table of Contents
Abstract
Legal Notice
Executive Summary
Introduction: Validating an ODCA CIaaS Master Usage Model
PoC Executive Summary
Key Findings and Recommendations of the PoC
PoC Objectives
PoC Application Environments: Amazon Web Services, Rackspace Cloud
Application Description: Mobile Device Profiler
Cloud Service Description
PoC Results
Focus Area 1: Hybrid Cloud Operations Visibility and Manageability
Focus Area 2: Standard Units of Measure Framework
“Cloud Facts” Label
Benchmarks of Industry Labelling Practices
Compliance
Conclusions
References
Appendix A: Taxonomy
Appendix B: ODCA Infrastructure Workgroup
Appendix C: The Proof-of-Concept Team
Appendix D: Standardized Cloud Facts
Appendix E: The EPA Fuel Economy Guide and Monroney Auto Window Sticker


Legal Notice © 2014 Open Data Center Alliance, Inc. ALL RIGHTS RESERVED. This “Open Data Center AllianceSM Proof of Concept: Enterprise Cloud Service Quality” document is proprietary to the Open Data Center Alliance (the “Alliance”) and/or its successors and assigns. NOTICE TO USERS WHO ARE NOT OPEN DATA CENTER ALLIANCE PARTICIPANTS: Non-Alliance Participants are only granted the right to review, and make reference to or cite this document. Any such references or citations to this document must give the Alliance full attribution and must acknowledge the Alliance’s copyright in this document. The proper copyright notice is as follows: “© 2014 Open Data Center Alliance, Inc. ALL RIGHTS RESERVED.” Such users are not permitted to revise, alter, modify, make any derivatives of, or otherwise amend this document in any way without the prior express written permission of the Alliance. NOTICE TO USERS WHO ARE OPEN DATA CENTER ALLIANCE PARTICIPANTS: Use of this document by Alliance Participants is subject to the Alliance’s bylaws and its other policies and procedures. NOTICE TO USERS GENERALLY: Users of this document should not reference any initial or recommended methodology, metric, requirements, criteria, or other content that may be contained in this document or in any other document distributed by the Alliance (“Initial Models”) in any way that implies the user and/or its products or services are in compliance with, or have undergone any testing or certification to demonstrate compliance with, any of these Initial Models. The contents of this document are intended for informational purposes only. Any proposals, recommendations or other content contained in this document, including, without limitation, the scope or content of any methodology, metric, requirements, or other criteria disclosed in this document (collectively, “Criteria”), does not constitute an endorsement or recommendation by Alliance of such Criteria and does not mean that the Alliance will in the future develop any certification or compliance or testing programs to verify any future implementation or compliance with any of the Criteria. LEGAL DISCLAIMER: THIS DOCUMENT AND THE INFORMATION CONTAINED HEREIN IS PROVIDED ON AN “AS IS” BASIS. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THE ALLIANCE (ALONG WITH THE CONTRIBUTORS TO THIS DOCUMENT) HEREBY DISCLAIM ALL REPRESENTATIONS, WARRANTIES AND/OR COVENANTS, EITHER EXPRESS OR IMPLIED, STATUTORY OR AT COMMON LAW, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, VALIDITY, AND/ OR NONINFRINGEMENT. THE INFORMATION CONTAINED IN THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND THE ALLIANCE MAKES NO REPRESENTATIONS, WARRANTIES AND/OR COVENANTS AS TO THE RESULTS THAT MAY BE OBTAINED FROM THE USE OF, OR RELIANCE ON, ANY INFORMATION SET FORTH IN THIS DOCUMENT, OR AS TO THE ACCURACY OR RELIABILITY OF SUCH INFORMATION. EXCEPT AS OTHERWISE EXPRESSLY SET FORTH HEREIN, NOTHING CONTAINED IN THIS DOCUMENT SHALL BE DEEMED AS GRANTING YOU ANY KIND OF LICENSE IN THE DOCUMENT, OR ANY OF ITS CONTENTS, EITHER EXPRESSLY OR IMPLIEDLY, OR TO ANY INTELLECTUAL PROPERTY OWNED OR CONTROLLED BY THE ALLIANCE, INCLUDING, WITHOUT LIMITATION, ANY TRADEMARKS OF THE ALLIANCE. TRADEMARKS: OPEN CENTER DATA ALLIANCESM, ODCA SM, and the OPEN DATA CENTER ALLIANCE logo® are trade names, trademarks, and/or service marks (collectively “Marks”) owned by Open Data Center Alliance, Inc. and all rights are reserved therein. 
Unauthorized use is strictly prohibited. This document does not grant any user of this document any rights to use any of the ODCA’s Marks. All other service marks, trademarks, and trade names referenced herein are those of their respective owners.


Executive Summary
This PoC provides a method for evaluating different CIaaS providers—including internal enterprise operations. The PoC illustrates how enterprises can measure end-user experience and cloud application stack component key performance indicators (KPIs) across multiple providers to make better buying decisions, evaluate and help ensure quality of service more effectively, deliver a better end-user experience, and enhance overall cloud operations and ROI for the enterprise. This PoC also proposes the adoption of an industry-wide “Cloud Facts” label, similar in concept to the Nutrition Facts label in the food industry. This PoC is based on these ODCA Usage Models:
•• Compute Infrastructure as a Service (CIaaS)
•• Standard Units of Measure for IaaS Rev 1.1

Introduction: Validating an ODCA CIaaS Master Usage Model
For the benefit of our membership and to provide useful information to the broader computer industry and related industries, ODCA Technical Workgroups are defining a new class of IT requirements to transform data center computing. These requirements and usage scenarios are articulated in usage models. To move beyond an academic approach to this work, the ODCA actively pursues opportunities to work with the ODCA member community and execute proofs of concept (PoCs) that validate the usage models in practical, real-world implementations. This PoC investigates and validates the ODCA Compute Infrastructure as a Service Master Usage Model and the ODCA Standard Units of Measure for IaaS usage model.


PoC Executive Summary
The ODCA Compute Infrastructure as a Service Master Usage Model establishes an open, standardized framework around which Infrastructure as a Service (IaaS) can be defined, provisioned, monitored, and managed. In combination with the ODCA Standard Units of Measure for IaaS, this model provides another step toward industry evolution and growth by describing the qualitative and quantitative attributes of cloud services in terms that support direct comparisons between providers. These are the ODCA frameworks upon which this PoC is based.

Consumers of cloud services in today’s cloud-computing marketplace may have a difficult time evaluating and comparing performance across service providers. The lack of standard units of measure (SUoM) and of a common framework for assessing vendor service capabilities lessens transparency, hinders visibility, and makes it more difficult to gauge the reliability of cloud services. CIaaS providers currently provide a great deal of information, but the way the information is presented and the measurement standards used vary. These differences make it difficult to directly compare and contrast competing offers. As a result, enterprise IT professionals become less effective in their roles and market growth is inhibited.

This PoC applies the principles described in the ODCA frameworks to real-world test scenarios. The participants involved—Appnomic Systems and Intel—expressed a desire to conduct this testing to help establish a clear, equitable method for evaluating cloud service providers with respect to performance and quality of service. Ideally, this work could give enterprise decision-makers a better method for evaluating competing services when making a new purchase decision, as well as on a continuing basis for production application environments.

“A primary objective of the PoC is to demonstrate a measurement and performance management approach model that an enterprise can implement to leverage competitive pricing and performance results across cloud service providers—to the benefit of the enterprise.” —Dave Casper ODCA Infrastructure Working Group Chair

To complete the PoC in a reasonable time frame and within budget, the participants deployed a basic, easily maintainable application on a production-grade cloud environment. The findings, however, apply to both simple and complex enterprise application stacks.1 Many service providers suggest that buyers should not be concerned with the underpinnings of the cloud service (a “black-box” view of CIaaS); they should simply be assured that the service will just work and, in the process, save resources and reduce costs. When things are operating well, this concept makes a lot of sense. However, if the CIaaS solution does not perform well, business operations can be adversely affected. Today, enterprise confidence in cloud services requires deeper insight than the black-box model currently provides.

1 The test environment presented some constraints that affected test results. For example, the mobile device profiler application—provided by one of the ODCA member companies—helps applications running in a data center respond to device queries. These lightweight XML transactions must be performed at massive volumes to generate a measurable load. Because of this, it was difficult for the PoC team to generate workloads substantial enough to provide clear differentiation at the lower test transaction volumes. Also, the service providers were not incorporated into the implementation, creating a “black-box” view of the CIaaS solutions. As such, service providers were not available to help fix issues that arose in the PoC or to optimize their platform’s performance.


Key Findings and Recommendations of the PoC

“… leading CIaaS providers who are confident in their service delivery can use the PoC recommendations to differentiate themselves in the market and leverage a competitive advantage to achieve a greater share of the market.” —Catherine Spence, ODCA TCC Chairperson

The three key findings and recommendations from the PoC are:
1. CIaaS solutions and performance remain surprisingly difficult to compare and measure.
2. There are viable measurements relating to performance and capabilities that enterprises should use to compare ongoing performance across CIaaS providers and make significant improvement gains in quality of service and cost.2
3. A “Cloud Facts” labelling model like that implemented in the food products industry (“Nutrition Facts”) is recommended as an approach for the CIaaS industry to evaluate and pursue as a means to accelerate end-user adoption and industry growth.

The following summarizes the additional results from the PoC.

Additional Findings
•• Achieving equivalent environments across different cloud services proved more difficult and time-consuming than the PoC team anticipated. The end result was selecting the closest standard CIaaS configurations available for purchase without modification of memory, vCPU, or network.3
–– We did not find an easy method to incrementally adjust memory, processor, or network options to tune the CIaaS platform resources and achieve the desired application performance.
•• When provisioning cloud services with the intention of acquiring services of similar cost and design, measurable performance differences between the two external CIaaS providers were clearly noted. Differences were also evident in comparison to the internal enterprise environment behind the firewall.
•• It can be very difficult to compare internal private cloud costs to external CIaaS costs, particularly on an application-by-application basis. In this PoC, there was no practical way to estimate the cost of the internal enterprise environment for comparison against the external resources.
•• Barriers exist that make it difficult to implement consistent measurement across cloud providers. Nonetheless, the task is certainly something that an enterprise buyer can accomplish, particularly when supported by a vendor with an abstracted measurement system that can span service providers.2 Otherwise, the range and complexity of trying to define a single common unit conflicts with the ultimate goal: an equitable basis for comparison.
•• The ODCA MUMs provide helpful guidance on how to evaluate cloud services. The MUMs, however, were somewhat difficult to navigate; in particular, cross-referencing the different MUMs to complete this PoC was challenging. In addition to refining select usage models and developing important new usage models, the ODCA should consider developing a meta-usage model or some other framework to help tie together the various usage models issued as of the date of this publication.

Additional Recommendations
•• CIaaS providers should provide an easier user interface and a fixed combination of resources to help enable users to provision resources on demand.
•• More work and collaboration is required from enterprises and cloud providers to achieve levels of CIaaS availability and end-user quality of experience that allow enterprises to treat the CIaaS provider like a utility “black box,” similar to how most enterprises “plug into the electrical grid” for power.4
•• CIaaS providers should consider third-party assurance or validation of their solutions, with published and ongoing disclosure of results, to provide the transparency and confidence enterprises are seeking when considering CIaaS providers.5
•• Enterprises should use standard units of measure (SUoM) for application end-user experience and CIaaS infrastructure performance across application stacks and CIaaS providers. With this resource available, enterprises can assure the best performance and value from CIaaS providers by providing a feedback loop on where the CIaaS providers are doing well and what they need to improve. Application Usage Patterns (AUPs) can help normalize measurement across providers, and the use of an abstracted measurement system can lessen the variance in test results due to differences in hardware components, security features, and network operations.

2 Appnomic played this role for this PoC; however, there are a variety of options for an enterprise to consider. For example, Cloud Service Brokers (CSBs) are an emerging class of vendors that play an intermediary role between the enterprise buyer and the cloud vendor. The CSB makes it easier for an enterprise buyer to consume cloud services, in particular across cloud vendors. The question for the enterprise and these vendors is whether the vendors can provide adequate visibility, transparency, and portability at a cost that does not outweigh the benefits of lower cloud computing costs. In addition, the ODCA Infrastructure Working Group and the PoC team reviewed a white paper by PricewaterhouseCoopers that recommends third-party assurance as a means of supporting enterprises in this area – another viable option. Copies are publicly available directly from PwC or from Appnomic Systems: PricewaterhouseCoopers, LLP, Protecting your brand in the cloud: Transparency and trust through enhanced reporting, a whitepaper, November 2011.
3 The PoC team initially planned to provision environments that were fundamentally similar for all three CIaaS solutions. However, as we evaluated the standard options available from the two chosen service providers, achieving parity between the two external environments, for example in terms of memory and vCPU, became difficult. We also determined that attempting to achieve more similarity would probably not change the study outcomes. Instead, we decided to compare the two environments that could be selected “out of the box” to be as reasonably close as possible and to provide a cost-benefit view, even though the providers were provisioning somewhat different services.
4 Nicholas Carr’s book, “The Big Switch: Rewiring the World, from Edison to Google,” provides a compelling comparative analysis of how cloud computing could evolve into utility-grade services, similar to how the electric utility grid evolved.
5 Ibid. (the PricewaterhouseCoopers white paper cited in footnote 2).


PoC Objectives
This proof of concept (PoC) targets the achievement of the ODCA objectives stated in the Compute Infrastructure as a Service (CIaaS) Master Usage Model.6 Specifically, the MUM executive summary states:
•• “A common set of principles, metrics and architectural frameworks can be defined resulting in consistent capabilities, service levels and service attributes across multiple providers whilst still allowing the individual providers to innovate and differentiate.”
•• “The CIaaS Master Usage Model (MUM) helps facilitate a robust market place—establishing a requirements framework for open, interoperable compute infrastructure services.”
By supporting the MUM objectives, the PoC demonstrates how service and technology providers can satisfy other principles of the ODCA, including interoperability and transparency. This should help service providers understand how to increase the quantitative and qualitative performance of service attributes, which could attract more buyers and strengthen and expand the industry.

This PoC illustrates a technical method for enterprise cloud consumers to implement and manage a multi-vendor measurement and purchasing model. The approach maintains competitive pricing and platform performance pressures for the buyer, as well as for cloud providers.

Achieving the Objectives
To achieve the PoC objectives, the team used these methods:
1. PoC participants collaborated to identify and implement an application that could be deployed across multiple CIaaS providers, as well as within an internal hosting infrastructure.
2. Three different levels of transaction volume loads were generated for each application environment over three different twenty-four-hour periods.7
3. Appnomic deployed the AppsOne* IT Operations Analytics (ITOA) platform8 to monitor and measure the hybrid application and CIaaS stack performance across multiple CIaaS providers, as well as the internal hosting environment where the application operated behind the enterprise firewall.
4. Cost factors were compiled for each external environment in which the application was run to help provide insights into pricing as it relates to value.
5. Measurement and performance results were quality-checked for accuracy, then collected, contrasted, and organized to identify key results and develop recommendations.

The monitoring system included a management server that ingested metrics from lightweight agents deployed in the CIaaS stacks where the target application operated. These metrics were sent every minute and transported securely by means of SSL encryption. The monitoring approach included real-user performance (an Apache JMeter* load-generation system), application-layer transaction performance, and measurements from the web server, application server, and database server layers of the CIaaS and application stack in each CIaaS environment. Network measurements were not included in the results. The work accomplished in the PoC suggests that, in the future, the summary results could be aggregated into a unified user interface, offering a consistent, accessible, “single-pane-of-glass” view of the platform components and the performance of end-user transactions and interactions.9
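The data flow just described (lightweight agents pushing KPIs to a management server once per minute over an encrypted channel) can be sketched in a few lines. The sketch below is illustrative only: the ingestion URL, payload fields, and collection logic are assumptions for demonstration and do not reflect the AppsOne agent implementation.

```python
import json
import os
import time
import urllib.request

# Hypothetical ingestion endpoint on the management server (assumption).
MGMT_URL = "https://mgmt.example.com/ingest"

def collect_kpis():
    """Gather a minimal KPI sample; a real agent would read many OS and application counters."""
    load1, _, _ = os.getloadavg()  # Unix-only convenience metric for the sketch
    return {
        "host": os.uname().nodename,
        "timestamp": int(time.time()),
        "cpu_load_1min": load1,
    }

def push(metrics):
    """POST one metrics sample to the management server over HTTPS (TLS)."""
    body = json.dumps(metrics).encode("utf-8")
    req = urllib.request.Request(
        MGMT_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    while True:
        try:
            push(collect_kpis())
        except Exception as exc:  # keep the agent alive on transient errors
            print("push failed:", exc)
        time.sleep(60)  # the PoC agents reported once per minute
```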

6 www.opendatacenteralliance.org/library
7 The load test runs were conducted as follows (note that some charts cut off the beginning and end of the testing period because the ramp-up and ramp-down data showed transitions not relevant to the study): CIaaS 1: January 22, 1:00 pm – January 23, 1:00 pm; CIaaS 2: January 25, 12:00 pm – January 26, 12:00 pm; CIaaS 3: February 6, 10:00 am – February 7, 10:00 am.
8 While this proof of concept was implemented with the Appnomic AppsOne IT Operations Analytics (ITOA) platform, the principles described and the results obtained have broad applicability regardless of the platform. Various industry technology providers can comply with the CIaaS MUM and implement a similar solution reflecting their own unique innovations and differentiators.
9 The AppsOne tool, provided at no charge for this PoC, has such a dashboard available, but due to security constraints this functionality was not enabled across the hybrid enterprise internal environment and the external cloud environment.


PoC Application Environments: Amazon Web Services, Rackspace Cloud

Application Description: Mobile Device Profiler
The test application used in this PoC is a mobile device profiler that was provided by an ODCA member company. The profiler receives web queries from another application platform containing a mobile device header and matches it to a capabilities profile. The results of the query are returned to the requesting application as an XML file, allowing that application to respond appropriately to the particular end user’s requirements and modify the user interface to present information effectively (a conceptual sketch of such a lookup follows Figure 1). This application/applet must not interfere with the core application’s effectiveness or the end-user experience. We chose a real application that represents a reasonable workload that an IT department would want to deploy among clouds. The mobile device profiler met the following criteria:
•• It could be deployed reasonably easily behind the firewall and on external cloud platforms.
•• It was simple enough to manage in a PoC environment, relevant enough to be meaningful for enterprise readers, and not too intensive to maintain.
•• It required a cloud environment that would not be too costly.
•• It was measurable.
•• It would pass enterprise security and legal review for sharing results externally.
The application that was selected meets all of these criteria.10

System Platform Description
The application system platform consists of these components (as shown in Figure 1):
•• Web Server: Windows* 2008 R2 Standard web server (Internet Information Services [IIS] for Windows Server)11
•• Application Server: CentOS*/RHEL 6.x Apache Tomcat* application server
•• Database Server: Windows 2008 R2 Standard DB server (MS SQL* 2008 R2)

Figure 1. PoC test application topology: a load-generation source driving the web server, application server, and database server.
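To give a concrete sense of what the profiler does, the sketch below shows the general shape of such a lookup: a device header in, a capabilities profile out as XML. The profile data, field names, and matching rule are hypothetical; the actual profiler was supplied by an ODCA member company and backs its lookups with the database tier shown in Figure 1.

```python
import xml.etree.ElementTree as ET

# Hypothetical capability profiles; the real application looks these up in a database.
PROFILES = {
    "iPhone": {"screen_width": "320", "touch": "true"},
    "BlackBerry": {"screen_width": "480", "touch": "false"},
}

def profile_to_xml(user_agent: str) -> str:
    """Match a device header to a capabilities profile and render it as XML."""
    matched = next(
        (caps for device, caps in PROFILES.items() if device in user_agent),
        {"screen_width": "unknown", "touch": "unknown"},
    )
    root = ET.Element("deviceProfile", {"userAgent": user_agent})
    for name, value in matched.items():
        ET.SubElement(root, "capability", {"name": name, "value": value})
    return ET.tostring(root, encoding="unicode")

print(profile_to_xml("Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X)"))
```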

10 Although the application is a relatively basic solution, the PoC team did not see any significant benefit to implementing the solution in a more complex environment.
11 The web server is only used when the application is turned up, and then approximately hourly on an ongoing basis to refresh templates. It does not participate in the actual transactions and therefore was not a focus of the measurement in the PoC.


Mobile Profiler Application Topology for User Transactions Defined for the PoC
Four sample inquiries were run against the application (dev, dizzy, stack, tss). They were all similar in profile, generated essentially the same load, and followed the same path. The user transactions monitored for this PoC followed this sequence: Application Server Query -> Tomcat App Server -> Database -> Tomcat App Server -> XML Result returned to the user.

User Application Load Description
To test each CIaaS environment, three load levels were established by load testing the application running on each CIaaS platform using Apache JMeter (a minimal load-generation sketch follows Table 1). To maintain consistency, the same levels were used to test each cloud provider and provide the best apples-to-apples comparison against the internal infrastructure. The load levels applied during testing are listed in Table 1.

Table 1. Load volumes tested in the PoC

Description | Transactions Per Minute | Equivalent Transactions Per Day
Low | 30,000 | 43.2 million
Medium | 150,000 | 216 million
High | 300,000 | 432 million
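The PoC drove these load levels with Apache JMeter test plans. As a language-neutral illustration of the same idea, the following sketch paces requests toward a target transactions-per-minute rate; the endpoint URL and pacing logic are simplified assumptions (JMeter thread groups and throughput timers do this far more robustly), so treat it as a conceptual sketch rather than the PoC's actual test harness.

```python
import threading
import time
import urllib.request

# Placeholder endpoint and queries; the PoC used its own profiler URLs.
ENDPOINT = "http://app-under-test.example.com/profile?device="
QUERIES = ["dev", "dizzy", "stack", "tss"]

def fire(query: str) -> None:
    """Issue one profiler transaction and discard the XML body."""
    try:
        urllib.request.urlopen(ENDPOINT + query, timeout=5).read()
    except Exception:
        pass  # timed-out transactions were counted separately in the PoC

def run_load(target_tpm: int, minutes: int) -> None:
    """Spread target_tpm requests evenly across each minute of the run."""
    interval = 60.0 / target_tpm
    deadline = time.time() + minutes * 60
    i = 0
    while time.time() < deadline:
        threading.Thread(target=fire, args=(QUERIES[i % 4],), daemon=True).start()
        i += 1
        time.sleep(interval)

# Three load tiers roughly mirroring Table 1 (shortened to one minute each here).
for tpm in (30_000, 150_000, 300_000):
    run_load(tpm, minutes=1)
```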

Cloud Service Description
Table 2 provides a detailed view of the three environments provisioned to support the PoC testing. While the PoC team’s goal was to have similar, equivalent infrastructure “stacks,” some variability exists across the platforms due to an agreement to limit provisioning to the standard instance offerings available from the CIaaS providers, which were packaged differently. The more notable differences across the platforms are the memory and virtual CPUs (vCPUs) provisioned. Each solution provider offers different combinations of vCPU and memory, making it difficult to provision similar infrastructures, particularly at low cost. As one of the PoC team members put it, “As far as the difference between sizes of instances, in some cases to get more RAM you have to get additional CPUs. That can work in reverse as well. This is similar to comparing U.S. and European shoe sizes, where Amazon’s size 8 is not the same as Rackspace’s size 8.”


Table 2. Application infrastructure summary details12

CIaaS Platform 1
•• Web Server: Windows Server* 2008 R2 Standard; vCPU: 2; Memory: 4 GB
•• Application Server: CentOS* 6.5; vCPU: 4; Memory: 8 GB
•• Database Server: Windows Server 2008 R2 Standard with SQL Server 2008 R2 Standard; vCPU: 2; Memory: 4 GB
•• Cost: This is an internal infrastructure and does not have a per-user cost structure associated with it.

CIaaS Platform 2
•• Web Server: Windows Server 2008 R2; vCPU: 1; Memory: 0.615 GB; approximate cost: $0.02 per hour
•• Application Server: RHEL 6.4; vCPU: 4; Memory: 15 GB; approximate cost: $0.54 per hour
•• Database Server: Windows Server 2008 R2 with SQL 2008 R2 Web; vCPU: 2; Memory: 7.5 GB; approximate cost: $0.50 per hour
•• Cost: $1.06 per hour

CIaaS Platform 3
•• Web Server: Windows Server 2008 R2 SP1; vCPU: 2; Memory: 2 GB; approximate cost: $0.10 per hour
•• Application Server: CentOS 6.4; vCPU: 8; Memory: 8 GB; approximate cost: $0.32 per hour
•• Database Server: Windows Server 2008 R2 with SQL Standard; vCPU: 1; Memory: 1 GB; approximate cost: $0.70 per hour (Infrastructure Service Level)
•• Cost: $1.12 per hour

12 This table provides a summary of what was provisioned on each platform to provide guidance on the differences or similarities and to illustrate that the cost is approximately the same for the two external cloud platforms. The intent of this study was not to rate service providers or even different provisioned IaaS solutions. It was to illustrate how standard units of measurement and a consistent measurement tool and framework can improve enterprise cloud initiative performance, service provider performance, price/value tradeoffs, and overall industry adoption and growth. As such, we have tried to abstract the service provider names from the results while still providing adequate descriptions of what was involved in this PoC. More details may be made available upon request. Prices are based on publicly available pricing presented by vendors on their websites during the PoC and do not reflect enterprise negotiated pricing that may be available. An example of Amazon pricing can be found at http://aws.amazon.com/ec2/pricing. An example of Rackspace server pricing can be found at www.rackspace.com/cloud/servers/pricing.


PoC Results
The goal of this PoC was to establish a Service Level Agreement (SLA) performance monitoring framework and approach to set the foundation for “same-for-same” comparisons of SLAs of different CIaaS providers. The PoC team identified two areas and five measures on which to focus:13

Focus Areas
1. Hybrid Cloud Operations Visibility and Manageability
2. Standard Units of Measure Framework

Measurements
1. Application Usage Patterns
2. Actual End-User Application Transaction Volumes
3. Transaction Response Times
4. Server CPU Utilization
5. Server Memory Utilization

The PoC summary results are organized below in alignment with those focus areas and the corresponding test results.

Focus Area 1: Hybrid Cloud Operations Visibility and Manageability
This section illustrates how an enterprise IT or cloud operations manager could more effectively manage vendors, service providers, and application SLAs to end users. As shown in Figure 2 and the details that follow, an IT professional could use the following apples-to-apples comparisons to help make better choices when managing multiple service providers:
•• Understand the different scale break points of two providers and possibly use one for passive back-up/disaster recovery and the other as the active primary, and/or
•• Select what incremental resources to provision on which service provider to achieve better performance, and/or
•• Work with each service provider to enhance performance on the platform that was purchased, or
•• Negotiate a lower price from a lesser-performing provider.
Figure 2 illustrates how two platforms under the same load perform differently when measured in the same fashion. The arrows highlight transactions per minute (vertical axis) of load generated on two different CIaaS platforms. The horizontal axis scales are similar; they represent the time of day when load was generated.

13 Note that an individual measurement may not be definitive. Performance may be traded off against other factors, such as security features, making it necessary to evaluate any unit result holistically. In this PoC, security and related overhead could not be assessed inside the CIaaS platforms; the application was deployed in an equivalent manner, so this bias was not a factor at the application layer.


On the left side of Figure 2, the measurement shows that CIaaS Platform 1 is able to deliver higher levels of load (approximately 70,000 transactions per minute) than the environment on the right (CIaaS Platform 2). At the same time, the variability of the number of successfully completed transactions increases with load, as illustrated by the jitter of the horizontal lines measuring transactions per minute. At the highest test load level, the variability is significant, as illustrated by the fluctuating transaction levels. There are also transactions that timed out, shown by the jitter along the bottom of the left graph. Nonetheless, CIaaS Platform 1 was able to deliver at these higher levels of load compared to the environment provisioned with the provider on the right. CIaaS Platform 2 (in the right graph) “capped out” at a maximum of about 47,000 transactions per minute and would not process more.


Figure 2. CIaaS Platform 1 transaction processing versus CIaaS Platform 2

Measurement 1: Application Usage Patterns – Standardizing Measurement in Production Environments
In production environments, it can be difficult to normalize SUoM across CIaaS platforms to get a close apples-to-apples comparison. To enable more effective and closer comparisons, this PoC introduces the concept of Application Usage Patterns (AUPs) to normalize workloads across CIaaS platforms and compare performance when the same workload is creating demand across different CIaaS platforms. Figure 3 illustrates what some AUP load clusters look like for this PoC.14 Each pie chart represents a unique volume and mix of transactions (each shading represents a transaction type) that is statistically different from the others. Load clusters 133 and 134 are similar in mix; however, the total volume of transactions captured in the one-minute monitoring interval varied enough for the Application Behavior Learning (ABL) technology in AppsOne to label them as different AUPs. Cluster 135 has a much greater percentage of one particular transaction type occurring during the monitoring interval, so it is identified as a separate cluster. When a load cluster is captured along with the various key performance indicator (KPI) measurements that describe the application stack environment, together they form an AUP. When comparing AUPs, we can see how one platform measures against another in a standardized way.

Figure 3. Application usage pattern clusters (clusters 133–136; each pie chart shows a distinct mix and volume of dev, dizzy, stack, and tss transactions)

14 While the PoC team did experiment with different usage patterns that incorporate transaction-type mixes, in the end the final test results all had the same transaction-type mixes at the three different load levels. The AUPs that show different transaction-type mixes are from earlier testing and are provided here to illustrate the concept of the AUP.


In production environments, different mixes and volumes of transactions often occur. ABL analytics can identify when these loads are statistically similar and then provide a standardized basis for comparing performance across CIaaS platforms (for example, when they are running the same or similar applications). As shown in Figure 4, ABL technology captured the four transaction types in this PoC at essentially 25 percent of the transaction volume each, with the following specific volumes:
•• stack – 55,753
•• tss – 55,869
•• dev – 55,993
•• dizzy – 56,018
A similar mix would be expected to place a similar load or demand on the infrastructure, yielding substantially similar KPI metrics.
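To make the clustering idea concrete, here is a minimal sketch that groups one-minute samples by transaction mix and total volume. It is not the proprietary ABL algorithm used by AppsOne; the tolerances and sample data are assumptions chosen only to illustrate why two samples with the same mix but different volumes (like clusters 133 and 134) end up in different AUPs.

```python
# Each sample is one monitoring minute: transaction counts per type.
SAMPLES = [
    {"dev": 250, "dizzy": 255, "stack": 248, "tss": 252},     # balanced mix, low volume
    {"dev": 1000, "dizzy": 1010, "stack": 995, "tss": 1005},  # same mix, much higher volume
    {"dev": 120, "dizzy": 115, "stack": 900, "tss": 118},     # stack-heavy mix
]

MIX_TOLERANCE = 0.05     # max difference in any type's share of the mix (assumed)
VOLUME_TOLERANCE = 0.25  # max relative difference in total volume (assumed)

def signature(sample):
    total = sum(sample.values())
    return total, {k: v / total for k, v in sample.items()}

def same_cluster(a, b):
    """Two samples share an AUP cluster if both mix and volume are close."""
    total_a, mix_a = signature(a)
    total_b, mix_b = signature(b)
    mix_close = all(abs(mix_a[k] - mix_b[k]) <= MIX_TOLERANCE for k in mix_a)
    volume_close = abs(total_a - total_b) / max(total_a, total_b) <= VOLUME_TOLERANCE
    return mix_close and volume_close

def cluster(samples):
    clusters = []  # each cluster is a list of samples
    for s in samples:
        for c in clusters:
            if same_cluster(c[0], s):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

# Three clusters: the same mix at a different volume splits, as does a different mix.
print(len(cluster(SAMPLES)), "clusters")
```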

Figure 4. Transaction types by volume (dev 55,993; dizzy 56,018; tss 55,869; stack 55,753)

Synthetic Monitoring and Measurement
In some cases, synthetic tools—such as the load-generation tools used in this PoC—can be used in production environments to help assess CIaaS platform performance. This approach can be cost-effective and relatively simple to implement; it can be a good method for testing in particular. However, for ongoing production management, current technology and measurement techniques can measure real-user experience without having to rely on synthetic measurement in production environments.

Figure 5 illustrates another way to visualize a load cluster, or mix of transactions, where both the volumes and the transaction types are shown. Three loads were generated for each CIaaS environment and measured in the same fashion, standardizing the tests against each CIaaS platform. Each load was made up of four transaction types: dev, dizzy, stack, and tss.15 The results from these load tests follow.

Figure 5. Alternate view of load clusters (cluster ID 19, Jan 14 18:03–18:08: transaction counts by type for dizzy, tss, dev, and stack)

15 Renamed from their original application names to preserve company confidentiality.


Measurement 2: End-User Application Transaction Volumes
Figure 6 illustrates how measuring completed transaction counts is a good starting point for comparing one platform to another. The peak values (summarized in the table accompanying Figure 6) indicate where each platform reached its maximum transaction-volume level. In the PoC, CIaaS Platform 1 was able to complete and process volumes up to 75,000 transactions per minute, which equates to approximately 3.2 billion transactions per month; this is considered a high load. CIaaS Platform 2 performed well up to 47,000 transactions per minute but did not capably handle more than that volume; transactions beyond it were not completed. Finally, CIaaS Platform 3 capped out at a maximum of 25,500 TPM.

Key values from Figure 6 (maximum completed transactions per minute at each load tier):

Tier | CIaaS Platform 1 | CIaaS Platform 2 | CIaaS Platform 3
Tier I | 8,000 | 8,000 | 7,800
Tier II | 37,500 | 37,500 | 25,500
Tier III | 75,000 | 47,000 | 25,000

Figure 6. Transaction volumes completed on each CIaaS platform, measured in transactions per minute. The figure is not calibrated for a cross-platform dashboard view;16 therefore, the key values from the figure are provided in the table above.

Measurement 3: Application Transaction Response Times
Transaction response times provide a key indicator of service quality: this measure captures the end-user experience. If transactions take too much time, employees often become unproductive or customers can be lost to competitors. Figure 7 illustrates the measurements captured throughout the 24-hour test run for each platform. For each run, as described above, the loads (transaction volumes) were progressively increased over time. For CIaaS Platform 1, the initial response time measurement did not stabilize until about the fifth hour. However, once captured, response times remained steady at about 750 milliseconds. At approximately hour 19, the monitoring system was turned down. For CIaaS Platform 2, response times averaged 1,200 milliseconds, approximately 60 percent slower than both CIaaS Platform 1 and CIaaS Platform 3. The response times are also slightly more variable, as indicated by additional spikes compared to the CIaaS Platform 1 environment. CIaaS Platform 3 response times were more difficult to measure. They were faster than the other two platforms at the low- and medium-level loads; however, at the higher load the response times slowed and ultimately could not be measured at all.17 Cloud consumers could use data like that presented here when working with CIaaS providers (including their own internal CIaaS platform operators) to improve performance, or to assess the price/value of provisioning more resources at higher cost to obtain improved performance.

16 Note that CIaaS Platform 1 is an enterprise private cloud and did not rely heavily on the database layer or the web layer; the measurement focus was on transactions and the application server.
17 The fall-off in reporting was most likely due to a network constraint, which could be evaluated in a follow-on study.



Figure 7. Transaction response times on each CIaaS platform, measured in milliseconds

Measurement 4: Application Server CPU Utilization
CPU utilization is a solid, standard, cross-platform measurement that can provide valuable insight into CIaaS platform performance. If oversubscription at the hypervisor layer reduces the effective CPU cycles available to the vCPUs, this value could run higher and limit platform capability (a sketch of one way to check for hypervisor CPU steal from inside a Linux guest follows Figure 8). For the application used in this PoC, all of the platforms' application servers had adequate CPU capacity, as shown in Figure 8, where none of the CIaaS platforms' application server CPU utilization exceeded 13 percent. In a real-world environment, the enterprise could reduce the resources provisioned to conserve budget and run at a higher (but still safe) CPU utilization rate.


Figure 8. CPU capacity utilization across the three CIaaS platforms, by percentage. The figure is not calibrated for a cross-platform dashboard view.18

18 Note that CIaaS Platform 1 is an enterprise private cloud and did not rely heavily on the database layer or the web layer; the measurement focus was on transactions and the application server.
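As a side note on the oversubscription point above, a guest can get a rough view of hypervisor contention on Linux by reading the “steal” counter in /proc/stat. This was not part of the PoC's measurement set; the sketch below is offered only as one way an enterprise might corroborate the CPU picture on Linux-based CIaaS instances.

```python
import time

def cpu_times():
    """Return the aggregate CPU counters from the first line of /proc/stat (Linux only)."""
    with open("/proc/stat") as f:
        # First line: "cpu user nice system idle iowait irq softirq steal guest guest_nice"
        fields = f.readline().split()[1:]
    return [int(x) for x in fields]

def steal_percent(interval=5.0):
    """Approximate the percentage of CPU time stolen by the hypervisor over an interval."""
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    steal = deltas[7] if len(deltas) > 7 else 0  # 8th field is "steal" on modern kernels
    return 100.0 * steal / total if total else 0.0

if __name__ == "__main__":
    print(f"CPU steal over 5 s: {steal_percent():.1f}%")
```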


Measurement 5: Application Server Memory Utilization
As illustrated in Figure 9, memory utilization across the three platforms clearly indicates where CIaaS Platform 2 starts to “sweat” its resources and potentially create problems in terms of transaction completion and performance. CIaaS Platform 3 has the lowest (best) utilization rate, and CIaaS Platform 1 is also relatively low. Both Platforms 1 and 3 could probably reduce spending on memory and continue to operate well, even at the highest load test levels.


Figure 9. Memory utilization across three platforms, by percentage

Focus Area 2: Standard Units of Measure Framework
During the implementation of this PoC, the team came to appreciate the breadth and depth of detail involved when different providers report or present different metrics in different ways. While service providers encourage a “black-box” mentality, it is not advisable to simply leave these details to the service providers (even for buyers who prefer not to be concerned with them and would rather treat CIaaS providers like electric utilities, where no intervention is generally necessary). Rather, until the industry further matures, the ODCA recommends that enterprise buyers continue to hold service providers accountable, transparent, and open to cross-platform SUoM as recommended by the ODCA usage models and the PoC results.

One recommended solution is for the industry to use a “Cloud Facts” labelling concept for the disclosure of “what’s inside” the CIaaS environment to help users make a purchase decision. Ideally, the same descriptors and measurements of the key elements that make up a CIaaS platform would be used across providers. To be effective (similar to the food industry labelling scheme), roughly twenty elements should be listed in the “Cloud Facts” label. Additionally, there could be a more detailed standardized listing of other specific “ingredients.” This abbreviated approach is recommended in contrast to other industry benchmarks, such as the financial services mutual fund disclosure approach or the automobile Monroney window sticker, that provide information—perhaps too much—that many consumers ignore or have a difficult time interpreting.

A second recommendation is that buyers implement a measurement scheme and a “single-pane-of-glass” dashboard approach that spans all CIaaS providers—including internal operations—to capture an initial and ongoing view of end-user experience and of the infrastructure component behavior that affects an application’s availability and performance. By comparing different platforms, buyers gain a better understanding of the value they are receiving, can push for increased performance from lesser-performing providers, or can offer more business to providers who perform better.

A third recommendation is for buyers to use AUPs (detailed previously and defined in Appendix A: Taxonomy) to normalize metrics over time and across providers. In particular, if an enterprise can deploy a dual- or multi-vendor approach for certain applications or services, this relatively new IT operations analytics approach could prove highly valuable and may be leveraged as enterprises scale into the cloud.


“Cloud Facts” Label
When the team started this PoC, they realized there are two key points at which metrics should be reviewed and decisions made. The first is when selecting a CIaaS provider; the second is ongoing, once services are up and running. For example, if the CIaaS provider makes changes over time, the enterprise buyer may want to be aware of them, particularly changes that impact the service. Today, when selecting a provider, IT professionals lack a consistent means of comparing CIaaS providers, making it difficult to ensure a predictable and easily managed experience for end users. Although CIaaS providers publish a great deal of information about their offerings, this does not necessarily make it easy for IT professionals to compare services. In some cases, it can be difficult to ascertain what a buyer really “gets” until they are up and running in the environment. Currently, there is no “EPA mileage rating” equivalent for cloud services. Providing a standard “Cloud Facts” label, akin to the Nutrition Facts disclosure used in the food industry, would help ensure easy comparison of offerings across providers.

Benchmarks of Industry Labelling Practices
While working on this project, the PoC team noted examples in other industries where benchmarks for complex services or products have been universally applied. See Appendix D: Standardized Cloud Facts and Appendix E: The EPA Fuel Economy Guide and Monroney Auto Window Sticker for examples of the labels discussed in this section. The food industry “Nutrition Facts” label—adopted globally—is probably the best example. The cloud-computing industry could clearly benefit by developing and implementing a standardized label that makes direct comparisons of competing services transparent and understandable.

There are other industries with complex services or products where similar issues have been addressed. Maybe the best known, and one that has been adopted globally, is the food industry “Nutrition Facts” label. The cloud-computing industry can and should implement this standard.

The cloud-computing industry has an opportunity to disclose, in a standard way, what its customers are buying, and establishing this precedent would strengthen the industry. The goal of this PoC is not to develop such a framework; rather, the PoC team and the ODCA Infrastructure Working Group recommend that the ODCA or another industry organization initiate this type of framework. One organization the industry could look to for leadership of this effort is the Standard Performance Evaluation Corporation.19 Of course, there are others to consider as well—including the ODCA. Proactively establishing these labelling practices could eliminate the need for government agencies to impose a mandatory framework,20 which could be less beneficial for the industry overall. In support of this initiative, the ODCA has already begun to publish a set of measures that should be reported or even guaranteed by service providers. This foundation work is in the Compute Infrastructure as a Service Master Usage Model,21 Section 4.4, CIaaS Service Attributes, and Section 7.0, Service Attribute Details. The next step would be for the ODCA to be more prescriptive about the Cloud Facts label requirements for CIaaS providers to publish. This could be similar to the EPA Fuel Economy Guide Sample Vehicle Listing (see Appendix E: The EPA Fuel Economy Guide and Monroney Auto Window Sticker). Over time, the ODCA could also help define how these metrics are standardized, measured, and reported, or work with other industry standards-defining organizations to do so. For example, the food Nutrition Facts label has a defined list of elements and how they are measured (e.g., fats, saturated fat, and trans fat measured in grams, and percentages of recommended Daily Values).
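As a purely illustrative sketch of what a machine-readable “Cloud Facts” label could contain, the structure below lists a handful of plausible attributes rendered in a fixed order. The attribute names and values are assumptions for demonstration; the authoritative list would come from the CIaaS MUM service attributes (Sections 4.4 and 7.0) and the ODCA Standard Units of Measure work.

```python
# Hypothetical "Cloud Facts" label for one CIaaS offering (all values are made up).
cloud_facts = {
    "provider": "Example CIaaS Provider",
    "offering": "standard-4vcpu-8gb",
    "vcpu_count": 4,
    "memory_gb": 8,
    "availability_objective_percent": 99.95,
    "provisioning_time_minutes": 15,
    "price_per_hour_usd": 0.54,
    "measurement_basis": "ODCA Standard Units of Measure for IaaS Rev 1.1",
}

def render_label(facts: dict) -> str:
    """Render the label in a fixed, comparable order, Nutrition Facts style."""
    width = max(len(k) for k in facts)
    lines = ["Cloud Facts", "-" * (width + 12)]
    lines += [f"{k.ljust(width)}  {v}" for k, v in facts.items()]
    return "\n".join(lines)

print(render_label(cloud_facts))
```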

Compliance
For service providers who participated in this PoC, or in subsequent PoCs, there may be an opportunity for the ODCA to verify service providers’ compliance with ODCA service usage model tiers. Alternatively, the ODCA could sanction another company to do the same. The credit card industry used this approach to audit and certify PCI Data Security Standard (DSS) compliance by sanctioning a group of firms to enforce the data security standards. In 2006, the PCI Security Standards Council22 was established by the top five global payment brands to improve oversight.

19 www.spec.org
20 The FCC (Federal Communications Commission) and the NTIA (National Telecommunications and Information Administration in the U.S. Department of Commerce) have begun to provide resources for consumers to make better decisions about broadband service purchases, including an online publication called “Broadband Service to the Home: A Consumer’s Guide,” a broadband speed test, and a national broadband map that inventories what speeds are available in which geographies.
21 www.opendatacenteralliance.org/library
22 www.pcisecuritystandards.org/organization_info/index.php


Conclusions
This PoC achieved what the team set out to accomplish: it showed how an analytics system on the market today can be used by an enterprise to measure CIaaS solutions across providers and to act on that information to improve its own operations and those of its service providers. Table 3 summarizes the findings from this study for the high-transaction-load test run and illustrates what a “single-pane-of-glass” dashboard could tell an IT operations team working to make the most of a cloud solution.23 The table shows that CIaaS Platform 1 performed well on all metrics.24 It sets the bar for achievable performance – at least among these three options. In addition, opportunities for optimizing cost are evident in the low capacity utilization of the cloud stack resources. For CIaaS Platform 2, increasing memory could enhance the performance of the environment, and that cost could be captured in the bottom row to maintain the comparative cost/value measure as the application operations or IT operations team works through adjustments based on the standardized, comparative measures provided in this dashboard and achieves improved end-user performance. For CIaaS Platform 3, we believe network issues were the root cause of the end-user experience problems; these could be evaluated in a broader deployment of the measurement system that captures network metrics, which were not part of the scope of this study. Table 3 shows SUoM across three CIaaS platforms in a hybrid operating model where CIaaS Platform 1 was inside the enterprise firewall and all three platforms were measured under the same AUP and load.

Table 3. Single-Pane-of-Glass Dashboard

Metric | CIaaS Platform 1 | CIaaS Platform 2 | CIaaS Platform 3 | Comments

End-User Metrics
Peak Transaction Volume (TPM) | 75,000 | 47,000 | 25,000 | The difference between 75,000 and the values in this row are transaction requests that were not completed.
Peak Load Transaction Response Time (milliseconds) | 750 | 1,200 | 720 |

CIaaS Measures
Peak App Server CPU Utilization | 2.4% | 13.0% | 8.5% |
Peak App Server Memory Utilization | 26.0% | 99.0% | 11.5% | While CIaaS Platform 1 is higher (worse) than CIaaS Platform 3, it remains in an acceptable range.
Cost Per Hour | See comment | $1.06 | $1.12 | CIaaS Platform 1 is an internal infrastructure and does not have a per-user cost structure associated with it.

23 The high load scenario was chosen, rather than the low or middle load, because it is not uncommon to see 5,000 transactions per second at peak load for applications operated or supported by the parties to this PoC. The PoC team felt it reasonable to expect this application stack to sustain this load at peak demand. While additional resources could be provisioned to support this load, the expectation at this budget level was that the stack could handle this peak at this cost and without any auto scaling required.
24 Even though one of the metrics is worse than CIaaS Platform 3, the metric is still well within an acceptable range and is rated as green status (Peak App Server Memory Utilization of 26.0% versus 11.5%).
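As a rough illustration of how a single-pane-of-glass view such as Table 3 could be assembled programmatically from SUoM data, the following sketch applies simple status thresholds to the measured values. The thresholds and status rules are assumptions chosen for illustration; they are not the evaluation logic of the analytics system used in this PoC.

# Minimal sketch of assembling a Table 3-style comparison from SUoM metrics.
# The threshold values and status rules below are illustrative assumptions;
# they are not the evaluation logic used by the analytics system in this PoC.

platforms = {
    "CIaaS Platform 1": {"tpm": 75000, "response_ms": 750,  "cpu_pct": 2.4,  "mem_pct": 26.0},
    "CIaaS Platform 2": {"tpm": 47000, "response_ms": 1200, "cpu_pct": 13.0, "mem_pct": 99.0},
    "CIaaS Platform 3": {"tpm": 25000, "response_ms": 720,  "cpu_pct": 8.5,  "mem_pct": 11.5},
}

TARGET_TPM = 75000       # assumed target: complete all offered transaction requests
MAX_RESPONSE_MS = 1000   # assumed acceptable peak response time
MAX_MEM_PCT = 90.0       # assumed memory saturation threshold

def status(m):
    """Rough red/yellow/green rating for one platform (illustrative rules only)."""
    if m["mem_pct"] >= MAX_MEM_PCT or m["tpm"] < 0.5 * TARGET_TPM:
        return "red"
    if m["response_ms"] > MAX_RESPONSE_MS or m["tpm"] < TARGET_TPM:
        return "yellow"
    return "green"

for name, m in platforms.items():
    incomplete = TARGET_TPM - m["tpm"]  # transaction requests not completed
    print("{}: status={}, incomplete transactions={}".format(name, status(m), incomplete))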


Work over the course of this PoC confirmed our initial concern that the lack of standardized, comparative measures hinders the adoption and deployment of cloud services. Refining the techniques and processes that we used in testing could help create an industry-wide "Cloud Facts" label to streamline and standardize comparisons of competing services. We believe this is a worthwhile goal and that our findings provide initial practical steps toward a standard measurement methodology that draws from the ODCA Compute Infrastructure as a Service Master Usage Model and the ODCA Standard Units of Measure for IaaS.

References
• Intel Cloud Finder: Making it Easier for IT to Find Cloud Services.
• Introducing CSMIC SMI: Defining Globally Accepted Measures for Cloud Services.
• The NIST Definition of Cloud Computing, National Institute of Standards and Technology, U.S. Department of Commerce, September 2011 (PDF).
• Open Data Center Alliance Usage Model: Standard Units of Measure for IaaS Rev. 1.1.
• Open Data Center Alliance Master Usage Model: Compute Infrastructure as a Service Master Usage Model Rev. 1.0.
• Protecting Your Brand in the Cloud: Transparency and Trust through Enhanced Reporting, PricewaterhouseCoopers, November 2011.
• TrendSpotting in Tech Conference, The Ritz-Carlton, San Francisco, California, 2012, Marvin Wheeler, ODCA Chairman, Cowen and Company.


Appendix A: Taxonomy

The following are definitions of terms used throughout this document.

Application: A program or system that has not been specifically designed (or remediated) to transparently leverage the unique capabilities of cloud computing.

Application Behavior Learning (ABL) and Application Usage Pattern (AUP): A technology that helps enable IT professionals to identify patterns of concurrent transactions (mix of types and volumes) and data center component KPIs, referred to as AUPs.

Application Component (aka Infrastructure or Data Center Component): One of the items, such as a hardware or software appliance, that operates below the application layer in an application stack and contributes to the overall operation of a data center or application stack. Examples include servers, load balancers, and firewalls.

CIaaS Stack: A CIaaS stack consists of one or multiple compute layers providing web, application, or database services. Examples include Amazon Web Services, Rackspace Cloud, and an internal data center that supports enterprise applications.

Cloud Infrastructure Environment: A cloud provider's specific implementation of hardware, software, management infrastructure, and business processes and practices used to implement the provider's service catalog.

Cloud Provider: An organization that provides cloud services over the Internet and charges cloud subscribers. A cloud subscriber could be its own cloud provider, such as for private clouds.

Cloud Subscriber: A person or organization that has been authenticated to a cloud and maintains a business relationship with it.

Infrastructure as a Service (IaaS): A model of service delivery whereby the basic computing infrastructure of servers, software, and network equipment is provided as virtualized objects controllable via a service interface. Organizations can use this infrastructure to build a platform for developing and executing applications while avoiding the cost of purchasing, housing, and managing their own components.

Key Performance Indicator (KPI): A KPI is an IT term for a type of performance measurement. A common way of choosing KPIs is to apply a management framework (for example, the balanced scorecard) and consolidate a number of SLA perspectives and metrics into a smaller set of overall indicators. KPI types include:
• Quantitative indicators: potentially anything numeric and relevant to business objectives and the service contract
• Practical indicators that interface or align with enterprise processes
• Directional indicators specifying whether a service or an organization is improving or not

Metering: The monitoring, controlling, and reporting of resource usage at a level of abstraction appropriate to the type of service (for example, storage, processing, bandwidth, or active user accounts). Metering helps enable both the provider and the consumer of the service to control and optimize usage.

Private: Within this PoC, a "private" implementation is a CIaaS stack hosted on a VM environment within an enterprise, behind the firewall, or hosted with a service provider.

Public: A "public" implementation is a CIaaS stack hosted on a public cloud. This includes Amazon, Rackspace, and similar services.

"Single Pane of Glass": As used in the context of this paper, a "single pane of glass" refers to one view on a computer screen where an IT professional can see essentially all the key information required to do his or her job.

UI: The user interface (UI) is a display presentation that serves as the primary means for viewing program results and performing operations within the program.

Virtual CPU (vCPU): vCPU stands for Virtual Central Processing Unit. One or more vCPUs are assigned to every virtual machine (VM) within a cloud environment. Each vCPU is seen as a single physical CPU core by the VM's operating system.

Workload: The workload is defined as a Java or .Net transactional multi-tier application with substantial end-user activity. If a production environment is not available, the workload can be a test environment workload with the capability to generate and/or simulate user load and transactions.


Appendix B: ODCA Infrastructure Workgroup

The ODCA Infrastructure Workgroup handles the technical aspects of virtual/physical design, component management, storage, network, compute, facilities, platforms, backup, availability, and appliance construction. To learn more about the membership benefits, visit ODCA Membership Levels and Benefits.


Appendix C: The Proof-of-Concept Team

Appnomic Systems
Appnomic Systems provides innovative analytics software and services that simplify the complexities of managing IT. Appnomic specializes in serving businesses with high transaction volume operations, such as banks, large online businesses, and manufacturers. Appnomic has customers around the world, including top banks and e-commerce portals. The company's global headquarters are in Bangalore, India, and the North American headquarters are in Santa Clara, CA. Appnomic Systems is backed by Norwest Venture Partners. For more information, please visit www.appnomic.com.

Intel
Intel is a world leader in computing innovation. The company designs and builds the essential technologies that serve as the foundation for many of the world's computing devices. Additional information about Intel is available at newsroom.intel.com and blogs.intel.com.

Perceived Benefits for Participants
The participants in the development of this PoC document anticipated a number of benefits from the testing and data collection, as described in the following sections.

Benefits from the Perspective of Appnomic Systems
Appnomic expected to obtain the following benefits from the PoC and associated usage model:
• Validate Appnomic and its solutions from a reputed organization and community of enterprises
• Educate the market about IT Operations Analytics (ITOA)
• Generate awareness, leads, and sales for products and services
• Develop content that can be valuable to other Appnomic clients and prospects
• Demonstrate industry leadership – thought leadership and product/solution leadership

Benefits from the Perspective of ODCA Consumer Companies
These companies anticipated that the PoC would:
• Demonstrate thought leadership for evaluating and selecting cloud computing services
• Help formulate and shape the standards by which cloud computing services could be evaluated by businesses
• Gain first-hand experience of a new solution to help move the industry forward
• Identify a reliable proof point for measuring CIaaS provider performance
• Help influence the maturity of cloud solutions using real-world data
• Introduce the company to new markets through this leadership and formulation

ODCA Perspective
ODCA solicited participants and initiated this PoC testing to:
• Enhance interoperability among service and technology providers
• Advocate transparency by establishing standards and a consistent framework for measuring and evaluating competing services in the market
• Strengthen industry advocacy of the ODCA usage model and requirements to encourage providers offering cloud-computing services to compete across a level playing field

In addition to these objectives, ODCA also hoped to:
• Demonstrate how cloud service providers and data center operators can respond to RFPs and accommodate customer requirements
• Benchmark the cloud services and operations for efficiency, cost, and other KPIs
• Demonstrate effective management of data center operations, highlighting performance versus cost comparisons


Appendix D: Standardized Cloud Facts

The basic ideas behind the concept of a "Cloud Facts" label are presented in this Appendix. The approaches that are used in other industries (for example, the food "Nutrition Facts" label) are examined in terms of their usefulness as benchmarks for comparison.

The Food Nutrition Facts Label25

The food Nutrition Facts label is required on most packaged food in many countries; it is also known as the nutrition information panel, among other slight variations. Most countries also release overall nutrition guides for general educational purposes. In some cases, the guides are based on different dietary targets for various nutrients than the labels on specific foods.

In the United States, the Nutrition Facts label (Figure 10) lists, for key nutrients, the percentage of the recommended daily amount (to be met, or to be limited) that one serving supplies, based on a daily diet of 2,000 Calories (kcal). With certain exceptions, such as foods meant for babies, the following Daily Values are used. These are called Reference Daily Intake (RDI) values and were originally based on the highest 1968 Recommended Dietary Allowances (RDA) for each nutrient, in order to assure that the needs of all age and sex combinations were met. These are older than the current Recommended Dietary Allowances of the Dietary Reference Intake. For vitamin C, vitamin D, vitamin E, vitamin K, calcium, phosphorus, magnesium, and manganese, the current maximum RDAs (over age and sex) are up to 50% higher than the Daily Values used in labelling, whereas for other nutrients the estimated maximum needs have gone down. As of October 2010, the only micronutrients required to be included on all labels are vitamin A, vitamin C, calcium, and iron.

To determine the nutrient levels in their foods, companies may develop or use databases, and these may be submitted voluntarily to the U.S. Food and Drug Administration (FDA) for review. The FDA also offers a resource to help understand the Nutrition Facts label: How to Understand and Use the Nutrition Facts Label.

Figure 10. Sample Nutrition Facts label for macaroni and cheese, showing serving size, calories, nutrient amounts with their percent Daily Values, a quick guide to percent Daily Value (5% or less is low; 20% or more is high), and a footnote area with reference values for 2,000 and 2,500 Calorie diets.

25 Source: http://en.wikipedia.org/wiki/Nutrition_facts_label
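The percent-of-Daily-Value mechanism suggests one simple convention a Cloud Facts label could borrow: report each measured value alongside the percentage of a published reference. The sketch below illustrates the arithmetic; the cloud reference value is an assumption for illustration, not an ODCA-defined figure.

# Illustrative sketch: the Nutrition Facts "% Daily Value" idea applied to
# cloud metrics. The cloud reference value is an assumption for discussion only.

def percent_of_reference(measured, reference):
    """Percentage of a published reference value, as on a Nutrition Facts label."""
    return round(100.0 * measured / reference)

# Food example from the sample label: 12 g of total fat against a 65 g Daily Value.
print(percent_of_reference(12, 65), "% of Daily Value")  # prints 18 %

# Hypothetical cloud analogue: measured peak response time of 750 ms against an
# assumed service-tier reference of 1,000 ms (not an ODCA-published value).
print(percent_of_reference(750, 1000), "% of reference response time")  # prints 75 %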


Appendix E: The EPA Fuel Economy Guide26 and Monroney Auto Window Sticker27

The Monroney sticker, or window sticker, is a label required in the United States to be displayed in all new automobiles; it lists certain official information about the car. Since the mid-1970s, the U.S. Environmental Protection Agency (EPA) has provided fuel economy metrics on the label to help consumers choose more fuel-efficient vehicles. The window sticker was named after Almer Stillwell "Mike" Monroney, United States Senator from Oklahoma, who sponsored the Automobile Information Disclosure Act of 1958, which mandated disclosure of information on new automobiles.

In May 2011, the National Highway Traffic Safety Administration (NHTSA) and the EPA issued a joint final rule establishing new requirements for a fuel economy and environment label that is mandatory for all new passenger cars and trucks starting with model year 2013, though carmakers could adopt it voluntarily for model year 2012. The ruling includes new labels for alternative fuel and alternative propulsion vehicles available in the U.S. market, such as plug-in hybrids, electric vehicles, flexible-fuel vehicles, hydrogen fuel cell vehicles, and natural gas vehicles. The common fuel economy metric adopted to allow comparison of alternative fuel and advanced technology vehicles with conventional internal combustion engine vehicles is miles per gallon of gasoline equivalent (MPGe). A gallon of gasoline equivalent is the number of kilowatt-hours of electricity, cubic feet of compressed natural gas (CNG), or kilograms of hydrogen that contains the same energy as a gallon of gasoline.

The new labels also include, for the first time, an estimate of how much fuel or electricity it takes to drive 100 miles (160 km), giving U.S. consumers fuel consumption per distance travelled, the efficiency metric commonly used in many other countries. The EPA's objective is to avoid sole reliance on the traditional miles-per-gallon metric, which can be misleading when consumers compare fuel economy improvements, a problem known as the "MPG illusion."
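To make the conversions described above concrete, the following sketch computes MPGe and fuel consumption per 100 miles. The 33.7 kWh-per-gallon energy-equivalence factor is the figure commonly used by the EPA for gasoline; it is stated here as an assumption, since it does not appear in this PoC.

# Sketch of the fuel economy conversions described above. The 33.7 kWh-per-gallon
# energy-equivalence factor is the commonly cited EPA figure, included here as an
# assumption rather than a value taken from this PoC.

KWH_PER_GALLON_EQUIVALENT = 33.7  # assumed energy content of a gallon of gasoline, in kWh

def mpge(miles_driven, kwh_used):
    """Miles per gallon of gasoline equivalent (MPGe) for an electric vehicle."""
    gallons_equivalent = kwh_used / KWH_PER_GALLON_EQUIVALENT
    return miles_driven / gallons_equivalent

def gallons_per_100_miles(mpg):
    """Consumption per distance, the metric the new labels add to avoid the 'MPG illusion'."""
    return 100.0 / mpg

# Example: an electric vehicle that uses 30 kWh to travel 100 miles.
print(round(mpge(100, 30)))        # about 112 MPGe
print(gallons_per_100_miles(25))   # a 25 MPG car uses 4.0 gallons per 100 miles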

Sample Fuel Economy Guide Vehicle Listing

Monroney Auto Window Sticker

26 Source: U.S. Department of Energy Fuel Economy Guide: www.fueleconomy.gov/feg/pdfs/guides/FEG2014.pdf
27 Source: http://en.wikipedia.org/wiki/Monroney_sticker
