Next Generation Scalable and Efficient Data Protection
Dr. Sam Siewert, Software Engineer, Intel
Greg Scott, Cloud Storage Manager, Intel
STOS004
Agenda
• What Is Durability and Why Should You Care?
• Measuring Durability: Mean Time to Data Loss (MTTDL) and Other Models
• Techniques for Improving Durability
• Large Object Store Reference Architecture
The Problem…
• Hard drive non-recoverable read error probability is approaching 100% as capacities grow: at typical desktop SATA error rates (about one unrecoverable error per 10^14 bits read), reading every bit of a large array or RAID set has a substantial chance of hitting one
Another Problem…
• Rebuild times for mirroring and parity RAID are getting out of hand:
  – 8.3 hours for 3 TB SATA, sequential (100 MB/s)
  – 41.5 hours for 3 TB SATA, random (20 MB/s)
• More than a day to restore or initialize a drive?
*Intel estimates based on historical SATA drive performance data
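The rebuild-time arithmetic above is easy to reproduce; a minimal sketch in Python, using the slide's assumed 3 TB capacity and 100/20 MB/s sustained rates:

```python
def rebuild_hours(capacity_tb: float, rate_mb_per_s: float) -> float:
    """Hours to read or write an entire drive at a sustained rate."""
    capacity_mb = capacity_tb * 1_000_000  # decimal units, as drive vendors spec
    return capacity_mb / rate_mb_per_s / 3600.0

print(round(rebuild_hours(3, 100), 1))  # sequential 3 TB SATA -> 8.3 hours
print(round(rebuild_hours(3, 20), 1))   # random 3 TB SATA -> 41.7 hours (~41.5 on the slide)
```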
Measuring Data Durability…
• Mean time to data loss (MTTDL)
  – Average time before a system will lose data
• MTTDL probability
  – Probability that a system will lose data
• A function of:
  – Mean time to failure (MTTF), same as MTBF
    The SATA spec says 500,000 hours, but not at 100% duty cycle; SATA HDD experience shows MTTF is typically 200,000 hours
  – Mean time to repair (MTTR)
    Time to recover from a failure; an example is the time to re-mirror a drive
How Good Is the Standard MTTDL Model?
• Does the model match observed statistics for SATA disks?
• How does standard RAID data protection compare to erasure codes?
• Check results with NOMDL
[Figure: drive failure rates over life (infant mortality through mid/end of life), at node level and system level, from "A large-scale study of failures in high-performance computing systems", Bianca Schroeder, Garth A. Gibson]
Models for Expected Data Loss
• Mean Time to Data Loss (MTTDL)
  – Simple 2-state Markov model
• Normalized Magnitude of Data Loss (NOMDL)
  – Captures sector-level phenomena (partial failures)
  – Multi-state Monte Carlo simulation
  – Estimates the amount of expected data loss per terabyte per year
[Figure: two state diagrams. MTTDL: Healthy and Failure (Erasure) states with failure (MTBF) and rebuild (MTTR) transitions, ending in Data Loss. NOMDL: Healthy, Failure, and Data Loss states with additional idle scrub, sector remap, NRE read-failure retry, and recover transitions.]
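To make the two modeling styles concrete, here is a sketch (not the NOMDL tooling itself) of a Monte Carlo estimate of MTTDL for a two-drive mirror, checked against the closed-form 2-state result; the 200,000-hour MTTF and 8.3-hour rebuild are the deck's figures, everything else is an illustrative assumption:

```python
import math
import random

def simulate_mttdl_raid1(mttf_h: float, mttr_h: float, trials: int = 50_000) -> float:
    """Monte Carlo MTTDL for a 2-drive mirror. Each 'window' is a first
    failure in the pair (rate 2/MTTF); data is lost if the surviving drive
    also fails during the MTTR rebuild (probability p per window)."""
    p = 1.0 - math.exp(-mttr_h / mttf_h)
    total = 0.0
    for _ in range(trials):
        # geometric draw: how many windows until the fatal double failure
        k = max(1, math.ceil(math.log(1.0 - random.random()) / math.log(1.0 - p)))
        # elapsed time: k exponential first-failure gaps, i.e., one gamma draw
        total += random.gammavariate(k, mttf_h / 2.0)
    return total / trials

random.seed(1)
estimate = simulate_mttdl_raid1(200_000, 8.3)
analytic = 200_000.0 ** 2 / (2 * 8.3)  # 2-state result: MTTF^2 / (N * MTTR)
print(f"simulated {estimate:.3g} h vs 2-state {analytic:.3g} h")
```

The simulated and closed-form values agree to within sampling noise, which is the point of the 2-state model: it is a good first-order estimate, even though (as the next slides note) it misses sector-level effects.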
MTTDL Example: RAID 1
• Two 3 TB SATA desktop drives, mirrored
• The sequence of events:
  1. Data corruption on one drive
  2. Re-mirror to restore the data
  3. Second drive fails before the first is restored
  4. Loss of data
• Simple 2-state model: data is lost because of two failures before the restore completes
The Annual Probability of Data Loss

P(t)_failure = 1 − e^(−kt),  k = 1/MTTF
P(t)_data_loss = (1 − e^(−Lifetime/MTTDL_set)) * N_sets,  k = 1/MTTDL_set

Birth/death exponential model:
– P(t) = probability of failure over the time period
– k = failure rate, t = time
– MTTF = Mean Time To Failure (or Between)
– N_sets = population of sets; Lifetime = period of interest, e.g., annual
– MTTDL_set = mean time to data loss for one set in the population
How good is it?
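The two expressions above can be evaluated directly. A small sketch using the deck's observed 200,000-hour MTTF; the set count and per-set MTTDL below are illustrative assumptions:

```python
import math

HOURS_PER_YEAR = 8760.0

def p_failure(t_h: float, mttf_h: float) -> float:
    """P(t) = 1 - e^(-kt) with k = 1/MTTF: chance one device fails within t hours."""
    return 1.0 - math.exp(-t_h / mttf_h)

def expected_sets_lost(lifetime_h: float, mttdl_set_h: float, n_sets: int) -> float:
    """(1 - e^(-Lifetime/MTTDL_set)) * N_sets: expected sets losing data."""
    return p_failure(lifetime_h, mttdl_set_h) * n_sets

# One SATA drive over one year at the observed 200,000-hour MTTF:
print(f"{p_failure(HOURS_PER_YEAR, 200_000):.2%}")
# 100 mirrored sets whose per-set MTTDL is MTTF^2 / (2 * MTTR), MTTR = 8.3 h:
mttdl_set = 200_000.0 ** 2 / (2 * 8.3)
print(f"{expected_sets_lost(HOURS_PER_YEAR, mttdl_set, 100):.2e}")
```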
• Optimistic – only 2 states
• Pessimistic – does not account for proactive data protection measures
• It does model resiliency to drive failures
MTTDL RAID Examples
• General "engineering estimate" for the probability of loss:

  MTTDL = MTTF^(combined failures with data loss) / (devices in set * exposure window)

• RAID1 equation
  – Joint probability of 2 erasures; coupling of mirror drives; exposure window

  MTTDL_RAID1 = MTTF^2 / (N * MTTR)

• RAID5 equation
  – (N+1)*N double-fault scenarios

  MTTDL_RAID5 = MTTF^2 / ((N+1) * N * MTTR)

• RAID6 equation
  – Joint probability of 3 erasures; (N+2)*(N+1)*N triple-fault scenarios

  MTTDL_RAID6 = MTTF^3 / ((N+2) * (N+1) * N * MTTR^2)
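Plugging the deck's numbers (200,000-hour MTTF, 41.5-hour random rebuild) into the three formulas gives a feel for the magnitudes; the set sizes N = 9 for RAID5 and N = 8 for RAID6 are illustrative assumptions:

```python
def mttdl_raid1(mttf: float, mttr: float, n: int = 2) -> float:
    return mttf ** 2 / (n * mttr)

def mttdl_raid5(mttf: float, mttr: float, n: int) -> float:
    # (N+1)*N double-fault scenarios
    return mttf ** 2 / ((n + 1) * n * mttr)

def mttdl_raid6(mttf: float, mttr: float, n: int) -> float:
    # (N+2)*(N+1)*N triple-fault scenarios
    return mttf ** 3 / ((n + 2) * (n + 1) * n * mttr ** 2)

MTTF, MTTR = 200_000.0, 41.5  # hours: observed SATA MTTF, random 3 TB rebuild
for name, hours in [("RAID1", mttdl_raid1(MTTF, MTTR)),
                    ("RAID5 (N=9)", mttdl_raid5(MTTF, MTTR, 9)),
                    ("RAID6 (N=8)", mttdl_raid6(MTTF, MTTR, 8))]:
    print(f"{name}: {hours:.3g} h = {hours / 8760:.3g} years")
```

Note how the long random-rebuild MTTR pulls RAID5 below even a simple mirror, while RAID6's extra MTTF factor more than compensates.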
Scaling With Virtualization
Virtual mapping of RAID sets over nodes/drives keeps sets independent with constant overhead
Erasure Coding (EC)
[Figure: side-by-side architectures. RAID: App on a server → RAID 5/6 → SAS/SCSI to disks holding data 1 … data m, plus 1 or 2 parity disks. Erasure coding: App → EC client (with a metadata service providing data location) → SCSI to disk or IP to storage servers, spreading slices 1 … n over storage services/nodes, with m minimum slices needed and k spares.]
EC extends the data protection architecture of RAID 5/6 to "RAID k", where k is the number of failures that can be tolerated without data loss: for RAID 5, k = 1; for RAID 6, k = 2; for EC, k = n − m.
EMC* Atmos* and Isilon* are example systems using EC
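Production EC systems use Reed–Solomon-style codes over finite fields, but the erase-and-reconstruct idea can be sketched with the simplest case, k = 1 XOR parity (the code RAID 5 uses); slice contents below are illustrative:

```python
def xor_parity(slices: list[bytes]) -> bytes:
    """XOR parity over m equal-length data slices (the k = 1 case of EC)."""
    parity = bytearray(len(slices[0]))
    for s in slices:
        for i, byte in enumerate(s):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct(survivors: list[bytes], parity: bytes) -> bytes:
    """Any single erased slice is the XOR of the survivors and the parity."""
    return xor_parity(survivors + [parity])

data = [b"large ", b"object", b" store"]   # m = 3 data slices
parity = xor_parity(data)                  # 1 parity slice -> n = 4, k = 1
# Erase slice 1, then rebuild it from the other slices plus parity:
assert reconstruct([data[0], data[2]], parity) == data[1]
```

Codes with k > 1 follow the same pattern but need k independent "parity" slices, which is what Reed–Solomon arithmetic over GF(2^8) provides.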
MTTDL Example: Erasure Coding
• 10:16 example — fragments spread across 16 drives (m = 10 needed, k = 6 spare)
• All drives functioning, then failures occur:
  1. Drive 1 fails (MTT fail)
  2. Drive replaced (MTT repair)
  3. Restore started (MTT restore)
  4. 6 more drives fail (MTT data loss)
  5. Loss of data
• New writes rebuild the lost fragments across the remaining drives
• Data is lost only if 6 more drives fail before drive 1 is restored
The MTTDL Equation for EC: Power-Law Improvements
Why is it different?
• Resilient to triple faults or better
• Numerator is the joint probability of a triple … N-tuple failure (erasure)
• Denominator includes linear coupling terms (S) — a linear degradation due to coupling
• Parity de-clustering vastly improves MTTR

MTTDL_EC3 = MTTF^4 / (S * (N+3) * (N+2) * (N+1) * N * MTTR^3)
MTTDL_EC6 = MTTF^7 / (S * (N+6) * (N+5) * (N+4) * (N+3) * (N+2) * (N+1) * N * MTTR^6)
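These power-law equations generalize to any tolerance k. A sketch, taking S = 1 for a single set (an assumption; the deck's S captures linear coupling) and using an 8.3-hour declustered-rebuild MTTR:

```python
def mttdl_ec(mttf: float, mttr: float, n: int, k: int, s: float = 1.0) -> float:
    """MTTDL for a code tolerating k erasures:
    MTTF^(k+1) / (S * (N+k) * (N+k-1) * ... * N * MTTR^k)."""
    denom = s * mttr ** k
    for i in range(k + 1):
        denom *= n + i
    return mttf ** (k + 1) / denom

MTTF, MTTR = 200_000.0, 8.3   # hours; fast declustered rebuild assumed
raid6 = mttdl_ec(MTTF, MTTR, 8, 2)   # k = 2 reproduces the RAID6 formula
ec6 = mttdl_ec(MTTF, MTTR, 10, 6)    # 10:16 erasure code, k = 6
print(f"RAID6 {raid6:.3g} h, EC6 {ec6:.3g} h, ratio {ec6 / raid6:.3g}")
```

Each extra tolerated failure multiplies MTTDL by roughly MTTF/MTTR, which is why the improvement is a power law rather than a constant factor.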
Tomorrow’s Datacenter
[Figure: employees (VPN or LAN) and consumer or business customers (WWW, via a content delivery network) reach dedicated servers and virtualized compute servers, backed by two storage tiers: IOPS/TB-focused premium SLA storage with low-latency proximity storage (e.g., business database), and $/TB-optimized "centralized" high-capacity storage (e.g., backup or large object storage). Workloads span business processes, decision support, HPC, collaboration, IT infrastructure, app dev, and web infrastructure.]
Tomorrow’s datacenters add lower-cost, high-capacity storage alongside traditional low-latency, premium storage
Durability Options for Large Object Store
Comparison of a rack implementation:
• 42U rack
• 32 storage nodes (SN)
• 10 hard drives per SN
• No single point of failure in the rack
• Comparison of both durability and $/TB
Durability Config   | Minimal drives (m)  | Spare drives (k)             | Added for no single point of failure
Erasure Coding 16   | 10 drives in 10 SNs | 6 drives in 6 SNs            | No additional
RAID 0+1            | 10 drives in 1 SN   | 10 drives in 2nd SN          | No additional
RAID 5+1            | 9 drives in 2 SNs   | 1 drive in each SN           | 10 drives in 2nd SN
RAID 6+1            | 8 drives in 2 SNs   | 2 drives in each SN          | 10 drives in 2nd SN
RAID 3-way          | 10 drives in 1 SN   | 10 drives in 2nd and 3rd SNs | No additional
Large Object Store Rack (Network Configuration)
[Figure: clients, dual 1/10GE Base-T switches (x40 1/10GE + x8 10GE ports each), redundant client/metadata servers, and 32 storage servers]
• Per switch: x8 10GE to client TOR switches, x4 10GE to each Client/MD server, x32 GE to the storage servers
• Redundant client/metadata servers: x2 10GE Base-T to active switch 1 and x2 10GE Base-T to active switch 2
• 32 storage servers: x1 GE Base-T to active switch 1 and x1 GE Base-T to active switch 2
No single point of failure: dual switches and dual connectivity to all servers in the rack
Storage Server Reference Architecture (Storage Node)
Large object storage (e.g., Haystack); Amplidata software on x86-64 Red Hat* Linux*
[Figure: 42 RU rack with client/MD servers, 32 storage servers, and x2 Arista 7140T switches (x8 10GbE SFP+ and x2 GbE each)]
Per storage node:
• Portwell WADE-8011 Mini-ITX board: Intel® Xeon™ E3-1220L, Intel® C206 chipset (DMI G2), 2 GB ECC DDR3 memory
• Intel® dual GbE, x2 GbE links into the switches' x32 GbE ports
• SuperMicro SASLP-MV8 HBA on x4 PCIe G2 (SAS 6 Gb/s, x6 drives) plus x4 chipset SATA 6 Gb/s ports
• x10 Western Digital 3 TB SATA storage drives in a 1U 3.5" SATA disk enclosure
~1 PB of raw storage in a 42U rack: high efficiency, durability, and scalability with erasure coding
Client/Metadata Server Reference Architecture (Controller Node)
Large object storage (e.g., Haystack); Amplidata software on x86-64 Red Hat* Linux*
Per controller node:
• Intel® Server Board S5520UR in an Intel® Server Chassis SR2625URLXT
• Dual Intel® Xeon® processor 5620 (QPI), 12 GB ECC DDR3 memory per processor
• Intel® 5520 chipset; x2 Intel® X520-T2 dual-10GbE adapters on x4 PCIe G2, x2 + x2 10GbE Base-T into the x2 Arista 7140T switches
• x2 Intel® Solid-State Drive 320 Series and a Seagate 500 GB SATA storage drive (x2 SATA 3G)
Dual metadata and client (erasure encode/decode) servers; dual 10GE throughput to application servers and to storage servers
Converged Storage Server with EC Value
(320-drive, 960 TB comparison, no single point of failure¹; 32 nodes, 10 drives/node, 30 TB capacity/node)

Value       | Description             | EC16 (m=10, k=6, 16 nodes) | RAID0+1 (m=10, k=0, 2 nodes) | RAID5+1 (m=9, k=1, 2 nodes) | RAID6+1 (m=8, k=2, 2 nodes) | RAID 3-way (m=10, k=0, 3 nodes)
Efficiency  | Raw/usable efficiency   | 63%   | 50%   | 45%   | 40%   | 33%
            | Usable capacity (TB)    | 600   | 480   | 432   | 384   | 320
            | Power/usable capacity   | 53%   | 67%   | 74%   | 83%   | 1 (baseline)
Durability  | Relative data loss risk | 10⁻⁸  | 2288  | 1.6   | 10⁻⁶  | 1
Cost²       | $/TB                    | $343  | $429  | $476  | $536  | $643
Scalability | EC16 offers the best storage scaling

EC gives the best efficiency at equivalent (or better) durability compared to RAID6+1
¹ Hardware configuration: Large Object Reference Architecture
² Estimate using ServersDirect and CDW web prices, 8/9/2011
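The efficiency and capacity rows all follow from one expression. A sketch reproducing the usable-capacity column, assuming the comparison's fixed 960 TB of raw storage:

```python
def usable_tb(raw_tb: float, m: int, n: int, copies: int = 1) -> float:
    """Usable capacity: m of every n drives hold data, divided by the
    number of whole-set copies (mirrors or replicas)."""
    return raw_tb * m / n / copies

RAW = 960.0
configs = {
    "EC16":       usable_tb(RAW, 10, 16),     # 600 TB
    "RAID0+1":    usable_tb(RAW, 10, 10, 2),  # 480 TB
    "RAID5+1":    usable_tb(RAW, 9, 10, 2),   # 432 TB
    "RAID6+1":    usable_tb(RAW, 8, 10, 2),   # 384 TB
    "RAID 3-way": usable_tb(RAW, 10, 10, 3),  # 320 TB
}
for name, tb in configs.items():
    print(f"{name}: {tb:.0f} TB usable ({tb / RAW:.0%} efficient)")
```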
Summary
• Increasing drive density and rebuild times are creating a data protection crisis
• The MTTDL model is a sufficient predictor of data loss risk
• Erasure codes offer improved data durability over traditional RAID and triple replication, at lower cost
• Intel’s large object reference architecture provides a cost-effective implementation
Call to Action
• Be aware of data durability, especially when building capacity storage with SATA drives
• Understand the impact of data durability on scale-out storage
• Migrate to erasure coding for large object stores for optimal durability
Additional Sources of Information on This Topic:
1. “An Analysis of Data Corruption in the Storage Stack”, Lakshmi Bairavasundaram, Garth R. Goodson, et al.
2. “A large-scale study of failures in high-performance computing systems”, Bianca Schroeder, Garth A. Gibson
3. “Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?”, Bianca Schroeder, Garth A. Gibson
4. “An Analysis of Latent Sector Errors in Disk Drives”, Lakshmi Bairavasundaram, Garth R. Goodson, et al.
5. Memory Systems: Cache, DRAM, Disk, Bruce Jacob, Spencer Ng, David Wang
6. “Mean time to meaningless: MTTDL, Markov models, and storage system reliability”, Kevin Greenan, James Plank, Jay Wylie, Hot Topics in Storage and File Systems, June 2010
MTTDL Equations for Large Object Store
• RAID0+1: RAID 0 across 10 drives in a storage node, storage node mirrored to a second node

  MTTDL_RAID0+1 = MTTF^2 / (N * MTTR)

• RAID5+1: RAID 5 across 10 drives (9 primary drives, 1 drive parity), storage node mirrored to a second node with RAID 5

  MTTDL_RAID5+1 = MTTDL_RAID5^2 / (N * MTTR_RAID5)

• RAID6+1: RAID 6 across 10 drives (8 primary drives, 2 drives parity), storage node mirrored to a second node with RAID 6

  MTTDL_RAID6+1 = MTTDL_RAID6^2 / (N * MTTR_RAID6)

• EC16: 16 fragments across 16 nodes

  MTTDL_EC6 = MTTF^7 / ((N+6) * (N+5) * (N+4) * (N+3) * (N+2) * (N+1) * N * MTTR^6)
Legal Disclaimer • INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL’S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL® PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. INTEL PRODUCTS ARE NOT INTENDED FOR USE IN MEDICAL, LIFE SAVING, OR LIFE SUSTAINING APPLICATIONS. • Intel may make changes to specifications and product descriptions at any time, without notice. • All products, dates, and figures specified are preliminary based on current expectations, and are subject to change without notice. • Intel, processors, chipsets, and desktop boards may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request. • Any code names featured are used internally within Intel to identify products that are in development and not yet publicly announced for release. Customers, licensees and other third parties are not authorized by Intel to use code names in advertising, promotion or marketing of any product or services and any such use of Intel's internal code names is at the sole risk of the user • Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. 
You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. • Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. Go to: http://www.intel.com/products/processor_number • Intel product plans in this presentation do not constitute Intel plan of record product roadmaps. Please contact your Intel representative to obtain Intel's current plan of record product roadmaps. • Intel, Xeon, Sponsors of Tomorrow and the Intel logo are trademarks of Intel Corporation in the United States and other countries. • *Other names and brands may be claimed as the property of others. • Copyright ©2011 Intel Corporation.
Risk Factors
The above statements and any others in this document that refer to plans and expectations for the second quarter, the year and the future are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,” “intends,” “plans,” “believes,” “seeks,” “estimates,” “may,” “will,” “should,” and their variations identify forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Many factors could affect Intel’s actual results, and variances from Intel’s current expectations regarding such factors could cause actual results to differ materially from those expressed in these forward-looking statements. Intel presently considers the following to be the important factors that could cause actual results to differ materially from the company’s expectations. Demand could be different from Intel's expectations due to factors including changes in business and economic conditions, including supply constraints and other disruptions affecting customers; customer acceptance of Intel’s and competitors’ products; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers. Potential disruptions in the high technology supply chain resulting from the recent disaster in Japan could cause customer demand to be different from Intel’s expectations. Intel operates in intensely competitive industries that are characterized by a high percentage of costs that are fixed or difficult to reduce in the short term and product demand that is highly variable and difficult to forecast. 
Revenue and the gross margin percentage are affected by the timing of Intel product introductions and the demand for and market acceptance of Intel's products; actions taken by Intel's competitors, including product offerings and introductions, marketing programs and pricing pressures and Intel’s response to such actions; and Intel’s ability to respond quickly to technological developments and to incorporate new features into its products. The gross margin percentage could vary significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products for sale; changes in revenue levels; product mix and pricing; the timing and execution of the manufacturing ramp and associated costs; start-up costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials or resources; product manufacturing quality/yields; and impairments of long-lived assets, including manufacturing, assembly/test and intangible assets. Expenses, particularly certain marketing and compensation expenses, as well as restructuring and asset impairment charges, vary depending on the level of demand for Intel's products and the level of revenue and profits. The majority of Intel’s non-marketable equity investment portfolio balance is concentrated in companies in the flash memory market segment, and declines in this market segment or changes in management’s plans with respect to Intel’s investments in this market segment could result in significant impairment charges, impacting restructuring charges as well as gains/losses on equity investments and interest and other. 
Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Intel’s results could be affected by the timing of closing of acquisitions and divestitures. Intel's results could be affected by adverse effects associated with product defects and errata (deviations from published specifications), and by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust and other issues, such as the litigation and regulatory matters described in Intel's SEC reports. An unfavorable ruling could include monetary damages or an injunction prohibiting us from manufacturing or selling one or more products, precluding particular business practices, impacting Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property. A detailed discussion of these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the report on Form 10-Q for the quarter ended April 2, 2011. Rev. 5/9/11