HPCS HPCchallenge Benchmark Suite

HPCS HPCchallenge Benchmark Suite
David Koester, The MITRE Corporation ([email protected])
Jack Dongarra and Piotr Luszczek, ICL/UT ({dongarra, luszczek}@cs.utk.edu)

Abstract
The Defense Advanced Research Projects Agency (DARPA) High Productivity Computing Systems (HPCS) HPCchallenge Benchmarks examine the performance of High Performance Computing (HPC) architectures using kernels with more challenging memory access patterns than just the High Performance LINPACK (HPL) benchmark used in the Top500 list. The HPCchallenge Benchmarks build on the HPL framework and augment the Top500 list by providing benchmarks that bound the performance of many real applications as a function of memory access locality characteristics. The real utility of the HPCchallenge Benchmarks is that architectures can be described with a wider range of metrics than just the Flop/s rate from HPL. Even a small percentage of random memory accesses in real applications can significantly affect the overall performance of that application on architectures not designed to minimize or hide memory latency. The HPCchallenge Benchmarks include a new metric, Giga UPdates per Second (GUPS), and a new benchmark, RandomAccess, to measure the ability of an architecture to access memory randomly, i.e., with no locality. When looking only at HPL performance and the Top500 List, inexpensive build-your-own clusters appear to be much more cost effective than more sophisticated HPC architectures; the HPCchallenge Benchmarks provide users with additional information to justify policy and purchasing decisions. In the presentation and paper, we compare measured HPCchallenge Benchmark performance on various HPC architectures, from Cray X1s to Beowulf clusters. Additional information on the HPCchallenge Benchmarks can be found at http://icl.cs.utk.edu/hpcc/

Introduction
At SC2003 in Phoenix (15-21 November 2003), Jack Dongarra (ICL/UT) announced the release of a new benchmark suite, the HPCchallenge Benchmarks, which examines the performance of HPC architectures using kernels with more challenging memory access patterns than the High Performance LINPACK (HPL) benchmark used in the Top500 list. The HPCchallenge Benchmarks are being designed to complement the Top500 list and to provide benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality. Development of the HPCchallenge Benchmarks is being funded by the Defense Advanced Research Projects Agency (DARPA) High Productivity Computing Systems (HPCS) Program.

The HPCchallenge Benchmark Kernels
Local: DGEMM (matrix x matrix multiply); STREAM (COPY, SCALE, ADD, TRIAD); RandomAccess; 1D FFT
Global: High Performance LINPACK (HPL); PTRANS (parallel matrix transpose); (MPI)RandomAccess; 1D FFT
Also: I/O; b_eff (effective bandwidth benchmark)


Additional information on the HPCchallenge Benchmarks can be found at http://icl.cs.utk.edu/hpcc/.

Flop/s
The Flop/s metric from HPL has been the de facto standard for comparing High Performance Computers for many years. HPL works well on all architectures, even cache-based, distributed memory multiprocessors, but the measured performance may not be representative of a wide range of real user applications, such as the adaptive multi-physics simulations used in weapons and vehicle design, weather and climate models, and defense applications. HPL is more compute friendly than these applications because it has more extensive memory reuse in its Level 3 BLAS-based calculations.

Memory Performance
There is a need for benchmarks that test memory performance. When looking only at HPL performance and the Top500 List, inexpensive build-your-own clusters appear to be much more cost effective than more sophisticated HPC architectures. HPL has high spatial and temporal locality, characteristics shared by few real user applications. The HPCchallenge Benchmarks provide users with additional information to justify policy and purchasing decisions. Not only does the Japanese Earth Simulator outperform the top American systems on the HPL benchmark (Tflop/s), but the difference in bandwidth performance on John McCalpin's STREAM TRIAD benchmark (Level 1 BLAS) shows an even greater disparity: the Earth Simulator outperforms ASCI Q by a factor of 4.64 on HPL, while the higher bandwidth memory and interconnect systems of the Earth Simulator are clearly evident as it outperforms ASCI Q by a factor of 36.25 on STREAM TRIAD. In the presentation and paper, we compare the measured HPCchallenge Benchmark performance on various HPC architectures, from Cray X1s to Beowulf clusters, using the updated results at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi

Even a small percentage of random memory accesses in real applications can significantly affect the overall performance of that application on architectures not designed to minimize or hide memory latency. Memory latency has not kept up with Moore's Law: Moore's Law hypothesizes a 60% compound growth rate per year for microprocessor "performance", while memory latency has been improving at a compound rate of only about 7% per year, so the processor-memory performance gap has been growing at roughly 50% per year (1.60/1.07 ≈ 1.5) since 1980. The HPCchallenge Benchmarks include a new metric, Giga UPdates per Second (GUPS), and a new benchmark, RandomAccess, to measure the ability of an architecture to access memory randomly, i.e., with no locality.
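To make the sensitivity to random accesses concrete, the short model below computes the average memory access time for a few miss rates. It is only a back-of-the-envelope sketch, not part of the benchmark suite, and the 1 ns cache-hit and 100 ns memory latencies are assumed, illustrative values; with those numbers, even a 1% miss rate roughly doubles the average access time.

```c
#include <stdio.h>

/* Illustrative average-memory-access-time model:
 *   t_avg = (1 - p_miss) * t_hit + p_miss * t_miss
 * The latencies below are assumed values chosen only for illustration. */
int main(void)
{
    const double t_hit  = 1.0;    /* ns, cache hit (assumed)           */
    const double t_miss = 100.0;  /* ns, full miss to memory (assumed) */
    const double miss_rates[] = { 0.0, 0.01, 0.05, 0.10 };

    for (int i = 0; i < 4; i++) {
        double p = miss_rates[i];
        double t_avg = (1.0 - p) * t_hit + p * t_miss;
        printf("miss rate %4.1f%% -> average access %6.2f ns (%.1fx slower)\n",
               100.0 * p, t_avg, t_avg / t_hit);
    }
    return 0;
}
```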

GUPS is calculated by identifying the number of memory locations that can be randomly updated in one second, divided by 1 billion (1e9). The term “randomly” means that there is little relationship between one address to be updated and the next, except that they occur in the space of ½ the total system memory. An update is a read-modify-write operation on a table of 64-bit words. An address is generated, the value at that address read from memory, modified by an integer operation (add, and, or, xor) with a literal value, and that new value is written back to memory
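The update loop itself is simple. The sketch below follows the description above (a table of 64-bit words updated with XOR against a pseudo-random 64-bit stream) but is only illustrative: the small table size, the xorshift generator, the choice of index bits, and the omission of the benchmark's error allowance and look-ahead rules are all simplifications relative to the official RandomAccess code in the HPCC suite.

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define LOG2_TABLE 20                      /* small table for illustration; the  */
#define TABLE_SIZE (1ULL << LOG2_TABLE)    /* real benchmark uses half of memory */
#define NUPDATE    (4ULL * TABLE_SIZE)     /* 4 updates per location on average  */

int main(void)
{
    static uint64_t table[TABLE_SIZE];
    uint64_t ai = 1;                       /* seed for the pseudo-random stream  */

    for (uint64_t k = 0; k < TABLE_SIZE; k++)
        table[k] = k;                      /* initialize the table */

    clock_t t0 = clock();
    for (uint64_t i = 0; i < NUPDATE; i++) {
        /* xorshift64 stands in for the benchmark's polynomial generator. */
        ai ^= ai << 13;  ai ^= ai >> 7;  ai ^= ai << 17;
        /* Read-modify-write: index taken from bits of ai (low bits here;
         * the benchmark specification defines the exact bit selection).  */
        table[ai & (TABLE_SIZE - 1)] ^= ai;
    }
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("%.4f GUPS (illustrative, serial)\n", (double)NUPDATE / secs / 1e9);
    return 0;
}
```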

Presentation

HPCS HPCchallenge Benchmark Suite
David Koester, Ph.D. (MITRE), Jack Dongarra (UTK), Piotr Luszczek (ICL/UTK)
28 September 2004


Outline
• Brief DARPA HPCS Overview
• Architecture/Application Characterization
• HPCchallenge Benchmarks
• Preliminary Results
• Summary

High Productivity Computing Systems
Goal: Create a new generation of economically viable computing systems and a procurement methodology for the security/industrial community (2007-2010).

HPCS Program Focus Areas
Impact:
• Performance (time-to-solution): speed up critical national security applications by a factor of 10X to 40X
• Programmability (idea-to-first-solution): reduce the cost and time of developing application solutions
• Portability (transparency): insulate research and operational application software from the system
• Robustness (reliability): apply all known techniques to protect against outside attacks, hardware faults, and programming errors

Applications:
• Intelligence/surveillance, reconnaissance, cryptanalysis, weapons analysis, airborne contaminant modeling, and biotechnology

Fill the critical technology and capability gap: from today (late 80's HPC technology) to the future (quantum/bio computing).

High Productivity Computing Systems - Program Overview
Goal: Create a new generation of economically viable computing systems and a procurement methodology for the security/industrial community (2007-2010).
[Program roadmap figure: Phase 1, industry concept studies; Phase 2 (2003-2005), advanced design and prototypes by the vendors, with a Technology Assessment Review at the program's half-way point and test/new evaluation frameworks; Phase 3 (2006-2010), full scale development leading to Petascale/s systems and a validated procurement evaluation methodology; a productivity team runs throughout the program.]

HPCS Program Goals
• HPCS overall productivity goals:
  - Execution (sustained performance): 1 Petaflop/sec (scalable to greater than 4 Petaflop/sec). Reference: production workflow.
  - Development: 10X over today's systems. Reference: lone researcher and enterprise workflows.
• Productivity Framework:
  - Baselined for today's systems
  - Successfully used to evaluate the vendors' emerging productivity techniques
  - Provides a solid reference for evaluation of the vendors' proposed Phase III designs
• Subsystem Performance Indicators:
  1) 2+ PF/s LINPACK
  2) 6.5 PB/sec data STREAM bandwidth
  3) 3.2 PB/sec bisection bandwidth
  4) 64,000 GUPS

Bob Graybill (DARPA/IPTO) (emphasis added)

Outline
• Brief DARPA HPCS Overview
• Architecture/Application Characterization
• HPCchallenge Benchmarks
• Preliminary Results
• Summary

Processor-Memory Performance Gap
[Chart (taken from a Patterson-Keeton talk to SIGMOD): processor performance ("Moore's Law", about 60% per year) versus DRAM latency improvement (about 7% per year), plotted on a log scale from 1980 to 2000; the processor-memory performance gap grows about 50% per year.]
• Alpha 21264 full cache miss / instructions executed: 180 ns / 1.7 ns = 108 clocks x 4, or 432 instructions
• Caches in the Pentium Pro: 64% of area, 88% of transistors

Processing vs. Memory Access
• Doesn't cache solve this problem?
  - It depends. With small amounts of contiguous data, usually. With large amounts of non-contiguous data, usually not.
  - In most computers the programmer has no control over cache.
  - Often "a few" Bytes/FLOP is considered OK.
• However, consider operations on the transpose of a matrix (e.g., for adjoint problems): Xa = b and X^T a = b.
  - If X is big enough, 100% cache misses are guaranteed, and we need at least 8 Bytes/FLOP (assuming a and b can be held in cache).
• Latency and limited bandwidth of processor-memory and node-node communications are major limiters of performance for scientific computation.
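A minimal C sketch of the transpose effect (illustrative only; the dimension and row-major layout are assumptions, not part of any HPCchallenge kernel): multiplying by X walks memory with unit stride, while multiplying by X^T over the same storage strides by the full row length, so once X is much larger than cache essentially every access to X misses.

```c
#include <stdio.h>
#include <stdlib.h>

#define N 4096  /* assumed dimension; one row (32 KB) already strains L1 cache */

/* y = X a : row-major X traversed with unit stride (cache friendly). */
static void matvec(const double *X, const double *a, double *y)
{
    for (int i = 0; i < N; i++) {
        double sum = 0.0;
        for (int j = 0; j < N; j++)
            sum += X[(size_t)i * N + j] * a[j];
        y[i] = sum;
    }
}

/* y = X^T a : the same storage traversed with stride N, so once X is
 * much larger than cache essentially every access to X is a miss.    */
static void matvec_t(const double *X, const double *a, double *y)
{
    for (int i = 0; i < N; i++) {
        double sum = 0.0;
        for (int j = 0; j < N; j++)
            sum += X[(size_t)j * N + i] * a[j];
        y[i] = sum;
    }
}

int main(void)
{
    double *X = malloc((size_t)N * N * sizeof *X);
    double *a = malloc(N * sizeof *a), *y = malloc(N * sizeof *y);
    if (!X || !a || !y) return 1;

    for (size_t i = 0; i < (size_t)N * N; i++) X[i] = 1.0;
    for (int i = 0; i < N; i++) a[i] = 1.0;

    matvec(X, a, y);    /* unit-stride traversal */
    matvec_t(X, a, y);  /* stride-N traversal    */
    printf("y[0] = %g (both variants compute the same result)\n", y[0]);

    free(X); free(a); free(y);
    return 0;
}
```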

Processing vs. Memory Access: High Performance LINPACK
Consider another benchmark: LINPACK, Ax = b. Solve this linear equation for the vector x, where A is a known matrix and b is a known vector. LINPACK uses the BLAS routines, which divide A into blocks. On average, LINPACK requires 1 memory reference for every 2 FLOPs, or 4 Bytes/FLOP, and many of these can be cache references.
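The memory reuse that makes HPL cache friendly is easiest to see in the Level 3 BLAS kernel where it spends its time. The naive loop below is only a sketch (HPL actually calls blocked, highly optimized DGEMM): it performs 2n^3 floating-point operations on just 3n^2 matrix elements, so each element can be reused on the order of n times when blocking keeps it in cache.

```c
#include <stdio.h>
#include <stdlib.h>

/* Naive C = C + A*B for n x n matrices: 2*n^3 flops over 3*n^2 doubles,
 * i.e. O(n) potential reuse per element.  This is a sketch of why dense
 * linear algebra is cache friendly, not the blocked BLAS HPL uses.      */
static void dgemm_naive(int n, const double *A, const double *B, double *C)
{
    for (int i = 0; i < n; i++)
        for (int k = 0; k < n; k++) {
            double aik = A[(size_t)i * n + k];   /* reused across the j loop */
            for (int j = 0; j < n; j++)
                C[(size_t)i * n + j] += aik * B[(size_t)k * n + j];
        }
}

int main(void)
{
    int n = 512;                                 /* assumed problem size */
    double *A = calloc((size_t)n * n, sizeof *A);
    double *B = calloc((size_t)n * n, sizeof *B);
    double *C = calloc((size_t)n * n, sizeof *C);
    if (!A || !B || !C) return 1;

    for (size_t i = 0; i < (size_t)n * n; i++) { A[i] = 1.0; B[i] = 1.0; }
    dgemm_naive(n, A, B, C);
    printf("flops = %.0f on %.0f doubles -> reuse on the order of n = %d\n",
           2.0 * n * n * n, 3.0 * n * n, n);

    free(A); free(B); free(C);
    return 0;
}
```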

Processing vs. Memory Access: STREAM TRIAD
Consider the simple benchmark STREAM TRIAD: a(i) = b(i) + q * c(i)
• a(i), b(i), and c(i) are vectors; q is a scalar.
• The vector length is chosen to be much longer than the cache size.
• Each iteration performs 2 memory loads + 1 memory store and 2 FLOPs, i.e., 12 Bytes/FLOP for 64-bit operands.
• No computer has enough memory bandwidth to reference 12 Bytes for each FLOP!
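A serial sketch of the TRIAD kernel and its bandwidth accounting follows; the array length and the simple timing are illustrative choices, not the official STREAM sizing and repetition rules.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10000000UL   /* 80 MB per array: much larger than any cache (assumed size) */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    const double q = 3.0;
    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++)     /* TRIAD: 2 loads + 1 store and */
        a[i] = b[i] + q * c[i];        /* 2 flops per iteration        */
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* 3 arrays x 8 bytes = 24 bytes moved per iteration, 2 flops: 12 bytes/flop. */
    printf("TRIAD: %.2f GB/s, %.2f GFLOP/s (a[0] = %g)\n",
           3.0 * N * sizeof(double) / secs / 1e9,
           2.0 * N / secs / 1e9, a[0]);

    free(a); free(b); free(c);
    return 0;
}
```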

Processing vs. Memory Access: RandomAccess
[Figure: a table T of 64-bit words with length 2^n, sized to occupy 1/2 of system memory, is updated by a 64-bit data stream {ai} of length 2^(n+2). The index k is taken from the highest n bits of ai, and the update is a bit-level exclusive OR: T[k] = T[k] XOR ai. The expected number of accesses per memory location is E[T[k]] = 2^(n+2) / 2^n = 4. Memory access is data driven.]
• The commutative and associative nature of XOR allows the updates to be processed in any order.
• Acceptable error: 1%. Look-ahead and storage: 1024 per "node".

Bounding Mission Partner Applications
[Figure: the HPCS productivity design points (HPL, STREAM, PTRANS, FFT, RandomAccess) plotted against temporal locality (high to low) and spatial locality (high to low). HPL sits at high temporal and high spatial locality, STREAM and PTRANS at high spatial but lower temporal locality, FFT and RandomAccess toward low locality, with RandomAccess at the low/low corner. Mission partner applications fall within the region bounded by these benchmarks.]

Outline
• Brief DARPA HPCS Overview
• Architecture/Application Characterization
• HPCchallenge Benchmarks
• Preliminary Results
• Summary

HPCS HPCchallenge Benchmarks
• Being developed by Jack Dongarra (ICL/UT)
• Funded by the DARPA High Productivity Computing Systems (HPCS) program (Bob Graybill, DARPA/IPTO)

To examine the performance of High Performance Computer (HPC) architectures using kernels with more challenging memory access patterns than High Performance LINPACK (HPL)

HPCchallenge Goals
• To examine the performance of HPC architectures using kernels with more challenging memory access patterns than HPL.
  HPL works well on all architectures, even cache-based, distributed memory multiprocessors, due to:
  1. Extensive memory reuse
  2. Scalability with respect to the amount of computation
  3. Scalability with respect to the communication volume
  4. Extensive optimization of the software
• To complement the Top500 list.
• To provide benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality.

HPCchallenge Benchmarks
Local:
• DGEMM (matrix x matrix multiply)
• STREAM (COPY, SCALE, ADD, TRIAD)
• EP-RandomAccess
• 1D FFT
Global:
• High Performance LINPACK (HPL)
• PTRANS (parallel matrix transpose)
• G-RandomAccess
• 1D FFT
• b_eff (interprocessor bandwidth and latency)

• HPCchallenge pushes spatial and temporal boundaries and sets performance bounds.
• Available for download at http://icl.cs.utk.edu/hpcc/

Web Site: http://icl.cs.utk.edu/hpcc/
• Home
• Rules
• News
• Download
• FAQ
• Links
• Collaborators
• Sponsors
• Upload
• Results

Outline
• Brief DARPA HPCS Overview
• Architecture/Application Characterization
• HPCchallenge Benchmarks
• Preliminary Results
• Summary

Preliminary Results: Machine List (1 of 2)
Affiliation | Manufacturer System | System | Processor Type | Procs
U Tenn | Atipa Cluster AMD 128 procs | Conquest cluster | AMD Opteron | 128
AHPCRC | Cray X1 124 procs | X1 | Cray X1 MSP | 124
AHPCRC | Cray X1 124 procs | X1 | Cray X1 MSP | 124
AHPCRC | Cray X1 124 procs | X1 | Cray X1 MSP | 124
ERDC | Cray X1 60 procs | X1 | Cray X1 MSP | 60
ERDC | Cray X1 60 procs | X1 | Cray X1 MSP | 60
ORNL | Cray X1 252 procs | X1 | Cray X1 MSP | 252
ORNL | Cray X1 252 procs | X1 | Cray X1 MSP | 252
AHPCRC | Cray X1 120 procs | X1 | Cray X1 MSP | 120
ORNL | Cray X1 64 procs | X1 | Cray X1 MSP | 64
AHPCRC | Cray T3E 1024 procs | T3E | Alpha 21164 | 1024
ORNL | HP zx6000 Itanium 2 128 procs | Integrity zx6000 | Intel Itanium 2 | 128
PSC | HP AlphaServer SC45 128 procs | AlphaServer SC45 | Alpha 21264B | 128
ERDC | HP AlphaServer SC45 484 procs | AlphaServer SC45 | Alpha 21264B | 484

Preliminary Results: Machine List (2 of 2)
Affiliation | Manufacturer System | System | Processor Type | Procs
IBM | IBM 655 Power4+ 64 procs | eServer pSeries 655 | IBM Power 4+ | 64
IBM | IBM 655 Power4+ 128 procs | eServer pSeries 655 | IBM Power 4+ | 128
IBM | IBM 655 Power4+ 256 procs | eServer pSeries 655 | IBM Power 4+ | 256
NAVO | IBM p690 Power4 504 procs | p690 | IBM Power 4 | 504
ARL | IBM SP Power3 512 procs | RS/6000 SP | IBM Power 3 | 512
ORNL | IBM p690 Power4 256 procs | p690 | IBM Power 4 | 256
ORNL | IBM p690 Power4 64 procs | p690 | IBM Power 4 | 64
ARL | Linux Networx Xeon 256 procs | Powell | Intel Xeon | 256
U Manchester | SGI Altix Itanium 2 32 procs | Altix 3700 | Intel Itanium 2 | 32
ORNL | SGI Altix Itanium 2 128 procs | Altix | Intel Itanium 2 | 128
U Tenn | SGI Altix Itanium 2 32 procs | Altix | Intel Itanium 2 | 32
U Tenn | SGI Altix Itanium 2 32 procs | Altix | Intel Itanium 2 | 32
U Tenn | SGI Altix Itanium 2 32 procs | Altix | Intel Itanium 2 | 32
U Tenn | SGI Altix Itanium 2 32 procs | Altix | Intel Itanium 2 | 32
NASA ASC | SGI Origin 3900 R16K 256 procs | Origin 3900 | SGI MIPS R16000 | 256
U Aachen/RWTH | SunFire 15K 128 procs | Sun Fire 15k/6800 SMP-Cluster | Sun UltraSparc III | 128
OSC | Voltaire Cluster Xeon 128 procs | Pinnacle 2X200 Cluster | Intel Xeon | 128

STREAM TRIAD vs HPL: 120-128 Processors
[Chart: "Basic Performance, 120-128 Processors". For the 120-128 processor systems in the machine list (Cray X1 configurations, the Atipa/Conquest AMD Opteron cluster, HP zx6000 Itanium 2, HP AlphaServer SC45, IBM 655 Power4+, SGI Altix Itanium 2, Sun Fire 15K, and the Voltaire Xeon cluster), paired bars compare EP-STREAM TRIAD (a(i) = b(i) + q*c(i)) against HPL (Ax = b), both in Tflop/s on a 0 to 2.5 scale.]

STREAM TRIAD vs HPL: >=252 Processors
[Chart: "Basic Performance, >=252 Processors". For the systems with 252 or more processors in the machine list (Cray X1 252 procs, Cray T3E 1024 procs, IBM SP Power3 512 procs, IBM p690 Power4 systems, IBM 655 Power4+ 256 procs, HP AlphaServer SC45 484 procs, SGI Origin 3900 R16K 256 procs, and Linux Networx Xeon 256 procs), paired bars compare EP-STREAM TRIAD (a(i) = b(i) + q*c(i)) against HPL (Ax = b), both in Tflop/s on a 0 to 2.5 scale.]

STREAM ADD vs PTRANS: 60-128 Processors
[Chart: "Basic Performance, 60-128 Processors". For the 60-128 processor systems in the machine list, paired bars compare EP-STREAM ADD (a(i) = b(i) + c(i)) against PTRANS (A = A + B^T), both in GB/s on a logarithmic scale from 0.1 to 10,000.]
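For reference, the operation behind the PTRANS bars is A = A + B^T. The serial sketch below shows only the arithmetic; in the real benchmark A and B are block-cyclically distributed across processes, so forming B^T forces most of B across the interconnect, and that communication is what the GB/s figures measure.

```c
#include <stdio.h>
#include <stdlib.h>

/* A = A + B^T for n x n matrices (serial sketch; the real PTRANS
 * distributes A and B and measures the resulting communication). */
static void ptrans_local(int n, double *A, const double *B)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            A[(size_t)i * n + j] += B[(size_t)j * n + i];
}

int main(void)
{
    int n = 1024;                               /* assumed size */
    double *A = calloc((size_t)n * n, sizeof *A);
    double *B = calloc((size_t)n * n, sizeof *B);
    if (!A || !B) return 1;

    for (size_t i = 0; i < (size_t)n * n; i++) B[i] = (double)i;
    ptrans_local(n, A, B);
    printf("A[1] = %g (equals B[n] = %g after the transpose-add)\n",
           A[1], B[(size_t)n]);

    free(A); free(B);
    return 0;
}
```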

STREAM ADD vs PTRANS: >=252 Processors
[Chart: "Basic Performance, >=252 Processors". For the systems with 252 or more processors in the machine list, paired bars compare EP-STREAM ADD (a(i) = b(i) + c(i)) against PTRANS (A = A + B^T), both in GB/s on a logarithmic scale from 0.1 to 10,000.]

Outline
• Brief DARPA HPCS Overview
• Architecture/Application Characterization
• HPCchallenge Benchmarks
• Preliminary Results
• Summary

Summary
• DARPA HPCS Subsystem Performance Indicators:
  - 2+ PF/s LINPACK
  - 6.5 PB/sec data STREAM bandwidth
  - 3.2 PB/sec bisection bandwidth
  - 64,000 GUPS
• It is important to understand architecture/application characterization:
  - Where did all the lost "Moore's Law" performance go?
• HPCchallenge Benchmarks: http://icl.cs.utk.edu/hpcc/
  - Peruse the results!
  - Contribute!
[Figure: the HPCS productivity design points chart (benchmarks plotted by temporal and spatial locality, bounding mission partner applications), repeated from the earlier "Bounding Mission Partner Applications" slide.]