Directive-based Auto-tuning for the Finite Difference Method on the Xeon Phi
Takahiro Katagiri, Satoshi Ohshima, Masaharu Matsumoto (Information Technology Center, The University of Tokyo)
iWAPT2015, Hyderabad International Convention Centre, Hyderabad, India
Session 2, 13:30-14:00, May 29th, 2015
Outline
1. Background and ppOpen-AT Functions
2. Code Optimization Strategy
3. Performance Evaluation
4. Conclusion
Background
• High-Thread Parallelism (HTP)
  – Multi-core and many-core processors are pervasive.
    • Multi-core CPUs: 16-32 cores, 32-64 threads with Hyper-Threading (HT) or Simultaneous Multithreading (SMT).
    • Many-core CPU: Xeon Phi, with 60 cores and 240 threads with HT.
  – Exploiting the full thread parallelism is important.
• Performance Portability (PP)
  – Keeping high performance across multiple computing environments: not only multiple CPUs, but also multiple compilers.
  – Run-time information, such as loop lengths and the number of threads, is important.
  – Auto-tuning (AT) is one candidate technology for establishing PP across multiple computing environments.
ppOpen-HPC Project
• Middleware for HPC and its AT technology
  – Supported by JST CREST, FY2011 to FY2015.
  – PI: Professor Kengo Nakajima (The University of Tokyo).
• ppOpen-HPC
  – An open-source infrastructure for reliable simulation codes on post-peta (pp) scale parallel computers.
  – Consists of various libraries covering five kinds of discretization methods for scientific computation.
• ppOpen-AT
  – An auto-tuning language for ppOpen-HPC codes.
  – Uses knowledge from a previous project: the ABCLibScript Project.
  – A directive-based auto-tuning language.
Software Architecture of ppOpen-HPC
(Figure: software stack. A user program sits on top of ppOpen-APPL (FEM, FDM, FVM, BEM, DEM), ppOpen-MATH (MG, GRAPH, VIS, MP), and ppOpen-AT (STATIC, DYNAMIC), which in turn sit on ppOpen-SYS (COMM, FT). The ppOpen-AT auto-tuning facility performs code generation for optimization candidates, search for the best candidate, and automatic execution of the optimization, targeting multi-core CPUs, many-core CPUs, and GPUs by optimizing memory accesses.)
Design Philosophy of ppOpen-AT
• A directive-based AT language
  – Multiple regions can be specified.
  – Low description cost with directives.
  – Does not prevent execution of the original code.
• An AT framework without script languages or OS daemons (static code generation)
  – Simple: only a minimum software stack is required.
  – Useful: our targets are supercomputers in operation, or just after development.
  – Low overhead: we do NOT use a code generator with run-time feedback.
  – Reasons:
    • Supercomputer centers do not accept OS kernel modifications or daemons on login or compute nodes.
    • Additional load on the nodes must be kept low.
    • Environmental issues, such as batch job scripts, etc.
ppOpen-AT System
(Figure: AT workflow. Before release time, the library developer writes ppOpen-AT directives into ppOpen-APPL/* using release-time knowledge (①); automatic code generation (②) produces ppOpen-APPL/* with optimization candidates 1, 2, 3, ..., n. At run time on the target computers, the library user makes a library call (③), the ppOpen-AT auto-tuner measures the execution time of each candidate (④) and selects the best one (⑤), and execution then proceeds with the auto-tuned kernel (⑥). The library user thus benefits from AT.)
Scenario of AT for ppOpen-APPL/FDM
Library user's procedure:
1. Specify the problem size and the numbers of MPI processes and OpenMP threads.
2. Set the AT parameter and execute the library with OAT_AT_EXEC=1.
   ■ The auto-tuner runs with the loop lengths fixed by the specified problem size and MPI/OpenMP configuration, measures the timings of the target kernels, and stores the information of the fastest candidate.
3. Set the AT parameter and execute the library with OAT_AT_EXEC=0.
   ■ Execution uses the optimized kernels without the AT process, reusing the stored fastest-kernel information (as long as the problem size and the numbers of MPI processes and OpenMP threads do not change).
AT Timings of ppOpen-AT (FIBER Framework): Before Execute-time AT
The AT step is executed only once, and repeated only when the problem size or the number of MPI processes changes:

  call OAT_ATexec()          ! execute Target_kernel_k() and Target_kernel_m()
                             ! with varying parameters and store the best ones
  ...
  do i = 1, MAX_ITER
    call Target_kernel_k()   ! is this the first call? if yes, read the best parameter
    ...
    call Target_kernel_m()   ! is this the first call? if yes, read the best parameter
    ...
  enddo
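To make the run-time selection concrete, here is a minimal sketch of the first-call pattern inside a target kernel. It is only an illustration: the reader routine OAT_read_best and the candidate subroutines update_stress_cand1/update_stress_cand2 are hypothetical names, not the actual interfaces generated by ppOpen-AT.

  subroutine Target_kernel_k()
    implicit none
    integer, save :: best = 0            ! id of the fastest candidate
    logical, save :: first_call = .true.
    if (first_call) then
      ! Read the parameter stored by OAT_ATexec() (hypothetical reader routine).
      call OAT_read_best('update_stress', best)
      first_call = .false.
    end if
    select case (best)
    case (1)
      call update_stress_cand1()         ! hypothetical candidate #1
    case default
      call update_stress_cand2()         ! hypothetical candidate #2
    end select
  end subroutine Target_kernel_k

After the first call, the saved candidate id is reused, so the selection overhead is negligible inside the time-step loop.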
Outline
1. Background and ppOpen-AT Functions
2. Code Optimization Strategy
3. Performance Evaluation
4. Conclusion
Target Application
• Seism_3D: a simulation code for seismic wave analysis.
• Developed by Professor Furumura at The University of Tokyo.
  – The code has been re-constructed as ppOpen-APPL/FDM.
• Finite Difference Method (FDM).
• 3D simulation: 3D arrays are allocated.
• Data type: single precision (real*4).
Flow Diagram of ppOpen-APPL/FDM

Space difference by FDM (staggered grid):
\[ \frac{\partial \sigma_{pq}(x,y,z)}{\partial x} \approx \frac{1}{\Delta x}\sum_{m=1}^{M/2} c_m\left[\, \sigma_{pq}\{x+(m-\tfrac{1}{2})\Delta x,\,y,\,z\} - \sigma_{pq}\{x-(m-\tfrac{1}{2})\Delta x,\,y,\,z\} \,\right], \quad (p,q = x,y,z) \]

Explicit time expansion by central difference:
\[ u_p^{\,n+\frac{1}{2}} = u_p^{\,n-\frac{1}{2}} + \frac{1}{\rho}\left( \frac{\partial \sigma_{xp}^{\,n}}{\partial x} + \frac{\partial \sigma_{yp}^{\,n}}{\partial y} + \frac{\partial \sigma_{zp}^{\,n}}{\partial z} + f_p^{\,n} \right)\Delta t, \quad (p = x,y,z) \]

One time step consists of the following stages:
Initialization → Velocity Derivative (def_vel) → Velocity Update (update_vel) → Velocity PML condition (update_vel_sponge) → Velocity Passing (MPI) (passing_vel) → Stress Derivative (def_stress) → Stress Update (update_stress) → Stress PML condition (update_stress_sponge) → Stress Passing (MPI) (passing_stress) → stop iteration? (NO: repeat; YES: end).
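As a concrete illustration of the space-difference formula above (with M = 4), here is a minimal sketch of a fourth-order staggered-grid derivative along x, in the spirit of the ppohFDM_pdiffx3_p4 kernel named later. The subroutine name, interface, and the coefficients C1 = 9/8 and C2 = -1/24 (the standard fourth-order staggered-grid values) are illustrative assumptions, not the verbatim ppOpen-APPL/FDM code.

  subroutine pdiffx3_p4_sketch(S, DXS, NX, NY, NZ, DX)
    implicit none
    integer, intent(in)  :: NX, NY, NZ
    real(4), intent(in)  :: S(NX,NY,NZ)    ! field to differentiate (e.g., a stress component)
    real(4), intent(in)  :: DX             ! grid spacing
    real(4), intent(out) :: DXS(NX,NY,NZ)  ! staggered x-derivative of S
    real(4), parameter   :: C1 = 9.0/8.0, C2 = -1.0/24.0  ! assumed 4th-order coefficients
    integer :: i, j, k
  !$omp parallel do private(k,j,i)
    do k = 1, NZ
      do j = 1, NY
        do i = 2, NX-2
          ! c_1 [S at x+Dx/2 minus S at x-Dx/2] + c_2 [S at x+3Dx/2 minus S at x-3Dx/2]
          DXS(i,j,k) = ( C1*(S(i+1,j,k) - S(i,j,k)) &
                       + C2*(S(i+2,j,k) - S(i-1,j,k)) ) / DX
        end do
      end do
    end do
  !$omp end parallel do
    ! Boundary cells (i = 1, NX-1, NX) are left untouched in this sketch.
  end subroutine pdiffx3_p4_sketch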
Target Loop Characteristics
• Triple-nested loops:

  !$omp parallel do
  do k = NZ00, NZ01
    do j = NY00, NY01
      do i = NX00, NX01
        ...
      end do
    end do
  end do
  !$omp end parallel do

• The OpenMP directive is applied to the outermost loop (Z-axis).
• Loop lengths vary according to the problem size and the numbers of MPI processes and OpenMP threads.
• The loop bodies are separable by loop split.
What is a separable code?
Variable definitions and their references can be separated: within each group there is a flow dependence from a definition to its references, but there is no data dependence between the groups.

Original loop body (definitions followed by references):
  d1 = …
  d2 = …
  …
  dk = …
  …
  … = … d1 …
  … = … d2 …
  …
  … = … dk …

Split into two parts:
  Part 1:  d1 = … ;  d2 = … ;  … = … d1 … ;  … = … d2 …
  Part 2:  dk = … ;  … = … dk …
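A minimal sketch of such a split is shown below, with hypothetical arrays a, b, c, d and temporaries t1, t2 (not taken from ppOpen-APPL/FDM); the original loop and the two split loops compute the same result.

  subroutine split_sketch(n, dt, a, b, c, d)
    implicit none
    integer, intent(in)    :: n
    real(4), intent(in)    :: dt, a(n), b(n)
    real(4), intent(inout) :: c(n), d(n)
    real(4) :: t1, t2
    integer :: i
    ! Original (separable) loop: each temporary is defined and then referenced,
    ! but the two groups {t1, c} and {t2, d} do not depend on each other.
    !   do i = 1, n
    !     t1 = a(i) + b(i)
    !     t2 = a(i) - b(i)
    !     c(i) = c(i) + t1*dt
    !     d(i) = d(i) + t2*dt
    !   end do
    ! After the split, each group becomes its own loop (smaller working set and
    ! register pressure per loop, at the cost of reading a(i) and b(i) twice):
    do i = 1, n
      t1 = a(i) + b(i)
      c(i) = c(i) + t1*dt
    end do
    do i = 1, n
      t2 = a(i) - b(i)
      d(i) = d(i) + t2*dt
    end do
  end subroutine split_sketch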
ppOpen-AT Directives: Loop Split & Fusion with a Data-Flow Dependence

  !oat$ install LoopFusionSplit region start
  !$omp parallel do private(k,j,i,STMP1,STMP2,STMP3,STMP4,RL,RM,RM2,RMAXY,RMAXZ,RMAYZ,RLTHETA,QG)
  DO K = 1, NZ
  DO J = 1, NY
  DO I = 1, NX
    RL = LAM (I,J,K); RM = RIG (I,J,K); RM2 = RM + RM
    RLTHETA = (DXVX(I,J,K)+DYVY(I,J,K)+DZVZ(I,J,K))*RL
  !oat$ SplitPointCopyDef region start
    QG = ABSX(I)*ABSY(J)*ABSZ(K)*Q(I,J,K)
  !oat$ SplitPointCopyDef region end
    SXX (I,J,K) = ( SXX (I,J,K) + (RLTHETA + RM2*DXVX(I,J,K))*DT )*QG
    SYY (I,J,K) = ( SYY (I,J,K) + (RLTHETA + RM2*DYVY(I,J,K))*DT )*QG
    SZZ (I,J,K) = ( SZZ (I,J,K) + (RLTHETA + RM2*DZVZ(I,J,K))*DT )*QG
  !oat$ SplitPoint (K, J, I)
    STMP1 = 1.0/RIG(I,J,K); STMP2 = 1.0/RIG(I+1,J,K); STMP4 = 1.0/RIG(I,J,K+1)
    STMP3 = STMP1 + STMP2
    RMAXY = 4.0/(STMP3 + 1.0/RIG(I,J+1,K) + 1.0/RIG(I+1,J+1,K))
    RMAXZ = 4.0/(STMP3 + STMP4 + 1.0/RIG(I+1,J,K+1))
    RMAYZ = 4.0/(STMP3 + STMP4 + 1.0/RIG(I,J+1,K+1))
  !oat$ SplitPointCopyInsert
    SXY (I,J,K) = ( SXY (I,J,K) + (RMAXY*(DXVY(I,J,K)+DYVX(I,J,K)))*DT )*QG
    SXZ (I,J,K) = ( SXZ (I,J,K) + (RMAXZ*(DXVZ(I,J,K)+DZVX(I,J,K)))*DT )*QG
    SYZ (I,J,K) = ( SYZ (I,J,K) + (RMAYZ*(DYVZ(I,J,K)+DZVY(I,J,K)))*DT )*QG
  END DO; END DO; END DO
  !$omp end parallel do
  !oat$ install LoopFusionSplit region end

Directive roles:
• install LoopFusionSplit specifies the region where loop split and loop fusion may be applied.
• SplitPointCopyDef marks a definition (here QG) whose re-calculation is defined for use after the split.
• SplitPoint (K, J, I) marks the loop-split point.
• SplitPointCopyInsert marks where the re-calculation of the marked definition is inserted, so the second loop can use it.
Re-ordering of Statements: increases the chance for the compiler to optimize register allocation.

  !OAT$ RotationOrder sub region start
  Sentence i
  Sentence ii
  !OAT$ RotationOrder sub region end
  !OAT$ RotationOrder sub region start
  Sentence 1
  Sentence 2
  !OAT$ RotationOrder sub region end

Generated code (the two regions are interleaved):
  Sentence i
  Sentence 1
  Sentence ii
  Sentence 2
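Applied to the two RotationOrder regions of the update_vel kernel (Kernel 2 below), this interleaving turns the loop body into the following form, which matches the generated code shown later in "The Fastest Code (update_vel)"; only the loop body is shown here:

  ! Region 1: ROX, ROY, ROZ definitions; Region 2: VX, VY, VZ updates.
  ! After re-ordering, each definition is immediately followed by its use:
  ROX = 2.0_PN/( DEN(I,J,K) + DEN(I+1,J,K) )
  VX(I,J,K) = VX(I,J,K) + ( DXSXX(I,J,K)+DYSXY(I,J,K)+DZSXZ(I,J,K) )*ROX*DT
  ROY = 2.0_PN/( DEN(I,J,K) + DEN(I,J+1,K) )
  VY(I,J,K) = VY(I,J,K) + ( DXSXY(I,J,K)+DYSYY(I,J,K)+DZSYZ(I,J,K) )*ROY*DT
  ROZ = 2.0_PN/( DEN(I,J,K) + DEN(I,J,K+1) )
  VZ(I,J,K) = VZ(I,J,K) + ( DXSXZ(I,J,K)+DYSYZ(I,J,K)+DZSZZ(I,J,K) )*ROZ*DT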
The Kernel 1 (update_stress)
• m_stress.f90 (ppohFDM_update_stress); measured B/F = 3.2

  !OAT$ call OAT_BPset("NZ01")
  !OAT$ install LoopFusionSplit region start
  !OAT$ name ppohFDMupdate_stress
  !OAT$ debug (pp)
  !$omp parallel do private(k,j,i,RL1,RM1,RM2,RLRM2,DXVX1,DYVY1,DZVZ1,D3V3,DXVYDYVX1,DXVZDZVX1,DYVZDZVY1)
  do k = NZ00, NZ01
    do j = NY00, NY01
      do i = NX00, NX01
        RL1 = LAM (I,J,K)
  !OAT$ SplitPointCopyDef sub region start
        RM1 = RIG (I,J,K)
  !OAT$ SplitPointCopyDef sub region end
        RM2 = RM1 + RM1; RLRM2 = RL1+RM2
        DXVX1 = DXVX(I,J,K); DYVY1 = DYVY(I,J,K); DZVZ1 = DZVZ(I,J,K)
        D3V3 = DXVX1 + DYVY1 + DZVZ1
        SXX (I,J,K) = SXX (I,J,K) + (RLRM2*(D3V3)-RM2*(DZVZ1+DYVY1) ) * DT
        SYY (I,J,K) = SYY (I,J,K) + (RLRM2*(D3V3)-RM2*(DXVX1+DZVZ1) ) * DT
        SZZ (I,J,K) = SZZ (I,J,K) + (RLRM2*(D3V3)-RM2*(DXVX1+DYVY1) ) * DT
  !OAT$ SplitPoint (K,J,I)
  !OAT$ SplitPointCopyInsert
        DXVYDYVX1 = DXVY(I,J,K)+DYVX(I,J,K)
        DXVZDZVX1 = DXVZ(I,J,K)+DZVX(I,J,K)
        DYVZDZVY1 = DYVZ(I,J,K)+DZVY(I,J,K)
        SXY (I,J,K) = SXY (I,J,K) + RM1 * DXVYDYVX1 * DT
        SXZ (I,J,K) = SXZ (I,J,K) + RM1 * DXVZDZVX1 * DT
        SYZ (I,J,K) = SYZ (I,J,K) + RM1 * DYVZDZVY1 * DT
      end do
    end do
  end do
  !$omp end parallel do
  !OAT$ install LoopFusionSplit region end
Automatically Generated Codes for Kernel 1 (ppohFDM_update_stress)
• #1 [Baseline]: original 3-nested loop.
• #2 [Split]: loop split at the K loop (two separate 3-nested loops).
• #3 [Split]: loop split at the J loop.
• #4 [Split]: loop split at the I loop.
• #5 [Split&Fusion]: loop fusion of the K and J loops applied to #1 (2-nested loop).
• #6 [Split&Fusion]: loop fusion of the K and J loops applied to #2 (2-nested loop).
• #7 [Fusion]: loop fusion applied to #1 (loop collapse).
• #8 [Split&Fusion]: loop fusion applied to #2 (loop collapse; two single-nest loops).
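For illustration, here is a minimal sketch of the loop structure of candidate #7 (full loop collapse). The index-recovery arithmetic mirrors the pattern the generator uses for the two-loop collapse shown later in "The Fastest Code (update_stress)", but this exact generated form is an assumption, and only the loop skeleton is shown:

  !$omp parallel do private(k_j_i, k, j, i)
  do k_j_i = 1, (NZ01-NZ00+1)*(NY01-NY00+1)*(NX01-NX00+1)
    ! Recover (k, j, i) from the single collapsed index.
    k = (k_j_i-1) / ((NY01-NY00+1)*(NX01-NX00+1)) + NZ00
    j = mod( (k_j_i-1) / (NX01-NX00+1), NY01-NY00+1 ) + NY00
    i = mod(  k_j_i-1,                  NX01-NX00+1 ) + NX00
    ! ... the update_stress loop body is unchanged from the baseline candidate #1 ...
  end do
  !$omp end parallel do

Collapsing all three loops maximizes the number of parallel iterations, which helps when the outer-loop length is short relative to the number of OpenMP threads.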
The Kernel 2 (update_vel)
• m_velocity.f90 (ppohFDM_update_vel); measured B/F = 1.7

  !OAT$ install LoopFusion region start
  !OAT$ name ppohFDMupdate_vel
  !OAT$ debug (pp)
  !$omp parallel do private(k,j,i,ROX,ROY,ROZ)
  do k = NZ00, NZ01
    do j = NY00, NY01
      do i = NX00, NX01
        ! Effective Density
  !OAT$ RotationOrder sub region start
        ROX = 2.0_PN/( DEN(I,J,K) + DEN(I+1,J,K) )
        ROY = 2.0_PN/( DEN(I,J,K) + DEN(I,J+1,K) )
        ROZ = 2.0_PN/( DEN(I,J,K) + DEN(I,J,K+1) )
  !OAT$ RotationOrder sub region end
  !OAT$ RotationOrder sub region start
        VX(I,J,K) = VX(I,J,K) + ( DXSXX(I,J,K)+DYSXY(I,J,K)+DZSXZ(I,J,K) )*ROX*DT
        VY(I,J,K) = VY(I,J,K) + ( DXSXY(I,J,K)+DYSYY(I,J,K)+DZSYZ(I,J,K) )*ROY*DT
        VZ(I,J,K) = VZ(I,J,K) + ( DXSXZ(I,J,K)+DYSYZ(I,J,K)+DZSZZ(I,J,K) )*ROZ*DT
  !OAT$ RotationOrder sub region end
  end do; end do; end do
  !$omp end parallel do
  !OAT$ install LoopFusion region end
Automatically Generated Codes for Kernel 2 (ppohFDM_update_vel)
• #1 [Baseline]: original 3-nested loop.
• #2 [Fusion]: loop fusion of the K and J loops (2-nested loop).
• #3 [Fusion]: loop fusion of the K, J, and I loops (loop collapse).
• #4 [Fusion&Re-order]: re-ordering of statements applied to #1.
• #5 [Fusion&Re-order]: re-ordering of statements applied to #2.
• #6 [Fusion&Re-order]: re-ordering of statements applied to #3.
The Kernel 3 (update_stress_sponge)
• m_stress.f90 (ppohFDM_update_stress_sponge); measured B/F = 6.8

  !OAT$ call OAT_BPset("NZ01")
  !OAT$ install LoopFusion region start
  !OAT$ name ppohFDMupdate_sponge
  !OAT$ debug (pp)
  !$omp parallel do private(k,gg_z,j,gg_y,gg_yz,i,gg_x,gg_xyz)
  do k = NZ00, NZ01
    gg_z = gz(k)
    do j = NY00, NY01
      gg_y = gy(j); gg_yz = gg_y * gg_z
      do i = NX00, NX01
        gg_x = gx(i); gg_xyz = gg_x * gg_yz
        SXX(I,J,K) = SXX(I,J,K) * gg_xyz; SYY(I,J,K) = SYY(I,J,K) * gg_xyz
        SZZ(I,J,K) = SZZ(I,J,K) * gg_xyz; SXY(I,J,K) = SXY(I,J,K) * gg_xyz
        SXZ(I,J,K) = SXZ(I,J,K) * gg_xyz; SYZ(I,J,K) = SYZ(I,J,K) * gg_xyz
      end do
    end do
  end do
  !$omp end parallel do
  !OAT$ install LoopFusion region end
Automatically Generated Codes for Kernel 3 (ppohFDM_update_sponge)
• #1 [Baseline]: original 3-nested loop.
• #2 [Fusion]: loop fusion of the K and J loops (2-nested loop).
• #3 [Fusion]: loop fusion of the K, J, and I loops (loop collapse).
Outline
1. Background and ppOpen-AT Functions
2. Code Optimization Strategy
3. Performance Evaluation
4. Conclusion
An Example of a Seism_3D Simulation
• The western Tottori earthquake in Japan in the year 2000 ([1], p. 14).
• A region of 820 km x 410 km x 128 km is discretized with a 0.4 km grid spacing: NX x NY x NZ = 2050 x 1025 x 320 (an aspect ratio of roughly 6.4 : 3.2 : 1).
Figure: seismic wave propagation for the western Tottori earthquake. (a) Measured waves; (b) simulation results (reference [1]).
[1] T. Furumura, "Large-scale Parallel FDM Simulation for Seismic Waves and Strong Shaking", Supercomputing News, Information Technology Center, The University of Tokyo, Vol. 11, Special Edition 1, 2009. In Japanese.
AT Candidates in This Experiment
1. Kernel update_stress: 8 candidates with loop fusion and loop split.
2. Kernel update_vel: 6 candidates with loop fusion and re-ordering of statements.
3. Kernel update_stress_sponge
4. Kernel update_vel_sponge
5. Kernel ppohFDM_pdiffx3_p4
6. Kernel ppohFDM_pdiffx3_m4
7. Kernel ppohFDM_pdiffy3_p4
8. Kernel ppohFDM_pdiffy3_m4
9. Kernel ppohFDM_pdiffz3_p4
10. Kernel ppohFDM_pdiffz3_m4
Kernels 3-10 each have 3 candidates with loop fusion.
AT Candidates in This Experiment (Cont'd)
Kernels 11-14 each have 3 candidates with loop fusion for data packing and data unpacking:
11. Kernel ppohFDM_ps_pack
12. Kernel ppohFDM_ps_unpack
13. Kernel ppohFDM_pv_pack
14. Kernel ppohFDM_pv_unpack
ppohFDM_ps_pack (ppOpen-APPL/FDM)
• m_stress.f90 (ppohFDM_ps_pack)

  !OAT$ call OAT_BPset("NZ01")
  !OAT$ install LoopFusion region start
  !OAT$ name ppohFDMupdate_sponge
  !OAT$ debug (pp)
  !$omp parallel do private(k,j,iptr,i)
  do k=1, NZP
    iptr = (k-1)*3*NYP*NL2
    do j=1, NYP
      do i=1, NL2
        i1_sbuff(iptr+1) = SXX(NXP-NL2+i,j,k)
        i1_sbuff(iptr+2) = SXY(NXP-NL2+i,j,k)
        i1_sbuff(iptr+3) = SXZ(NXP-NL2+i,j,k)
        i2_sbuff(iptr+1) = SXX(i,j,k)
        i2_sbuff(iptr+2) = SXY(i,j,k)
        i2_sbuff(iptr+3) = SXZ(i,j,k)
        iptr = iptr + 3
      end do
    end do
  end do
  !$omp end parallel do
ppohFDM_ps_pack (ppOpen-APPL/FDM), continued
• m_stress.f90 (ppohFDM_ps_pack): the same routine also packs the Y and Z faces.

  !$omp parallel do private(k,j,iptr,i)
  do k=1, NZP
    iptr = (k-1)*3*NL2*NXP
    do j=1, NL2
      do i=1, NXP
        j1_sbuff(iptr+1) = SXY(i,NYP-NL2+j,k); j1_sbuff(iptr+2) = SYY(i,NYP-NL2+j,k)
        j1_sbuff(iptr+3) = SYZ(i,NYP-NL2+j,k)
        j2_sbuff(iptr+1) = SXY(i,j,k); j2_sbuff(iptr+2) = SYY(i,j,k); j2_sbuff(iptr+3) = SYZ(i,j,k)
        iptr = iptr + 3
  end do; end do; end do
  !$omp end parallel do

  !$omp parallel do private(k,j,iptr,i)
  do k=1, NL2
    iptr = (k-1)*3*NYP*NXP
    do j=1, NYP
      do i=1, NXP
        k1_sbuff(iptr+1) = SXZ(i,j,NZP-NL2+k); k1_sbuff(iptr+2) = SYZ(i,j,NZP-NL2+k)
        k1_sbuff(iptr+3) = SZZ(i,j,NZP-NL2+k)
        k2_sbuff(iptr+1) = SXZ(i,j,k); k2_sbuff(iptr+2) = SYZ(i,j,k); k2_sbuff(iptr+3) = SZZ(i,j,k)
        iptr = iptr + 3
  end do; end do; end do
  !$omp end parallel do

  !OAT$ install LoopFusion region end
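One of the loop-fusion candidates for this packing kernel can be sketched as follows: the k and j loops of the X-face packing loop are fused into a single k_j loop so that the OpenMP parallel loop has NZP*NYP iterations instead of NZP. This is an illustrative assumption of what the generated candidate looks like, not the verbatim generated code; iptr is recomputed from (k, j) so the packed buffer layout stays identical to the original.

  !$omp parallel do private(k_j, k, j, i, iptr)
  do k_j = 1, NZP*NYP
    k = (k_j-1)/NYP + 1
    j = mod(k_j-1, NYP) + 1
    ! Same starting offset as the original: (k-1)*3*NYP*NL2 plus the j offset.
    iptr = (k-1)*3*NYP*NL2 + (j-1)*3*NL2
    do i = 1, NL2
      i1_sbuff(iptr+1) = SXX(NXP-NL2+i,j,k)
      i1_sbuff(iptr+2) = SXY(NXP-NL2+i,j,k)
      i1_sbuff(iptr+3) = SXZ(NXP-NL2+i,j,k)
      i2_sbuff(iptr+1) = SXX(i,j,k)
      i2_sbuff(iptr+2) = SXY(i,j,k)
      i2_sbuff(iptr+3) = SXZ(i,j,k)
      iptr = iptr + 3
    end do
  end do
  !$omp end parallel do

A longer parallel loop matters for configurations with many threads per process (e.g., P8T240), where the outer-loop length alone can be shorter than the number of threads.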
Machine Environment (8 nodes of a Xeon Phi cluster)
• The Intel Xeon Phi
  – Xeon Phi 5110P (1.053 GHz), 60 cores.
  – Memory: 8 GB (GDDR5).
  – Theoretical peak performance: 1.01 TFLOPS.
  – One board per node of the Xeon Phi cluster.
• InfiniBand FDR x 2 ports
  – Mellanox Connect-IB, PCI-E Gen3 x16, 56 Gbps x 2.
  – Theoretical peak bandwidth: 13.6 GB/s, full bisection.
• Intel MPI
  – Based on MPICH2 and MVAPICH2, 4.1 Update 3 (build 048).
• Compiler: Intel Fortran version 14.0.0.080, Build 20130728
  – Compiler options: -ipo -O3 -warn all -openmp -mcmodel=medium -shared-intel -mmic -align array64byte
  – KMP_AFFINITY=granularity=fine,balanced (uniform distribution of threads between sockets).
Execution Details
• ppOpen-APPL/FDM ver. 0.2
• ppOpen-AT ver. 0.2
• Number of time steps: 2000
• Number of nodes: 8
• Native mode execution
• Target problem size (almost the maximum with 8 GB per node):
  – NX x NY x NZ = 1536 x 768 x 240 on 8 nodes
  – NX x NY x NZ = 768 x 384 x 120 per node (not per MPI process)
• Number of iterations used by the auto-tuner for each kernel: 100
Execution Details of Hybrid MPI/OpenMP
• Target MPI processes and OpenMP threads on the Xeon Phi (4-way Hyper-Threading per core).
• PX TY denotes X MPI processes with Y OpenMP threads per process; every configuration uses the full concurrency of X x Y = 1920 threads (8 nodes x 240 hardware threads).
  – P8T240: the minimum hybrid MPI/OpenMP execution for ppOpen-APPL/FDM, since at least 8 MPI processes are required.
  – P16T120, P32T60, P64T30, P128T15, P240T8, P480T4.
  – Finer decompositions (P960T2 and beyond) caused an MPI error in this environment.
(Figure: illustration of the set of cores assigned to one MPI process, using small P2T8 and P4T4 layouts on a 16-core sketch.)
Loop Length per Thread (Z-axis) (8 nodes, 1536 x 768 x 240 on 8 nodes)
(Figure: loop length per thread along the Z-axis for each hybrid MPI/OpenMP configuration. The number of MPI processes along the Z-axis differs between the MPI/OpenMP configurations due to a software restriction, so the per-thread loop length varies widely, roughly from 12 down to 0.5.)
PERFORMANCE OF THE WHOLE EXECUTION TIME
Whole Time (without AT)
(Figure: whole execution time in seconds, without AT, for NX x NY x NZ = 1536 x 768 x 240 on 8 Xeon Phi nodes (1920 threads), for the hybrid MPI/OpenMP configurations P8T240, P16T120, P32T60, P64T30, P128T15, P240T8, and P480T4. Each bar, up to roughly 1200 seconds, is broken down into computation time (def_vel, def_stress, update_stress, update_stress_sponge, update_vel, update_vel_sponge) and communication and other time (passing_velocity, passing_stress, IO, others).)
Whole Time (with AT)
(Figure: the same breakdown with AT applied, for the same problem size and configurations. The speedups obtained by AT over the non-AT execution for the seven configurations are 12.1%, 3.3%, 5.9%, 4.8%, 51.5%, 51.7%, and 40.0%.)
Maximum Speedups by AT (Xeon Phi, 8 Nodes)
Speedup = max(execution time of the original code / execution time with AT) over all combinations of hybrid MPI/OpenMP executions (PXTY), for NX x NY x NZ = 1536 x 768 x 240 on 8 nodes.
(Figure: maximum speedup in percent for each kind of kernel; the values shown are 558, 200, 171, 51, 30, and 20, the largest of which is consistent with the roughly 5.6x speedup of update_stress reported later.)
BREAKDOWN OF TIMINGS
AT Effect (update_stress)
(Figure: accumulated time of update_stress over 2000 time steps, in seconds, without and with AT, for P8T240 through P480T4. Without AT the time reaches up to about 218-231 seconds for some configurations, while with AT it drops to roughly 32-55 seconds; the speedups for the seven configurations are 2.07, 1.48, 1.30, 1.08, 4.65, 5.10, and 5.58, i.e., up to 5.6x. The best implementation selected by AT varies among candidates #4, #5, and #6 depending on the configuration.)
AT Effect for Pack and Unpack Optimization
(Figure: whole execution time in seconds without and with the data pack/unpack AT, for P8T240 through P480T4, ranging from about 1,106 vs. 1,053 seconds down to roughly 431 vs. 397 seconds. The speedups for the seven configurations are 1.05, 1.01, 1.26, 1.10, 1.23, 1.49, and 1.17, i.e., up to about 1.5x.)
AT Effect for Re-ordering of Statements (update_vel, 100 iterations)
(Figure: time in seconds for the baseline, re-ordering only, and re-ordering plus loop fusion (2-nested), for P8T240 through P480T4; the times range from about 2.0 to 4.2 seconds. The speedups obtained by re-ordering plus loop fusion for the seven configurations are 1.36, 1.01, 1.00, 0.97, 1.30, 1.51, and 1.76, i.e., up to about 1.8x.)
Re-ordering alone is not very effective, but applying loop fusion at the same time is effective.
COMPARISON WITH DIFFERENT CPU ARCHITECTURES
AT Effect (without AT)
(Figure: whole execution time relative to the Xeon Phi (higher is faster), without AT, for NX x NY x NZ = 1536 x 768 x 240 on 8 nodes: Xeon Phi, 8 nodes, 1920 threads (P240T8) = 1; Ivy Bridge, 8 nodes, 80 threads (P80T1) and FX10, 8 nodes, 128 threads (P128T1) lie between roughly 0.97 and 1.05. Without AT, the three architectures perform comparably.)
AT Effect (with AT)
(Figure: whole execution time relative to the Xeon Phi (higher is faster), with AT, for the same problem on 8 nodes: Xeon Phi (P240T8) = 1, Ivy Bridge (P80T2) ≈ 0.78, FX10 (P128T1) ≈ 0.68. The Xeon Phi is the fastest if we apply AT.)
AUTO‐TUNING TIME AND THE BEST IMPLEMENTATION
Auto-tuning Time
(Figure: auto-tuning time in seconds for update_stress and update_vel on the Xeon Phi, Ivy Bridge, and FX10, together with the AT overhead relative to the total time of a 2000-time-step run. The AT times shown are 553, 320, and 303 seconds for update_stress and 426, 189, and 185 seconds for update_vel; the AT overhead ranges from about 5.2% to 10.9% of the total time.)
Parameter Distribution (update_stress)
(Figure: the selected candidate (parameter value, 1 to 8) for update_stress versus MPI parallelism, on the Xeon Phi, Ivy Bridge (HT1), and FX10, for 8 nodes and NX x NY x NZ = 1536 x 768 x 240. The selected candidate varies with the architecture and the degree of MPI parallelism.)
Parameter Distribution (update_vel)
(Figure: the selected candidate (parameter value, 1 to 6) for update_vel versus MPI parallelism, on the Xeon Phi, Ivy Bridge (HT1), and FX10, for 8 nodes and the same problem size.)
Parameter Distribution (update_stress_sponge)
(Figure: the selected candidate (parameter value, 1 to 3) for update_stress_sponge versus MPI parallelism, on the Xeon Phi, Ivy Bridge (HT1), and FX10, for 8 nodes and the same problem size.)
The Fastest Code (update_stress)
The best candidate differs by architecture: the Xeon Phi (P240T8) selects #5 [Split&Fusion] (loop fusion of the K and J loops applied to #1, a 2-nested loop), Ivy Bridge (P80T2) selects #4 [Split] (loop split at the I loop), and FX10 (P128T1) selects #1 [Baseline] (the original loop).

Xeon Phi (P240T8), #5 [Split&Fusion]:
  !$omp parallel do private(k,j,i,RL1,RM1,RM2,RLRM2,DXVX1,DYVY1,DZVZ1,D3V3,DXVYDYVX1,DXVZDZVX1,DYVZDZVY1)
  DO k_j = 1, (NZ01-NZ00+1)*(NY01-NY00+1)
    k = (k_j-1)/(NY01-NY00+1) + NZ00
    j = mod((k_j-1),(NY01-NY00+1)) + NY00
    DO i = NX00, NX01
      RL1 = LAM (I,J,K); RM1 = RIG (I,J,K)
      RM2 = RM1 + RM1; RLRM2 = RL1+RM2
      DXVX1 = DXVX(I,J,K); DYVY1 = DYVY(I,J,K); DZVZ1 = DZVZ(I,J,K)
      D3V3 = DXVX1 + DYVY1 + DZVZ1
      SXX (I,J,K) = SXX (I,J,K) + (RLRM2*(D3V3)-RM2*(DZVZ1+DYVY1) ) * DT
      SYY (I,J,K) = SYY (I,J,K) + (RLRM2*(D3V3)-RM2*(DXVX1+DZVZ1) ) * DT
      SZZ (I,J,K) = SZZ (I,J,K) + (RLRM2*(D3V3)-RM2*(DXVX1+DYVY1) ) * DT
      DXVYDYVX1 = DXVY(I,J,K)+DYVX(I,J,K)
      DXVZDZVX1 = DXVZ(I,J,K)+DZVX(I,J,K)
      DYVZDZVY1 = DYVZ(I,J,K)+DZVY(I,J,K)
      SXY (I,J,K) = SXY (I,J,K) + RM1 * DXVYDYVX1 * DT
      SXZ (I,J,K) = SXZ (I,J,K) + RM1 * DXVZDZVX1 * DT
      SYZ (I,J,K) = SYZ (I,J,K) + RM1 * DYVZDZVY1 * DT
    END DO
  END DO
  !$omp end parallel do
Ivy Bridge (P80T2), #4 [Split]: loop split at the I loop:

  !$omp parallel do private(k,j,i,RL1,RM1,RM2,RLRM2,DXVX1,DYVY1,DZVZ1,D3V3,DXVYDYVX1,DXVZDZVX1,DYVZDZVY1)
  do k = NZ00, NZ01
    do j = NY00, NY01
      do i = NX00, NX01
        RL1 = LAM (I,J,K); RM1 = RIG (I,J,K)
        RM2 = RM1 + RM1; RLRM2 = RL1+RM2
        DXVX1 = DXVX(I,J,K); DYVY1 = DYVY(I,J,K)
        DZVZ1 = DZVZ(I,J,K)
        D3V3 = DXVX1 + DYVY1 + DZVZ1
        SXX (I,J,K) = SXX (I,J,K) + (RLRM2*(D3V3)-RM2*(DZVZ1+DYVY1) ) * DT
        SYY (I,J,K) = SYY (I,J,K) + (RLRM2*(D3V3)-RM2*(DXVX1+DZVZ1) ) * DT
        SZZ (I,J,K) = SZZ (I,J,K) + (RLRM2*(D3V3)-RM2*(DXVX1+DYVY1) ) * DT
      end do
      do i = NX00, NX01
        RM1 = RIG (I,J,K)
        DXVYDYVX1 = DXVY(I,J,K)+DYVX(I,J,K)
        DXVZDZVX1 = DXVZ(I,J,K)+DZVX(I,J,K)
        DYVZDZVY1 = DYVZ(I,J,K)+DZVY(I,J,K)
        SXY (I,J,K) = SXY (I,J,K) + RM1 * DXVYDYVX1 * DT
        SXZ (I,J,K) = SXZ (I,J,K) + RM1 * DXVZDZVX1 * DT
        SYZ (I,J,K) = SYZ (I,J,K) + RM1 * DYVZDZVY1 * DT
      end do
    end do
  end do
  !$omp end parallel do
FX10 (P128T1), #1 [Baseline]: original loop:

  !$omp parallel do private(k,j,i,RL1,RM1,RM2,RLRM2,DXVX1,DYVY1,DZVZ1,D3V3,DXVYDYVX1,DXVZDZVX1,DYVZDZVY1)
  do k = NZ00, NZ01
    do j = NY00, NY01
      do i = NX00, NX01
        RL1 = LAM (I,J,K)
        RM1 = RIG (I,J,K); RM2 = RM1 + RM1
        RLRM2 = RL1+RM2
        DXVX1 = DXVX(I,J,K); DYVY1 = DYVY(I,J,K)
        DZVZ1 = DZVZ(I,J,K)
        D3V3 = DXVX1 + DYVY1 + DZVZ1
        SXX (I,J,K) = SXX (I,J,K) + (RLRM2*(D3V3)-RM2*(DZVZ1+DYVY1) ) * DT
        SYY (I,J,K) = SYY (I,J,K) + (RLRM2*(D3V3)-RM2*(DXVX1+DZVZ1) ) * DT
        SZZ (I,J,K) = SZZ (I,J,K) + (RLRM2*(D3V3)-RM2*(DXVX1+DYVY1) ) * DT
        DXVYDYVX1 = DXVY(I,J,K)+DYVX(I,J,K)
        DXVZDZVX1 = DXVZ(I,J,K)+DZVX(I,J,K)
        DYVZDZVY1 = DYVZ(I,J,K)+DZVY(I,J,K)
        SXY (I,J,K) = SXY (I,J,K) + RM1 * DXVYDYVX1 * DT
        SXZ (I,J,K) = SXZ (I,J,K) + RM1 * DXVZDZVX1 * DT
        SYZ (I,J,K) = SYZ (I,J,K) + RM1 * DYVZDZVY1 * DT
      end do
    end do
  end do
  !$omp end parallel do
The Fastest Code (update_vel)
Here too the best candidate differs by architecture: the Xeon Phi (P240T8) and Ivy Bridge (P80T2) both select #5 [Fusion&Re-order] (re-ordering of statements applied to #2), while FX10 (P128T1) selects #1 [Baseline] (the original loop).

Xeon Phi (P240T8) and Ivy Bridge (P80T2), #5 [Fusion&Re-order]:
  !$omp parallel do private(i,j,k,ROX,ROY,ROZ)
  DO k_j = 1, (NZ01-NZ00+1)*(NY01-NY00+1)
    k = (k_j-1)/(NY01-NY00+1) + NZ00
    j = mod((k_j-1),(NY01-NY00+1)) + NY00
    do i = NX00, NX01
      ROX = 2.0_PN/( DEN(I,J,K) + DEN(I+1,J,K) )
      VX(I,J,K) = VX(I,J,K) &
        + ( DXSXX(I,J,K)+DYSXY(I,J,K)+DZSXZ(I,J,K) )*ROX*DT
      ROY = 2.0_PN/( DEN(I,J,K) + DEN(I,J+1,K) )
      VY(I,J,K) = VY(I,J,K) &
        + ( DXSXY(I,J,K)+DYSYY(I,J,K)+DZSYZ(I,J,K) )*ROY*DT
      ROZ = 2.0_PN/( DEN(I,J,K) + DEN(I,J,K+1) )
      VZ(I,J,K) = VZ(I,J,K) &
        + ( DXSXZ(I,J,K)+DYSYZ(I,J,K)+DZSZZ(I,J,K) )*ROZ*DT
    end do
  END DO
  !$omp end parallel do
FX10 (P128T1), #1 [Baseline]: original loop:

  !$omp parallel do private(i,j,k,ROX,ROY,ROZ)
  do k = NZ00, NZ01
    do j = NY00, NY01
      do i = NX00, NX01
        ROX = 2.0_PN/( DEN(I,J,K) + DEN(I+1,J,K) )
        ROY = 2.0_PN/( DEN(I,J,K) + DEN(I,J+1,K) )
        ROZ = 2.0_PN/( DEN(I,J,K) + DEN(I,J,K+1) )
        VX(I,J,K) = VX(I,J,K) + ( DXSXX(I,J,K)+DYSXY(I,J,K)+DZSXZ(I,J,K) )*ROX*DT
        VY(I,J,K) = VY(I,J,K) + ( DXSXY(I,J,K)+DYSYY(I,J,K)+DZSYZ(I,J,K) )*ROY*DT
        VZ(I,J,K) = VZ(I,J,K) + ( DXSXZ(I,J,K)+DYSYZ(I,J,K)+DZSZZ(I,J,K) )*ROZ*DT
      end do
    end do
  end do
  !$omp end parallel do
RELATED WORK
Originality (AT Languages)
(Table: comparison of AT languages, ppOpen-AT, vendor compilers, Transformation Recipes, POET, X language, SPL, ADAPT, Atune-IL, PEPPHER, and Xevolver, against criteria #1-#7 below. The method for supporting multiple computer environments (#1) is: OAT directives (ppOpen-AT), out of target (vendor compilers), recipe descriptions (Transformation Recipes), Xform descriptions (POET), Xlang pragmas (X language), SPL expressions (SPL), the ADAPT language (ADAPT), Atune pragmas (Atune-IL), PEPPHER pragmas/interface (PEPPHER), and directive extensions with recipe descriptions (Xevolver). Regarding software requirements (#7), ppOpen-AT needs none, whereas the other approaches rely on infrastructure such as ChiLL, the POET translator and ROSE, X translation with 'C and tcc, a script language, the Polaris compiler infrastructure with remote procedure calls (RPC) and a monitoring daemon, the PEPPHER task graph and run-time, or ROSE with an XSLT translator where users need to define their own rules. ppOpen-AT covers #2-#5 without relying on #6, while the other systems cover these features only partially.)

#1: Method for supporting multiple computer environments.
#2: Obtaining loop lengths at run time.
#3: Loop split with increased computation, and loop fusion of the split loops.
#4: Re-ordering of inner-loop statements.
#5: Algorithm selection.
#6: Code generation with execution feedback.
#7: Software requirement.
Outline
1. Background and ppOpen-AT Functions
2. Code Optimization Strategy
3. Performance Evaluation
4. Conclusion
Concluding Remarks
• Loop transformation with loop fusion and loop split is a key technique for FDM codes to obtain high performance on many-core architectures.
• AT with static code generation (combined with dynamic code selection) is a key technique for auto-tuning with a minimum software stack on supercomputers in operation.
• The code is freely available (MIT license): http://ppopenhpc.cc.u-tokyo.ac.jp/
Future Work
• Improving the search algorithm
  – The current implementation uses a brute-force search; this is acceptable when knowledge of the application narrows the search space.
  – We have implemented a new search algorithm based on d-Spline performance models, in collaboration with Prof. Tanaka (Kogakuin University).
• Code selection between different architectures
  – Selection between vector-machine and scalar-machine implementations.
  – Whole-code selection in the main routine is needed.
• Off-loading implementation selection (for the Xeon Phi)
  – If the problem size is too small for off-loading to pay off, the target execution is performed on the CPU automatically.
Thank you for your attention! Questions?
http://ppopenhpc.cc.u-tokyo.ac.jp/
References (Related to the Authors)
1. H. Kuroda, T. Katagiri, M. Kudoh, Y. Kanada, "ILIB_GMRES: An auto-tuning parallel iterative solver for linear equations," SC2001, 2001. (Poster.)
2. T. Katagiri, K. Kise, H. Honda, T. Yuba, "ABCLib_DRSSED: A parallel eigensolver with an auto-tuning facility," Parallel Computing, Vol. 32, Issue 3, pp. 231-250, 2006.
3. T. Sakurai, T. Katagiri, K. Naono, H. Kuroda, K. Nakajima, M. Igai, S. Ohshima, S. Itoh, "Evaluation of auto-tuning function on OpenATLib," IPSJ SIG Technical Reports, Vol. 2011-HPC-130, No. 43, pp. 1-6, 2011. (In Japanese.)
4. T. Katagiri, K. Kise, H. Honda, T. Yuba, "FIBER: A general framework for auto-tuning software," The Fifth International Symposium on High Performance Computing (ISHPC-V), Springer LNCS 2858, pp. 146-159, 2003.
5. T. Katagiri, S. Ito, S. Ohshima, "Early experiences for adaptation of auto-tuning by ppOpen-AT to an explicit method," Special Session: Auto-Tuning for Multicore and GPU (ATMG), in conjunction with IEEE MCSoC-13, Proceedings of MCSoC-13, 2013.
6. T. Katagiri, S. Ohshima, M. Matsumoto, "Auto-tuning of computation kernels from an FDM code with ppOpen-AT," Special Session: Auto-Tuning for Multicore and GPU (ATMG), in conjunction with IEEE MCSoC-14, Proceedings of MCSoC-14, 2014.