Chapter 1

Parallel Sparse Matrix Vector Multiplication using a Shared Virtual Memory Environment

François Bodin†, Jocelyne Erhel†, Thierry Priol†

Reprinted from "Proc. 6th SIAM Conference on Parallel Processing for Scientific Computing", Norfolk, Virginia (USA), March 1993.

Abstract

Many iterative schemes in scientific applications require the multiplication of a sparse matrix by a vector. This kernel has mainly been studied on vector processors and shared-memory parallel computers. In this paper, we address the implementation issues that arise when using a shared virtual memory system on a distributed memory parallel computer. We study in detail the impact of loop distribution schemes in order to design an efficient algorithm.

1 Introduction

Many scientific applications require computations on large sparse matrices. Most iterative schemes include the multiplication of a sparse matrix by a vector, which should therefore be efficient on parallel computers. The algorithm depends directly on the storage scheme chosen; various schemes have been devised, as explained in [1]. This paper deals with a parallel version of this kernel designed for a distributed memory parallel computer (DMPC) supplied with a Shared Virtual Memory (SVM).

The basic idea of an SVM is to hide the underlying architecture of DMPCs by providing a virtual address space to the user. The latter is partitioned into pages spread among the local processor memories. Each local memory acts as a large software cache for storing pages requested by the processor. A DMPC can thus be programmed like a more conventional shared memory parallel computer. However, accessing data through an SVM can dramatically decrease the efficiency of a parallel algorithm if data locality is not well exploited. The aim of this paper is to demonstrate that using both an adequate data storage and an appropriate loop distribution scheme keeps this overhead low. Blocked matrix vector multiplication is not considered: such an algorithm can decrease the communication cost of vector broadcasting and reduction, but an efficient use of the SVM would then require a new storage of the matrix and would therefore introduce many changes in Fortran programs. We also do not address the use of the internal processor cache, which is studied in [6].

The paper is organized as follows. Sections 2 and 3 briefly describe the KOAN Shared Virtual Memory and the Fortran-S compiler targeted to KOAN. Section 4 is devoted to the description of the parallel algorithm, mainly to the impact of sharing the resulting vector; a specific loop partition scheme is designed in order to distribute the workload. Section 5 gives and comments on experimental results for various general non-symmetric sparse matrices.

This work is partially supported by Intel SSD under contract no. 1 92 C 250 00 31318 01.
† IRISA - Campus Universitaire de Beaulieu - F-35042 RENNES Cedex - FRANCE


2 KOAN: a Shared Virtual Memory for the iPSC/2

KOAN is a Shared Virtual Memory designed at IRISA for the Intel iPSC/2 hypercube [4]. It is embedded in the operating system of the iPSC/2, which allows it to use fast, low-level communication primitives as well as the Memory Management Unit (MMU). Pages are managed using the fixed distributed manager algorithm described in [5], and consistency of the shared memory is guaranteed by an invalidation protocol. This algorithm represents a good tradeoff between ease of implementation and efficiency.

Let us now summarize some of the functionalities of the KOAN SVM system. KOAN provides the user with several memory management protocols for handling particular memory access patterns. An important one occurs when several processors have to write into different locations of the same page. This pattern generates a lot of messages, since the page has to move from processor to processor (ping-pong effect, or false sharing). A weak cache coherence protocol can be used to let the processors concurrently modify their own copy of a page and to merge the local copies into the shared one afterwards.

Parallel algorithms based on a producer/consumer scheme are inefficient on DMPCs when using an SVM system: typically, a page is first updated by one processor and then accessed by the other processors. KOAN can manage this memory access pattern efficiently by using the broadcasting facility of the underlying topology of DMPCs (hypercube, 2D-mesh, etc.). The producer processor broadcasts all the pages updated in the producer phase to the parallel consumer processors.

These two memory management protocols are available to the user through several operating system calls, which are used to specify a program section where a weak cache coherence protocol has to be used, or to delimit a producer phase.

3 Fortran-S: A programming interface for KOAN

A "user-friendly" programming environment for DMPCs has been designed at IRISA by providing high-level parallel constructs. The user's code is written in standard Fortran-77 and contains directives to express parallel loops and shared variables (a shared variable can be read or written by all the processors). Shared variables are used for data structures that can be computed in parallel, so parallel loops are intended to compute values of the shared variables declared in the program. Parallel execution is achieved using an SPMD (Single Program Multiple Data) execution model: at the beginning of the program execution, a thread is created on each processor and each processor starts to execute the program. All non-shared variables are duplicated on the processors. The shared memory space is allocated by the KOAN SVM at the beginning of the execution. For executing parallel do loops, the compiler distributes chunks of the iteration space to each processor according to the iteration distribution specified in the directives. The Fortran-S compiler is in charge of generating parallel processes with KOAN low-level operating system calls from the source code.

This approach provides a convenient parallel programming environment for several reasons. It allows an easy and efficient way of programming parallel algorithms; moreover, it facilitates debugging, since the programs can be compiled and executed on a workstation. Though parallelism is based on a shared-memory approach, the programming environment also provides message-based primitives that can be used to handle global operations and synchronizations efficiently. The prototype compiler is implemented using the Sigma-II system developed at Indiana University [3]. The compilation technique is not described further in this paper, to keep it short. In the remainder of this section, we give a brief overview of the main directives.


In Fortran-S, shared variables must be declared explicitly by means of a directive, and must be declared in the main program. Other variables are non-shared: each processor has its own copy of the variable. A shared variable is declared using the following directive:

      REAL V(N,N)
C$ann[Shared(v)]
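For the sparse kernel of section 4, the matrix arrays and the resulting vector would be declared shared in the same way. The fragment below is only an illustrative sketch: the size parameters NMAX and NZMAX are ours, not taken from the original program.

c     Illustration only: shared declarations for the CSR arrays a, ja
c     and the result vector y used in section 4 (x and ia stay private).
      PARAMETER (NMAX = 4000, NZMAX = 200000)
      REAL A(NZMAX), Y(NMAX)
      INTEGER JA(NZMAX)
C$ann[Shared(A)]
C$ann[Shared(JA)]
C$ann[Shared(Y)]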

The iterations of a parallel loop are distributed among the processors, and the processors are synchronized at the end of all the iterations. Several static scheduling strategies are provided by the compiler, and the user can define his own strategy, which may be dynamic. Among these strategies, the compiler can allocate chunks of consecutive iterations to processors or can distribute the iterations cyclically. These schedulings provide a good load balancing if the work in the iterations is almost equally distributed; otherwise more sophisticated schemes must be used. Therefore the Fortran-S compiler accepts user-defined iteration partitions, which can be either static or dynamic (i.e. set at run time). A parallel loop is declared using the directive:

C$ann[DoShared("scheduling")]
      do 20 nel = 1, nbnel
         sounds(nel) = sqrt(gama * p(nel) / ro(nel))
 20   continue

where the string "scheduling" indicates the scheduling strategy of the iterations and can take one of the following values:

1. "BLOCK": chunks of contiguous iterations are assigned to the processors.

2. "CYCLIC": iterations are distributed over the processors according to a modulo scheme. The first iteration is assigned to the first processor, the second to the second processor, and so on.

3. "USER": user-defined partitions. The partition of the iteration space is specified by the user at run time. This feature is important when load balancing depends on the data of the program.

A weak cache coherence protocol, as described briefly in section 2, can be associated with a shared variable during the execution of a parallel loop thanks to the following directive:

C$ann[WeakCoherency(y)]

An example of use is given below:

C$ann[DoShared("BLOCK")]
C$ann[WeakCoherency(y)]
      do 1 i = 1, n
         y(i) = f(....)
 1    continue

In this example y is assumed to be a shared variable written simultaneously by many processors, so there may be false sharing on pages where the variable y is stored. The weak coherence protocol removes that phenomenon, by merging updates of the shared pages only once, at the end of the loop.

4 Algorithm and Data Structures

The multiplication of a sparse matrix by a vector is a CPU-intensive kernel found in most iterative schemes. The algorithm depends on the storage scheme, which may include some zeros or not. Here we choose a compressed storage by rows, which is commonly used and well suited for parallel multiplication. Let n be the order of the matrix and nz the number of non-zero elements. A real array a of length nz contains all the coefficients, while an integer array ja of the same length nz contains the corresponding column indices. An auxiliary integer array ia of length (n + 1) points to the first element of each row. An example of a sparse matrix with this storage is given below:

          ( 1   0   2   0   0 )
          ( 3   4   0   5   0 )
    A  =  ( 0   6   7   0   8 )
          ( 9   0   0  10   0 )
          ( 0  11   0   0  12 )

    a(1:nz)   =  1  2  3  4  5  6  7  8  9  10  11  12
    ja(1:nz)  =  1  3  1  2  4  2  3  5  1   4   2   5
    ia(1:n+1) =  1  3  6  9  11  13

An intrinsic parallelism derives readily from the storage scheme. Namely, since the sparse matrix multiplication is expressed by rows, it is sufficient to partition the matrix by rows and to handle the different blocks of rows in parallel. More precisely, the sequential algorithm is composed of an outer loop on the rows with an inner loop on the elements of each row, as indicated by the program below. The outer iterations on the rows are clearly parallel, so they can be assigned to parallel tasks. Below is the sequential algorithm along with the parallel version.

      do i = 1, n
         y(i) = 0.
         do k = ia(i), ia(i+1)-1
            y(i) = y(i) + a(k)*x(ja(k))
         end do
      end do

c     shared variables  : a, ja, y
c     private variables : n, i, k, temp, x, ia
C$ann[DoShared("sched")]
C$ann[WeakCoherency(y)]
      do i = 1, n
         temp = 0.
         do k = ia(i), ia(i+1)-1
            temp = temp + a(k)*x(ja(k))
         end do
         y(i) = temp
      end do

The data structures are the operand vector x, the resulting vector y and the sparse matrix, which is composed of the three vectors a, ia and ja. Only the vector y is written; the others are read-only. The vectors x and ia, of length n and n+1 respectively, can be copied into local memories. On the contrary, we assume that the vectors a and ja cannot be duplicated because of memory requirements. In our environment, programming is simplified thanks to the use of shared arrays, which are managed by KOAN and the compiler. Since the scheduling chosen for the algorithm requires a data distribution that differs from the initial distribution, the system has to move pages during the first executions of the multiplication. But after a few iterations the pages will be located correctly and will stay in the local memories, provided there is enough space, so that the SVM overhead becomes negligible. Here we do not deal with further uses of the operand vector x in the application. In general, the matrix vector multiply would require a global broadcast of the vector x, or implicit copies of it into local memories via the SVM, introducing a meaningful overhead. As far as the resulting vector y is concerned, two cases must be studied. If the vector is used throughout the application with the same static partition, it may remain local to the processors: thanks to the SPMD programming model, if the vector y is not declared as shared, each processor simply keeps its partial results. On the other hand, if its next use requires a different partition, the vector must be shared. We then use the weak coherence protocol to eliminate false sharing when updating the shared array y.

The rows must be partitioned in order to balance the workload. The simplest strategy, which is provided by automatic parallelizers for instance, is to divide the rows into slices of almost equal size. This has been done for example in [7] to implement a sparse linear solver on a KSR computer. However, the load of the algorithm is not measured by the order of the matrix, but rather by its number of non-zeros. Therefore, for some matrices, a better partition consists in computing slices with almost the same number of non-zeros but with unequal numbers of rows. We have experimented with both strategies, by using the directive "DoShared" with the two options "BLOCK" and "USER", where "USER" implements a partition of the iteration space into blocks with roughly the same number of non-zeros. In the latter strategy, the vector ia is used to compute the partition of the iteration space among the processors, as sketched below. Because this operation is done only once (the matrix structure is not modified between two matrix vector multiplies, so the computed scheduling stays valid) and because the vector ia is private, the overhead is kept low.
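The partitioning code itself is not given in the paper; the following Fortran sketch shows one plausible way to compute such a partition from ia. The routine name nzpart and the boundary array first are our own illustrative choices: processor q is assumed to handle rows first(q) to first(q+1)-1.

c     Sketch (not from the paper): split the rows into p blocks holding
c     roughly the same number of non-zeros, using the CSR row pointer ia.
      subroutine nzpart(n, p, ia, first)
      integer n, p, ia(n+1), first(p+1)
      integer q, r, nz, target
      nz = ia(n+1) - 1
      first(1) = 1
      r = 1
      do q = 1, p - 1
c        non-zeros that processors 1..q should cover altogether
         target = (q * nz) / p
c        advance r until rows 1..r-1 hold at least target non-zeros
         do while (r .le. n .and. ia(r) - 1 .lt. target)
            r = r + 1
         end do
         first(q + 1) = r
      end do
      first(p + 1) = n + 1
      return
      end

Each block then holds close to nz/p non-zeros, except that a single row denser than nz/p cannot be split.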

5 Numerical Results

The algorithm is implemented using Fortran-S on an iPSC/2 hypercube with at most 32 processors. It should be noted that the peak performance of one processor is only 0.3 Mflops. The code is tested on various matrices (see Table 1) in order to measure the impact of the order and the number of non-zeros. They are band matrices or come from the Harwell-Boeing collection [2]. The resulting vector y is either distributed in local memories (y is private, so the results of the matrix vector multiply are distributed in the local copies of y) or managed by the shared virtual memory (y is declared as shared) using a weak cache coherence protocol. The performance difference between the two experiments measures the cost of the weak coherence protocol used on vector y. The two scheduling strategies described above (BLOCK and USER) are also tested. Timings in seconds and rates in Mflops are given for one hundred iterations in single precision (see Tables 2, 3, 4 and 5). Results for the large problems are not given on one or two processors because the size of the data exceeded the available memory.

Experimental results show that the size of the problem is well measured by the number of non-zeros in the matrix. Indeed, the results on the band matrices show similar performance, mainly for the version with distributed vectors, because they have almost the same number of non-zeros although they have different orders. Performance degradation comes both from load imbalance and from the merging of the pages where parts of the vector y are stored (this is due to the weak coherence protocol used). The version with distributed vectors gives very good speed-ups, even for small problems with 32 processors, since data transfers between processors only occur during the first matrix-vector multiply. We recall that here the operand vector x is not updated. The overhead induced by the merge into the shared vector is proportional to the maximal number of processors writing to the same page when merging the shared array. In the case of the BLOCK strategy, this quantity can be roughly estimated by max(2, min(p, c/(n/p))), where c is the page size (here 1024 words of 32 bits), p is the number of processors and n is the order of the matrix. Small matrices such as bcspwr09 and band24 lead to a high overhead with 32 processors because the shared array fits in only two pages. But for large n, the merge operation becomes relatively cheap, since at most two processors then share a same page.
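As a rough illustration of this estimate (our own arithmetic, not from the paper), the formula can be evaluated directly: for bcspwr09, n = 1723 rows spread over p = 32 processors give about 53 rows per processor, so roughly 1024/53, i.e. 19, processors write into the same page of y, which is consistent with the high overhead observed for the small matrices. A hypothetical helper performing this computation is sketched below.

c     Illustration only: estimate of how many processors merge their
c     updates into the same page of y under the BLOCK strategy,
c     following the formula max(2, min(p, c/(n/p))) quoted in the text.
c     n = matrix order, p = number of processors, c = page size in words.
      integer function nmerge(n, p, c)
      integer n, p, c
      nmerge = max(2, min(p, c / (n / p)))
      return
      end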


Table 1: Sparse matrices used.

    matrix      order   non-zeros
    bcspwr09     1723        6511
    lns3937      3937       25407
    band5        3900       42870
    orani678     2529       90158
    band20       3900      159480
    bcsstk24     3562      159910

Table 2: Results with distributed vectors and with row partitioning (BLOCK).

                               Number of processors
    matrix               1        2        4        8       16       32
    bcspwr09 (s)    10.139    5.080    2.702    1.523    1.021    0.845
      MFLOPS         0.128    0.256    0.482    0.855    1.275    1.541
    lns3937  (s)    38.180   20.850   10.731    6.782    3.942    2.310
      MFLOPS         0.133    0.243    0.473    0.749    1.289    2.200
    band5    (s)    60.628   30.336   15.273    7.806    4.145    2.411
      MFLOPS         0.141    0.282    0.561    1.098    2.069    3.556
    orani678 (s)         -  119.940   93.936   72.241   37.226   20.642
      MFLOPS             -    0.150    0.192    0.250    0.484    0.873
    band20   (s)         -        -   54.699   27.718   14.138    7.604
      MFLOPS             -        -    0.583    1.151    2.256    4.195
    bcsstk24 (s)         -        -   57.600   30.286   15.809    9.004
      MFLOPS             -        -    0.555    1.056    2.023    3.552

Load imbalance is particularly sensitive for the matrix orani678, because its distribution of non-zeros is far from uniform. The first partition (BLOCK) leads to poor performance, whereas the second partition (USER) gives better performance by distributing the work equally. For band matrices, where the rows have roughly the same number of non-zeros, both strategies yield similar results. We have measured the SVM overhead due to page moves for each iteration. As expected, the first iteration is quite costly, but in the following ones this overhead can effectively be neglected.

6 Conclusion

The results presented in this paper demonstrate that the use of a Shared Virtual Memory is an efficient way of programming distributed memory parallel computers. Our environment provides tools, based on directives, which automatically distribute both data and computations. However, users still have to distribute the loop iterations carefully in order to balance the workload and to limit remote accesses through the SVM as much as possible. We have shown that the computations involved in a sparse matrix vector multiply can be easily distributed by using an adequate loop scheduling strategy. Concerning data locality, the overhead induced by reading the matrix comes from an initial system distribution that is not related to the loop distribution, but these page moves become negligible as soon as the number of iterations calling the sparse matrix vector multiply becomes sizable. The main limitations to the speed-up are due to the effective sharing of the resulting vector and the operand vector. We studied here the first effect on a shared virtual memory and found that the overhead becomes small for matrices of large order. The second limitation occurs in an application calling the kernel iteratively and distributing the vector x in each iteration. Since this overhead increases with the number of processors, it may become the main bottleneck of an application ([7]). We plan to investigate this problem and the global data usage by implementing a sparse iterative linear solver requiring a sparse matrix vector multiply.


Table 3: Results with one shared vector and with row partitioning (BLOCK).

                               Number of processors
    matrix               1        2        4        8       16       32
    bcspwr09 (s)    10.139    5.960    3.915    3.688    4.908    8.046
      MFLOPS         0.128    0.218    0.333    0.353    0.265    0.161
    lns3937  (s)    38.180   21.571   11.879    8.739    6.626    6.788
      MFLOPS         0.133    0.236    0.428    0.581    0.767    0.749
    band5    (s)    60.628   31.250   16.363    9.721    6.871    7.017
      MFLOPS         0.141    0.274    0.524    0.882    1.248    1.222
    orani678 (s)         -  121.524   94.946   74.053   39.711   25.071
      MFLOPS             -    0.148    0.190    0.243    0.454    0.719
    band20   (s)         -        -   55.800   29.596   16.925   12.254
      MFLOPS             -        -    0.572    1.078    1.884    2.603
    bcsstk24 (s)         -        -   58.734   31.934   18.358   13.287
      MFLOPS             -        -    0.544    1.001    1.742    2.407

Table 4: Results with one shared vector and with non-zeros partitioning (USER).

                      Number of processors
    matrix               4        8       16       32
    bcspwr09 (s)     3.913    3.641    4.815    8.062
      MFLOPS         0.333    0.357    0.270    0.161
    lns3937  (s)    11.130    6.824    5.640    6.410
      MFLOPS         0.456    0.745    0.901    0.793
    band5    (s)    16.371    9.694    6.722    7.022
      MFLOPS         0.524    0.884    1.275    1.221
    orani678 (s)    33.751   18.862   14.231   13.491
      MFLOPS         0.534    0.956    1.267    1.336
    band20   (s)    55.493   29.224   16.442   12.049
      MFLOPS         0.575    1.091    1.940    2.647
    bcsstk24 (s)    56.910   29.987   17.056   12.650
      MFLOPS         0.562    1.066    1.875    2.528


Table 5: Results on band matrices (BLOCK).

    8 processors
    matrix          order   non-zeros    shared   distributed
    band5    (s)     3900       42870     9.721         7.806
      MFLOPS                              0.882         1.098
    band10   (s)     2047       42877     9.401         7.647
      MFLOPS                              0.912         1.121
    band24   (s)      887       42863    10.473         7.612
      MFLOPS                              0.818         1.126

    32 processors
    band5    (s)     3900       42870     7.017         2.440
      MFLOPS                              1.222         3.514
    band10   (s)     2047       42877     8.369         2.407
      MFLOPS                              1.025         3.563
    band24   (s)      887       42863    12.723         2.411
      MFLOPS                              0.674         3.556


References

[1] I. Duff, A. Erisman, J. Reid. Direct Methods for Sparse Matrices. Oxford University Press, London, 1986.
[2] I. Duff, R. Grimes, J. Lewis. Sparse matrix test problems. ACM TOMS, 15 (1989), pp. 1-14.
[3] D. Gannon, J. K. Lee, B. Shei, S. Sarukai, S. Narayana, N. Sundaresan, D. Atapattu and F. Bodin. Sigma II: a tool kit for building parallelizing compilers and performance analysis systems. Proceedings of the IFIP Edinburgh Workshop on Parallel Programming Environments, April 1992.
[4] Z. Lahjomri, T. Priol. KOAN: a shared virtual memory for the iPSC/2 hypercube. In CONPAR/VAPP 92, September 1992.
[5] K. Li. Shared Virtual Memory on Loosely Coupled Multiprocessors. PhD thesis, Yale University, September 1986.
[6] O. Temam, W. Jalby. Characterizing the behavior of sparse algorithms on caches. Proceedings of Supercomputing'92, pp. 578-587.
[7] D. Windheiser, E. Boyd, E. Hao, S. Abraham. KSR1 Multiprocessor: Analysis of Latency Hiding Techniques in a Sparse Solver. Research report, University of Michigan, November 1992.
