A Parallel Implementation of the Davidson Method for Generalized Eigenproblems

Eloy ROMERO 1 and Jose E. ROMAN
Instituto ITACA, Universidad Politécnica de Valencia, Spain

Abstract. We present a parallel implementation of the Davidson method for the numerical solution of large-scale, sparse, generalized eigenvalue problems. The implementation is done in the context of SLEPc, the Scalable Library for Eigenvalue Problem Computations. In this work, we focus on the Hermitian version of the method, with several optimizations. We compare the developed solver with other available implementations, as well as with Krylov-type eigensolvers already available in SLEPc, particularly in terms of parallel efficiency.

Keywords. Generalized eigenvalue problem, Davidson-type methods, Message-passing parallelization
1. Introduction

Let A and B be large, sparse Hermitian matrices of order n. We are concerned with the partial solution of the generalized eigenvalue problem defined by these matrices, i.e., the computation of a few pairs (λ, x) that satisfy

    Ax = λBx,    (1)
where the scalar λ is called the eigenvalue and the n-vector x is called the eigenvector. If B is a positive definite matrix, the eigenvalues and eigenvectors are real. Otherwise, the eigenpairs can be complex even if the matrices are real. This problem arises in many scientific and engineering areas, such as structural dynamics, electrical networks, quantum chemistry, and control theory.

Many different methods have been proposed for solving the above problem, including Subspace Iteration, Krylov projection methods such as Lanczos, Arnoldi or Krylov-Schur, and Davidson-type methods such as Generalized Davidson or Jacobi-Davidson. Details of these methods can be found in [1]. Subspace Iteration and Krylov methods achieve good performance when computing extreme eigenvalues, but usually fail to compute interior ones. In that case, convergence is accelerated by combining the method with a spectral transformation, i.e., solving (A − σB)^{-1}Bx = θx instead of Eq. 1. However, this approach adds the high computational cost of solving large linear systems very accurately (usually with direct methods) at each iteration of the eigensolver. Davidson-type methods try to reduce this cost by solving the systems only approximately (usually with iterative methods) without compromising robustness. These methods are gaining popularity due to their good numerical behaviour. Among their benefits, we can highlight (1) their good convergence rate on difficult problems, e.g. when interior eigenvalues are to be computed, (2) the possibility of using a preconditioner as a cheap alternative to spectral transformations, and (3) the possibility of starting the iteration with an arbitrary subspace, so that good approximations from previous computations can be exploited.

Parallel Davidson-type eigensolvers are currently available in the PRIMME [11] and Anasazi [2] software packages. However, currently PRIMME can only cope with standard eigenproblems, and Anasazi only implements a basic block Davidson method. Our aim is to provide robust and efficient parallel implementations of the Generalized Davidson method in the context of SLEPc, the Scalable Library for Eigenvalue Problem Computations [8], which can address both standard and generalized problems. In this work, we focus on the case of symmetric-definite generalized eigenproblems, although some attention will be devoted to the case of semi-definite or indefinite B.

Section 2 describes the different variants of the method that we have considered. Sections 3 and 4 provide details related to the implementation. Section 5 presents performance results comparing the developed implementation with other solvers.

1 Corresponding Author: Instituto ITACA, Universidad Politécnica de Valencia, Camino de Vera s/n, 46022 Valencia, Spain; E-mail: [email protected].
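To make the spectral transformation concrete, the following self-contained sketch computes the eigenvalues of (A, B) closest to a shift σ via the operator (A − σB)^{-1}B. It is purely illustrative (toy diagonal matrices, SciPy-based), not the implementation discussed in this paper; an eigenvalue θ of the transformed operator maps back to λ = σ + 1/θ.

```python
# Illustrative sketch of the shift-and-invert spectral transformation:
# eigenvalues theta of (A - sigma*B)^{-1} B of largest magnitude correspond
# to eigenvalues of (A, B) closest to sigma, via lambda = sigma + 1/theta.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, sigma = 100, 1.4
A = sp.diags(np.arange(1.0, n + 1.0)).tocsc()   # toy matrix, eigenvalues 1..100
B = sp.identity(n, format="csc")                 # toy B = I for simplicity

lu = spla.splu(A - sigma * B)                    # direct factorization (the costly step)
op = spla.LinearOperator((n, n), matvec=lambda v: lu.solve(B @ v))

theta, _ = spla.eigs(op, k=3, which="LM")        # largest |theta| <-> closest to sigma
lam = sigma + 1.0 / theta                        # back-transform the eigenvalues
print(np.sort(lam.real))                         # eigenvalues of (A, B) nearest sigma
```

Note that the factorization of A − σB must be accurate, which is exactly the cost that Davidson-type methods aim to avoid by using a preconditioner instead.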
2. Generalized Davidson Method

The Generalized Davidson (GD) method is an iterative subspace projection method for eigenproblems. It is a generalization of the original Davidson method [4] that allows the use of an arbitrary preconditioner. In its variant for symmetric-definite generalized eigenproblems, in each iteration the GD method performs a Rayleigh-Ritz procedure to select the most relevant approximate eigenpairs (θ, x) in the search subspace, represented by a matrix V with B-orthonormal columns. Then it computes a correction to x and expands the search subspace by adding this correction.

Many optimizations have been proposed with respect to the basic algorithm. The following are the most important ones considered in this work:

• The block version (originally called the Liu-Davidson method [9,5]) updates several eigenpairs simultaneously. Besides accelerating convergence, this technique is useful for finding more than one eigenpair. However, the block size s should be kept small to avoid a cost blow-up. In this section, X denotes a matrix with s columns, the approximate eigenvectors x_i. A similar notation is used for the corrections, D, and the residuals, R.
• Restarting combined with locking strategies allows the computation of more eigenpairs without enlarging the maximum dimension of the search subspace.
• Olsen's variant [10] consists in computing the correction vector d_i as

    d_i ← K^{-1}(I − x_i(x_i^* K^{-1} x_i)^{-1} x_i^* K^{-1}) r_i,    (2)

where K^{-1} is a preconditioner.
• Thick restart [12], in particular the GD(m_min, m_max) variant, restarts with the best m_min eigenvectors when the size of V reaches m_max.
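Olsen's correction of Eq. 2 can be sketched as follows. This is an illustrative NumPy fragment under our own assumptions (the function name and the diagonal Jacobi-type preconditioner are hypothetical, not SLEPc code); note that the projector guarantees the correction is orthogonal to the current Ritz vector.

```python
# Illustrative sketch of Olsen's correction, Eq. (2); names are our own.
import numpy as np

def olsen_correction(apply_Kinv, x, r):
    """d = K^{-1} (I - x (x* K^{-1} x)^{-1} x* K^{-1}) r, as in Eq. (2)."""
    Kinv_r = apply_Kinv(r)
    Kinv_x = apply_Kinv(x)
    # scalar alpha = (x* K^{-1} x)^{-1} (x* K^{-1} r)
    alpha = (x.conj() @ Kinv_r) / (x.conj() @ Kinv_x)
    return Kinv_r - alpha * Kinv_x            # x* d = 0 by construction

# toy usage: diagonal (Jacobi-type) preconditioner
rng = np.random.default_rng(0)
n = 6
diag_K = rng.uniform(1.0, 3.0, n)             # hypothetical diag(K)
x, r = rng.standard_normal(n), rng.standard_normal(n)
d = olsen_correction(lambda v: v / diag_K, x, r)
print(abs(x @ d) < 1e-12)                     # True: d is orthogonal to x
```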
Algorithm 1 (Block symmetric-definite generalized GD with B-orthogonalization)

Input: matrices A and B of size n, preconditioner K, number of wanted eigenpairs p, block size s, maximum size of V m_max, restart with m_min vectors
Output: resulting eigenpairs (Θ̃, X̃)

Choose an n × s full rank matrix V such that V^* B V = I
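The initialization step above requires a matrix V with B-orthonormal columns. One standard way to achieve this, shown here as a hedged NumPy sketch (not the SLEPc implementation), is to normalize via the Cholesky factor of the Gram matrix V^* B V:

```python
# Illustrative sketch: making the columns of V B-orthonormal (V* B V = I)
# via the Cholesky factor of the Gram matrix.  Names are our own.
import numpy as np

def b_orthonormalize(V, B):
    """Return V with B-orthonormal columns (assumes V full rank, B SPD)."""
    L = np.linalg.cholesky(V.T @ B @ V)       # Gram matrix V* B V = L L*
    return np.linalg.solve(L, V.T).T          # V <- V L^{-*}

rng = np.random.default_rng(1)
n, s = 8, 3
B = np.diag(rng.uniform(1.0, 2.0, n))         # toy SPD matrix B
V = b_orthonormalize(rng.standard_normal((n, s)), B)
print(np.allclose(V.T @ B @ V, np.eye(s)))    # True
```

In practice, robust implementations re-orthogonalize iteratively (e.g. with Gram-Schmidt) rather than using a single Cholesky factorization, since the Gram matrix can be ill-conditioned.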