Numer. Math. 68: 95–106 (1994)

Numerische Mathematik

© Springer-Verlag 1994 Electronic Edition

An iterative two-step algorithm for linear complementarity problems*

Michal Kočvara**, Jochem Zowe

Mathematisches Institut, Universität Jena, Germany

Received July 14, 1993 / Revised version received February 1994

Dedicated to Professor Josef Stoer on the occasion of his 60th birthday

Summary. We propose an algorithm for the numerical solution of large-scale symmetric positive-definite linear complementarity problems. Each step of the algorithm combines an application of the successive overrelaxation method with projection (to determine an approximation of the optimal active set) with the preconditioned conjugate gradient method (to solve the reduced residual systems of linear equations). Convergence of the iterates to the solution is proved. In the experimental part we compare the efficiency of the algorithm with several other methods. As test example we consider the obstacle problem with different obstacles. For problems of dimension up to 24 000 variables, the algorithm finds the solution in less than 7 iterations, where each iteration requires about 10 matrix-vector multiplications.

Mathematics Subject Classification (1991): 65K10

1. Introduction

We consider the symmetric linear complementarity problem (LCP): Find $x \in \mathbb{R}^n$ such that

(1)  $Ax - b \ge 0, \qquad x \ge c, \qquad (x - c)^T (Ax - b) = 0;$

here $A$ is a given $n \times n$ real symmetric positive definite matrix and $b, c$ are vectors in $\mathbb{R}^n$. For completeness let us mention that (1) is equivalent to the convex quadratic programming problem ((1) are the Karush–Kuhn–Tucker conditions for (2))

(2)  $\min f(x) := \tfrac{1}{2} x^T A x - x^T b \quad \text{s.t.}\quad x \in S,$

where $S = \{ y \in \mathbb{R}^n \mid y_i \ge c_i,\ i = 1, 2, \dots, n \}$.

* This research was supported by the German Scientific Foundation (DFG) and the German-Israeli Foundation for Scientific Research and Development (GIF)

** The author is on leave from the Czech Academy of Sciences, Prague, Czech Republic
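To make the conditions in (1) concrete, here is a minimal Python sketch (ours, not from the paper) that tests whether a candidate vector satisfies the LCP up to a tolerance; the function name and the tolerance `tol` are illustrative assumptions.

```python
import numpy as np

def is_lcp_solution(A, b, c, x, tol=1e-10):
    """Check the three conditions of (1) up to a tolerance:
    Ax - b >= 0, x >= c, and (x - c)^T (Ax - b) = 0."""
    r = A @ x - b                       # residual Ax - b
    feasible_r = np.all(r >= -tol)      # Ax - b >= 0
    feasible_x = np.all(x - c >= -tol)  # x >= c
    complementary = abs((x - c) @ r) <= tol
    return feasible_r and feasible_x and complementary
```

By the Karush–Kuhn–Tucker correspondence with (2), a vector passing this check minimizes $f$ over $S$.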


In the applications we have in mind, the matrix $A$ will be large and sparse and thus iterative methods are well suited for the solution of (1). Many such methods are discussed in the literature. We mention three of them which are widely used in practice:

(i) The successive overrelaxation method with projection (SORP) [7]; this method is quite popular due to its simplicity and robustness.

(ii) The preconditioned conjugate gradient method combined with an active set strategy (PCGA) [8, 10]; this method is faster than SORP provided we start with a "good" active set.

(iii) Multigrid methods (MG) [2, 5, 6]; these are the fastest ones but they need additional data for the auxiliary problem(s).

In this paper we present a method which, for several large examples, proved to be faster than the SORP and PCGA ideas mentioned in (i) and (ii). The underlying idea copies the philosophy of the MG methods from (iii). Each iteration combines a relaxation step (which, in the multigrid terminology, is used to smooth the error and to find an approximation of the set of active indices) with the approximate solution of an auxiliary problem (which aims at reducing the low-frequency components of the error). A similar two-step idea was proposed by Moré and Toraldo [9]; they couple at each step the gradient projection method with the conjugate gradient method. Our idea replaces the gradient projection method by SORP. The reason for this change is the smoothing effect which is attributed to SORP and which is well known from the multigrid context. The numerical results in Sect. 3.2 strongly support our approach.

We use the following notation: small italics $x, y, b, c, \dots$ denote vectors, $x_i$ means the $i$th component of $x$, while $x^k$ is the $k$th successive iterate of a particular method. So $x_i^k$ is the $i$th component of the $k$th iteration vector $x^k$. Similarly, $A_{ij}$ is the $(i,j)$-component of the matrix $A$.
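As an illustration of this two-step structure, the Python outline below (our sketch, not the authors' implementation) smooths with a few SORP sweeps, reads off an approximate active set, and then solves the reduced residual system on the free indices by conjugate gradients. It assumes NumPy/SciPy; `sorp_sweep` is the projected SOR sweep (3) recalled in Sect. 2 (a sketch of it follows the formula there), and the smoothing count `n_smooth` and the tolerance are illustrative choices.

```python
import numpy as np
from scipy.sparse.linalg import cg

def two_step_iteration(A, b, c, x, n_smooth=2, tol=1e-10):
    """One outer iteration: SORP smoothing to approximate the active set,
    then an (inexact) CG solve of the reduced residual system."""
    for _ in range(n_smooth):            # step 1: relaxation with projection
        x = sorp_sweep(A, b, c, x)       # smooths the error, see (3) below
    free = x > c + tol                   # indices estimated inactive at the solution
    # step 2: reduced residual system A_ff d_f = (b - A x)_f on the free indices
    d_f, _ = cg(A[np.ix_(free, free)], (b - A @ x)[free], atol=tol)
    d = np.zeros_like(x)
    d[free] = d_f
    return np.maximum(x + d, c)          # project the update back onto S
```

A preconditioner for the CG solve, as used by the algorithm of the paper, would be passed via the `M` argument of `cg`.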

2. Algorithms

We recall the definition of the algorithm SORP (successive overrelaxation with projection):

SORP ([7]) Choose $x^0 \in \mathbb{R}^n$ and put for $k = 0, 1, 2, \dots$

(3)  $x_i^{k+1} = \max\Big\{\, x_i^k - \frac{\omega}{A_{ii}} \Big( \sum_{j<i} A_{ij} x_j^{k+1} + \sum_{j \ge i} A_{ij} x_j^k - b_i \Big),\ c_i \Big\}, \qquad i = 1, \dots, n,$

where $\omega \in (0, 2)$ is the relaxation parameter.
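The sweep (3) translates almost line by line into Python; below is a minimal dense sketch (not optimized for sparsity), with the relaxation parameter `omega` set to 1.5 purely as an illustrative choice.

```python
import numpy as np

def sorp_sweep(A, b, c, x, omega=1.5):
    """One projected SOR sweep (3): each component is relaxed by omega and
    projected onto its constraint x_i >= c_i. Updating x in place means
    A[i] @ x already uses the new values x_j^{k+1} for j < i."""
    x = x.copy()
    for i in range(len(x)):
        r_i = A[i] @ x - b[i]            # i-th residual with current values
        x[i] = max(x[i] - omega / A[i, i] * r_i, c[i])
    return x
```

For a large sparse $A$ one would of course run over the nonzeros of row $i$ only, rather than forming the full inner product.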
