Preprint BUGHW-SC 2000/5

Bergische Universität GH Wuppertal, Fachbereich Mathematik

Andreas Frommer and Pierre Spiteri

On Linear Asynchronous Iterations when the Spectral Radius of the Modulus Matrix is One

October 2000

http://www.math.uni-wuppertal/SciComp/

On Linear Asynchronous Iterations when the Spectral Radius of the Modulus Matrix is One

Andreas Frommer*   Pierre Spiteri†

dedicated to Prof. T. Yamamoto on the occasion of his 65th birthday

Abstract

A classical result on linear asynchronous iterations states that convergence occurs if and only if the spectral radius of the modulus matrix is less than 1. The present paper shows that if one introduces very mild restrictions on the admissible asynchronous processes, one gets convergence for a larger class of matrices, for which the spectral radius of the modulus matrix is allowed to be equal to 1. These mild restrictions are virtually always fulfilled in practical implementations. In this manner, our result contributes to a better understanding of the different hypotheses underlying mathematical models for asynchronous iterations.

1 Introduction

In this paper we consider genuine parallel iterative methods for solving a linear system
$$Ax = b, \quad \text{where } A \in \mathbb{C}^{n \times n}, \; x, b \in \mathbb{C}^n. \tag{1}$$
The iterative methods we deal with rely on a fixed point formulation for (1) of the form
$$x = Hx + c, \quad \text{where } H = (h_{ij}) \in \mathbb{C}^{n \times n} \text{ and } c \in \mathbb{C}^n. \tag{2}$$
We assume that $H$ and $c$ are constructed such that $x$ is a solution of the linear system (1) if and only if $x$ is a solution of the fixed point equation (2). For example, $H$ may arise from an appropriate splitting of $A$; see [19], for example. As is well known, the total step method
$$x^{k+1} = Hx^k + c, \quad k = 0, 1, \dots, \quad x^0 \text{ given}, \tag{3}$$

* Fachbereich Mathematik, Universität Wuppertal, D-42097 Wuppertal, Germany, [email protected]. The work of this author was partially funded by the French ministry of research through a visiting grant at ENSEEIHT.
† ENSEEIHT: École Nationale Supérieure d'Électrotechnique, d'Électronique, d'Informatique et d'Hydraulique de Toulouse, 2, rue Camichel, B.P. 7122, F-31071 Toulouse-Cedex 1, France, [email protected]

converges to a unique solution of (2) for any starting vector $x^0$ if and only if $\rho(H) < 1$; see [19], for example. On a parallel computer, in order to avoid idle times due to imbalanced load or communication delays, efficient implementations of the total step method actually lead to new iterative schemes whose behaviour can only partly be determined a priori. These iterations belong to the so-called class of asynchronous methods, and the purpose of the present paper is to further contribute to the understanding of sufficient conditions guaranteeing the convergence of such methods.

Since the pioneering work by Chazan and Miranker [6], the condition $\rho(|H|) < 1$ is known to be necessary and sufficient for the convergence of asynchronous iterations. However, the precise mathematical model for the asynchronous iterations underlying the Chazan-Miranker result (see [6] and [3] for a slight modification) is very general. Virtually any practical situation can in fact be modelled mathematically by assuming slightly more restrictive hypotheses than in the Chazan-Miranker model, and this has indeed been done repeatedly in the literature on asynchronous iterations. Examples include iterations for singular systems [1, 2, 13], the flexible communication model of [7, 8, 15], results on monotone convergence [10, 14] and iterations relying on interval arithmetic [11]. However, the linear case (3) with $\rho(|H|) < 1$ has never been reconsidered under such natural, but slightly more restrictive assumptions. We do so in the present paper, and we will find that asynchronous iterations converge for a larger class of matrices $H$ than those with $\rho(|H|) < 1$. We believe that, besides establishing a new convergence theorem on linear asynchronous iterations, the main interest of our result is that it contributes to clarifying the role of the various hypotheses in the mathematical models of asynchronous iterations.

The paper is organized as follows. We introduce our notation and some basic results in section 2. Section 3 presents and discusses the mathematical model for linear asynchronous iterations. This section also reviews the various hypotheses imposed on the model and states known convergence results. The main part of the paper is section 4, where we prove our new convergence theorem. We end with some conclusions in section 5.
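As a concrete illustration of the synchronous baseline, the following Python sketch (our own illustration; the matrix $H$, the vector $c$ and the tolerance are choices made for the demonstration, not taken from the paper) runs the total step method (3) on a small example with $\rho(|H|) < 1$ and checks that the limit solves the fixed point equation (2).

```python
# Total step method x^{k+1} = H x^k + c, cf. (3); all data below is illustrative.

def total_step(H, c, x0, iters=200):
    """Synchronous iteration x <- H x + c."""
    n = len(c)
    x = list(x0)
    for _ in range(iters):
        x = [sum(H[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
    return x

H = [[0.3, -0.2],
     [0.1,  0.4]]          # row sums of |H| equal 0.5, so rho(|H|) <= 0.5 < 1
c = [1.0, 1.0]

x = total_step(H, c, [0.0, 0.0])
# The limit solves the fixed point equation x = H x + c; check the residual.
residual = max(abs(x[i] - (sum(H[i][j] * x[j] for j in range(2)) + c[i]))
               for i in range(2))
print(residual < 1e-10)
```

Since $\||H|\|_\infty = 0.5$ here, the error contracts at least by a factor $1/2$ per sweep, so 200 iterations reach machine precision.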

2 Auxiliary Results

In this section we briefly introduce our notation and state some auxiliary results. For a given matrix $H \in \mathbb{C}^{n \times n}$, its $(i,j)$ entry will usually be denoted by $h_{ij}$, but sometimes we will find it convenient to write $(H)_{ij}$ instead. The matrix $H$ is termed nonnegative, $H \ge 0$, if all its entries are nonnegative. It will be termed positive, $H > 0$, if all entries are positive. We adopt a similar notation and terminology for vectors. The modulus matrix $|H| \in \mathbb{R}^{n \times n}$ of $H$ is defined entrywise through $(|H|)_{ij} = |h_{ij}|$, $i, j = 1, \dots, n$. By $\rho(H)$ we denote the spectral radius of $H$, i.e. the largest modulus of the eigenvalues of $H$. The following result can be found in [16], e.g.

Lemma 1 For any matrix $H \in \mathbb{C}^{n \times n}$ we have $\rho(H) \le \rho(|H|)$.

The non-zero pattern of a matrix $H \in \mathbb{C}^{n \times n}$ defines a directed graph $G(H) = (V, E)$ with vertices $V$ and edges $E$ via
$$V = \{1, \dots, n\}, \quad E = \{(i,j) \mid h_{ij} \ne 0\} \subseteq V \times V.$$
A (directed) path of length $q$ from vertex $i$ to vertex $j$ in $G$ is a finite sequence of vertices $(i = i_0, i_1, \dots, i_{q-1}, i_q = j)$ such that $(i_{l-1}, i_l) \in E$ for $l = 1, \dots, q$. The matrix $H$ is termed irreducible if in $G(H)$ there is a path from any vertex $i$ to any other vertex $j \ne i$.

For a given vertex $i$, we define its (reverse) level sets $L_\nu(i)$ recursively as follows:
$$L_0(i) = \{i\}, \qquad L_\nu(i) = \big\{ j \in V \mid \text{there exists } l \in L_{\nu-1}(i) \text{ such that } (j,l) \in E \big\} \setminus \bigcup_{\mu=0}^{\nu-1} L_\mu(i) \quad \text{for } \nu = 1, \dots, n.$$
Clearly, the level sets $L_\nu(i)$ bin those vertices $j$ for which there is a path from $j$ to $i$ by the length of the shortest such path. A shortest path always has length less than $n$. Consequently, $L_n(i) = \emptyset$ for all $i$, and if $H$ is irreducible, then $\bigcup_{\nu=0}^{n-1} L_\nu(i) = \{1, \dots, n\}$ for all $i$.

A signature matrix is a diagonal matrix all of whose diagonal elements have modulus 1. Similarity transformations with signature matrices play a major role when taking a closer look at when there can be equality in Lemma 1:

Lemma 2 Assume that $H$ is irreducible. Then $\rho(H) = \rho(|H|)$ if and only if there exist a signature matrix $S$ and a complex number $\gamma$ with modulus 1 such that $\gamma S^{-1} H S \ge 0$.

The proof can be found in [4, Th. 2.14], for example. We end this section with a purely technical result which will be used in section 4 for the proof of Theorem 3. Here, as in the sequel, $\arg(a)$ denotes the argument of a complex number $a \ne 0$, and this argument is unique mod $2\pi$.

Lemma 3 Let there be given $n$ sequences of complex numbers $\{a_j^k\}_{k=0}^\infty$, $j = 1, \dots, n$, an additional such sequence $\{b^k\}_{k=0}^\infty$ and $n$ complex numbers $h_j$, $j = 1, \dots, n$, such that
$$b^k = \sum_{j=1}^n h_j a_j^k, \quad k = 0, 1, \dots, \qquad \text{and} \qquad \sum_{j=1}^n |h_j| = 1.$$
Moreover, assume that there exists $c > 0$ such that
$$\lim_{k \to \infty} |a_j^k| = c, \; j = 1, \dots, n, \quad \text{and} \quad \lim_{k \to \infty} |b^k| = c.$$
Let $\alpha_j^k = \arg(a_j^k)$, $\beta^k = \arg(b^k)$ and $\eta_j = \arg(h_j)$. Then for all $j$ for which $h_j \ne 0$ we have
$$\lim_{k \to \infty} \cos\big( \beta^k - \alpha_j^k - \eta_j \big) = 1.$$

Proof. If $h_j \ne 0$ for just one $j$, the result is trivial. So assume that $h_j \ne 0$ for at least two indices $j$. Then $|h_j| < 1$ for all $j$, which will prevent us from dividing by zero in the algebraic manipulations to come. Fix any $j$ for which $h_j \ne 0$. We have
$$|b^k - h_j a_j^k| = \Big| \sum_{l=1, l \ne j}^n h_l a_l^k \Big|. \tag{4}$$
For the left hand part of (4) we get, as for any difference of two complex numbers,
$$|b^k - h_j a_j^k| = \big| |b^k| - |h_j a_j^k| \big| \cdot \sqrt{1 + p_j^k \big( 1 - \cos(\beta^k - \alpha_j^k - \eta_j) \big)} \tag{5}$$
with
$$p_j^k = \frac{2 \, |b^k| \cdot |h_j a_j^k|}{\big( |b^k| - |h_j a_j^k| \big)^2}.$$
Using the triangle inequality on the right hand part of (4), we get
$$|b^k - h_j a_j^k| \le \sum_{l=1, l \ne j}^n |h_l| \cdot |a_l^k|. \tag{6}$$
For the right hand side of (6) we get
$$\lim_{k \to \infty} \sum_{l=1, l \ne j}^n |h_l| \cdot |a_l^k| = \sum_{l=1, l \ne j}^n |h_l| \, c = (1 - |h_j|) \, c.$$
On the other hand, for the right hand side of (5), we have
$$\lim_{k \to \infty} \big| |b^k| - |h_j a_j^k| \big| = (1 - |h_j|) \, c \quad \text{and} \quad \lim_{k \to \infty} p_j^k = 2 |h_j| / (1 - |h_j|)^2 > 0.$$
So the square root in (5) tends to 1, and since $p_j^k$ tends to a positive limit we have $\lim_{k \to \infty} \big( 1 - \cos(\beta^k - \alpha_j^k - \eta_j) \big) = 0$. □

3 Asynchronous linear iterations

Assume that we are given a linear system in the fixed point form (2). Asynchronous iterations for (2) may be regarded as a whole class of iterative methods derived from the total step method (3). One now allows that only certain components of the iterate are updated at a given time step and that more than just the previous iterate may be used in the update process. The precise definition is as follows; see [3, 5].

Definition 1 For $k = 1, 2, \dots$ let $J_k \subseteq \{1, \dots, n\}$ and $(s_1(k), \dots, s_n(k)) \in \mathbb{N}^n$ be such that
$$s_i(k) \le k - 1 \quad \text{for } i = 1, \dots, n, \; k = 1, 2, \dots, \tag{7}$$
$$\lim_{k \to \infty} s_i(k) = +\infty \quad \text{for } i = 1, \dots, n, \tag{8}$$
$$\text{for every } i \in \{1, \dots, n\} \text{ the set } \{k \mid i \in J_k\} \text{ is unbounded.} \tag{9}$$
Then the iteration
$$x_i^{k+1} = \begin{cases} \sum_{j=1}^n h_{ij} x_j^{s_j(k)} + c_i & \text{if } i \in J_k \\[2pt] x_i^k & \text{if } i \notin J_k \end{cases} \tag{10}$$
is termed an asynchronous iteration (with delays $s_i(k)$ and active components $J_k$).
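To make Definition 1 concrete, here is a small simulator (our own illustrative sketch, not code from the paper): one component is updated per time step in round-robin order, the active component always reads its own most recent value, and the remaining components are read with a bounded random delay. The matrix, the delay bound and the tolerance are assumptions made for the demonstration; since $\rho(|H|) < 1$ here, the iterates converge to the fixed point.

```python
import random

def async_iter(H, c, x0, steps=4000, d=3, seed=0):
    """Asynchronous iteration (10) with singleton active components."""
    rng = random.Random(seed)
    n = len(c)
    hist = [list(x0)]                          # hist[k] holds x^k
    for k in range(1, steps + 1):
        i = (k - 1) % n                        # round-robin active component
        x = list(hist[k - 1])

        def delayed(j):
            if j == i:
                return hist[k - 1][j]          # own component: latest value
            s = max(0, k - 1 - rng.randint(0, d - 1))
            return hist[s][j]                  # bounded delay for the others

        x[i] = sum(H[i][j] * delayed(j) for j in range(n)) + c[i]
        hist.append(x)
    return hist[-1]

H = [[0.3, -0.2],
     [0.1,  0.4]]                              # row sums of |H| are 0.5 < 1
c = [1.0, 1.0]
x = async_iter(H, c, [5.0, -5.0])
residual = max(abs(x[i] - (sum(H[i][j] * x[j] for j in range(2)) + c[i]))
               for i in range(2))
print(residual < 1e-8)
```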

Asynchronous iterations arise naturally on parallel computers if one eliminates synchronization points in order to avoid idle times on the individual processors. Detailed discussions can be found in the surveys [5, 12]. Typically, the components of the vector to compute are then distributed in (non-overlapping) blocks to the processors. The iteration counter $k$ in (10) then has to be interpreted as counting every step at which any of the processors has finished its next update. The delays $s_i(k)$ account for the fact that processors will usually need different times to accomplish their individual updates and that the most recent data may not be available due to communication delays. The active components $J_k$ represent the block of components assigned to the processor which does the $k$-th update. In such a situation, the following hypotheses, more restrictive than (7)-(9), will be satisfied:

$$s_i(k) = k - 1 \quad \text{if } i \in J_k, \tag{11}$$
$$\text{there exists } d \in \mathbb{N} \text{ such that } k - d \le s_i(k) \le k - 1 \text{ for all } i \text{ and } k, \tag{12}$$
$$\text{there exists } r \in \mathbb{N} \text{ such that } \bigcup_{l=k}^{k+r} J_l = \{1, \dots, n\} \text{ for all } k. \tag{13}$$
Here, (11) holds because a processor always has the latest information on 'its' block available, and (12) as well as (13) simply reflect the fact that each communication and each update on the processors will neither become arbitrarily slow nor arbitrarily fast. We refer to [5, 12] for additional details.

Under the assumptions (7), (8) and (9), the convergence behaviour of the asynchronous iteration (10) has been studied in [3, 6, 18]. In particular, the first and second part of the following theorem date back to [3] and [6], respectively.

Theorem 1 Assume that (7) to (9) are fulfilled.

(i) If $\rho(|H|) < 1$, the asynchronous iteration (10) converges to $x^*$, the unique fixed point of (2), i.e. for every starting vector $x^0$ we have $\lim_{k \to \infty} x^k = x^*$.

(ii) If $\rho(|H|) \ge 1$ and if (2) has at least one fixed point $x^*$, then there exist a starting vector $x^0$, a sequence of delays $s_i(k)$ and a sequence of active components $J_k$, satisfying (7) to (9) and even (12) and (13), such that the iterates $x^k$ of (10) do not converge to $x^*$.

The proof for part (ii), as given in [6] (see also [9]), constructs appropriate delays and active components as well as a starting vector such that the iterates do not converge to $x^*$. This construction, while satisfying (12) and (13), crucially relies on not fulfilling (11). Theorem 1 is complemented by the following result due to Lubachevsky and Mitra [13] in the context of asynchronous iterations for Markov chains.

Theorem 2 Assume $H \ge 0$ is irreducible, $\rho(H) = 1$, and let $c = 0$ in (10). Moreover, let the starting vector $x^0$ be nonnegative and let there exist at least one index $i$ such that $h_{ii} > 0$, $x_i^0 > 0$ and $s_i(k) = k - 1$ whenever $i \in J_k$. Finally, assume that the asynchronous iteration (10) satisfies the more restrictive hypotheses (12) and (13). Then
$$\lim_{k \to \infty} x^k = x^*,$$
where $x^* > 0$ is a positive vector satisfying $Hx^* = x^*$.
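Theorem 2 can be illustrated numerically; in the following sketch (our own: the matrix, the round-robin schedule, the delay bound and the tolerance are all illustrative choices), a nonnegative irreducible $H$ with $\rho(H) = 1$ and $c = 0$ is iterated asynchronously, and the iterates approach a positive eigenvector of $H$.

```python
import random

# Asynchronous iteration for H >= 0, irreducible, rho(H) = 1, c = 0.
# The active component always uses its own latest value, the other one
# is read with a bounded random delay.

H = [[0.5, 0.5],
     [0.5, 0.5]]              # rho(H) = 1, h_11 > 0, irreducible

rng = random.Random(1)
d = 3
hist = [[1.0, 0.0]]           # nonnegative start with x^0_1 > 0
for k in range(1, 2001):
    i = (k - 1) % 2           # round-robin active component
    x = list(hist[k - 1])

    def val(j):
        if j == i:
            return hist[k - 1][j]                        # latest own value
        return hist[max(0, k - 1 - rng.randint(0, d - 1))][j]

    x[i] = sum(H[i][j] * val(j) for j in range(2))
    hist.append(x)

x = hist[-1]
positive = all(xi > 0 for xi in x)
eig_residual = max(abs(sum(H[i][j] * x[j] for j in range(2)) - x[i])
                   for i in range(2))
print(positive and eig_residual < 1e-8)
```

As predicted by the theorem, the limit is a strictly positive multiple of the eigenvector $(1, 1)^T$; which multiple depends on the delays and the schedule.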

As a first comment, let us note that [13] actually deals with an asynchronous model which looks more general than ours, allowing the delays $s_j(k)$ to also depend on the index $i \in J_k$. However, this is only a formal generalization, since one can assume all $J_k$ to be singletons, as can be seen as follows: step $k$ in (10) can alternatively be rewritten as a sequence of $|J_k|$ new steps, where in each new step we have just one active component. The delays then have to be redefined, but the new delays together with the new sets (singletons) of active components will still satisfy (7) to (9) if the old ones did; and similarly for (11) to (13) (with redefinition of $d$ and $r$).

As another comment, note that Theorem 2 is by no means in conflict with part (ii) of Theorem 1: Under the assumptions of Theorem 2, the equation $x = Hx$ has a one-dimensional subspace as its solution set, and the theorem just says that the asynchronous iteration converges to one (non-zero) point of that subspace. As was shown in [13], this point will actually depend on the delays and the active components of the asynchronous iteration. Note also that Theorem 2 assumes a particular starting vector and (11) to be fulfilled for at least one particular component $i$. For further discussion, see also [17].

Theorems 1 and 2 set the stage for the work to be presented in this paper. Since we want to prove convergence results for the case $\rho(|H|) = 1$, we certainly should look at matrices which are not nonnegative (Theorem 2), and we will have to assume more restrictive hypotheses than (7)-(9) (Theorem 1). Our main result in the next section will identify a class of such matrices for which (11)-(13) will turn out to be sufficient for convergence. This demonstrates that condition (11) crucially affects the convergence of asynchronous iterations, whereas replacing (7)-(9) by (12) and (13) does not (Theorem 1 (ii)).

Let us end this section by introducing another useful piece of notation. For a given time step $k$ and a component $i$, we will use $r_i(k)$ to denote the latest time step before $k$ at which the $i$-th component was modified, i.e.
$$r_i(k) = \max\{ l \mid i \in J_l \text{ and } l \le k \}. \tag{14}$$
We thus have
$$x_i^k = x_i^{k-1} = \dots = x_i^{r_i(k)} = \sum_{j=1}^n h_{ij} x_j^{s_j(r_i(k))} + c_i,$$
and, if (13) is satisfied,
$$k - r_i(k) \le r \quad \text{for all } i. \tag{15}$$

4 A new convergence result

We start with a note on the scaling invariance of asynchronous iterations. Clearly, the synchronous total step method (3) is invariant under similarity transformations. By this we mean that if we take any non-singular matrix $T \in \mathbb{R}^{n \times n}$, the iterates $x^k$ of (3) are related to the iterates $\hat{x}^k$ of
$$\hat{x}^k = (T^{-1} H T) \hat{x}^{k-1} + T^{-1} c, \quad k = 1, 2, \dots$$
through $T \hat{x}^k = x^k$, $k = 0, 1, \dots$. An analogous property holds for asynchronous iterations only if we assume $T$ to be diagonal. In the convergence analysis to come, we can therefore without loss of generality assume an appropriate diagonal scaling $H \to D^{-1} H D$ with a non-singular diagonal matrix $D$. We will use such a diagonal scaling based on the following observation: If $H$ is irreducible and $\rho(|H|) = 1$, the Perron-Frobenius theorem (see [4, Th. 1.4], e.g.) shows that there exists a positive vector $u \in \mathbb{R}^n$ such that $|H| u = u$. This means that we have
$$\frac{1}{u_i} \sum_{j=1}^n |h_{ij}| \, u_j = 1, \quad i = 1, \dots, n.$$
Taking $D = \operatorname{diag}(u)$ and scaling $H \to D^{-1} H D$, we may therefore assume that
$$\sum_{j=1}^n |h_{ij}| = 1, \quad i = 1, \dots, n, \tag{16}$$
while still $\rho(|H|) = 1$.
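Numerically, this scaling can be sketched as follows (our own illustration: the example matrix, which happens to fit the setting of Theorem 3 below, and the use of a plain power iteration for the Perron vector are assumptions made for the demonstration):

```python
# Compute the Perron vector u of |H| by power iteration and scale
# H -> D^{-1} H D with D = diag(u); the scaled matrix then has unit
# row sums in modulus, i.e. it satisfies (16).

def perron_vector(A, iters=200):
    """Power iteration for the positive eigenvector of an irreducible,
    primitive matrix A >= 0 (normalized so that max(u) = 1)."""
    n = len(A)
    u = [1.0] * n
    for _ in range(iters):
        v = [sum(A[i][j] * u[j] for j in range(n)) for i in range(n)]
        m = max(v)
        u = [vi / m for vi in v]
    return u

H = [[0.5, -1.0],
     [0.25, 0.5]]                            # rho(H) < rho(|H|) = 1
absH = [[abs(x) for x in row] for row in H]
u = perron_vector(absH)                      # here u = (1, 0.5)
scaled = [[H[i][j] * u[j] / u[i] for j in range(2)] for i in range(2)]
row_sums = [sum(abs(x) for x in row) for row in scaled]
print(row_sums)    # -> [1.0, 1.0]
```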

Theorem 3 Assume that the matrix $H$ is irreducible, that $\rho(H) < \rho(|H|) = 1$ and, without loss of generality, that $H$ satisfies (16). In addition, let all diagonal elements $h_{ii}$ of $H$ be real and positive. Moreover, assume that the asynchronous iteration (10) satisfies (11) to (13). Then the iterates $x^k$ converge to $x^*$, the unique fixed point of $x = Hx + c$.

Proof. Clearly, since $\rho(H) < 1$ the fixed point $x^*$ exists and is unique. We will show that the error vectors $e^k = x^k - x^*$ converge to 0. The errors $e^k$ satisfy
$$e_i^{k+1} = \begin{cases} \sum_{j=1}^n h_{ij} e_j^{s_j(k)} & \text{if } i \in J_k \\[2pt] e_i^k & \text{if } i \notin J_k. \end{cases} \tag{17}$$
Let us first define quantities $\delta_k$ as
$$\delta_k = \max_{i=0}^{d-1} \| e^{k-i} \|_\infty, \quad k = 0, 1, \dots,$$
where we set $e^{-d+1} = e^{-d+2} = \dots = e^0$. Note that for given $k > 0$, we have
$$\| e^k \|_\infty \le \delta_{k-1}. \tag{18}$$
Indeed, any component $e_i^k$ of $e^k$ either satisfies $e_i^k = e_i^{k-1}$, which implies $|e_i^k| = |e_i^{k-1}| \le \delta_{k-1}$, or it satisfies $e_i^k = \sum_{j=1}^n h_{ij} e_j^{s_j(k)}$ with $k - d \le s_j(k) \le k - 1$. This yields
$$|e_i^k| \le \sum_{j=1}^n |h_{ij}| \, \big| e_j^{s_j(k)} \big| \le \sum_{j=1}^n |h_{ij}| \, \delta_{k-1} = \delta_{k-1},$$
thus proving (18). By definition of $\delta_k$ this gives
$$\delta_k \le \delta_{k-1} \quad \text{for } k = 1, 2, \dots,$$
which shows that the sequence $\{\delta_k\}$ is convergent,
$$\lim_{k \to \infty} \delta_k = c \ge 0. \tag{19}$$

Our ultimate goal is to show $c = 0$. To this purpose, we from now on assume that, on the contrary, $c > 0$; the whole rest of this proof is concerned with obtaining a contradiction. This will be achieved by establishing the following four major conclusions:

(i) For all $i$ we have $\lim_{k \to \infty} |e_i^k| = c$.

(ii) Denoting $\alpha_i^k = \arg(e_i^k)$ and $\eta_{ij} = \arg(h_{ij})$, we have
$$\lim_{k \to \infty} \cos\big( \alpha_i^k - \alpha_j^{s_j(r_i(k))} - \eta_{ij} \big) = 1 \quad \text{for all } i \text{ and } j \text{ for which } h_{ij} \ne 0.$$
(See (14) for the definition of $r_i(k)$.)

(iii) For all $i$ and $j$ the limit $\lim_{k \to \infty} e_i^k / e_j^k$ exists.

(iv) There exists a signature matrix $S$ such that $S^{-1} H S \ge 0$.

Note that by Lemma 2, assertion (iv) is a contradiction to $\rho(H) < \rho(|H|)$, showing that our initial assumption $c > 0$ was wrong, thus proving the theorem.

To show (i), let us first introduce the notation $\varepsilon$ for the modulus of the smallest non-zero entry of $H = (h_{ij})$:
$$\varepsilon = \min\{ |h_{ij}| \mid h_{ij} \ne 0 \}.$$
Since $\lim_{k \to \infty} \delta_k = c$, we have $\limsup_{k \to \infty} |e_i^k| \le c$ for all $i$. We therefore must only show that $\liminf_{k \to \infty} |e_i^k| \ge c$. Assume that, on the contrary, this is not the case for some component $i$. This means that there exists $\sigma > 0$ such that
$$|e_i^k| \le c - \sigma \quad \text{for infinitely many } k. \tag{20}$$
The idea is to show that this implies $\delta_k < c$ for some $k$, which is a contradiction to (19), since the $\delta_k$ converge monotonically from above. Unfortunately, establishing this is rather technical. To start, let us fix a constant $\gamma$ satisfying
$$0 < \gamma < \frac{\varepsilon^{(r+d+1)n}}{1 - \varepsilon^{(r+d+1)n}} \, \sigma,$$
where $d$ and $r$ are defined in (12) and (13). Note first that then
$$\gamma < \frac{\varepsilon^\nu}{1 - \varepsilon^\nu} \, \sigma \quad \text{for } \nu = 1, \dots, (r+d+1)n. \tag{21}$$
By (19) there exists $k(\gamma)$ such that
$$|e_j^k| \le c + \gamma \quad \text{for all } j = 1, \dots, n \text{ and all } k \ge k(\gamma). \tag{22}$$
Let $k_0 \ge k(\gamma) + d$ be such that $|e_i^{k_0}| \le c - \sigma$. Such $k_0$ exists by virtue of (20). We will show the following: If $m$ is a component belonging to the level set $L_\nu(i)$, then
$$|e_m^k| \le c - \big( \varepsilon^{(r+d)n+\nu} \sigma - (1 - \varepsilon^{(r+d)n+\nu}) \gamma \big) \quad \text{for } k \in \{k_0 + (r+d)\nu, \dots, k_0 + (r+d)n\}. \tag{23}$$
We first prove (23) for $\nu = 0$, i.e. for $m = i$. At time steps $k$ at which $i$ is not updated we have $e_i^k = e_i^{k-1}$, while if $i$ is updated, then by (11), (16) and (22)
$$|e_i^k| \le |h_{ii}| \, |e_i^{k-1}| + (1 - |h_{ii}|)(c + \gamma). \tag{24}$$
Since $|h_{ii}| \ge \varepsilon$, a bound $|e_i^{k-1}| \le c - (\varepsilon^\mu \sigma - (1 - \varepsilon^\mu)\gamma)$ thus turns into $|e_i^k| \le c - (\varepsilon^{\mu+1} \sigma - (1 - \varepsilon^{\mu+1})\gamma)$. Starting from $|e_i^{k_0}| \le c - \sigma$, observing that the time window in (23) consists of $(r+d)n$ steps and that the quantity $\varepsilon^\mu \sigma - (1 - \varepsilon^\mu)\gamma$ decreases as $\mu$ grows, a simple induction on $k$ gives (23) for $\nu = 0$.

As a next step, consider the level 1 set $L_1(i)$ of $i$, i.e. all components $m$ for which $h_{mi} \ne 0$. Whenever such a component $m$ is updated at a time step $k \in \{k_0 + (r+d), \dots, k_0 + (r+d)n\}$ (it is actually updated at least $n - 1$ times in this time interval), we get
$$|e_m^k| = \Big| h_{mi} e_i^{s_i(k)} + \sum_{j=1, j \ne i}^n h_{mj} e_j^{s_j(k)} \Big| \le |h_{mi}| \, \big| e_i^{s_i(k)} \big| + (1 - |h_{mi}|)(c + \gamma).$$
Here, $s_i(k) \ge k - d \ge k_0$, so that we can use (23) for $\nu = 0$ to get
$$\begin{aligned} |e_m^k| &\le |h_{mi}| \, \big( c - ( \varepsilon^{(r+d)n} \sigma - (1 - \varepsilon^{(r+d)n}) \gamma ) \big) + (1 - |h_{mi}|)(c + \gamma) \\ &= c - |h_{mi}| \, \big( \varepsilon^{(r+d)n} \sigma - (1 - \varepsilon^{(r+d)n}) \gamma \big) + (1 - |h_{mi}|) \, \gamma \\ &\le c - \varepsilon \, \big( \varepsilon^{(r+d)n} \sigma - (1 - \varepsilon^{(r+d)n}) \gamma \big) + (1 - \varepsilon) \, \gamma \\ &= c - \big( \varepsilon^{(r+d)n+1} \sigma - (1 - \varepsilon^{(r+d)n+1}) \gamma \big). \end{aligned}$$
This proves (23) for the level set $L_1(i)$. It should now be apparent that we can use induction on the level sets $L_\nu(i)$ to establish (23) for the remaining level sets of $i$. Since the proof proceeds in a completely analogous manner as for $\nu = 1$, we refrain from reproducing the details here. Since the level sets of $i$ exhaust $\{1, \dots, n\}$ and since, by (21), the quantity $\varepsilon^{(r+d+1)n} \sigma - (1 - \varepsilon^{(r+d+1)n}) \gamma$ is positive, (23) yields $\delta_{k_0 + (r+d)n} < c$, the announced contradiction to (19), and we consider (i) as settled.

The proof for part (ii) is much shorter. Because of (i) (with $c > 0$), we can assume without loss of generality that all $e_i^k$ are nonzero. In particular, all arguments $\alpha_i^k$ are then well defined. Given (i), for any fixed $i$ we can now apply Lemma 3 (with $b^k = e_i^k$ and $a_j^k = e_j^{s_j(r_i(k))}$) and we obtain (ii).

We now turn to prove (iii). We start with an important observation arising from (ii) in the special case $j = i$, where we have $s_i(r_i(k)) = r_i(k) - 1$. Since we assumed $h_{ii} > 0$ we have $\eta_{ii} = 0$, so that (ii) reads $\lim_{k \to \infty} \big( \alpha_i^k - \alpha_i^{r_i(k)-1} \big) = 0 \bmod 2\pi$. For every $l$ with $r_i(k) < l \le k$ we trivially have $\alpha_i^l = \alpha_i^{l-1} \bmod 2\pi$, since $e_i^l = e_i^{l-1}$. Altogether we therefore have
$$\lim_{k \to \infty} \big( \alpha_i^k - \alpha_i^{k-1} \big) = 0 \bmod 2\pi \quad \text{for all } i. \tag{25}$$
Now, to establish (iii), because of (i) we only have to show that $\lim_{k \to \infty} \big( \alpha_i^k - \alpha_j^k \big)$ exists mod $2\pi$ for all $i$ and $j$. We will first show that in fact
$$\lim_{k \to \infty} \cos\big( \alpha_i^k - \alpha_j^k - \eta_{ij} \big) = 1 \quad \text{whenever } h_{ij} \ne 0. \tag{26}$$
To this purpose, let us write
$$\alpha_i^k - \alpha_j^k = \Big( \alpha_i^k - \alpha_j^{s_j(r_i(k))} \Big) + \sum_{l = s_j(r_i(k))+1}^{k} \big( \alpha_j^{l-1} - \alpha_j^l \big).$$
Because of (12), (13) and (15) the sum contains at most $r + d$ summands. By (25) each of the summands $\alpha_j^{l-1} - \alpha_j^l$ approaches $0 \bmod 2\pi$ as $k \to \infty$. Therefore, (26) follows by using (ii) for the first term $\alpha_i^k - \alpha_j^{s_j(r_i(k))}$, and we have proved (iii) for all $i$ and $j$ with $h_{ij} \ne 0$.

If $h_{ij} = 0$, let $(i = i_0, i_1, \dots, i_{q-1}, i_q = j)$ be a path connecting $i$ to $j$ in the directed graph of the irreducible matrix $H$. Then
$$\frac{e_i^k}{e_j^k} = \frac{e_{i_0}^k}{e_{i_1}^k} \cdot \frac{e_{i_1}^k}{e_{i_2}^k} \cdots \frac{e_{i_{q-1}}^k}{e_{i_q}^k},$$
and each of the right hand factors converges, showing that $\lim_{k \to \infty} e_i^k / e_j^k$ again exists.

It remains to show (iv). Note first that by (26) we have
$$\lim_{k \to \infty} \arg\Big( h_{ij} \, \frac{e_j^k}{e_i^k} \Big) = 0 \bmod 2\pi \quad \text{whenever } h_{ij} \ne 0. \tag{27}$$
Let $\mathcal{T}$ denote the set of all the matrices $T^k = (1/c) \cdot \operatorname{diag}(e^k)$, $k = 0, 1, \dots$. By (i), $\mathcal{T}$ is bounded, so that it has at least one accumulation point $S$. Each such matrix $S$ is a signature matrix, again by (i). By (27), for such $S$ we have $S^{-1} H S \ge 0$. This ends the proof of (iv) and of the whole theorem. □

5 Conclusions

Our new result, Theorem 3, shows that asynchronous iterations converge for a larger class of matrices if, in addition to (12) and (13), the natural condition (11) is also met. This last condition is crucial in this context, since the construction of [6], see Theorem 1 (ii), shows that without it we can always construct delays and active components, still satisfying (12) and (13), such that there is divergence. We think that the major merit of our new result is therefore to have contributed to clarifying the role of the various typical conditions present in practical and mathematical models for asynchronous iterations.

We wish to end this paper with a simple example which shows that the assumption of positive diagonal elements of $H$ (as present in Theorem 3) is definitely necessary in order to get convergence. Let
$$H = \begin{pmatrix} -1/2 & -1/2 \\ 1/2 & -1/2 \end{pmatrix},$$
which has negative diagonal elements and is irreducible with $\rho(H) = 1/\sqrt{2} < 1 = \rho(|H|)$. If we take $e^0 = (1, 1)^T$ and $J_k = \{(k \bmod 2) + 1\}$, $s_i(k) = k - 1$ for all $i$ and $k$, then a simple calculation shows that the errors $e^k$ go through a cycle of length 4, where
$$e^0 = e^4 = e^8 = \dots = (1, 1)^T, \quad e^1 = e^5 = e^9 = \dots = (-1, 1)^T, \quad e^2 = e^6 = e^{10} = \dots = (-1, -1)^T, \quad e^3 = e^7 = e^{11} = \dots = (1, -1)^T.$$
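The length-4 cycle can be verified directly; the short sketch below (our own check, using 0-based component indices) performs the alternating single-component updates with $s_i(k) = k - 1$:

```python
# Error recursion for the counterexample: at step k the active component
# i = k mod 2 is recomputed from the previous error vector, the other
# component is left unchanged.

H = [[-0.5, -0.5],
     [ 0.5, -0.5]]

e = [(1.0, 1.0)]                     # e^0
for k in range(8):
    i = k % 2                        # alternating active component
    prev = e[-1]
    new = list(prev)
    new[i] = H[i][0] * prev[0] + H[i][1] * prev[1]
    e.append(tuple(new))

print(e[0] == e[4] == e[8])          # -> True: a cycle of length 4
```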

References

[1] Jacques Bahi. Algorithmes parallèles asynchrones pour des systèmes singuliers. C. R. Acad. Sci., Paris, Sér. I, Math., 326:1421–1425, 1998.
[2] Jacques Bahi. Asynchronous iterative algorithms for nonexpansive linear systems. J. Parallel Distrib. Comput., 60:92–112, 2000.
[3] Gérard M. Baudet. Asynchronous iterative methods for multiprocessors. J. ACM, 25:226–244, 1978.
[4] Abraham Berman and Robert J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia, 2nd edition, 1994.
[5] Dimitri P. Bertsekas and John N. Tsitsiklis. Parallel and Distributed Computation. Prentice Hall, Englewood Cliffs, New Jersey, 1989.
[6] Dan Chazan and Willard L. Miranker. Chaotic relaxation. Linear Algebra Appl., 2:199–222, 1969.
[7] Didier El Baz, Pierre Spiteri, and Jean-Claude Miellou. Asynchronous multisplitting methods with flexible communication for pseudolinear p.d.e. In Proceedings of the Eighth International Colloquium on Differential Equations, pages 145–152, Plovdiv, Bulgaria, August 18-23, 1997. VSP International Science Publishers, Utrecht.
[8] Didier El Baz, Pierre Spiteri, Jean-Claude Miellou, and Didier Gazen. Asynchronous iterative algorithms with flexible communication for nonlinear network flow problems. J. Parallel Distrib. Comput., 38:1–15, 1996.
[9] Andreas Frommer. Lösung linearer Gleichungssysteme auf Parallelrechnern. Vieweg Verlag, Wiesbaden, 1990.
[10] Andreas Frommer. On asynchronous iterations in partially ordered spaces. Numer. Funct. Anal. Optimization, 12:315–325, 1991.
[11] Andreas Frommer and Hartmut Schwandt. Asynchronous parallel methods for enclosing solutions of nonlinear equations. J. Comp. Appl. Math., 60:47–62, 1995.
[12] Andreas Frommer and Daniel B. Szyld. On asynchronous iterations. J. Comp. Appl. Math., 123:201–216, 2000.
[13] Boris Lubachevsky and Debasis Mitra. A chaotic asynchronous algorithm for computing the fixed point of a nonnegative matrix of unit spectral radius. J. ACM, 33:130–150, 1986.
[14] Jean-Claude Miellou. Itérations chaotiques à retards; études de la convergence dans le cas d'espaces partiellement ordonnés. C. R. Acad. Sci., Paris, Sér. I, Math., 280:233–236, 1975.
[15] Jean-Claude Miellou, Didier El Baz, and Pierre Spiteri. A new class of asynchronous iterative algorithms with order intervals. Math. Comput., 67:237–255, 1998.
[16] James M. Ortega and Werner C. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York, 1970.
[17] Daniel B. Szyld. The mystery of asynchronous iterations convergence when the spectral radius is one. Research Report 98-102, Department of Mathematics, Temple University, http://www.math.temple.edu/~szyld/papers.html, 1998.
[18] Aydın Üresin and Michel Dubois. Sufficient conditions for the convergence of asynchronous iterations. Parallel Comput., 10:83–92, 1989.
[19] Richard S. Varga. Matrix Iterative Analysis. Prentice Hall, Englewood Cliffs, 1962.
