Some Polytime Algorithms for Shortest Paths Approximations in 3-Dimensional Space

D. Burago (1)

Department of Mathematics, Penn State University, USA

D. Grigoriev (2)

IMR, University Rennes 1 Rennes, France

A. Slissenko (3)

Department of Informatics, University Paris-12, France

Version of October 4, 2001

Abstract

We consider two 3-dimensional situations in which a polytime algorithm for approximating a shortest path can be constructed. The main part of the paper treats a well-known problem of constructing a shortest path touching lines in R3: given a list of straight lines L = (L1, ..., Ln) in 3-dimensional space and two points s and t, find a shortest path that, starting from s, touches the lines Li in the given order and ends at t. We remark that such a shortest path is unique. We show that it can be length-position ε-approximated (i.e. both its length and its position can be found approximately) in time (Rn/(d̃ẽ))^O(1) + O(n² log log(1/ε)), where d̃ is the minimal distance between consecutive lines of L, ẽ is the minimum of the sines of the angles between consecutive lines, and R is the radius of a ball where the initial approximation can be placed (such a radius can easily be computed from the initial data). As a computational model we take the real RAM extended by square and cubic root extraction. The problem of constructing a shortest path touching lines has been known for a long time and was conjectured to be hard. The existing methods for approximating shortest paths, based on adding Steiner points that form a grid and subsequently applying Dijkstra's algorithm to find the shortest path in the grid, provide complexity bounds that depend polynomially on 1/ε, while our algorithm for the problem under consideration has complexity polynomial in log log(1/ε) for sufficiently small ε. Our algorithm is motivated by the observation that the shortest path in question is a geodesic in a certain length space of nonpositive curvature (in the sense of A. D. Alexandrov), and it relies on (elementary) theory of CAT(0)-spaces.

In the second part of the paper we make a remark concerning simple grid approximations. We assume that a parameter a > 0 describing the separability of obstacles is given and the

(1) Address: Dept. of Comput. Sci., Penn State University, University Park, PA 16802, USA.

E-mail: [email protected]. Partially supported by an Alfred P. Sloan Fellowship and NSF Grant DMS-9803129.

(2) Address: IMR, Université Rennes-1, Beaulieu 35042, Rennes, France.

E-mail: [email protected]. Member of the St. Petersburg Steklov Mathematical Institute, Russian Academy of Sciences, St. Petersburg, Russia.

(3) Address: Dept. of Informatics, University Paris-12, 61 Av. du Gén. de Gaulle, 94010 Créteil, France. E-mail: [email protected]. Member of the St. Petersburg Institute for Informatics, Russian Academy of Sciences, St. Petersburg, Russia.


part of a grid with mesh a outside the obstacles is built (for semi-algebraic obstacles all these pre-calculations are polytime). We show that there is an algorithm of time complexity O((1/a)^6) which, given a-separated obstacles in a unit cube, finds a path (between given vertices s and t of the grid) whose length is bounded from above by (84ρ + 96a), where ρ is the length of a shortest path. On the other hand, as we show by an example, one cannot approximate the length of a shortest path better than 7ρ if using only grid polygons (constructed only from grid edges). For semi-algebraic obstacles our computational model is bitwise. For a general type of obstacles the model is bitwise modulo constructing the part of the grid admissible for our paths. Observe that the existing methods for approximating shortest paths are not directly applicable to semi-algebraic obstacles, since they usually place the Steiner points forming a grid on the edges of polyhedral obstacles.

1 Introduction

We consider two 3-dimensional situations in which a polytime algorithm for approximating a shortest path can be constructed. The main part of the paper concerns a known problem of constructing a shortest path touching lines in R3 in a specified order: given a list of straight lines L = (L1, ..., Ln) in 3-dimensional space and two points s and t, find a shortest path that starts from s, touches the lines Li in the given order and ends at t. We will call this problem the skew lines problem (cf. [Pap85]). The second situation is, in a way, simpler: we look for a length approximation of a shortest path (i.e. for a path whose length, but not necessarily position, is close to the length of a shortest path) amidst obstacles that are separated.

The general problem of constructing a shortest path, or even a `sufficiently good approximation' of such a path, is well known. It was shown to be NP-hard even for convex polyhedral obstacles [CR87] (by "polyhedral obstacles" we mean unions of polyhedra). The skew lines problem in R3 is also well known. It is mentioned in [Pap85] as, presumably, representing the basic difficulties in constructing shortest paths in 3-dimensional Euclidean space. The paper [Pap85] states: "For example, there is no efficient algorithm known for finding the shortest path touching several such lines in a specified order", where "such lines" means "skew lines in three-dimensional space". The same problem of finding a shortest path touching straight lines was mentioned in the survey [SS90] as presumably difficult, though a possibility of using numerical methods was vaguely suggested.

In the problem of approximating shortest paths we distinguish two types of approximation: length approximation and length-position approximation. Length approximation means that, given an ε, we look for a path whose length is ε-close to the length of a shortest path (ε-close may mean either additive or multiplicative error).
Position approximation presumes that the path we wish to construct is in a given neighborhood of a shortest path. An ε-neighborhood of a path Π is the union of balls of radius ε centered at points of Π. To construct a length-position approximation means that, given an ε, we look for a path that is in an ε-neighborhood of a shortest path and whose length is ε-close to the length of a shortest path. In [CR87] one can find the following result, which can be related to the complexity of position approximation: for the case of polyhedral obstacles, finding the sequence of edges touched by a shortest path is NP-hard. In the same article [CR87] it is proven that for polyhedral obstacles with description size N, determining O(√N) bits of the length of a shortest path is also NP-hard. Concerning weaker approximations, for the case of polyhedral obstacles [Pap85] describes an

algorithm finding a path whose length is at most (1+ε) times the length of a shortest path, and whose running time is polynomial in the size of the input data and in 1/ε. Considerable gaps in this paper were fixed in [CSY94], and the complexity of their algorithm is roughly O(n⁴N/ε), where n is the total number of edges of the polyhedra. Further, another method was exhibited in [C87] with a complexity bound depending on n and on ε as O(n⁴/ε²). Another algorithm, with complexity bound O(n⁶/ε⁴), is designed in [H99]; in addition, it constructs, for a fixed source s, a preprocessed data structure allowing one subsequently to compute an approximation for any target t within the complexity bound O(log(n/ε)).

For the easier problem concerning the shortest path on a given polyhedral surface one can find the exact shortest path. First, an algorithm with complexity O(n³ log n) was exhibited in [SS86] for the case of a convex surface; thereupon it was generalized to arbitrary polyhedral surfaces in [MMP87] with complexity O(n² log n), and finally it was improved in [CH90] to complexity O(n²). To improve on the quadratic complexity, several algorithms giving ε-approximations of a shortest path were produced: in [AHSV97] with complexity bound O(n + 1/ε³) for the case of a convex surface, then in [ALMS98] with complexity roughly O(n/ε) for the case of a simple polyhedral surface. In [H99], for a fixed s, a preprocessed data structure is constructed, with complexity O(n/ε³) in the case of a convex surface and with complexity O(n² + n/ε) in the case of an arbitrary polyhedral surface, respectively, which allows one, for any target t, to output an approximation within complexity O(log(n/ε)).
All the approximation results mentioned use the approach of placing Steiner points (on the edges of the polyhedral obstacles) to form a grid and then finding the shortest path in this grid (besides, rounding in neighbourhoods of the vertices) as a shortest path in a graph, applying Dijkstra's algorithm [AHU76]. Since the number of the placed Steiner points depends polynomially on n and on 1/ε, the complexity of this approach must depend on n and on 1/ε as well. In the present paper we study the question whether it is possible in some situations to improve the complexity significantly: namely, when is it possible to achieve complexity polynomial in log log(1/ε), or to get rid of the dependency on n altogether (in order to proceed to semi-algebraic obstacles rather than the commonly considered polyhedral ones)? We also consider approximating shortest paths in R3. Even in simple 2-dimensional situations, exact constructions of shortest paths involve substantial extensions of the computational model by operations like square root extraction or more powerful ones, see e.g. [HKSS94, GS99, GS98]. Note that a shortest path in R3 between two rational points, even with one cylindrical obstacle, may have transcendental length and necessitate using transcendental points in its construction [HKSS93].

The algorithm we propose for length-position approximation of the shortest path touching lines is based on methods of steepest descent. Each iteration of such a method involves measuring distances between points and other algebraic but not rational operations. If one uses a bitwise computational model, the error analysis becomes difficult and demands considerable work that we have not done. So to give a solution to the skew lines problem we exploit here an "extended" computational model over the reals, which we make precise below.
In this paper we present two results. The first one treats the skew lines problem, for which we give a length-position approximation. The second result is a remark on the complexity of a length approximation by means of just a path in a grid (unlike the construction of [Pap85, CSY94]) amidst separated semi-algebraic obstacles.

For our main result, concerning a length-position approximation of the shortest path touching lines in R3, our computational model is real RAMs [BSS89] with square and cubic root

extraction. This model extends (by also allowing the extraction of cubic roots) the common model used (implicitly) in [C87, SS86, MMP87, CH90, AHSV97, ALMS98, H99], which admits rational operations and the extraction of square roots. We mention also that in [Pap85, CSY94] a bitwise model was used, which takes into account the bit complexity.

We start with a remark that the shortest path Π we seek is unique. We show that Π can be length-position ε-approximated in time (Rn/(d̃ẽ))^O(1) + O(n² log log(1/ε)), where d̃ is the minimal distance between consecutive lines of L, ẽ is the minimum of the sines of the angles between consecutive lines, and R is the radius of a ball where the initial approximation can be placed. Such a satisfactory radius can easily be computed from the initial data; it is bounded via R ∈ [√n |Π|, 2n^(3/2) |Π|], where |Π| is the length of Π (estimate (3) in section 2). Observe that when ẽ = 0 or d̃ = 0 it can happen that the gradient descent and Newton's method, which are invoked in our algorithm, do not both converge. (There are considerations how to overcome these degenerations, but they need another, geometrically more `invariant' approach.)

The algorithm is based on the following geometric idea, which shows that there is a unique minimum for the problem in question, and that it is a strict minimum (the latter is quantified by means of a convexity estimate). Given a sequence of lines, one can consider a sequence of copies of Euclidean space and glue them into a "chain" by attaching "neighboring" spaces along the corresponding line. The resulting (singular) space happens to be of nonpositive curvature. Now a shortest path we want to construct is a geodesic in this new space, and this immediately implies its uniqueness (by the Cartan-Hadamard Theorem). Furthermore, convexity comparisons for distance functions in nonpositively curved spaces allow us to estimate the rate of convergence of gradient descent methods.
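To illustrate the convexity just mentioned on a toy instance, the following sketch (ours, not the paper's certified algorithm; the two lines, the endpoints, the step size and the iteration count are hypothetical choices) minimizes the path length over the touching parameters T by plain fixed-step gradient descent, using the first-variation formula derived in section 3. Since the length is a convex function of T, the descent approaches the unique minimum.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def norm(a): return math.sqrt(dot(a, a))

s, t = [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]
# each line Li is given by a point O_i and a unit direction l_i (illustrative data)
lines = [([0.3, 0.0, 0.2], [0.0, 1.0, 0.0]),
         ([0.0, 0.7, 0.8], [1.0, 0.0, 0.0])]

def touch_points(T):
    # W_0 = s, W_i = O_i + t_i l_i on L_i, W_{n+1} = t
    return [s] + [[O[k] + ti * l[k] for k in range(3)]
                  for (O, l), ti in zip(lines, T)] + [t]

def length(T):                      # f(T): total length of the broken line
    W = touch_points(T)
    return sum(norm(sub(W[i + 1], W[i])) for i in range(len(W) - 1))

def gradient(T):                    # first variation of the length in each t_i
    W = touch_points(T)
    g = []
    for i in range(1, len(lines) + 1):
        l = lines[i - 1][1]
        Vp, Vn = sub(W[i], W[i - 1]), sub(W[i + 1], W[i])
        g.append(dot(Vp, l) / norm(Vp) - dot(Vn, l) / norm(Vn))
    return g

T = [0.0] * len(lines)              # crude initial approximation
for _ in range(20000):              # fixed-step descent (step chosen ad hoc)
    T = [ti - 0.1 * gi for ti, gi in zip(T, gradient(T))]
```

In the paper the descent only supplies a rough approximation; Newton's method then reaches precision ε in O(log log(1/ε)) further steps.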
This approach is somewhat similar to applications of Alexandrov geometry to certain problems concerning hard ball gas models and other semi-dispersing billiard systems, see [BFK98], [Bur98]. Thus, the principal feature of our algorithm is that its complexity depends polynomially on log log(1/ε) rather than on 1/ε as in the existing algorithms from [Pap85, CSY94, C87, AHSV97, ALMS98, H99] based on the grid method (at the expense of a worse dependency on n and on other geometric parameters). It would be interesting to describe more situations in which a better dependency of the complexity on 1/ε (logarithmic or even double logarithmic) is possible simultaneously with a better dependency on the geometric parameters.

Our second result is a remark on very simple grid approximations. The question concerning the possibilities of `generalized' grid approximations was mentioned in the conclusion of [CSY94]. The matter is that the result of [Pap85, CSY94] is based on such a method. The initial idea used in [Pap85, CSY94] is straightforward: partition the edges of the polyhedra that constitute the obstacles into sufficiently small segments and take them as vertices of a visibility graph; then determine whether two such segments are mutually visible, in which case connect them by an edge. It is not a simple task to implement this idea correctly in the bitwise computational model, because of the necessity to `approximate approximations', and this was the main source of gaps in [Pap85]. The primitive grid method we use avoids such difficulties even for semi-algebraic obstacles. But one cannot get approximations that are, in the worst case, better than 7ρ, where ρ is the length of the shortest path.

More precisely, our result is as follows. When considering grid approximations we assume that a parameter a > 0 describing the separability of obstacles and the part of a grid admissible for paths are given, and that s and t are vertices of the grid lying in the space admissible for paths.
We show that there is a bitwise machine of time complexity O((1/a)^6) that, for given a-separated obstacles in a unit cube, finds a path (between given points s and t) whose length is bounded from above by (84ρ + 96a), where ρ is the length of a shortest path. On the other hand, we show by an example that one cannot approximate the length of a shortest path better than 7ρ. We conjecture that 7ρ is the

exact bound for the 3-dimensional case. We do not discuss in detail how to construct the parameter a and the part of the grid admissible for paths; this depends on the type of obstacles. For semi-algebraic obstacles in R3 it can be done (precisely, in terms of algebraic numbers) in polytime by a bitwise machine. After that, constructing a shortest path approximation deals only with natural numbers of a modest size. As we mentioned above, the methods of placing Steiner points and forming a grid developed in [Pap85, CSY94, C87, AHSV97, ALMS98, H99] cannot be directly applied to semi-algebraic obstacles, since they place Steiner points just on the edges of polyhedral obstacles. On the other hand, in our simple grid method we get rid of the dependency on n, while the algorithm depends on the parameter a of the obstacles, which is in accordance with the proposal expressed in [ALMS98]: "...while studying the performance of geometric algorithms, geometric parameters (e.g. fatness, density, aspect ratio, longest, closest) should not be ignored...".

The major question concerning the skew lines problem that remains open is whether a length-position approximation can be found in polytime (in particular, polynomial in at least log(1/ε)) by bitwise machines without geometric restrictions. Generalization to the problem of finding a shortest path touching cylinders in R3 in a prescribed order looks rather plausible.

The paper is divided into two parts. Its structure is as follows. The first and main part of the paper deals with shortest paths touching lines. In section 2 we give basic properties of the spaces under consideration; in particular, we explain what geometric properties ensure the uniqueness of the shortest path. Section 3 contains the main technical estimates. The central point is to bound from below the eigenvalues of the Hessian of the path length function (section 3.2, Proposition 1).
This bound is crucial for the estimation of the complexity of the gradient descent, which provides an initial approximation for the application of Newton's method that follows it. Section 4 gives our approximation algorithm: it starts with the construction of an initial approximation, followed by an application of gradient descent and an application of Newton's method. In the last section 5 of Part 1 we estimate the complexity of the algorithm. Part 2 presents our grid algorithm. The definition of separability is given in section 6. The same section (Proposition 3) gives a lower bound on the probability of getting well separated obstacles when randomly choosing balls as obstacles. Then in section 7 we present an algorithm constructing a length approximation.
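To make the grid idea of Part 2 concrete, here is a small sketch (our illustration only; the ball obstacle, the resolution, and the use of plain breadth-first search are hypothetical choices, not the paper's algorithm or its constants): build the part of the grid with mesh a = 1/n lying outside the obstacle and find a path between grid vertices by BFS, which yields a path with the fewest grid edges.

```python
from collections import deque
import math

n = 10                      # grid resolution; mesh a = 1/n (illustrative)
a = 1.0 / n

def blocked(p):
    # a single ball obstacle in the unit cube (hypothetical instance)
    x, y, z = (c * a for c in p)
    return (x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2 < 0.2 ** 2

def grid_path_length(s, t):
    # BFS over the 6-neighbour grid graph restricted to admissible vertices
    prev = {s: None}
    q = deque([s])
    while q:
        p = q.popleft()
        if p == t:
            break
        x, y, z = p
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nb = (x + dx, y + dy, z + dz)
            if all(0 <= c <= n for c in nb) and nb not in prev and not blocked(nb):
                prev[nb] = p
                q.append(nb)
    if t not in prev:
        return None
    L, p = 0.0, t            # walk back and sum edge lengths
    while prev[p] is not None:
        L += a
        p = prev[p]
    return L
```

For the diagonal of the unit cube the grid path found this way has length 3, against a straight-line distance of √3, illustrating that paths built only from grid edges lose a constant factor even without obstacles.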

Acknowledgements. We are thankful to Misha Gromov for sharing his geometric insight, and

to Yuri Burago for his useful comments and help with references.

Part 1: Shortest Paths Touching Lines

2 Basic Properties of Shortest Paths Touching Lines

Our construction is based on the following initial observation: we are seeking a shortest path in a simply connected metric space of nonpositive curvature. This observation guarantees the uniqueness of the shortest path; moreover, it suggests that the distance functions in the space enjoy certain convexity properties, which will actually permit us to use a method of steepest descent to reach an approximate position of the shortest path. First, we recall some known facts about spaces of nonpositive curvature.


2.1 On Spaces of Nonpositive Curvature

Let M be a simply connected metric space. Denote by |XY|_M, or by |XY| if M is clear from the context, the distance between points X and Y in M. A path in M between two points A and B is a continuous mapping φ of [0,1] into M such that φ(0) = A and φ(1) = B. The length of a path φ can be defined as the limit of the sums Σ_{0≤i≤n} |φ(i/n) φ((i+1)/n)| as n → ∞. A shortest path between two points is a path connecting these points and having the smallest length. We suppose that for any two points of M there exists a shortest path between them.

By the angle between two segments going out of the same point in the plane R2 we mean the (non-directed) angle in [0, π]. Given 3 points A, B and C, the angle between the segments AB and AC will be denoted by ∠BAC or ∠CAB; the angle between 2 segments σ and τ will be denoted by ∠στ; and the angle between 2 vectors will be treated as the angle between the two segments obtained from the vectors by translating them to the same origin. If different spaces are considered in the same context, the angle will be denoted by ∠_M.

Take any three points A, B and C in M (Figure 1). Let Π_AB and Π_AC be shortest paths


Figure 1: Comparing triangles in M and in the plane.

respectively between A and B, and between A and C. Consider the triangle A′B′C′ in the plane R2 (depending on t and τ) such that |A′B′|_{R2} = |Π_AB([0,t])|_M, |A′C′|_{R2} = |Π_AC([0,τ])|_M, and |B′C′|_{R2} = |Π_AB(t) Π_AC(τ)|_M. Denote by α(t,τ) the angle between the intervals A′B′ and A′C′ in R2. The angle between Π_AB and Π_AC at A is defined as lim_{t,τ→0} α(t,τ), if the latter exists. Define the angle at A to B and C, or shorter ∠_M(BAC), as the supremum of the angles at A between a shortest path from A to B and a shortest path from A to C.

Consider again three points A, B and C in M. Draw in R2 a triangle A′B′C′ determined by the following conditions: |A′B′|_{R2} = |AB|_M, |A′C′|_{R2} = |AC|_M and ∠_{R2}(B′A′C′) = ∠_M(BAC). By definition, the space M is of nonpositive curvature if the angle ∠_M(BAC) exists and |B′C′|_{R2} ≥ |BC|_M for any three points A, B and C of M. It is known (see, for instance, [Bal95]) that:

(Uniqueness of the Shortest Path.) In any simply connected space of nonpositive curvature, there is a unique shortest path between any two points.
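The comparison angle α(t,τ) is computed from three distances by the law of cosines. As a quick sanity check (our illustration, with hypothetical points), in the plane R2 itself the comparison angle is independent of t and τ and coincides with the true angle between the segments:

```python
import math

def comparison_angle(ab, ac, bc):
    # angle at A' in the Euclidean triangle with sides |A'B'| = ab, |A'C'| = ac,
    # |B'C'| = bc (law of cosines, clamped against rounding)
    c = (ab * ab + ac * ac - bc * bc) / (2 * ab * ac)
    return math.acos(max(-1.0, min(1.0, c)))

def d(p, q): return math.hypot(p[0] - q[0], p[1] - q[1])
def point_on(p, q, t): return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

# hypothetical points in the plane: the angle at A between AB and AC is pi/4
A, B, C = (0.0, 0.0), (2.0, 0.0), (1.0, 1.0)
angles = [comparison_angle(d(A, point_on(A, B, t)), d(A, point_on(A, C, u)),
                           d(point_on(A, B, t), point_on(A, C, u)))
          for t, u in ((0.5, 0.5), (0.1, 0.2), (0.01, 0.01))]
```

In a genuinely nonpositively curved space the comparison angles would only be bounded below by the limit angle, which is the content of the definition above.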

2.2 The Con guration Space

Let L = (L1, ..., Ln) be a list of straight lines in R3, and s and t two points. Without loss of generality, assume that these points are not on the mentioned lines. Clearly, the shortest paths touching the lines of L in the given order are polygons (broken lines).


The space in which we are looking for such a shortest broken line can be obtained by the following operation. For each pair (Li, Li+1) take a copy Ri of R3. Glue consecutively Ri and Ri+1 along the line Li+1. Denote the obtained space by R_L. Each space Ri is a space of nonpositive curvature, and they are glued along isometric convex sets. Reshetnyak's theorem (see [BN93]) implies that the resulting space R_L is of nonpositive curvature. A similar construction can be used to obtain a space of nonpositive curvature in which to seek shortest paths consecutively touching convex cylinders Z = (Z1, ..., Zn) in RN. In this case the gluings are made along the corresponding cylinders.

Consider the shortest path problem in R_L, where two points s and t are fixed. From the uniqueness of the shortest path between points in a simply connected nonpositively curved space we immediately conclude: there is a unique shortest path between any two given points in R_L; hence a shortest path touching the lines in a prescribed order and connecting two given points is unique. Furthermore, a standard distance comparison argument with a Euclidean development of a broken line representing an approximate solution implies that length-approximation guarantees position-approximation: if the length of the actual shortest path is L, every path whose length is less than L + δ belongs to the (√(δ(2L+δ))/2)-neighborhood of the actual shortest path.

In order to avoid overloading the construction, assume that the lines of L are pairwise disjoint. In fact we will use only that consecutive lines are disjoint. The general case needs some additional attention in the situation when the shortest path comes near a point of intersection of consecutive lines. One could also improve the algorithm and its complexity for situations where the minimal distance between consecutive lines or the angles between consecutive lines are of the same order of magnitude as the desired precision of approximation.
We omit a discussion of these issues here.

By a path we mean a polygon P consisting of n+1 links (straight-line intervals), denoted by Pi, 0 ≤ i ≤ n, connecting s with t via the consecutive lines: the link P0 connects s with L1, the link Pi connects Li with Li+1 (1 ≤ i ≤ n−1), and the link Pn connects Ln with t. When speaking about the order of points on a path we mean the order corresponding to going along the path starting from s. Thus any link of a path is directed from s to t. Notice that for a shortest path P the angle between the link entering Li (the incident ray) and Li must be equal to the angle between Li and the link leaving Li (the emergent ray). Clearly, the incident angle being fixed, the emergent rays constitute a cone (which may have up to two intersection points with Li+1, meaning that a geodesic in our configuration space can branch, having two different continuations after passing through a line Li).
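For a single line the geodesic in the glued space can be computed in closed form by `unfolding': rotating the half-plane through t about L onto the opposite side of the half-plane through s turns the geodesic into a straight segment, of length √((u_s − u_t)² + (d_s + d_t)²), where u denotes the coordinate along L and d_s, d_t the distances of s and t to L. A small numerical check (our illustration; the line and the points are hypothetical, and L is taken to be the x-axis so that u, d_s, d_t are immediate):

```python
import math

O, l = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]      # the line L: the x-axis
s, t = [0.0, 3.0, 0.0], [5.0, 0.0, 4.0]

def norm(v): return math.sqrt(sum(x * x for x in v))

def path_len(u):                              # |s W(u)| + |W(u) t|, W(u) = O + u l
    W = [O[k] + u * l[k] for k in range(3)]
    return (norm([s[k] - W[k] for k in range(3)]) +
            norm([t[k] - W[k] for k in range(3)]))

# direct 1-dimensional minimization (golden-section search on a bracket)
lo, hi = -10.0, 10.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) * 0.382, lo + (hi - lo) * 0.618
    if path_len(m1) < path_len(m2):
        hi = m2
    else:
        lo = m1
numeric = path_len((lo + hi) / 2)

us, ds = s[0], math.hypot(s[1], s[2])         # coordinates along / distance to L
ut, dt = t[0], math.hypot(t[1], t[2])
unfolded = math.sqrt((us - ut) ** 2 + (ds + dt) ** 2)
```

With several lines no such closed form is available, which is why the paper resorts to gradient descent and Newton's method in the coordinates T.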

2.3 Technical Notations

Throughout the text we use some notations that we summarize here -- the reader can consult this list when coming across a new notation.

• ⟨U, V⟩ is the scalar product of vectors U and V.
• |U| is the L2-norm of a vector U (the length of U).
• ‖A‖, where A is a symmetric matrix, is the spectral norm of A, i.e. ‖A‖ = sup{⟨AU, U⟩ : ‖U‖ = 1}.
• L = (L1, ..., Ln) is the list of straight lines in R3 we consider, and s and t are respectively the first and the last point to connect by a polygon touching the lines of L in the order of the list.
• Oi is a fixed point on Li. We use it as the origin of coordinates on Li.
• ℓi is a unit vector defining the direction of Li.
• T = (t1, ..., tn) is a list of n reals that serve as coordinates on L1, ..., Ln respectively. Our applications of the gradient and Newton methods take place in the space Rn of such points T.
• To make the notations more uniform we assume that O0 = s, On+1 = t, and that always t0 = tn+1 = 0.
• Wi(ti) :=df Oi + ti ℓi; clearly, it is a point on Li determined by the single real parameter ti. Note that W0(t0) = s and Wn+1(tn+1) = t.
• Vi = Vi(ti, ti+1) :=df Wi+1(ti+1) − Wi(ti) is the vector connecting Wi with Wi+1 (in this order), 0 ≤ i ≤ n.
• vi :=df |Vi| is the length of Vi, 0 ≤ i ≤ n.
• A path P can be determined by a list of reals T that gives the consecutive points of P on the lines of L connected by this path. Thus W0(t0) = s, Wi(ti) = P ∩ Li for 1 ≤ i ≤ n, and Wn+1(tn+1) = t.
• Π(T) is the path determined by a list of reals T, as explained above.
• By Pi for a path P we denote the ith link of P. We consider Pi as a segment directed from Li to Li+1, 0 ≤ i ≤ n. Regarded as a vector, it coincides with Vi.
• For a directed interval σ in R3 we denote by σ− and σ+ its beginning and its end respectively.
• ∠ABC, where A, B and C are points, is the angle at B in the triangle determined by these 3 points.
• ∠VV′, where V and V′ are vectors, is the angle in [0, π] between these vectors.
• ρ will be used to denote various L2-distances; e.g. ρ(s, Li) will denote the distance between s and Li.
• di is the square of the distance between Li and Li+1 for 1 ≤ i ≤ n−1, i.e. di = ρ(Li, Li+1)²; d0 :=df ρ(s, L1)², dn :=df ρ(Ln, t)².
• d :=df min{di : 0 ≤ i ≤ n}.
• d̃ :=df +√d.
• D0 is the vector from s to L1 perpendicular to L1, Dn is the vector from Ln to t perpendicular to Ln, and Di is the vector connecting Li and Li+1 and perpendicular to both of them.
• αi :=df (sin ∠ℓiℓi+1)², 1 ≤ i ≤ n−1; α :=df min{αi : 1 ≤ i ≤ n−1}.
• ẽ :=df +√α.
• γi :=df cos ∠ℓiℓi+1, 1 ≤ i ≤ n−1.
• Π is the shortest path between s and t.
• T* = (t1*, ..., tn*) is the point in Rn defining Π, i.e. Π = Π(T*).
• Π0 is the initial approximation of Π defined below.
• T0 is the point in Rn defining Π0, i.e. Π0 = Π(T0).
• r :=df |Π0| is the length of the initial approximation. It will be clear from its construction that d̃ < r.
• R :=df r√n is the radius of a ball in Rn containing all paths in question.
• B is the closed ball in Rn centered at T0 containing all representations (as Π(T), T ∈ Rn) of the paths under consideration.

2.4 Initial Approximation for the Shortest Path

Denote by P0i the foot of the perpendicular from s to Li. Take as the initial approximation to the shortest path the polygon Π0 that starts at s, then goes to P01, then to P02, and so on to P0n, and finally to t. Clearly Π0 lies in B, and

|P0i P0i+1| ≤ |s P0i| + |s P0i+1| for 1 ≤ i ≤ n−1.    (1)

The length of Π0 can be bounded as

|Π| ≤ |Π0| = r ≤ 2 Σ_{i=1..n−1} ρ(s, Li) + |s t| ≤ 2n|Π|.    (2)

The first inequality follows from the fact that the length of any path connecting s and t and touching the lines of L is greater than or equal to |Π|; the second inequality is implied by (1); and the third one is due to the fact that the distance from s to any of the lines, as well as |s t|, is not greater than |Π|. For R the estimate (2) gives

√n |Π| ≤ R ≤ 2n^(3/2) |Π|.    (3)

Remark that Li = {Oi + tℓi : t ∈ R}; then P0i = Oi + t0i ℓi is determined by ⟨P0i − s, ℓi⟩ = 0, and thus P0i = Oi − ⟨Oi − s, ℓi⟩ℓi.
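The projection formula and the bound on r are easy to check numerically. In this sketch (our illustration on a hypothetical instance) the feet of the perpendiculars are computed as above, and the length r of Π0 is compared against the triangle-inequality bound 2 Σ_{i=1..n} ρ(s, Li) + |s t|, a slightly cruder form of (2):

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def norm(a): return math.sqrt(dot(a, a))

s, t = [0.0, 0.0, 0.0], [1.0, 2.0, 3.0]
# hypothetical lines, each as (point O_i, unit direction l_i)
lines = [([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]),
         ([0.0, 2.0, 1.0], [0.0, 0.0, 1.0]),
         ([2.0, 1.0, 2.0], [1.0, 0.0, 0.0])]

def foot_of_perpendicular(p, O, l):
    # projection of p onto the line {O + u l}: O + <p - O, l> l
    u = dot(sub(p, O), l)
    return [O[k] + u * l[k] for k in range(3)]

feet = [foot_of_perpendicular(s, O, l) for O, l in lines]
P0 = [s] + feet + [t]
r = sum(norm(sub(P0[i + 1], P0[i])) for i in range(len(P0) - 1))
bound = 2 * sum(norm(sub(f, s)) for f in feet) + norm(sub(t, s))
```

The vector from s to each foot is perpendicular to the corresponding line, which is what makes the inequality chain work.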

2.5 Main Theorem

Hereafter we suppose that ẽ > 0 and d̃ > 0. Now we are in a position to formulate our main result:

Main Theorem.

There is an algorithm that, given an ε > 0, finds a length-position ε-approximation of the shortest path touching lines in time

(Rn/(d̃ẽ))^O(1) + O(n² log log(1/ε)),

where R, d̃ and ẽ are as defined above.

3 Bounds on Eigenvalues of the Hessian of the Path Length

This section contains technical bounds on the norms of the derivatives of the path length function, in particular the main estimate, namely, a lower bound on the eigenvalues of the Hessian (`second derivative') of the path length. Morally speaking, the fact that the length function is "at least as convex as in Euclidean space" follows from the curvature bound. We will make a direct computation though, due to the fact that we use a particular coordinate system.

Consider a path Π(T) = Π(t1, ..., tn) represented by the points Wi, 0 ≤ i ≤ n+1 (we use the notations of section 2.3).

Notations:

• f(T) :=df |Π| = Σ_{i=0..n} vi is the length of Π(T).
• g(T) :=df f′(T) = grad f(T) = (∂f/∂t1(T), ..., ∂f/∂tn(T)) is its gradient (first derivative).

Let us begin with the formula for g, that is, the first variation of the length. For 1 ≤ i ≤ n we set

σi :=df ∂f/∂ti = ∂(vi−1 + vi)/∂ti = ∂vi−1/∂ti + ∂vi/∂ti = ⟨Vi−1, ℓi⟩/vi−1 − ⟨Vi, ℓi⟩/vi,    (4)

and

g = (σ1(t1, t2), σ2(t1, t2, t3), ..., σn−1(tn−2, tn−1, tn), σn(tn−1, tn))    (5)

(which stresses the variables each component of g depends on).
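Formula (4) can be checked against a central finite difference of f (our illustrative instance; the lines and the point T are hypothetical):

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))

s, t = [0.0, 0.0, 0.0], [3.0, 1.0, 2.0]
lines = [([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]),
         ([2.0, 0.0, 1.0], [0.0, 0.0, 1.0])]

def W(i, T):                         # touch points: W_0 = s, W_{n+1} = t
    pts = ([s] + [[O[k] + T[j] * l[k] for k in range(3)]
                  for j, (O, l) in enumerate(lines)] + [t])
    return pts[i]

def f(T):                            # path length
    return sum(norm([W(i + 1, T)[k] - W(i, T)[k] for k in range(3)])
               for i in range(len(lines) + 1))

def sigma(i, T):                     # formula (4)
    l = lines[i - 1][1]
    Vp = [W(i, T)[k] - W(i - 1, T)[k] for k in range(3)]
    Vn = [W(i + 1, T)[k] - W(i, T)[k] for k in range(3)]
    return dot(Vp, l) / norm(Vp) - dot(Vn, l) / norm(Vn)

T, h = [0.7, -0.4], 1e-6
```

The two terms of σi are the cosines of the angles that the incident and emergent links make with ℓi, so σi = 0 recovers the incidence/emergence condition of section 2.2.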

3.1 The Second Variation Formula for the Path Length

The Hessian of |Π|, which we will denote Λ = Λ(T) (i.e. the Jacobian of the gradient g), is the symmetric tridiagonal n×n matrix with positive diagonal entries

Λ[i, i] = ∂²vi−1/∂ti² + ∂²vi/∂ti²,  1 ≤ i ≤ n,
Λ[i, i+1] = Λ[i+1, i] = ∂²vi/∂ti∂ti+1,  1 ≤ i ≤ n−1,

and zeros elsewhere (Figure 2).

Figure 2: Hessian (of the path length) Λ.

Here are the formulas for the derivatives involved in Λ, in arbitrary coordinates (for further reference):

∂²vi/∂ti∂ti+1 = ∂/∂ti (⟨Vi, ℓi+1⟩/vi) = ⟨∂Vi/∂ti, ℓi+1⟩/vi − ⟨Vi, ℓi+1⟩⟨Vi, ∂Vi/∂ti⟩/vi³
  = −⟨ℓi, ℓi+1⟩/vi + ⟨Vi, ℓi+1⟩⟨Vi, ℓi⟩/vi³    (6)
  = −(1/vi)(cos ∠ℓiℓi+1 − cos ∠Viℓi+1 · cos ∠Viℓi),    (7)

∂²vi/∂ti² = (1/vi)(1 − ⟨Vi, ℓi⟩²/vi²) = (sin ∠Viℓi)²/vi,    (8)

∂²vi/∂ti+1² = (1/vi)(1 − ⟨Vi, ℓi+1⟩²/vi²) = (sin ∠Viℓi+1)²/vi.    (9)
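The entry formulas can also be verified against finite differences of the single link length vi viewed as a function of ti and ti+1 (our illustrative pair of lines):

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))

O1, l1 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]    # hypothetical consecutive lines
O2, l2 = [2.0, 0.0, 1.0], [0.0, 0.0, 1.0]

def v(ti, tj):                # length of the link from W_i(ti) to W_{i+1}(tj)
    V = [O2[k] + tj * l2[k] - O1[k] - ti * l1[k] for k in range(3)]
    return norm(V)

def d2v_mixed(ti, tj):        # formulas (6)/(7)
    V = [O2[k] + tj * l2[k] - O1[k] - ti * l1[k] for k in range(3)]
    vi = norm(V)
    return -(dot(l1, l2) - dot(V, l2) * dot(V, l1) / vi ** 2) / vi

def d2v_diag(ti, tj):         # formula (8)
    V = [O2[k] + tj * l2[k] - O1[k] - ti * l1[k] for k in range(3)]
    vi = norm(V)
    return (1 - (dot(V, l1) / vi) ** 2) / vi

ti, tj, h = 0.3, -0.2, 1e-5
fd_mixed = (v(ti + h, tj + h) - v(ti + h, tj - h)
            - v(ti - h, tj + h) + v(ti - h, tj - h)) / (4 * h * h)
fd_diag = (v(ti + h, tj) - 2 * v(ti, tj) + v(ti - h, tj)) / (h * h)
```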

3.2 Lower and Upper Bound on the Eigenvalues of the Hessian of the Path Length

Decompose Λ into a sum of matrices with blocks corresponding to the links of Π. Denote by Λ0 the 1×1 matrix (∂²v0/∂t1²), by Λi, for 1 ≤ i ≤ n−1, the 2×2 matrix

( ∂²vi/∂ti²       ∂²vi/∂ti∂ti+1 )
( ∂²vi/∂ti∂ti+1   ∂²vi/∂ti+1²   ),

and by Λn the 1×1 matrix (∂²vn/∂tn²). Denote by Λ̃i the n×n matrix consisting of Λi situated at its proper place in Λ and with zeros at all other places. More precisely, the (1,1)-element of Λi is placed at (i,i) for 1 ≤ i ≤ n and at (1,1) for i = 0. Then Λ = Σ_{0≤i≤n} Λ̃i.

Lower Bound on the Eigenvalues of Λ.

Strict convexity of the metric implies that the second derivative Λ is positive definite. However, we need constructive upper and lower bounds on the eigenvalues of Λ, so we do not use this property of the metric explicitly -- it will follow from the estimates below. We start with the more difficult question of obtaining a lower bound on the least eigenvalue of Λ, i.e. on inf{⟨ΛU, U⟩ : ‖U‖ = 1}. To do this we reduce the problem to the same problem for the matrices Λi.

Lemma 1. The only eigenvalue of Λ0 and that of Λn is greater than d/r³, and both eigenvalues of Λi for 1 ≤ i ≤ n−1 are greater than dα/(2r³). Thus dα/(2r³) is a lower bound on the eigenvalues of all the Λi (for any ti).

Proof. Using the notation introduced earlier (section 2.3), we can represent the link Vi as Vi = ti ℓi + ti+1 ℓi+1 + Di (choosing the feet of the common perpendicular as the origins Oi, Oi+1 and adjusting the signs of the directions, which affects nothing in what follows). To simplify the computations rewrite this formula as W = xA + yB + D, where W = Vi, x = ti, A = ℓi, y = ti+1, B = ℓi+1 and D = Di. For i = 0 we put x = 0, and for i = n we put y = 0. Let v = √⟨W, W⟩ and γ = ⟨A, B⟩; thus γ² = γi² and 1 − γ² = αi. With these notations di = ⟨D, D⟩, D ⊥ A and D ⊥ B.

For i = 0, the only eigenvalue of Λ0 is its element ∂²v0/∂t1² = (1/v0)(1 − ⟨W, B⟩²/v0²). Here ⟨W, B⟩ = ⟨yB + D, B⟩ = y and v0² = ⟨yB + D, yB + D⟩ = y² + d0, and thus

‖Λ0‖ = (1/v0)(1 − y²/v0²) = (1/v0³)(y² + d0 − y²) = d0/v0³ > d/r³.

Similarly, for i = n we have ‖Λn‖ = dn/vn³ > d/r³.

Consider i ∈ {1, ..., n−1}. Notice that in our notations

v² = ⟨xA + yB + D, xA + yB + D⟩ = x² + 2xyγ + y² + di,

and Λi is the matrix of the second derivatives of v with respect to x and y. We compute these derivatives:

∂²v/∂x² = (1/v)(1 − ⟨W, A⟩²/v²) = (1/v³)(v² − (x + yγ)²) = (1/v³)(y² + di − y²γ²) = (1/v³)(y²αi + di),

∂²v/∂y² = (1/v)(1 − ⟨W, B⟩²/v²) = (1/v³)(v² − (xγ + y)²) = (1/v³)(x²αi + di),

∂²v/∂x∂y = (1/v³)(γv² − (x + yγ)(y + xγ)) = (1/v³)(xyγ² + diγ − xy) = (1/v³)(diγ − xyαi).

Let us compute the determinant:

det Λi = (1/v⁶)((y²αi + di)(x²αi + di) − (diγ − xyαi)²)
  = (1/v⁶)(diαi(x² + y²) + di² − di²γ² + 2xydiαiγ)
  = (diαi/v⁶)(x² + y² + di + 2xyγ) = diαi/v⁴.

Note that Tr Λi = (1/v³)(αi(x² + y²) + 2di) and that αi(x² + y²) + 2di ≤ 2v²; indeed, the latter amounts to 4|xy||γ| ≤ (1 + γ²)(x² + y²), which follows from 2|xy| ≤ x² + y² and 2|γ| ≤ 1 + γ². Hence the smallest of the two eigenvalues of Λi is not less than

det Λi / Tr Λi = (diαi/v⁴) · v³/(αi(x² + y²) + 2di) ≥ (diαi/v) / (2v²) = diαi/(2v³) ≥ dα/(2r³),

recalling that each link length v of a path under consideration does not exceed r.  □

 d Proposition 1 All eigenvalues of (T ) are greater than 2r3 . Thus (T ) is positive de nite, r3 ?1 the metric function is strictly convex and k(T )k  d r3 , and k (T )k  d for any T .

Proof.

Now we estimate ⟨ΛU, U⟩ for ‖U‖ = 1. We have

$$\langle \Lambda U, U\rangle = \sum_i \langle \tilde\Lambda_i U, U\rangle = \sum_i \langle \Lambda_i U_i, U_i\rangle,$$

where U_i is the appropriate 2-vector (a 1-vector for i = 0 and i = n). From Lemma 1 we have

$$\langle \Lambda U, U\rangle \ge \sum_i \frac{d\theta}{2r^3}\,\langle U_i, U_i\rangle = \frac{d\theta}{2r^3}\sum_i \langle U_i, U_i\rangle = \frac{d\theta}{2r^3}\cdot 2 = \frac{d\theta}{r^3},$$

since each coordinate of U occurs in exactly two of the vectors U_i. This inequality gives a lower bound on the eigenvalues of Λ. Hence ‖Λ‖ ≥ dθ/r³ and ‖Λ^{-1}‖ ≤ r³/(dθ). □
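Proposition 1 lends itself to a quick numerical sanity check. The sketch below is our illustration, not part of the paper: the random lines, the endpoints and the finite-difference step are made up. It evaluates the path-length function for points P_i(t_i) = α_i⁰ + t_iτ_i on the lines and verifies that its Hessian is positive definite.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
alpha0 = rng.normal(size=(n, 3))                   # points alpha_i^0 on the lines (made up)
tau = rng.normal(size=(n, 3))
tau /= np.linalg.norm(tau, axis=1, keepdims=True)  # unit directions tau_i
s, t = rng.normal(size=3), rng.normal(size=3)

def f(T):
    """Length of the polygon s -> P_1(t_1) -> ... -> P_n(t_n) -> t."""
    pts = [s] + [alpha0[i] + T[i] * tau[i] for i in range(n)] + [t]
    return sum(np.linalg.norm(q - p) for p, q in zip(pts, pts[1:]))

def hessian(T, h=1e-4):
    """Numerical Hessian of f by central differences (tridiagonal up to noise)."""
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.zeros(n), np.zeros(n)
            ei[i], ej[j] = h, h
            M[i, j] = (f(T+ei+ej) - f(T+ei-ej) - f(T-ei+ej) + f(T-ei-ej)) / (4*h*h)
    return M

eigs = np.linalg.eigvalsh(hessian(rng.normal(size=n)))
assert eigs.min() > 0   # positive definite, as Proposition 1 asserts
```

For lines in general position the smallest eigenvalue stays well above the finite-difference noise, so the check is robust.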



Upper Bound on the Norm of Λ.

The matrix Λ is symmetric; therefore, using formulas (7), (8), (9) and standard inequalities for the norms, one gets

$$\|\Lambda\| = \|\Lambda(T)\| \le \max_i \sum_j |\lambda_{ij}| \le \frac{4}{\tilde d} \qquad (10)$$

for all T.

3.3 Newton's Method for the Search of Zero of the Gradient of Path Length

Our algorithm consists of two phases described below in section 4. The second phase is an application of Newton's method to approximate the zero of the Jacobian f′ of the length function. The main difficulty, however, is to find an appropriate initial approximation for the Newton iterations; that will be done by a gradient descent described in section 4. We will use the bounds we have just obtained to estimate the rate of convergence of Newton's method. First let us recall a few standard facts about Newton's method. Consider a mapping H = (h_1, ..., h_n) from R^n into R^n. X with superscripts and subscripts will denote column vectors from R^n. Newton's iterations are defined by the formula

$$X^{k+1} = X^k - (H'(X^k))^{-1} H(X^k), \qquad (11)$$

where H′ is the Jacobian of H.

Proposition 2 gives a sufficient condition to ensure fast convergence of Newton's iterations (it can be found in many textbooks on numerical methods).

Proposition 2 Suppose that X_0 is a zero of H, i.e. H(X_0) = 0, where 0 = (0, ..., 0) ∈ R^n. Let a, a_1, a_2 be reals such that 0 < a and 0 ≤ a_1, a_2 < ∞; denote Ω_α = {X : ‖X − X_0‖ < α}, c = a_1a_2 and b = min{a, 1/c}. If X^0 ∈ Ω_b and

(A) ‖(H′(X))^{-1}‖ ≤ a_1 for X ∈ Ω_a;

(B) ‖H(X_1) − H(X_2) − H′(X_2)(X_1 − X_2)‖ ≤ a_2‖X_1 − X_2‖² for X_1, X_2 ∈ Ω_a,

then

$$\|X^k - X_0\| \le \frac{1}{c}\left(c\,\|X^0 - X_0\|\right)^{2^k}. \qquad (12)$$

In Proposition 2 there are two constants a_1 and a_2: the norm of the inverse of H′ has to be at most a_1 (condition (A)), and a_2 is, morally speaking, an upper bound on the norm of the second derivative H″ (condition (B)). Both bounds must hold in an a-neighborhood of the zero X_0 of H. In our case a will be very `big', so we can ignore it for now. Then the convergence is determined by two things: the parameter b = 1/(a_1a_2) and the choice of the initial approximation X^0, which must be in an open b-neighborhood of the zero X_0. We are interested in making c‖X^0 − X_0‖ smaller than 1, with a known upper bound less than 1. We will construct X^0 such that c‖X^0 − X_0‖ ≤ 1/2. We will take the bound from Proposition 1 for a_1. As for a_2, it is not hard to estimate it using the formulas for the elements of Λ. We will choose X^0 in section 4.2.
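To make the iteration (11) and the doubling of precision promised by (12) concrete, here is a minimal Python sketch. It is our illustration, not the paper's algorithm: the map H (the gradient of a toy strictly convex function) and its Jacobian J are made-up examples.

```python
import numpy as np

def newton(H, J, x0, iters=8):
    """Newton's iterations X^{k+1} = X^k - (H'(X^k))^{-1} H(X^k), cf. (11)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - np.linalg.solve(J(x), H(x))
    return x

# Toy example (our choice): H = grad f for f(x, y) = x^4 + y^4 + (x-1)^2 + (y+2)^2.
H = lambda v: np.array([4*v[0]**3 + 2*(v[0] - 1), 4*v[1]**3 + 2*(v[1] + 2)])
J = lambda v: np.array([[12*v[0]**2 + 2, 0.0], [0.0, 12*v[1]**2 + 2]])

root = newton(H, J, [0.0, 0.0])
assert np.allclose(H(root), 0.0)   # the number of correct digits roughly doubles per step
```

Eight iterations already drive the residual far below floating-point visibility, which is exactly the (1/2)^{2^k} behavior of (12).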

Choosing parameters a1 and a2 to satisfy (A) and (B) of Proposition 2.

We start with calculating a_2. To satisfy condition (B) it is sufficient to show that the (partial) second derivatives of f are bounded in some neighborhood of X_0. We use Taylor's formula with the third derivatives of f. Take any T_1, T_2 ∈ B. In our case

$$H(X_1) - H(X_2) - H'(X_2)(X_1 - X_2) = f'(T_1) - f'(T_2) - \Lambda(T_2)(T_1 - T_2). \qquad (13)$$

We assume that the vectors involved in (13) are represented as columns. To bound the norm of this vector from above, we first estimate its components. Using the notations T_j = (t_1^{(j)}, ..., t_n^{(j)}), j = 1, 2, we can write the ith component of (13) as

$$g_i\bigl(t_{i-1}^{(1)}, t_i^{(1)}, t_{i+1}^{(1)}\bigr) - g_i\bigl(t_{i-1}^{(2)}, t_i^{(2)}, t_{i+1}^{(2)}\bigr) - \sum_j \frac{\partial g_i(t^{(2)})}{\partial t_j}\,\bigl(t_j^{(1)} - t_j^{(2)}\bigr), \qquad (14)$$

where g_i = ∂f/∂t_i is the ith component of the gradient of f (the notation of (5)). Taylor's formula says that (14) is equal to

$$\frac{1}{2}\sum_{j,k \in \{i-1,\, i,\, i+1\}} \frac{\partial^2 g_i(\xi)}{\partial t_j\,\partial t_k}\,\bigl(t_j^{(1)} - t_j^{(2)}\bigr)\bigl(t_k^{(1)} - t_k^{(2)}\bigr) \qquad (15)$$

for some vector ξ = (ξ_j)_j whose jth component is between t_j^{(1)} and t_j^{(2)}. The equalities (6), (7), (8), (9) show that each second derivative of g_i involved in (15) is of the form (1/v_i²)P(sin^{O(1)}φ, cos^{O(1)}ψ) + (1/v_{i−1}²)P(sin^{O(1)}φ, cos^{O(1)}ψ), where P denotes a sum of O(1) expressions of the indicated kind. Thus the absolute value of each second derivative of g_i is bounded by O(1/d̃). Hence the absolute value of each component of the vector (13) is bounded by O(‖T_1 − T_2‖²/d̃). This implies that in our case (see (13))

$$\|H(X_1) - H(X_2) - H'(X_2)(X_1 - X_2)\| = \|f'(T_1) - f'(T_2) - \Lambda(T_2)(T_1 - T_2)\| \le C_1\,\frac{\sqrt{n}}{\tilde d}\,\|T_1 - T_2\|^2 \qquad (16)$$

for some constant C_1 > 0 and for T_1, T_2 ∈ B. Thus we set a_2 = C_1√n/d̃. As in our case H′ = Λ, Proposition 1 permits to take as a_1 the bound r³/(dθ) for ‖Λ^{-1}‖ from that proposition. Now we can define all the parameters for Newton's method: a = R, a_1 = r³/(dθ) (see Proposition 1), and a_2 = C_1√n/d̃ (see (16)), for the constant C_1 from (16). Thus c = a_1a_2 = C_1√n r³/(dθd̃). Without loss of generality we assume that a = R ≥ 1/c. Hence we take

$$b = \frac{1}{c} = \frac{d\theta\tilde d}{C_1\sqrt{n}\,r^3}. \qquad (17)$$

To ensure fast convergence we will need a `good' initial approximation X^0 for Newton's method, namely such that

$$X^0 \in \Omega_{b/2}. \qquad (18)$$

For further references we rewrite

$$\frac{b}{2} = c_2\,\frac{d\theta\tilde d}{\sqrt{n}\,r^3}, \qquad (19)$$

where c_2 =df 1/(2C_1).

For such X^0 we have

$$c\,\|X^0 - X_0\| \le \frac{1}{2}, \qquad (20)$$

and the rate of convergence guaranteed by Proposition 2 will be (1/2)^{2^k}, where k is the number of iterations (see Proposition 2).

4 Algorithm for the Shortest Path Touching Lines

The algorithm takes as input a list L of lines, points s and t, and ε > 0. It outputs a path, represented as the list of points where it meets the lines of L. The output path is in the ε-neighborhood of the shortest path, and its length is ε-close to the length of the shortest path. We assume that the lines of L are represented by the vectors α_i⁰ (a point on the line L_i) and τ_i (a unit vector directed along the line L_i) introduced in section 2.3. Transforming any other usual representation of lines and points into this form has linear complexity in our computation model. The algorithm consists of three phases:

(Phase 1) Preliminary computations.
(Phase 2) Application of a gradient method to find an initial approximation for Newton's method.
(Phase 3) Application of Newton's method.

4.1 Preliminary computations

Preliminary computations are straightforward. By the application of a gradient method and Newton's method we approximate the position of the shortest path. An appropriate approximation of the length in our case is `automatic' (by using a smaller value of ε), as described in Lemma 2 below.

Lemma 2 |γ(T)| − |γ(T*)| ≤ 2‖T − T*‖₁ ≤ 2√n‖T − T*‖.

Proof.

The last inequality is standard. Let us prove the first one. We compare γ = γ(T) and γ* = γ(T*) linkwise, see Figure 3, where γ_i = AB, γ*_i = A*B*, |AA*| = |t_i − t*_i|, |BB*| = |t_{i+1} − t*_{i+1}|. The triangle inequality immediately implies that

Figure 3: Comparing the lengths of two paths.


|AB| − |A*B| ≤ |AA*| and |A*B| − |A*B*| ≤ |BB*|, and thus |AB| − |A*B*| ≤ |AA*| + |BB*|. Similarly, we get |A*B*| − |AB| ≤ |AA*| + |BB*|. Hence ||AB| − |A*B*|| ≤ |AA*| + |BB*|. It remains to notice that each |AA*| = |t_i − t*_i| is used twice for 1 ≤ i ≤ n. □

Thus, to obtain a length approximation with precision ε, we will construct a position (ε/√n)-approximation.
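Lemma 2 is easy to sanity-check numerically. In the sketch below (our illustration with made-up random lines) the path length f(T) = |γ(T)| is evaluated at points P_i(t_i) = α_i⁰ + t_iτ_i, and both inequalities of Lemma 2 are verified for two random parameter vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
alpha0 = rng.normal(size=(n, 3))                   # made-up lines alpha_i^0 + t * tau_i
tau = rng.normal(size=(n, 3))
tau /= np.linalg.norm(tau, axis=1, keepdims=True)
s, t = rng.normal(size=3), rng.normal(size=3)

def path_length(T):
    """|gamma(T)|: length of the polygon s -> P_1(t_1) -> ... -> P_n(t_n) -> t."""
    pts = [s] + [alpha0[i] + T[i] * tau[i] for i in range(n)] + [t]
    return sum(np.linalg.norm(q - p) for p, q in zip(pts, pts[1:]))

T1, T2 = rng.normal(size=n), rng.normal(size=n)
lhs = abs(path_length(T1) - path_length(T2))
# Each t_i enters only the two links adjacent to L_i, whence the factor 2.
assert lhs <= 2 * np.sum(np.abs(T1 - T2)) + 1e-12               # Lemma 2, first bound
assert lhs <= 2 * np.sqrt(n) * np.linalg.norm(T1 - T2) + 1e-12  # Lemma 2, second bound
```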

Phase 1: Compute the path γ_0 and the values r, R, θ, d (which will be used in the estimations that govern the two other phases).

4.2 Initial Gradient Descent to an Initial Approximation for Newton's Method

Recall that f(t_1, ..., t_n) denotes the length of a path γ represented by points on the lines of L, i.e. f(t_1, ..., t_n) = f(T) = |γ(T)|. This function f is convex (see Proposition 1), and the point at which it attains its minimum is denoted by T* (section 2.3). Proposition 1 gives a positive lower bound for the eigenvalues of the Hessian Λ of f; we denote this lower bound by ν. Let e be the upper bound on the norm of Λ from (10). Consider g =df grad f = (∂f/∂t_1, ..., ∂f/∂t_n). Notice that g(T*) = 0. We have g(T) = ∫_{T*}^{T} Λ, where one can integrate Λ along any path from T* to T. Denote by ρ the distance from T* to T. Since Λ is positive definite and its eigenvalues are at least ν, we have ⟨ΛV, V/‖V‖⟩ ≥ ν‖V‖ for all vectors V. Hence the projection onto the segment from T* to T of the integral of Λ along this segment is at least νρ, and in particular the norm of the integral itself is at least νρ. Recalling that this integral is equal to the gradient of f at T, we conclude that the absolute value |∂f/∂t_i(T)| of one of the coordinates of the gradient g(T) is at least νρ/√n.

Choose i such that |∂f/∂t_i(T)| ≥ νρ/√n. Let us estimate by how much one can decrease the length of the path by changing t_i (which geometrically means moving the point where the path meets L_i). To study how f depends on t_i, introduce the function of one variable φ given by φ(τ) = f(t_1, ..., t_{i−1}, τ, t_{i+1}, ..., t_n). The function φ is the restriction of f to a straight line and hence convex. Let φ attain its minimum at τ_0. We wish to estimate |φ(t_i) − φ(τ_0)| from below. Notice that our bounds on the Hessian of f imply that ν ≤ φ″ ≤ e. We need the following lemma.

Lemma 3 Let h be a strictly convex function, i.e. h″ > 0, on a segment Δ, suppose that h attains its minimum at a point τ_0, and assume that h″ ≤ e. Then

$$h(\tau) - h(\tau_0) \ge \frac{(h'(\tau))^2}{4e} \qquad (21)$$

for all τ ∈ Δ.

Proof.

Suppose without loss of generality that τ_0 < τ, and let a point τ_1 be such that h′(τ_1) = h′(τ)/2. This defines τ_1 uniquely since the derivative h′ of h is strictly increasing. Then

$$h(\tau) - h(\tau_1) \ge (\tau - \tau_1)\cdot\frac{h'(\tau)}{2}. \qquad (22)$$

On the other hand, integrating h″ from τ_1 to τ yields

$$\frac{h'(\tau)}{2} \le e\,(\tau - \tau_1). \qquad (23)$$

Multiplying (22) and (23) we obtain (21). □

Lemma 3 gives

Lemma 4 In the notations introduced above

$$|\varphi(t_i) - \varphi(\tau_0)| \ge \frac{\nu^2\rho^2}{4ne}.$$

This Lemma 4 describes what one gains (in terms of shortening the path in question) by setting t_i = τ_0.

Phase 2: Gradient Method Procedure.

Phase 2.1: Find i such that |∂f/∂t_i(T)| = max_j |∂f/∂t_j(T)|. This is done by using formulas (4) for ∂f/∂t_j, which involve only arithmetic operations and square root extractions. The complexity of this Phase 2.1 is linear in n.

Phase 2.2: Find the point τ_0 where φ(τ) attains its minimum. Since φ is strictly convex, this happens at the point where its derivative vanishes. Using our notations (see section 2.3 and also the beginning of section 3) we have φ(τ) = v_0(t_1) + v_1(t_1, t_2) + ... + v_{i−1}(t_{i−1}, τ) + v_i(τ, t_{i+1}) + ... + v_n(t_n). Hence, using (4), we have

$$\frac{d\varphi}{d\tau}(\tau) = \frac{\partial v_{i-1}}{\partial\tau}(t_{i-1}, \tau) + \frac{\partial v_i}{\partial\tau}(\tau, t_{i+1}) = \frac{\langle V_{i-1}(t_{i-1}, \tau), \tau_i\rangle}{v_{i-1}} - \frac{\langle V_i(\tau, t_{i+1}), \tau_i\rangle}{v_i}. \qquad (24)$$

Equating the expression (24) to zero, after trivial algebraic manipulations we get

$$\frac{\langle V_{i-1}(t_{i-1}, \tau), \tau_i\rangle^2}{\langle V_{i-1}(t_{i-1}, \tau), V_{i-1}(t_{i-1}, \tau)\rangle} = \frac{\langle V_i(\tau, t_{i+1}), \tau_i\rangle^2}{\langle V_i(\tau, t_{i+1}), V_i(\tau, t_{i+1})\rangle}. \qquad (25)$$

Notice that both V_{i−1}(t_{i−1}, τ) and V_i(τ, t_{i+1}) depend on τ linearly. Thus the equation (25) is of the 4th degree in τ, and it can be solved using square and cubic root extractions. Hence the complexity of Phase 2.2 is O(1). Note that, for a standard convexity reason, the modification of the path described in this phase cannot make it leave a ball where the path lay. Hence the path stays in B.

Phase 2.3: Calculate the gain gain =df |φ(t_i) − φ(τ_0)|, set T′ = (t_1, ..., t_{i−1}, τ_0, t_{i+1}, ..., t_n) and UpBnd =df (4ne·gain)^{1/2}/ν. Remark that the latter value is an upper bound on ρ. If UpBnd < max{ε/√n, b/2} (see (19) and (18)) then go to Phase 3 with T′ as the initial approximation for Newton's method. Otherwise, repeat Phase 2 with T = T′.
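Phase 2 can be sketched as follows. This is our illustration with made-up random lines; for simplicity the gradient is computed by central differences instead of the closed formulas (4), and the one-dimensional minimum of φ is found by ternary search instead of the 4th-degree equation (25), so the per-step cost is higher than in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
alpha0 = rng.normal(size=(n, 3))                   # made-up lines alpha_i^0 + t * tau_i
tau = rng.normal(size=(n, 3))
tau /= np.linalg.norm(tau, axis=1, keepdims=True)
s, t = np.zeros(3), np.ones(3)

def f(T):
    pts = [s] + [alpha0[i] + T[i] * tau[i] for i in range(n)] + [t]
    return sum(np.linalg.norm(q - p) for p, q in zip(pts, pts[1:]))

def grad(T, h=1e-6):
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n); e[i] = h
        g[i] = (f(T + e) - f(T - e)) / (2 * h)
    return g

def ternary_min(phi, lo=-50.0, hi=50.0, iters=100):
    """Minimize the convex section phi; stands in for solving equation (25)."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if phi(m1) < phi(m2): hi = m2
        else: lo = m1
    return (lo + hi) / 2

T = np.zeros(n)
for _ in range(150):
    i = int(np.argmax(np.abs(grad(T))))            # Phase 2.1: steepest coordinate
    phi = lambda x: f(np.concatenate([T[:i], [x], T[i+1:]]))
    T[i] = ternary_min(phi)                        # Phase 2.2: minimum along L_i
# T now approximates the minimizer T* and can seed Newton's method (Phase 3).
```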

4.3 Application of Newton's Method

Phase 3: Newton's method:

Apply Newton's method with the value of T found by Phase 2.3 as X^0. Iterate (11) until a sufficiently close approximation is obtained, see section 5 below, formula (34). The rate of convergence is estimated in Proposition 2, formula (12).

5 Complexity of the Algorithm

Now we summarize the comments on the complexity made in the previous sections to count the number of steps used by the algorithm to find a length-position ε-approximation of the shortest path touching the lines of L. By C_i, c_j we will denote various positive constants (using capital C for upper bounds and lower case c for lower bounds).

5.1 Complexity of Preliminary Computations

Phase 1 finds the perpendiculars from s to the lines L_i. These perpendiculars meet the lines at points P_i ∈ L_i. To find a point P_i we solve a linear equation with one unknown t_i: ⟨W_i(t_i) − s, τ_i⟩ = 0. This can be done using only arithmetic operations. The complexity of solving one equation is constant, thus the total complexity of finding all the P_i's is linear in n. The points P_i represent the polygon γ_0, and computing its length |γ_0| also involves square root extraction. However, the complexity is still linear in n. Thus the parameter R has been computed. To compute the sines θ̃_i = sin ∠(τ_i, τ_{i+1}) and cosines β_i = cos ∠(τ_i, τ_{i+1}) we compute the scalar products ⟨τ_i, τ_{i+1}⟩ and use the standard formulas that involve arithmetic operations and square root extractions only. The complexity is again linear in n. The next parameter to find is the minimal distance between consecutive lines. For two consecutive lines L_i and L_{i+1}, the distance between them is the length of the segment connecting the lines and perpendicular to both of them; this is again a standard analytic geometry computation, which yields d̃ and d in linear time. Finally, we need the bounds for ‖Λ‖: the upper bound is e = 4/d̃, and the lower bound is ν = dθ/r³.
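The analytic-geometry subroutines of Phase 1 are elementary; the following sketch (our illustration with made-up data) computes the foot of the perpendicular from s onto a line and the distance between two lines.

```python
import numpy as np

def foot_of_perpendicular(s, alpha0, tau):
    """Point P on the line {alpha0 + t*tau} nearest to s; solves <alpha0 + t*tau - s, tau> = 0."""
    t = np.dot(s - alpha0, tau) / np.dot(tau, tau)
    return alpha0 + t * tau

def line_distance(a0, u, b0, v):
    """Distance between the lines a0 + R*u and b0 + R*v (skew or parallel)."""
    w = np.cross(u, v)
    if np.linalg.norm(w) < 1e-12:   # parallel lines
        return np.linalg.norm(np.cross(b0 - a0, u)) / np.linalg.norm(u)
    return abs(np.dot(b0 - a0, w)) / np.linalg.norm(w)

s = np.array([1.0, 2.0, 3.0])
P = foot_of_perpendicular(s, np.zeros(3), np.array([1.0, 0.0, 0.0]))
assert np.allclose(P, [1.0, 0.0, 0.0])   # projection of s onto the x-axis
assert np.isclose(line_distance(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                                np.array([0.0, 0.0, 5.0]), np.array([0.0, 1.0, 0.0])), 5.0)
```

Both routines use a constant number of arithmetic operations and one square root, matching the linear total cost claimed above.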

5.2 Complexity of Gradient Descent

The complexity of one iteration of the gradient descent of Phase 2 was estimated from above at the end of section 4.2; it is linear in n. So we are to estimate a sufficient number of iterations. Recall that ‖T − T*‖ = ρ ≤ UpBnd (see Lemma 4 and Phase 2.3). The algorithm iterates Phase 2.3 until UpBnd ≤ b/2. It is evident that initially gain ≤ r. Hence, initially

$$\mathrm{UpBnd} = \frac{(4ne\cdot\mathrm{gain})^{1/2}}{\nu} \le C_3\,\frac{r^3}{d\theta}\left(\frac{nr}{\tilde d}\right)^{1/2}. \qquad (26)$$

If UpBnd ≥ max{ε/√n, b/2}, then the algorithm iterates Phase 2.3, and the new value of the gain satisfies

$$\mathrm{gain} \ge c_4\,\mathrm{UpBnd}^2\,\frac{(d\theta\tilde d)^{O(1)}}{(nr)^{O(1)}}. \qquad (27)$$

Combining (26) and (27), which together bound how gain evolves from one iteration to the next, one gets that the number of iterations (denoted NbIter below) is bounded by

$$\mathrm{NbIter} \le \left(\frac{rn}{\tilde d\,\tilde\theta}\right)^{O(1)} \qquad (28)$$

if ε/√n ≤ b/2, and by

$$\mathrm{NbIter} \le \left(\frac{rn}{\tilde d\,\tilde\theta\,\varepsilon}\right)^{O(1)} \qquad (29)$$

otherwise. The bound in (28) is stronger than that in (29), so we take the bound in (28) for further references.

5.3 Complexity of Newton's Method

Once ε/√n ≤ b/2, Phase 3 of our algorithm applies Newton's method. Formula (12) for the convergence of Newton's method estimates the error after k iterations by

$$\frac{1}{c}\left(c\,\|X^0 - X_0\|\right)^{2^k}, \qquad (30)$$

which we want to be smaller than ε/√n. For our choice of the initial approximation for Newton's method, see (20), and our value of 1/c, see (17), the estimate (30) takes the form

$$\frac{1}{c}\left(\frac{1}{2}\right)^{2^k} = b\left(\frac{1}{2}\right)^{2^k} = \frac{d\theta\tilde d}{C_1\sqrt{n}\,r^3}\left(\frac{1}{2}\right)^{2^k}. \qquad (31)$$

Thus we need

$$\left(\frac{1}{2}\right)^{2^k} < \frac{C_1 r^3\varepsilon}{d\theta\tilde d}. \qquad (32)$$

Inequality (32) holds as soon as

$$2^k > \log\frac{1}{\varepsilon} + 3\log r + \log\frac{1}{\theta} + \log\frac{1}{d\tilde d} + O(1), \qquad (33)$$

where all logs are to base 2; the right-hand side of (33) is at most log(1/ε) + O(log r + log(1/(d̃θ̃))). Recall also that (without loss of generality) we assumed n ≤ r. From (33) we can bound k as

$$k < \log\left(\log\frac{1}{\varepsilon} + O\left(\log r + \log\frac{1}{\tilde d\,\tilde\theta}\right)\right). \qquad (34)$$

It remains to estimate the complexity of computing the kth iterate X^{k+1} by formula (11). In our case H(T) = grad f(T) and (H′(T))^{-1} = Λ(T)^{-1}. The complexity of calculating H(T) was shown to be linear in n. Computing the inverse of the 3-diagonal n×n matrix Λ(T) takes O(n²) operations. Note that the complexity of computing one element of Λ(T) is constant. Thus the total complexity of Newton's method is

$$O\left(n^2 \log\left(\log\frac{1}{\varepsilon} + O\left(\log r + \log\frac{1}{\tilde d\,\tilde\theta}\right)\right)\right). \qquad (35)$$
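A remark of ours, not a claim of the paper: the O(n²) term comes from inverting the tridiagonal Λ(T) explicitly, while a single Newton step only needs the solution of one tridiagonal system Λ(T)x = H(T), which the standard Thomas algorithm delivers in O(n). A sketch:

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve in O(n) the tridiagonal system diag[i]*x[i] + lower[i]*x[i-1] + upper[i]*x[i+1] = rhs[i]."""
    n = len(diag)
    c, d = np.zeros(n), np.zeros(n)
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = (upper[i] / m) if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / m
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Check against a dense solve on a random diagonally dominant tridiagonal matrix.
rng = np.random.default_rng(2)
n = 8
lo, di, up = rng.normal(size=n), rng.normal(size=n) + 5.0, rng.normal(size=n)
lo[0] = up[-1] = 0.0
A = np.diag(di) + np.diag(lo[1:], -1) + np.diag(up[:-1], 1)
b = rng.normal(size=n)
assert np.allclose(thomas_solve(lo, di, up, b), np.linalg.solve(A, b))
```

This does not change the bound (35) — the overall complexity is dominated elsewhere — but it shows the per-iteration linear algebra is cheap.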

Taking into consideration that log(x + y) ≤ log x + log y for x, y ≥ 2, we can simplify this estimation (35), obtaining the following bound for the complexity of Newton's method:

$$O\left(n^2 \log\log\frac{1}{\varepsilon}\right) + O\left(n^2\left(\log r + \log\frac{1}{\tilde d\,\tilde\theta}\right)\right). \qquad (36)$$

Combining the complexity estimates for the three phases and observing that the term O(n²(log r + log(1/(d̃θ̃)))) is much smaller than the bound on the complexity of the gradient procedure, we finally obtain the estimation of the Main Theorem.
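The doubly logarithmic dependence on 1/ε in (34) and (36) can be checked directly: once c‖X⁰ − X₀‖ ≤ 1/2, after k Newton steps the contraction factor is (1/2)^{2^k}. A small sketch (ours, illustrative):

```python
import math

def iterations_needed(delta):
    """Smallest k with (1/2)**(2**k) <= delta, i.e. k = ceil(log2 log2 (1/delta))."""
    k = 0
    while 0.5 ** (2 ** k) > delta:
        k += 1
    return k

# The error is squared at every step, so k grows as log log (1/delta).
assert iterations_needed(1e-6) == math.ceil(math.log2(math.log2(1e6)))
assert iterations_needed(0.5) == 0
```

Five iterations already suffice for δ = 10⁻⁶, which is why the ε-dependent term in (36) is only loglog(1/ε).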

Part 2: A Simple Grid Approximation of Shortest Paths Amidst Separated Obstacles

6 Separated Obstacles

We consider obstacles in R³, though all the notions and constructions can easily be generalized to an arbitrary dimension. The dimension under consideration influences only the constants that appear in the estimations of the complexity of the algorithms. The problem we study here is, as in the previous part, to construct a path that connects two given points s and t, does not intersect given obstacles W, and whose length is close to that of a shortest path.

6.1 Obstacles and Paths

Let W be an arbitrary set in R³, and let s and t be two points in its complement. Denote by cl(S), int(S) and bnd(S) respectively the closure, the interior and the boundary of a set S. Denote by B_v(a), where a ∈ R_{≥0} and v ∈ R³, the ball of radius a centered at v. The boundary bnd(W) = cl(W) \ int(W) may contain `degenerate' pieces, for example, an isolated point or a point with a neighborhood homeomorphic to a segment. Such pieces can hardly be considered as obstacles, so we assume that every point of bnd(W) has a 2-dimensional neighborhood. For technical reasons it is convenient to assume that a neighborhood of each point of bnd(W) intersects int(W). To achieve this we can `slightly inflate' W; it is known how to do this efficiently for semi-algebraic obstacles, see e.g. [GS98]. A path is a continuous piecewise smooth image of a closed segment. A simple path, or a quasi-segment, is a path without self-intersections (which is, clearly, homeomorphic to a segment). The set R³ \ W will be called the free space, and its closure will be called the space admissible for trajectories. We consider only paths lying in the admissible space, i.e. not intersecting the interior of W.

6.2 Separability and Random Separated Balls

We say that obstacles W are a-separated if for any v ∈ R³ the set W ∩ B_v(a) is convex. Note that if each connected component of W is convex and the distance between any two connected components is greater than 2a, then W is a-separated. Our main goal is to describe an algorithm that constructs a length approximation to a shortest path under the condition of separability of obstacles. But first we make a digression, estimating the probability of separability of obstacles constituted by n randomly chosen balls. The centers of the balls are independently chosen in the unit cube [0,1]³, and their radii are independently chosen from an interval [0, r] under the uniform distribution.

Proposition 3 There exist constants c_1, c_2 > 0 such that for any r < c_1n^{−2/3} the union of n randomly chosen balls in [0,1]³ is (c_2n^{−2/3})-separated with probability greater than 2/3.

Proof.

We claim that n randomly chosen points in the unit cube are (c_3n^{−2/3})-separated with probability greater than 2/3 for some constant c_3 > 0. Indeed, a choice of n points in [0,1]³ is equivalent to a choice of a point p = (x_1, y_1, z_1, ..., x_n, y_n, z_n) ∈ [0,1]^{3n}. For any pair of natural numbers i, j such that 1 ≤ i < j ≤ n, the measure of those points p for which |x_i − x_j|, |y_i − y_j|, |z_i − z_j| ≤ c_3n^{−2/3} is not greater than (2c_3n^{−2/3})³. This is true for any constant c_3. Hence the measure of those p for which there exists a pair 1 ≤ i < j ≤ n satisfying this condition is not greater than (n(n−1)/2)·(2c_3n^{−2/3})³ ≤ 1/3 for an appropriate constant c_3. To finish the proof we choose c_1 and c_2 so that 2c_1 + c_2 < c_3. □

7 A Grid Algorithm for a Shortest Path Approximation

In this section we assume that the obstacles W ⊆ [0,1]³ are a-separated and that s, t ∈ [0,1]³.

7.1 Approximation Algorithm and its Complexity

Denote by L ⊆ [0,1]³ a cubic grid with mesh (edge of the basic cube) a. Without loss of generality we assume that 1/a is an integer and that s and t are nodes of the grid. Consider a graph G whose vertices are those nodes of the grid that do not belong to W and whose edges are those edges of the grid that do not intersect W. Assume that the length of every edge in G is a. The length of a path P will be denoted by |P|.

Theorem on Simple Grid Method. There is an algorithm of time complexity O(a^{−6}) that, using the graph G defined above, constructs a path γ_G connecting s and t whose length satisfies the inequality |γ_G| ≤ 392|γ| + 448a, where γ is a shortest path amidst the obstacles W connecting s and t.

Remark 1. If a is not known, but it is possible to construct the part of the grid admissible for paths, an algorithm can consecutively try a = 1/2, 1/4, 1/8, ... until it obtains a path with the bound from the Theorem on Simple Grid Method.

From Proposition 3 one concludes

Corollary 1 When the set of obstacles W is a union of n random balls whose radii are independently chosen from [0, c_1n^{−2/3}], then a ≥ c_2n^{−2/3} with probability greater than 2/3. Hence the algorithm will compute an approximation (as described in the Theorem on Simple Grid Method) to a shortest path with probability greater than 2/3 within time O(n⁴).


We say that a path is a grid polygon if it goes only along the edges of the grid. To prove the Theorem, it suffices to show that the length of a shortest grid polygon γ_G connecting s and t satisfies

$$|\gamma_G| \le 84|\gamma| + 96a. \qquad (37)$$

Then this grid polygon can be constructed in quadratic time as a shortest path γ_G in the graph G by using any polytime algorithm for a shortest path in a graph (e.g. see [AHU76]).
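Since every edge of G has the same length a, the shortest path in G can even be found by breadth-first search. The sketch below is our illustration: the grid size and the single ball obstacle are made up, and, for simplicity, an edge is admitted when both of its endpoints lie outside the obstacle — a coarser test than the edge-intersection test in the definition of G.

```python
from collections import deque

N = 8                                    # grid of (N+1)^3 nodes, mesh a = 1/N
center, radius = (0.5, 0.5, 0.5), 0.3    # a single ball obstacle (illustrative)

def free(node):
    x, y, z = (c / N for c in node)
    return (x - center[0])**2 + (y - center[1])**2 + (z - center[2])**2 > radius**2

def neighbors(node):
    for axis in range(3):
        for d in (-1, 1):
            nxt = list(node); nxt[axis] += d
            if 0 <= nxt[axis] <= N and free(tuple(nxt)):
                yield tuple(nxt)

def bfs(src, dst):
    """Shortest grid path (in number of edges, i.e. in length/a) between admissible nodes."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for w in neighbors(u):
            if w not in prev:
                prev[w] = u; q.append(w)
    path, u = [], dst
    while u is not None:
        path.append(u); u = prev[u]
    return path[::-1]

path = bfs((0, 0, 0), (N, N, N))
assert path[0] == (0, 0, 0) and path[-1] == (N, N, N)
assert len(path) - 1 >= 3 * N            # at least the unobstructed L1 distance
```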

Estimate (37), and hence the Theorem on Simple Grid Method, will follow from Lemmas 5 and 6. We say that a cube of the grid is visited by a path P if the closure of this cube without its vertices intersects P. Notice that P does not determine uniquely the order of the visited cubes, as P may visit two cubes simultaneously by going along their common edge.

Lemma 5 If two nodes v_1 and v_2 of the grid are connected by a path P that visits s distinct cubes of the grid, then one can connect v_1 and v_2 by a grid polygon whose length is not greater than 12sa.

Proof.

For each cube K, we can consider a maximal segment of P contained in K. Thus P is subdivided into segments, each contained in one cube of the mesh and such that its continuation in either direction leaves the cube. Consider such a segment (a subpath) [wu] for a cube K, that is, P enters K via a point w and leaves it via a point u (of course, P can make several visits like that to a cube). Let u lie in a face with vertices u_1, u_2, u_3, u_4. Then one of the four segments uu_i, 1 ≤ i ≤ 4, does not intersect the obstacles W. Indeed, assume the contrary. Because of the a-separability of W, the intersection of W with any cube of the grid and, thus, with any of its faces is a convex set. The fact that a segment lying in the face intersects W means that there are points of the segment in the interior of W. But u ∉ int(W). So if u is different from every u_i, then our assumption implies that u ∈ int(W), for u belongs to the convex hull of the intersection of the face with int(W). If u = u_i for some i, then the degenerate segment uu_i = u has the desired property. Now replace P by a path P′ obtained by a sequence of the following modifications. Take any cube K visited by P, together with a maximal segment [wu] of P in K. Let i, 1 ≤ i ≤ 4, be such that the segment uu_i does not intersect W. Insert the segments uu_i and u_iu into P to force P to visit a vertex of the cube by going there and back. Repeating the same procedure for the entry point w (which is in its turn an exit point for some other cube), we modify our segment so that it begins and ends at a vertex of K. We will say that the visit of P to K terminates at u_i. Perform this operation for all cubes visited by P. This gives P′, a new path which visits the same cubes, and for which every maximal subpath contained in a cube begins at and leaves the cube via a vertex. Remark that though the number of cubes visited by P′ is the same as for P, i.e. s, the length of P′ might have increased.

Delete from P′ all loops connecting the same vertex, thus obtaining a new path P″. Notice that the number of vertices visited by P″ is at most 8s, for any vertex of any of the visited cubes is visited at most once. Moreover, P″ still connects v_1 and v_2. Now consider a maximal subpath of P″ contained in a grid cube K and connecting two vertices of K. Denote by W_1 the (convex) intersection W ∩ K. Consider the following homotopy of this subpath in K: pull each of its points v in the direction away from the point of W_1 nearest to v (the nearest point is unique since W_1 is convex). Let the magnitude of the velocity of v (with respect to the parameter of the homotopy) be equal to the distance from v to the boundary ∂K of K. This way we "push" the maximal subpath of P″ in question away from W_1 until we transform it into a path lying in the boundary ∂K of K, connecting the same vertices of K and not intersecting W. Of course, its length still might have increased. Repeating the same procedure for each maximal subpath contained in one cube, we construct a path going along faces. Having repeated the same operation once more for each maximal subpath contained in one boundary square of a grid cube (now using a homotopy pushing the subpath to the boundary of this square), we end up with a polygonal path P‴ that consists of edges of the mesh, connects the same vertices v_1 and v_2, and is contained in at most s cubes. After removing all loops from this path, we obtain a path P⁗ that traverses each edge at most once. Since s cubes together have at most 12s edges, we have constructed a path with the required properties, whose length is at most 12sa. □

Lemma 6 If a path P connecting nodes v_1 and v_2 of the grid has visited s different cubes of the grid, then |P| ≥ ((s−2)/7)·a.

Proof.

Mark the points where P visits for the first time the 2nd, the 9th, the 16th, ..., the (7l+2)th, ... cube (we count only the visits to new cubes). The pieces of P between two consecutive marked points will be called intervals. Clearly, the number of intervals is at least (s−2)/7. It remains to show that the length of each interval is at least a. Reasoning by contradiction, note that the projection of an interval whose length is less than a onto each coordinate axis is a segment of length less than a. Hence it is contained in the interior of the union of two adjacent segments of the form [ka, (k+1)a]. Now the product of these segments, which is the interior of the union of eight grid cubes with a common vertex, contains the interval. This contradicts the assumption that the interval visited at least 9 cubes. □

Recall that s denotes the number of different cubes of the grid visited by a shortest path γ. Lemma 6 gives |γ| ≥ ((s−2)/7)a = (s/7 − 2/7)a. On the other hand, for the length of a shortest grid polygon γ_G Lemma 5 gives |γ_G| ≤ 12sa. Estimate (37) immediately follows from these two inequalities, and this concludes the proof of the Theorem on Simple Grid Method. □

Remark 2. The estimation given by the Theorem on Simple Grid Method is, obviously, far from the exact one. We conjecture that |γ_G| ≤ 7|γ|.

The latter bound cannot be essentially improved, as one can see from the following example. Denote by v_1, v_2, v_3, v_4 the vertices of the bottom face of the unit cube (ordered counterclockwise), and by u_1, u_2, u_3, u_4 the respective vertices of the top face, see Figure 4. Let W_0 be the polyhedron with the following 5 vertices: 3 vertices at the middles of the edges v_1v_2, v_3v_4, u_3u_4, and 2 vertices on the edges u_1v_1 and u_2v_2 close to the points v_1 and v_2 respectively. To obtain W, apply to W_0 a homothety with coefficient (1 + ε), for a small enough ε > 0, centered at the center of the cube. For this obstacle the only grid polygon connecting v_1 and v_2 visits consecutively v_1, v_4, u_4, u_1, u_2, u_3, v_3, v_2. On the other hand, one can see that v_1 and v_2 can be connected by a path (avoiding W and contained in the cube) of length close to 1.



Figure 4: Some faces of W_0.

References

[AHSV97] P. K. Agarwal, S. Har-Peled, M. Sharir, and K. R. Varadarajan. Approximate shortest paths on a convex polytope in three dimensions. J. Assoc. Comput. Mach., 44:567–584, 1997.

[AHU76] A. Aho, J. Hopcroft, and J. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, 1976.

[ALMS98] L. Aleksandrov, M. Lanthier, A. Maheshwari, and J.-R. Sack. An ε-approximation algorithm for weighted shortest paths on polyhedral surfaces. In Proc. 6th Scand. Workshop Algorithm Theory, vol. 1432 of Lecture Notes Comput. Sci., pp. 11–22. Springer-Verlag, 1998.

[Bal95] W. Ballmann. Lectures on Spaces of Nonpositive Curvature. DMV Seminar, Band 25. Birkhäuser, Basel, Boston, Berlin, 1995.

[BN93] V. Berestovskij and I. Nikolaev. Multidimensional generalized Riemann spaces. In Yu. Reshetnyak, editor, Geometry 4. Non-regular Riemann Geometry, Encyclopaedia of Math. Sciences, vol. 70. Springer-Verlag, 1993.

[BSS89] L. Blum, M. Shub, and S. Smale. On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines. Bull. Amer. Math. Soc., 21(1):1–46, 1989.

[Bur98] D. Burago. Hard balls gas and Alexandrov spaces of curvature bounded above. In Proc. of the ICM, Invited Lectures, vol. 2, pp. 289–298, 1998.

[BFK98] D. Burago, S. Ferleger, and A. Kononenko. Uniform estimates on the number of collisions in semi-dispersing billiards. Annals of Mathematics, 147:695–708, 1998.

[CR87] J. Canny and J. Reif. New lower bound techniques for robot motion planning problems. In Proc. 28th Annu. IEEE Symp. on Foundations of Comput. Sci., pages 49–60, 1987.


[CH90] J. Chen and Y. Han. Shortest paths on a polyhedron. In Proc. 6th Annu. ACM Symp. on Computational Geometry (Berkeley, June 6–8), New York, pp. 360–369, 1990.

[CSY94] J. Choi, J. Sellen, and C.-K. Yap. Approximate Euclidean shortest path in 3-space. In Proc. 10th ACM Symp. on Computational Geometry, pages 41–48, 1994.

[C87] K. L. Clarkson. Approximation algorithms for shortest path motion planning. In Proc. 19th Annu. ACM Symp. Th. Comput., pp. 56–65, 1987.

[GS98] D. Grigoriev and A. Slissenko. Polytime algorithm for the shortest path in a homotopy class amidst semi-algebraic obstacles in the plane. In Proc. of the 1998 Int. Symp. on Symbolic and Algebraic Computations (ISSAC'98), pages 17–24. ACM Press, 1998.

[GS99] D. Grigoriev and A. Slissenko. Computing minimum-link path in a homotopy class amidst semi-algebraic obstacles in the plane. St. Petersburg Math. J., 10(2):315–332, 1999.

[H99] S. Har-Peled. Constructing approximate shortest path maps in three dimensions. SIAM J. Comput., 28(4):1182–1197, 1999.

[HKSS93] J. Heintz, T. Krick, A. Slissenko, and P. Solernó. Une borne inférieure pour la construction de chemins polygonaux dans R^n. In Publications du Département de Mathématiques de l'Université de Limoges, pages 94–100. Université de Limoges, France, 1993.

[HKSS94] J. Heintz, T. Krick, A. Slissenko, and P. Solernó. Search for shortest path around semialgebraic obstacles in the plane. J. Math. Sciences, 70(4):1944–1949, 1994. Translation into English of the paper published in Zapiski Nauchn. Semin. LOMI, vol. 192 (1991), pp. 163–173.

[MMP87] J. S. Mitchell, D. M. Mount, and C. H. Papadimitriou. The discrete geodesic problem. SIAM J. Comput., 16:647–668, 1987.

[Pap85] C. H. Papadimitriou. An algorithm for shortest-path motion in three dimensions. Inform. Process. Lett., 20:259–263, 1985.

[SS90] J. T. Schwartz and M. Sharir. Algorithmic motion planning in robotics. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, Vol. A, pages 391–430. Elsevier Science Publishers, 1990.

[SS86] M. Sharir and A. Schorr. On shortest paths in polyhedral spaces. SIAM J. Comput., 15:193–215, 1986.
