Continuation method in robotics

Krzysztof Tchoń

Institute of Computer Engineering, Control and Robotics, Wrocław University of Technology, 50–372 Wrocław, Poland, email: [email protected]

Abstract. The continuation method, also called the method of homotopy, of embedding, or of deformation, belongs to the well established methods of solving mathematical problems, whose history dates back to H. Poincaré. In a succinct manner the essence of this method can be epitomized in the statement that if you cannot solve a problem, you just try to generalize it. Actually, the continuation method consists in embedding a given problem into a parameterized family of problems, solving each problem in the family, and finally recovering the solution of the original problem from the family of solutions. The continuation method provides versatile tools for solving a wide spectrum of problems, of which we shall concentrate on three classes: the equivalence problems, the optimization problems and the inverse problems. Since our interest in the continuation method originates from robotics, special attention will be paid to the inverse kinematic problems for mobile manipulators.

1. Introduction

The continuation method, belonging to the mathematical instrumentarium for more than a century, was re-discovered in the mid 1970s by R. B. Kellogg, T. Y. Li and J. A. Yorke, who were studying fixed points of a differentiable map [1]. Their observation can be summarized in the following way [2]. Let $g : D^n \to D^n$ be a smooth map of the $n$-dimensional disk in $R^n$ into itself, without fixed points on the boundary sphere $S^{n-1}$. We want to examine the fixed points of $g(x)$, i.e. the points $x$ for which $g(x) - x = 0$. To this aim, we denote by $C \subset D^n$ the set of fixed points, and define a map $h : D^n \setminus C \to S^{n-1}$ in such a way that the image $h(x)$ lies at the intersection with $S^{n-1}$ of the ray emanating from $g(x)$ and passing through $x$, as shown in Figure 1.

Fig. 1. The fixed point problem

By definition, for $x_0 \in S^{n-1}$ we have $h(x_0) = x_0$, so $h(x)$ is a retraction. The retraction map is smooth and, by Sard's theorem [3], almost all points $x_0 \in S^{n-1}$ are regular values of $h(x)$, which means that at any $y \in h^{-1}(x_0)$ the Jacobian matrix $\frac{\partial h}{\partial x}(y)$ has full rank $n-1$. This means that for a regular $x_0$ the inverse image $h^{-1}(x_0)$ defines a curve $x(\lambda) \in D^n \setminus C$, $\lambda \in [0, 1)$, emanating from $x_0$, that satisfies the implicit equation

$$x - x_0 = \lambda(g(x) - x_0).$$

By following this curve toward the value $\lambda = 1$ we approach a fixed point of the map $g(x)$. In this way the fixed point can be found efficiently. The map

$$G(x, \lambda) = \lambda g(x) - x + (1 - \lambda)x_0 = \lambda(g(x) - x) - (1 - \lambda)(x - x_0) = 0 \quad (1)$$

defines an embedding of the original fixed point problem $G(x, 1) = g(x) - x = 0$ into a family of problems $G(x, \lambda) = 0$, beginning with the trivial problem $G(x, 0) = x_0 - x = 0$, where $x_0$ comes from a dense subset of regular values of $h(x)$. Equivalently, $G(x, \lambda)$ establishes a homotopy between the maps $G(x, 0)$ and $G(x, 1)$. As $\lambda$ approaches 1, the curve $x(\lambda)$ obtained by solving the implicit equation $G(x, \lambda) = 0$ provides a solution to the fixed point problem.
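Numerically, the curve $x(\lambda)$ defined by (1) can be followed by sweeping $\lambda$ and warm-starting a root solve at each step. A minimal sketch (the self-map g below is a made-up one-dimensional example, not taken from the text):

```python
import numpy as np
from scipy.optimize import fsolve

def g(x):
    # An example smooth self-map of the disk with an interior fixed point.
    return 0.5 * np.cos(x) + 0.1 * x

x0 = np.array([0.9])                  # plays the role of x0 in (1)
x = x0.copy()
for lam in np.linspace(0.0, 1.0, 51):
    # Solve G(x, lam) = lam*g(x) - x + (1 - lam)*x0 = 0,
    # warm-started at the solution found for the previous lam.
    x = fsolve(lambda z: lam * g(z) - z + (1.0 - lam) * x0, x)

print(x, g(x) - x)                    # x approximates a fixed point of g
```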

To gain further insight into the continuation method, let us look at the problem of matrix inversion. Given a square $n \times n$ positive definite matrix $Q$, we want to find a matrix $M$ such that $QM - I_n = 0$, where $I_n$ denotes the $n \times n$ unit matrix. We shall embed the problem into a family

$$H(M, \lambda) = \lambda(QM - I_n) + (1 - \lambda)(M - I_n) = \lambda(Q - I_n)M + M - I_n = 0, \quad (2)$$

$\lambda \in [0, 1]$, such that $H(M, 0) = M - I_n = 0$ has the trivial solution $M = I_n$, and $H(M, 1) = QM - I_n = 0$ is the original matrix inversion problem. For a positive definite $Q$ the derivative $D_M H(M, \lambda)$ is surjective, therefore equation (2) defines a curve $M = M(\lambda)$ emanating from $I_n$. By differentiating (2) with respect to $\lambda$ we arrive at the matrix differential equation

$$(\lambda(Q - I_n) + I_n)\,\frac{dM(\lambda)}{d\lambda} = -(Q - I_n)M(\lambda).$$

Again by (2), $(\lambda(Q - I_n) + I_n)^{-1} = M(\lambda)$, so finally we get the following matrix inversion algorithm: $Q^{-1} = M(1)$, where $M(\lambda)$ solves

$$\frac{dM(\lambda)}{d\lambda} = -M(\lambda)(Q - I_n)M(\lambda) \quad (3)$$

with the initial condition $M(0) = I_n$. The differential equation (3) is usually solved numerically, except for simple cases like the following example. Let

$$Q = \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix}.$$

Then the inversion algorithm

$$\frac{dM(\lambda)}{d\lambda} = -M(\lambda)\begin{pmatrix} 0 & a \\ 0 & 0 \end{pmatrix}M(\lambda)$$

defines a trajectory $M(\lambda) = \begin{pmatrix} 1 & -a\lambda \\ 0 & 1 \end{pmatrix}$ passing through $M(0) = I_2$. The resulting inverse matrix is

$$Q^{-1} = M(1) = \begin{pmatrix} 1 & -a \\ 0 & 1 \end{pmatrix}.$$
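The continuous continuation (3) is straightforward to integrate numerically. A minimal sketch with a fixed-step Runge-Kutta integrator (the step count and the test matrix are arbitrary choices), checked against a direct inverse:

```python
import numpy as np

def invert_by_continuation(Q, steps=200):
    """Integrate dM/dlam = -M (Q - I) M from M(0) = I up to lam = 1 (RK4)."""
    n = Q.shape[0]
    A = Q - np.eye(n)
    f = lambda M: -M @ A @ M
    M, h = np.eye(n), 1.0 / steps
    for _ in range(steps):
        k1 = f(M)
        k2 = f(M + 0.5 * h * k1)
        k3 = f(M + 0.5 * h * k2)
        k4 = f(M + h * k3)
        M = M + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return M

a = 3.0
Q = np.array([[1.0, a], [0.0, 1.0]])
M1 = invert_by_continuation(Q)
print(M1)                                          # approx [[1, -a], [0, 1]]
print(np.allclose(M1, np.linalg.inv(Q), atol=1e-6))
```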

In some cases one prefers an asymptotic inversion algorithm that provides a solution when the parameter tends to $+\infty$. To derive such an algorithm we shall replace (2) with the following homotopy

$$K(M, \theta) = (1 - e^{-\gamma\theta})(QM - I_n) + e^{-\gamma\theta}(M - I_n) = QM - I_n + e^{-\gamma\theta}(M - QM) = 0, \quad (4)$$

where $\gamma > 0$ is a convergence parameter and $\theta \in [0, +\infty)$. The differentiation of (4) with respect to $\theta$, and a suitable substitution, result in an asymptotic matrix inversion algorithm

$$\frac{dM(\theta)}{d\theta} = -\gamma M(\theta)(QM(\theta) - I_n). \quad (5)$$

The inverse $Q^{-1}$ is obtained as the limit $\lim_{\theta\to+\infty} M(\theta)$. It is instructive to notice that the algorithm (5) coincides with the dynamic matrix inversion algorithm developed in [4].

The examples presented above illustrate the general principle of the continuation method. Given a problem $P$, we try to embed the problem into a family $P_\lambda$ of problems, parameterized by $\lambda \in [0, 1]$, such that $P_1 = P$ and $P_0$ is a problem whose solution is easy. If the embedding is sufficiently regular, then the solutions of the problems $P_\lambda$ lie on a curve $x(\lambda)$ and the solution to $P$ is given by $x(1)$. The solution curve $x(\lambda)$ can be followed either in a discrete or in a continuous way [5], giving rise either to the discrete or to the continuous continuation. In the discrete case the interval $[0, 1]$ is divided into a finite number of subintervals determined by $0 = \lambda_0 < \lambda_1 < \dots < \lambda_k = 1$, and a chain of problems $P_{\lambda_i}$ is solved using the predictor-corrector scheme [6]. The continuous case consists in computing the curve $x(\lambda)$ as a solution of a differential equation referred to as the Ważewski-Davidenko equation [7, 8]. Actually, this last approach has been adopted in the algorithms (3) and (5).
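In the discrete variant, one step of the predictor-corrector scheme consists of an Euler prediction along the curve tangent followed by Newton corrections at the new value of $\lambda$. A generic sketch for a family $F(x, \lambda) = 0$ (the finite-difference Jacobians and the test problem are illustrative choices of mine):

```python
import numpy as np

def num_jac(F, x, lam, eps=1e-7):
    """Finite-difference Jacobian dF/dx at (x, lam)."""
    n = x.size
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (F(x + dx, lam) - F(x, lam)) / eps
    return J

def continuation(F, x0, steps=50, newton_iters=5):
    """Follow the solution curve of F(x, lam) = 0 from lam = 0 to lam = 1."""
    x, dlam = x0.copy(), 1.0 / steps
    for i in range(steps):
        lam, lam_next = i * dlam, (i + 1) * dlam
        dF_dlam = (F(x, lam + 1e-7) - F(x, lam)) / 1e-7
        x = x - dlam * np.linalg.solve(num_jac(F, x, lam), dF_dlam)    # predictor
        for _ in range(newton_iters):                                  # corrector
            x = x - np.linalg.solve(num_jac(F, x, lam_next), F(x, lam_next))
    return x

# Made-up example: embed f(x) = 0 into F(x, lam) = lam*f(x) + (1-lam)*(x-x0).
x0 = np.array([2.0, 2.0])
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
F = lambda x, lam: lam * f(x) + (1.0 - lam) * (x - x0)
print(continuation(F, x0))            # approx [sqrt(2), sqrt(2)]
```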

From a diversity of problems that can be addressed by the continuation method we shall focus on three classes: the equivalence problems, the optimization problems and the inverse problems. Particular significance within the last class will be attributed to the inverse kinematic problem for mobile manipulators. As a consequence, this paper is composed in the following way. Section 2 is concerned with equivalence problems. In section 3 we show an application of the continuation method to optimization problems. Section 4 concentrates on the inverse problems. The paper is concluded with Section 5.

2. Equivalence problems

Let $X$ denote a space supporting a differentiable structure. Suppose that $\tau_1(x)$, $\tau_0(x)$ are two geometric objects defined on $X$, like functions, vector fields, differential forms, tensors, etc. We call $\tau_1$, $\tau_0$ equivalent if there exists a local diffeomorphism $\varphi(x)$ such that $\varphi * \tau_1(x) = \tau_0(x)$, where $*$ denotes an action of $\varphi$ on $\tau_1$. The equivalence problem consists in proving the existence of $\varphi$ and in computing its form. A remarkable part of the theory of dynamical systems, singularity theory and differential geometry focuses on determining the equivalence between a given object and a so-called normal form. Let us mention in this context the straightening out theorem for vector fields, the Morse lemma for smooth functions, the Darboux theorem on symplectic forms, or the Frobenius theorem on involutive distributions. All these issues, but also others, like the Poincaré lemma on differential forms, can be addressed within the continuation method.

Consider an equivalence problem of $\tau_1(x)$ to a normal form $\tau_0(x)$, such that the action $\varphi * \tau_1 = \tau_1 \circ \varphi$ is just the composition of maps. We shall embed the problem into a 1-parameter family of equivalence problems consisting in establishing the equivalence of $\tau_\lambda(x)$ to the normal form using a family of local diffeomorphisms $\varphi_\lambda(x)$ such that for $\lambda \in [0, 1]$,

$$\tau_\lambda \circ \varphi_\lambda(x) = \tau_0(x), \quad (6)$$

where $\varphi_0(x) = x$ and $\varphi_\lambda(0) = 0$. After a differentiation of (6) with respect to $\lambda$ we obtain the basic equivalence equation

$$\frac{\partial \tau_\lambda}{\partial x}(\varphi_\lambda(x))\,X_\lambda(\varphi_\lambda(x)) + \frac{\partial \tau_\lambda}{\partial \lambda}(\varphi_\lambda(x)) = 0, \quad (7)$$

where $X_\lambda(x)$ denotes a parameter-dependent vector field such that

$$\frac{d\varphi_\lambda(x)}{d\lambda} = X_\lambda(\varphi_\lambda(x)).$$

If the flow of this field is well defined for every $\lambda \in [0, 1]$, then $\varphi_1(x)$ constitutes the solution of the equivalence problem. Now, the core of the method lies in solving (7) for $X_\lambda(x)$. Since the first term on the left hand side of (7) is equal to the Lie derivative $L_{X_\lambda}\tau_\lambda$, in this context the continuation method is often referred to as the Lie transform method [3].

As an illustration of the Lie transform method we shall demonstrate a proof of the Morse lemma. Our presentation will be based on [3]. The Morse lemma can be stated in the following way.

Lemma 1. Let $h : R^n \to R$ be a smooth function such that $h(0) = 0$ and $\frac{\partial h}{\partial x}(0) = 0$. Suppose that the Hessian $H = \frac{\partial^2 h}{\partial x^2}(0)$ of $h(x)$ at 0 has rank $n$. Then, there exists a local diffeomorphism $\varphi : R^n \to R^n$, $\varphi(0) = 0$, such that for $x$ in an open neighbourhood of 0

$$h \circ \varphi(x) = \tfrac{1}{2}x^T H x. \quad (8)$$

Proof: In order to establish the equivalence between $\tau_1(x) = h(x)$ and $\tau_0(x) = f(x) = \tfrac{1}{2}x^T H x$, we introduce a homotopy $\tau_\lambda(y) = \lambda h(y) + (1 - \lambda)f(y) = f(y) + \lambda(h(y) - f(y))$, $\lambda \in [0, 1]$, and postulate that there exists a family of diffeomorphisms $\varphi_\lambda(x)$, $\varphi_0(x) = x$, such that

$$\tau_\lambda \circ \varphi_\lambda(x) = \tau_0(x). \quad (9)$$

Clearly, $\varphi_1(x)$ will be the desired diffeomorphism. With the notation $p(y) = \frac{\partial \tau_\lambda}{\partial \lambda}(y) = h(y) - f(y)$, the basic equivalence equation (7) assumes the form

$$\frac{\partial \tau_\lambda}{\partial y}(y)\,X_\lambda(y) + p(y) = 0, \quad (10)$$

where $X_\lambda(x)$ denotes a parameter-dependent vector field

$$\frac{d\varphi_\lambda(x)}{d\lambda} = X_\lambda(\varphi_\lambda(x))$$

whose flow is $\varphi_\lambda(x)$. It is easily checked that $p(0) = 0$, $\frac{\partial p}{\partial y}(0) = 0$ and $\frac{\partial^2 p}{\partial y^2}(0) = 0$. Now, our objective will be to determine the vector field $X_\lambda$. To this aim, by the Taylor formula we obtain

$$p(y) = p(0) + \int_0^1 dp(sy) = \int_0^1 \frac{\partial p}{\partial y}(sy)\,ds\; y.$$

Analogously, we get

$$\frac{\partial p}{\partial y}(z) = \frac{\partial p}{\partial y}(0) + \int_0^1 d\,\frac{\partial p}{\partial y}(tz) = \int_0^1 \frac{\partial^2 p}{\partial y^2}(tz)\,dt\; z.$$

Next, suitable substitutions into (10) yield

$$\left(\frac{\partial f}{\partial y}(y) + \lambda\frac{\partial p}{\partial y}(y)\right)X_\lambda(y) + p(y) = \left(Hy + \lambda\int_0^1 \frac{\partial^2 p}{\partial y^2}(ty)\,dt\; y\right)X_\lambda(y) + \int_0^1 \frac{\partial p}{\partial y}(ty)\,dt\; y = 0.$$

From this equation we derive the following form of the vector field:

$$X_\lambda(y) = -\left(H + \lambda\int_0^1 \frac{\partial^2 p}{\partial y^2}(ty)\,dt\right)^{-1}\int_0^1 \frac{\partial p}{\partial y}(ty)\,dt\; y.$$

Since $\frac{\partial^2 p}{\partial y^2}(0) = 0$, then, for sufficiently small $y$, the matrix in the brackets is nonsingular and the vector field $X_\lambda(y)$ is well defined for any $\lambda \in [0, 1]$. Furthermore, for every $\lambda$ the vector field satisfies $X_\lambda(0) = 0$, i.e. 0 is an equilibrium point. The flow of this vector field defines the family of diffeomorphisms $\varphi_\lambda(x)$, of which $\varphi_1(x)$ is the solution of the equivalence problem. The existence of the flow for $\lambda \in [0, 1]$ results from the lower semicontinuity of the positive life time of the flow [3]. Also, it follows that $\varphi_\lambda(0) = 0$, as desired. □

The presented proof generalizes to a function $h$ defined on a Banach space. In finite dimensions the normal form in (8) can be refined further; in fact, by a linear coordinate change the quadratic form $\tfrac{1}{2}x^T H x$ transforms to

$$-x_1^2 - x_2^2 - \dots - x_k^2 + x_{k+1}^2 + \dots + x_n^2,$$

where the integer $k$ is called the index of the matrix $H$.

3. Optimization problems

We shall study the following constrained optimization problem: find a minimum of the objective function $f : R^n \to R$ subject to $m < n$ equality constraints $g(x) = 0$, where $g : R^n \to R^m$, and all the data are smooth [5]. Introduce two homotopies $F(x, \lambda)$ and $G(x, \lambda)$, $\lambda \in [0, 1]$, such that $F(x, 1) = f(x)$ and $G(x, 1) = g(x)$. Let $x_0$ solve the problem defined by $F(x, 0)$ and $G(x, 0)$. By the differentiation of $G(x, \lambda)$ we get

$$\frac{\partial G}{\partial x}(x, \lambda)\,\frac{dx(\lambda)}{d\lambda} + \frac{\partial G}{\partial \lambda}(x, \lambda) = 0.$$

We shall choose the discrete way of following the curve $x(\lambda)$, so let $0 = \lambda_0 < \lambda_1 < \dots < \lambda_k = 1$ denote a chain of points from the interval $[0, 1]$, defining a uniform discretization of this interval with the step $\Delta\lambda$. Suppose that $x_i$ is a solution of the optimization problem associated with $\lambda_i$. If the discretization step $\Delta\lambda$ is sufficiently small, we may assume that the problem at $\lambda_{i+1} = \lambda_i + \Delta\lambda$ amounts to the minimization with respect to $v$ of the function

$$f_{i+1}(v) = F(x_i + v\Delta\lambda, \lambda_{i+1})$$

satisfying the linear constraint

$$g_{i+1}(v) = \frac{\partial G}{\partial x}(x_i, \lambda_i)v + \frac{\partial G}{\partial \lambda}(x_i, \lambda_i) = 0,$$

resulting from retaining the linear terms of the Taylor expansion of $G(x_i + v\Delta\lambda, \lambda_i + \Delta\lambda)$. Having computed $v$, the solution of the problem at $\lambda_{i+1}$ is found as $x_{i+1} = x_i + v\Delta\lambda$. This type of approach has been promoted, e.g., in [9].
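A single step of this discrete scheme is easy to express with an off-the-shelf solver. The sketch below uses SLSQP from scipy to minimize $f_{i+1}(v)$ under the linearized constraint; the finite-difference derivatives of $G$ are an implementation choice, not prescribed by the text:

```python
import numpy as np
from scipy.optimize import minimize

def continuation_step(F, G, x_i, lam_i, dlam, eps=1e-6):
    """One discrete continuation step: returns x_{i+1} = x_i + v*dlam."""
    lam_next = lam_i + dlam
    n = x_i.size
    m = np.atleast_1d(G(x_i, lam_i)).size
    # Finite-difference dG/dx and dG/dlam at (x_i, lam_i).
    Gx = np.zeros((m, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        Gx[:, j] = (G(x_i + dx, lam_i) - G(x_i, lam_i)) / eps
    Gl = (G(x_i, lam_i + eps) - G(x_i, lam_i)) / eps
    # Minimize f_{i+1}(v) = F(x_i + v*dlam, lam_next) s.t. Gx v + Gl = 0.
    res = minimize(lambda v: F(x_i + v * dlam, lam_next), np.zeros(n),
                   constraints=[{'type': 'eq', 'fun': lambda v: Gx @ v + Gl}],
                   method='SLSQP')
    return x_i + res.x * dlam
```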

An alternative approach consists in converting the optimization problem into a system of nonlinear equations [10]. Let us begin with an unconstrained problem of minimizing an objective function $f(x)$. A suitable homotopy can be chosen as

$$F(x, \lambda) = \lambda\frac{\partial f}{\partial x}(x) + (1 - \lambda)(x - x_0),$$

where $x_0$ is regarded as an initial point. The curve $x(\lambda)$ satisfying the equation $F(x, \lambda) = 0$, that connects the point $(0, x_0)$ with a minimum $(1, x(1))$ of $f(x)$, can be computed as a solution of the Ważewski-Davidenko equation

$$\frac{dx(\lambda)}{d\lambda} = \left(\lambda\frac{\partial^2 f}{\partial x^2}(x(\lambda)) + (1 - \lambda)I_n\right)^{-1}\left(x(\lambda) - x_0 - \frac{\partial f}{\partial x}(x(\lambda))\right),$$

which is well defined provided that the Hessian matrix of $f(x)$ is nonsingular.
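This equation is again routine to integrate. A minimal Euler-stepping sketch, with a hand-coded gradient and Hessian for an assumed quadratic test function:

```python
import numpy as np

# Made-up objective: f(x) = (x1 - 1)^2 + 2*(x2 + 0.5)^2, minimum at (1, -0.5).
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])
hess = lambda x: np.diag([2.0, 4.0])

x0 = np.zeros(2)
x, N = x0.copy(), 1000
for i in range(N):
    lam = i / N
    A = lam * hess(x) + (1.0 - lam) * np.eye(2)
    x = x + (1.0 / N) * np.linalg.solve(A, x - x0 - grad(x))

print(x)   # approx [1.0, -0.5]
```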

As the next example we shall examine a problem with inequality constraints, consisting in the minimization of $f(x)$ subject to $g(x) \le 0$. The corresponding Karush-Kuhn-Tucker optimality conditions are the following [11]:

$$\frac{\partial f}{\partial x}(x) + \left(\frac{\partial g}{\partial x}(x)\right)^T u = 0,$$
$$g(x) \le 0, \quad u \ge 0, \quad u^T g(x) = 0, \quad (11)$$

with $u \in R^m$ denoting the vector of Karush-Kuhn-Tucker multipliers. Before a final embedding of the optimality conditions into a system of equations we need to define a pair of homotopies $F(x, \lambda)$ and $G(x, \lambda)$ such that $F(x, 1) = f(x)$, $G(x, 1) = g(x)$, and $x_0$ is a solution of the problem defined by $F(x, 0)$ and $G(x, 0)$. The choice of these homotopies belongs to the art of the continuation method and has a vital significance for the efficiency of solution of the optimization problem. Next, we replace the conditions appearing in the second line of (11) by a set of equations, as proposed in [12]. Eventually, putting together the homotopies and the optimality conditions, we obtain a combined homotopy allowing us to solve the optimization problem by embedding it into a family of nonlinear equations [10]:

$$H(x, u, \lambda) = \begin{pmatrix} \lambda\left(\dfrac{\partial F}{\partial x}(x, \lambda) + \left(\dfrac{\partial G}{\partial x}(x, \lambda)\right)^T u\right) + (1 - \lambda)(x - x_0) \\ K(x, u, \lambda) \end{pmatrix} = 0,$$

where, for $i = 1, \dots, m$,

$$K_i(x, u, \lambda) = -\left|(1 - \lambda)b_{0i} - G_i(x, \lambda) - u_i\right|^3 + \left((1 - \lambda)b_{0i} - G_i(x, \lambda)\right)^3 + u_i^3 - (1 - \lambda)c_{0i},$$

and the vectors $b_0, c_0 \in R^m$ should fulfil the conditions $b_0, c_0 > 0$ and $G(x_0, 0) - b_0 < 0$.
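The smoothing functions $K_i$ are directly implementable. A small sketch (array shapes and the handling of $b_0$, $c_0$ as vectors are my own choices):

```python
import numpy as np

def K(G_val, u, lam, b0, c0):
    """Componentwise K_i of the combined homotopy; G_val = G(x, lam) in R^m."""
    s = (1.0 - lam) * b0 - G_val
    return -np.abs(s - u)**3 + s**3 + u**3 - (1.0 - lam) * c0
```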

Convergence conditions for optimization algorithms based on the continuation method have been provided in [10, 13]. The application of the continuation method to multi-objective optimization problems is reported in [14, 15].

4. Inverse problems

Informally speaking, an inverse problem refers to a situation when, given an answer, we want to find a question. More formally, given a system of equations $y = f(x)$ describing a phenomenon, we know $y$ and need to discover an $x$ that satisfies these equations. The fixed point problem, the matrix inversion problem and the optimization problem converted to a system of equations that we have studied in the previous sections may serve as examples of inverse problems. Indeed, solving inverse problems belongs to the most classical applications of the continuation method [5, 16, 17]. At present, numerous software packages are available, dedicated to the continuation method of solving systems of polynomial equations [18, 19] and general systems of nonlinear equations [20].

A natural application area of the continuation method includes the inverse kinematic problems in robotics, sometimes referred to as the motion planning problems. The introduction of the continuation method into robotics should be credited to H. J. Sussmann [21], who proposed to use this method in order to solve the motion planning problem for nonholonomic mobile robots. A further development of this method has resulted both in new theory [22–24] as well as in new implementations [25]. Inspired by these results we have set forth a uniform theory of stationary and mobile manipulators known as the endogenous configuration space approach [26–30]. In what follows we shall expound a continuation method genesis of inverse kinematics algorithms for mobile manipulators. Having described a general principle supporting the derivation of these algorithms, we shall concentrate on the most often used inverse kinematics algorithm, based on the Jacobian pseudoinverse of a mobile manipulator. By the very nature of the endogenous

configuration space approach, the results referring to mobile manipulators specialize directly to stationary manipulators and mobile platforms.

4.1. Mobile manipulator

By a mobile manipulator we mean a robotic system composed of a nonholonomic mobile platform and an onboard manipulator. An example of such a robot, consisting of a 4-wheel mobile platform that carries a 3 degree of freedom manipulator, is shown in Figure 2.

Fig. 2. Mobile manipulator

Taking into account the nonholonomic constraints imposed on the motion of the platform (e.g. not allowing for the lateral or the longitudinal slip of its wheels), the kinematics of a mobile manipulator can be represented as a driftless control system with outputs

$$\begin{cases} \dot{q} = G(q)u = \sum_{i=1}^m g_i(q)u_i, \\ y = k(q, x). \end{cases} \quad (12)$$

In this system the vectors $q \in R^n$ and $x \in R^p$ refer, respectively, to the platform coordinates and to the joint positions of the onboard manipulator, whereas $y \in R^r$ describes the position and the orientation of the end effector. The control signals $u(t) \in R^m$ have the meaning of the platform speeds. A control function of the mobile manipulator includes the platform control $u(\cdot)$ and the joint position $x \in R^p$ of the onboard manipulator. The set of these control functions, equipped with the inner product

$$\langle (u_1(\cdot), x_1), (u_2(\cdot), x_2)\rangle_{RW} = \int_0^T u_1^T(t)R(t)u_2(t)\,dt + x_1^T W x_2,$$

constitutes the endogenous configuration space $X = L_m^2[0, T] \times R^p$ of the mobile manipulator. The inner product matrices $R(t)$ and $W$ are chosen symmetric and positive definite. The output function $k(q, x)$ computes the position and orientation of the end effector.

Denote by $q(t) = \varphi_{q_0,t}(u(\cdot))$ the platform trajectory initialized at $q_0$ and driven by $u(\cdot)$. For a given $q_0$ and the control time horizon $T$, the end point map

$$K_{q_0,T}(u(\cdot), x) = y(T) = k(\varphi_{q_0,T}(u(\cdot)), x) \quad (13)$$

of the system (12) defines the kinematics of the mobile manipulator. The derivative of the kinematics,

$$J_{q_0,T}(u(\cdot), x)(v(\cdot), w) = \frac{d}{d\alpha}\Big|_{\alpha=0} K_{q_0,T}(u(\cdot) + \alpha v(\cdot), x + \alpha w) = C(T, x)\int_0^T \Phi(T, s)B(s)v(s)\,ds + D(T, x)w,$$

introduces the Jacobian operator of the mobile manipulator. Its computation exploits the linear approximation

$$\begin{cases} \dot{\xi} = A(t)\xi + B(t)v, \\ \eta = C(t, x)\xi + D(t, x)w \end{cases} \quad (14)$$

of the system (12) along the control-trajectory triple $(u(t), x, q(t))$. The matrices defining the approximation are

$$A(t) = \frac{\partial(G(q(t))u(t))}{\partial q}, \quad B(t) = G(q(t)), \quad C(t, x) = \frac{\partial k(q(t), x)}{\partial q}, \quad D(t, x) = \frac{\partial k(q(t), x)}{\partial x},$$

and the transition matrix $\Phi(t, s)$ satisfies the evolution equation $\frac{\partial \Phi}{\partial t}(t, s) = A(t)\Phi(t, s)$ with the initial condition $\Phi(s, s) = I_n$.

The Jacobian operator allows us to distinguish between regular and singular endogenous configurations. An endogenous configuration $(u(\cdot), x) \in X$ is regular if the Jacobian is surjective. A necessary and sufficient condition of regularity requires that the Gram matrix

$$D_{q_0,T}(u(\cdot), x) = D(T, x)W^{-1}D^T(T, x) + C(T, x)\int_0^T \Phi(T, s)B(s)R^{-1}(s)B^T(s)\Phi^T(T, s)\,ds\; C^T(T, x) \quad (15)$$

associated with the system (14) has rank $r$.
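To make these constructions concrete, the end point map (13) and a finite-dimensional stand-in for the Jacobian operator can be computed numerically. The sketch below assumes, purely for illustration, a unicycle-type platform with a single revolute joint carrying the end effector, piecewise-constant platform controls, Euler integration, and identity inner-product matrices $R$ and $W$; none of these choices come from the text:

```python
import numpy as np

L, T, N = 0.5, 1.0, 20            # arm length, horizon, control segments

def endpoint_map(c, x, q0=np.zeros(3)):
    """K_{q0,T}: integrate (12) for piecewise-constant controls c (2N values),
    then apply the output function k(q, x)."""
    q, dt = q0.copy(), T / N
    for u in c.reshape(N, 2):
        # Euler step of the unicycle: qdot = g1(q) u1 + g2(q) u2.
        q = q + dt * np.array([u[0] * np.cos(q[2]), u[0] * np.sin(q[2]), u[1]])
    # Output: planar position of the end effector.
    return q[:2] + L * np.array([np.cos(q[2] + x), np.sin(q[2] + x)])

def jacobian(c, x, eps=1e-6):
    """Finite-difference stand-in for the Jacobian operator, r x (2N+1)."""
    z0 = np.concatenate([c, [x]])
    y0 = endpoint_map(c, x)
    J = np.zeros((y0.size, z0.size))
    for j in range(z0.size):
        dz = np.zeros(z0.size)
        dz[j] = eps
        z = z0 + dz
        J[:, j] = (endpoint_map(z[:-1], z[-1]) - y0) / eps
    return J

c0 = 0.1 * np.ones(2 * N)         # an arbitrary endogenous configuration
J = jacobian(c0, 0.0)
D = J @ J.T                        # Gram matrix for R = id, W = id, cf. (15)
print(np.linalg.matrix_rank(D))   # rank r = 2 at a regular configuration
```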

4.2. Inverse kinematic problem

The inverse kinematic problem for a mobile manipulator is formulated in the following way: given the kinematics (13) and a desirable position and orientation $y_d \in R^r$ of the end effector, find an endogenous configuration $(u(\cdot), x) \in X$ such that $K_{q_0,T}(u(\cdot), x) = y_d$. Suppose that $(u_0(\cdot), x_0)$ denotes an arbitrary endogenous configuration. For $\theta \in [0, +\infty)$ we define the following homotopy (compare (4)):

$$K(u(\cdot), x, \theta) = (1 - e^{-\gamma\theta})(K_{q_0,T}(u(\cdot), x) - y_d) + e^{-\gamma\theta}(K_{q_0,T}(u(\cdot), x) - K_{q_0,T}(u_0(\cdot), x_0)) = K_{q_0,T}(u(\cdot), x) - y_d + e^{-\gamma\theta}(y_d - K_{q_0,T}(u_0(\cdot), x_0)) = 0, \quad (16)$$

where $\gamma > 0$. It is easily seen that this homotopy connects the trivial problem $K_{q_0,T}(u(\cdot), x) - K_{q_0,T}(u_0(\cdot), x_0) = 0$ ($\theta = 0$), whose solution is $(u_0(\cdot), x_0)$, with the original inverse kinematic problem ($\theta$ approaching $+\infty$). By the differentiation of the identity $K(u_\theta(\cdot), x(\theta), \theta) = 0$ we obtain the following Ważewski-Davidenko equation:

$$J_{q_0,T}(u_\theta(\cdot), x(\theta))\,\frac{d}{d\theta}\begin{pmatrix} u_\theta(\cdot) \\ x(\theta) \end{pmatrix} = -\gamma\left(K_{q_0,T}(u_\theta(\cdot), x(\theta)) - y_d\right). \quad (17)$$

Assume that $J^{\#}_{q_0,T}(u(\cdot), x)$ is a right inverse of the Jacobian operator, which means that $J_{q_0,T}(u(\cdot), x)J^{\#}_{q_0,T}(u(\cdot), x) = I_r$. Then, the equation (17) can be transformed to the following:

$$\frac{d}{d\theta}\begin{pmatrix} u_\theta(\cdot) \\ x(\theta) \end{pmatrix} = -\gamma J^{\#}_{q_0,T}(u_\theta(\cdot), x(\theta))\left(K_{q_0,T}(u_\theta(\cdot), x(\theta)) - y_d\right). \quad (18)$$

For every right inverse of the Jacobian operator the dynamic system (18) defines a specific inverse kinematics algorithm for the mobile manipulator.
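Discretized in $\theta$, (18) becomes a pseudoinverse update rule. A sketch reusing endpoint_map, jacobian and N from the listing in section 4.1 (so it should be run in the same session), with the Moore-Penrose pseudoinverse of the finite-dimensional Jacobian standing in for a right inverse of the Jacobian operator; the target and the gains are arbitrary choices:

```python
import numpy as np
# Assumes endpoint_map(), jacobian() and N from the sketch in section 4.1.

def ik_continuation(y_d, c, x, gamma=1.0, dtheta=0.05, steps=400):
    """Euler-discretized version of (18)."""
    for _ in range(steps):
        e = endpoint_map(c, x) - y_d              # K_{q0,T}(u, x) - y_d
        J = jacobian(c, x)
        dz = -gamma * (np.linalg.pinv(J) @ e)     # a right inverse of J
        c, x = c + dtheta * dz[:-1], x + dtheta * dz[-1]
    return c, x

c, x = ik_continuation(np.array([1.0, 0.3]), 0.1 * np.ones(2 * N), 0.0)
print(endpoint_map(c, x))                         # approx y_d = [1.0, 0.3]
```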

4.3. Jacobian pseudoinverse algorithm

In robotic applications the most often exploited inverse kinematics algorithm relies on the pseudoinverse of the Jacobian. A derivation of this pseudoinverse for mobile manipulators is sketched below. Given a vector $\eta \in R^r$, let us consider the Jacobian equation

$$J_{q_0,T}(u(\cdot), x)(v(\cdot), w) = \eta. \quad (19)$$

By definition, at a regular endogenous configuration $(u(\cdot), x)$ this equation is solvable for any $\eta$. We shall assume that a solution of this equation is found by the least squares method, which is tantamount to minimizing the squared norm

$$\min_{(v(\cdot), w)} \|(v(\cdot), w)\|^2_{RW}$$

under the equality condition (19). An application of the method of Lagrange multipliers results in the Jacobian pseudoinverse operator $J^{\#P}_{q_0,T}(u(\cdot), x) : R^r \to X$, defined in the following way:

$$\begin{pmatrix} v(t) \\ w \end{pmatrix} = \left(J^{\#P}_{q_0,T}(u(\cdot), x)\eta\right)(t) = \begin{pmatrix} R^{-1}(t)B^T(t)\Phi^T(T, t)C^T(T, x) \\ W^{-1}D^T(T, x) \end{pmatrix} D^{-1}_{q_0,T}(u(\cdot), x)\,\eta, \quad (20)$$

where $D_{q_0,T}(u(\cdot), x)$ is the Gram matrix (15). It is easily checked that the Jacobian pseudoinverse is a right inverse of the Jacobian. The dynamic system (18) that involves the operator (20) defines the Jacobian pseudoinverse inverse kinematics algorithm. A computable solution of the inverse kinematic problem is obtained by the introduction of a finite-dimensional representation of endogenous configurations, a discretization of (18), and the use of the Newton method [28].

5. Conclusion

In this paper we have made an overview of fundamental ideas underlying the continuation method. Specific examples have been provided of the application of this method to the equivalence, optimization, and inverse problems. As an illustration of the last application area, a general scheme for obtaining inverse kinematics algorithms for mobile manipulators has been presented.

6. Acknowledgments

This research has been supported by a statutory grant.

References

[1] R. B. Kellogg, T. Y. Li and J. A. Yorke, A constructive proof of the Brouwer fixed point theorem and computational results, SIAM J. Numer. Anal., 13, 1976, pp. 473-483.
[2] J. C. Alexander, The topological theory of an embedding method, in: Continuation Methods, ed. by H. Wacker, Academic Press, New York, 1978, pp. 37-68.
[3] R. Abraham, J. E. Marsden and T. Ratiu, Manifolds, Tensor Analysis, and Applications, Springer-Verlag, New York, 1988.
[4] N. H. Getz and J. E. Marsden, Dynamical methods for polar decomposition and inversion of matrices, Linear Algebra Appl., 258, 1997, pp. 311-343.
[5] S. L. Richter and R. A. DeCarlo, Continuation methods: Theory and applications, IEEE Trans. Circuits Syst., vol. CAS-30, 1983, pp. 347-352.
[6] P. Deuflhard, Newton Methods for Nonlinear Problems, Springer-Verlag, Berlin, 2004.
[7] T. Ważewski, Sur l'évaluation du domaine d'existence des fonctions implicites réelles ou complexes, Ann. Soc. Pol. Math., vol. 20, 1947, pp. 81-120.
[8] D. F. Davidenko, On a new method of numerically integrating a system of nonlinear equations, Dokl. Akad. Nauk SSSR, 88, 1953, pp. 601-603.
[9] D. M. Dunlavy and D. P. O'Leary, Homotopy optimization methods for global optimization, Sandia Nat. Lab., Report SAND2005-7495.
[10] L. T. Watson, Theory of globally convergent probability-one homotopies for nonlinear programming, SIAM J. Optim., 11, 2000, pp. 761-780.
[11] E. Polak, Optimization: Algorithms and Consistent Approximations, Springer-Verlag, New York, 1997.
[12] O. L. Mangasarian, Nonlinear Programming, McGraw-Hill, New York, 1969.
[13] L. T. Watson and R. T. Haftka, Modern homotopy methods in optimization, Comput. Methods Appl. Mech. Eng., 74, 1989, pp. 289-304.
[14] C. Hillermeier, Nonlinear Multiobjective Optimization - A Generalized Homotopy Approach, Birkhäuser, Boston, 2001.
[15] O. Schütze, A. Dell'Aere and M. Dellnitz, On continuation methods for the numerical treatment of multi-objective optimization problems, Dagstuhl Seminar Proc. 04461, http://drops.dagstuhl.de/opus/volltexte/2005/349.
[16] W. C. Rheinboldt, Solution fields of nonlinear equations and continuation methods, SIAM J. Numer. Anal., 17, 1980, pp. 221-237.
[17] E. L. Allgower and K. Georg, Numerical Continuation Methods, Springer-Verlag, Berlin, 1990.
[18] PHCpack: a general-purpose solver for polynomial systems by homotopy continuation, available at http://www.math.uic.edu/~jan/PHCpack/phcpack.html
[19] HOM4PS, available at http://www.mth.msu.edu/~li/
[20] HOMPACK90, available at http://people.cs.vt.edu/~ltw/hompack/hompack90.html
[21] H. J. Sussmann, New differential geometric methods in nonholonomic path finding, in: Systems, Models, and Feedback, A. Isidori and T. J. Tarn, Eds., Birkhäuser, Boston, 1992, pp. 365-384.
[22] Y. Chitour and H. J. Sussmann, Motion planning using the continuation method, in: Essays on Mathematical Robotics, J. Baillieul, S. S. Sastry, and H. J. Sussmann, Eds., Springer-Verlag, New York, 1998, pp. 91-125.
[23] A. Chelouah and Y. Chitour, On the motion planning of rolling surfaces, Forum Math., 15, 2003, pp. 727-758.
[24] Y. Chitour, A homotopy continuation method for trajectories generation of nonholonomic systems, ESAIM: Control, Optim. Calc. Var., vol. 12, 2006, pp. 139-168.
[25] A. W. Divelbiss, S. Seereeram and J. T. Wen, Kinematic path planning for robots with holonomic and nonholonomic constraints, in: Essays on Mathematical Robotics, J. Baillieul, S. S. Sastry, and H. J. Sussmann, Eds., Springer-Verlag, New York, 1998, pp. 127-150.
[26] K. Tchoń and R. Muszyński, Instantaneous kinematics and dexterity of mobile manipulators, in: Proc. 2000 IEEE Int. Conf. Robot. Automat., San Francisco, CA, 2000, pp. 2493-2498.
[27] K. Tchoń, Repeatability of inverse kinematics algorithms for mobile manipulators, IEEE Trans. Autom. Control, vol. 47, 2002, pp. 1376-1380.
[28] K. Tchoń and J. Jakubiak, Endogenous configuration space approach to mobile manipulators: a derivation and performance assessment of Jacobian inverse kinematics algorithms, Int. J. Control, vol. 76, 2003, pp. 1387-1419.
[29] K. Tchoń and J. Jakubiak, A repeatable inverse kinematics algorithm with linear invariant subspaces for mobile manipulators, IEEE Trans. Syst., Man, Cybern. - Part B: Cybernetics, vol. 35, 2005, pp. 1051-1057.
[30] K. Tchoń, Repeatable, extended Jacobian inverse kinematics algorithm for mobile manipulators, Syst. Control Lett., vol. 55, 2006, pp. 87-93.
