SMCia/07 2007 Three-Rivers Workshop on Soft Computing in Industrial Applications University of Passau, Germany, August 1 - 3, 2007
Obstacle Avoidance for Kinematically Redundant Manipulators Based on an Improved Problem Formulation and the Simplified Dual Neural Network Shubao Liu, Xiaolin Hu and Jun Wang
Abstract— With the wide deployment of kinematically redundant manipulators in complex working environments, obstacle avoidance emerges as an important issue to be addressed in robot motion planning. In this paper, the inverse kinematic control of redundant manipulators with an obstacle avoidance task is formulated as a (convex) quadratic programming (QP) problem with both equality and inequality constraints. Compared with our previous formulation, the new scheme is more favorable in the sense that it can yield better solutions to the control problem. To solve this time-varying QP problem in real time, a recently developed recurrent neural network, the simplified dual neural network, is adopted, which has lower structural complexity than existing neural networks for solving QP problems. The effectiveness of the proposed approach is demonstrated by simulations on the Mitsubishi PA10-7C manipulator.
The work described in this paper was supported by a grant from the Germany/Hong Kong Joint Research Scheme sponsored by the Research Grants Council of Hong Kong and the German Academic Exchange Service (reference No. G HK010/06). Shubao Liu is with the Division of Engineering, Brown University, Providence, RI 02912. Email: Shubao [email protected]. Xiaolin Hu and Jun Wang are with the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong. Email: {xlhu, jwang}@mae.cuhk.edu.hk.

I. INTRODUCTION

With the development of the automation industry, robot manipulators are required to work in increasingly complex and dynamic environments. An important issue to be considered is how to effectively avoid static or moving objects in the workspace. Kinematically redundant manipulators are those having more degrees of freedom than required to perform given end-effector motion tasks. Being dexterous and flexible, they have been used for avoiding obstacles and singularities, and for optimizing various performance criteria, in addition to tracking desired end-effector trajectories. Among these versatile applications, obstacle avoidance is extremely important for successful motion control when obstacles exist in the workspace.

Many studies have been reported on using kinematically redundant manipulators for motion control and obstacle avoidance. The most popular method for real-time obstacle avoidance is to apply the pseudoinverse formulation to obtain a general solution at the velocity level, which contains a minimum L2-norm solution and a homogeneous solution (e.g., [1]–[5]). The pseudoinverse-based methods actually treat the inverse kinematics of redundant manipulators as an optimization problem. In order to overcome their inherent drawbacks [1], [2], [6], [8], as well as to incorporate practical considerations such as joint limits, and for various specific purposes such as minimizing the infinity norm (instead of the Euclidean norm) of the joint velocity vector or the joint torques, many researchers formulate the inverse kinematic control problem explicitly as optimization problems [2], [9]–[14]. In particular, examples for obstacle avoidance include the efforts in [2], [9], [10], [14]. Following these ideas, in this paper we formulate the inverse kinematics problem with obstacle avoidance as a quadratic programming (QP) problem, which is more effective than its counterparts in the literature.

A main disadvantage of the optimization-based techniques is that they entail intensive computation at each step of the manipulator movement. For solving such time-varying problems, conventional numerical algorithms are usually inadequate, especially in dynamic and/or uncertain environments. Parallel and distributed approaches to real-time obstacle avoidance, such as recurrent neural networks, are therefore deemed necessary as well as desirable. In the past twenty years, recurrent neural networks for optimization have been widely explored since the work of Tank and Hopfield [15]. Reported results of numerous investigations have shown many advantages over traditional optimization algorithms (e.g., see [16], [17] and references therein). In particular, many neural networks have been developed to solve inverse kinematics problems, focusing on achieving pseudoinverse solutions [18], minimum torque [13], [19], optimal grasping force [20], and so on. In [14], a dual neural network is developed for minimizing the L2-norm of the joint velocity vector in the presence of obstacles in the workspace. The same problem is considered in this paper, but with a different formulation and a new neural network possessing lower hardware complexity, which is favorable for circuit realization.

The remainder of this paper is organized as follows. In Section II, the obstacle avoidance problem is formulated as a time-varying QP problem. In Section III, a neural network for solving the QP problem is presented. Section IV reports the simulation results and Section V concludes the paper.

II. OBSTACLE AVOIDANCE SCHEMES

A. Equality-Constrained Formulation

The obstacle avoidance task is to identify from time to time the critical points, defined as the points on the manipulator
links that are closest to the obstacles, and then to assign them desired velocities that direct the critical points away from the obstacles. Each critical point C is defined as the point on a link L which is closest to the obstacle point O. As shown in Fig. 1, corresponding to different relative positions of the obstacle and the link, there are two possible cases for locating the critical point C on the vulnerable link L. Here the obstacle point O is a representative point of the obstacle object. In model-based control, the position of the obstacle O is available a priori, and the critical point is thus derived via online distance minimization between the manipulator link and the obstacle [1], [4], [10], [21]; in sensor-based control, it is determined by synthesizing sensory information with sensor fusion techniques, e.g., utilizing vision, ultrasonic, and infrared sensors [22], [23].

Fig. 1. Critical point location: (a) Case 1; (b) Case 2.

Fig. 2. Escape direction and magnitude.

Taking obstacle avoidance into consideration, the forward kinematics of a serial-link manipulator can be described by the following augmented forward kinematics equation:

$$r_e(t) = f_e(\theta(t)), \qquad r_c(t) = f_c(\theta(t)) \tag{1}$$
where r_e ∈ R^m is the m-dimensional position and/or orientation vector of the end-effector, r_c ∈ R^3 is the position vector of the critical point, θ(t) is the joint vector of the manipulator, and f_e and f_c are nonlinear functions of the manipulator associated with the end-effector and the critical point, respectively. If there exists more than one critical point, multiple equations similar to the second one are present. The manipulator path planning problem (also called the inverse kinematics problem or kinematic control problem) is to find the joint variable θ(t) for any given r_e(t) and r_c(t) through the inverse mapping of (1). Unfortunately, it is usually impossible to find an analytic solution due to the nonlinearity of f_e(·) and f_c(·). The inverse kinematics problem is thus usually solved at the velocity level with the relations

$$J_e(\theta)\dot{\theta} = \dot{r}_e, \qquad J_c(\theta)\dot{\theta} = \dot{r}_c,$$

where J_e(θ) = ∂f_e(θ)/∂θ and J_c(θ) = ∂f_c(θ)/∂θ are the Jacobian matrices, ṙ_e is the desired velocity of the end-effector, and ṙ_c is the desired velocity of the critical point, which should be properly selected to effectively direct the link away from the obstacle. The motion planning and obstacle avoidance problem of a kinematically redundant manipulator can then be formulated in a time-varying local optimization framework as [1], [2], [5]:

$$\begin{array}{ll}
\text{minimize} & c(\dot{\theta}(t)) \\
\text{subject to} & J_e(\theta(t))\dot{\theta}(t) = \dot{r}_e(t) \\
& J_c(\theta(t))\dot{\theta}(t) = \dot{r}_c(t) \\
& l \le \dot{\theta}(t) \le u
\end{array} \tag{2}$$

where l and u are respectively the lower and upper bounds of the joint velocity vector, and c(·) is an appropriate convex cost function, for example ‖θ̇(t)‖₁, ‖θ̇(t)‖₂², or ‖θ̇(t)‖∞, selected according to the task needs. To avoid singular configurations or drift in repeated operations, a linear term in θ̇ can be added. The first constraint corresponds to the primary goal of tracking a given end-effector motion, and the second constraint achieves the secondary goal of obstacle avoidance; it is included whenever the shortest distance between the obstacle and the manipulator is within a prescribed safety margin. When there is more than one obstacle, multiple constraints similar to the second constraint in (2) should be included. By imposing the motion requirement of the critical points as a constraint instead of as a part of the objective function (e.g., [9]), the solution obtained from (2) is guaranteed to drive the manipulator away from obstacles, as long as a feasible solution to the optimization problem (2) exists.
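At a fixed time instant, (2) is a small convex program that any off-the-shelf solver can handle; the following Python sketch illustrates this with the cost c(θ̇) = ½‖θ̇‖₂², assuming the Jacobians, desired velocities, and bounds are already available as NumPy arrays. The dimensions and numerical values here are placeholders for illustration only, and this is not the real-time method advocated later in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder problem data (illustrative only): n = 7 joints, m = 3 task dimensions.
n, m = 7, 3
rng = np.random.default_rng(0)
J_e = rng.standard_normal((m, n))        # end-effector Jacobian J_e(theta)
J_c = rng.standard_normal((1, n))        # one projected critical-point Jacobian row
r_e_dot = np.array([0.05, 0.0, 0.02])    # desired end-effector velocity
r_c_dot = np.array([0.01])               # desired escape velocity of the critical point
l, u = -0.5 * np.ones(n), 0.5 * np.ones(n)   # joint velocity bounds


def cost(v):
    # c(theta_dot) = 0.5 * ||theta_dot||_2^2
    return 0.5 * v @ v


constraints = [
    {"type": "eq", "fun": lambda v: J_e @ v - r_e_dot},   # J_e * theta_dot = r_e_dot
    {"type": "eq", "fun": lambda v: J_c @ v - r_c_dot},   # J_c * theta_dot = r_c_dot
]
res = minimize(cost, np.zeros(n), jac=lambda v: v,
               bounds=list(zip(l, u)), constraints=constraints, method="SLSQP")
theta_dot = res.x
print("joint velocities:", theta_dot)
```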
B. Inequality-Constrained Formulation

The equality-constrained formulation (2) has several drawbacks. First, how should one determine a suitable magnitude of the escape velocity ṙ_c? This value should be selected large enough to ensure that the link moves faster than the obstacle (if the obstacle is moving), so that the minimum distance between them increases over time. At the same time, to be safe and energy efficient, the velocity should not be too large. Second, suppose that there are p critical points. If m + 3p > n, the optimization problem (2) is overdetermined, i.e., it has no solution. In this sense, the equality constraints unnecessarily reduce the feasible solution space, sometimes even making it empty.

One possible approach to overcoming these drawbacks is to replace the equality constraints by inequality constraints. As shown in Fig. 2, we can simply constrain the direction and magnitude of ṙ_c to a range and let the optimization process determine its exact direction and magnitude. The constraints on the direction and magnitude of ṙ_c can be chosen as follows. The component of ṙ_c projected onto OC⃗ (i.e., CE in Fig. 2) should change with the minimum distance |OC⃗| between the obstacle and the link. When the distance is large, |CE| can be small; but when the distance is small, especially less than the safety margin, |CE| should be big enough to direct the link away from the obstacle promptly. Because the moving velocity of the obstacle is not known, |CE| should be large enough when the distance |OC⃗| is very small. A natural candidate function is the hyperbolic function c/|OC⃗|, where c is a positive parameter controlling the velocity magnitude. Another reason for choosing this candidate function is that the resulting inequality can be easily expressed in terms of θ̇, as can be seen clearly in the following derivation. With this candidate function, the obstacle avoidance constraint can be formulated as

$$|\dot{r}_c|\cos\alpha \ge \frac{c}{|\overrightarrow{OC}|}, \quad \text{or} \quad \dot{r}_c \cdot \overrightarrow{OC} \ge c. \tag{3}$$

Because ṙ_c = J_c(θ)θ̇ and

$$\overrightarrow{OC} = [\,x_c - x_o \;\; y_c - y_o \;\; z_c - z_o\,]^T,$$

(3) can be written as

$$[\,x_c - x_o \;\; y_c - y_o \;\; z_c - z_o\,]\, J_c(\theta)\dot{\theta} \ge c. \tag{4}$$

If there are p critical points, define

$$L(\theta) = \begin{bmatrix} [\,x_{c1} - x_{o1} \;\; y_{c1} - y_{o1} \;\; z_{c1} - z_{o1}\,]\, J_{c1}(\theta) \\ \vdots \\ [\,x_{cp} - x_{op} \;\; y_{cp} - y_{op} \;\; z_{cp} - z_{op}\,]\, J_{cp}(\theta) \end{bmatrix},$$

and, with a slight abuse of notation, let c also denote the p-dimensional vector [c, …, c]^T on the right-hand side below. Substituting the equality constraint in (2) by the inequality constraint (4), we obtain the following formulation with the L2 norm as the cost function:

$$\begin{array}{ll}
\text{minimize} & \frac{1}{2}\|\dot{\theta}(t)\|_2^2 \\
\text{subject to} & J_e(\theta(t))\dot{\theta}(t) = \dot{r}_e(t), \\
& L(\theta(t))\dot{\theta}(t) \ge c, \\
& l \le \dot{\theta}(t) \le u.
\end{array} \tag{5}$$

If we fix time t in (5), it becomes a convex quadratic programming problem and can be solved by any optimization technique. But because the parameters in (5), such as J_e and L, are time-varying, such a solving procedure has to be repeated at each time instant. The entire kinematic control process is shown in Fig. 3, similar to most optimization-based kinematic control schemes in [2], [9]–[14]. The desired end-effector velocity ṙ_e and the obstacle information, i.e., the position of the critical point C in Fig. 1, are fed into the optimizer as its inputs. In sensor-based control, the critical point is detected with respect to the current pose of the manipulator. The time-varying parameters of problem (5) are determined by the pose of the manipulator and the position of the obstacle. The optimal joint velocity θ̇ that makes the manipulator avoid obstacles is generated as the output of the optimization scheme. By further integrating the joint velocities from the known initial values, we obtain the joint motions that the manipulator should follow.

Fig. 3. Kinematic control process (optimizer, manipulator, sensors, obstacle detection).

C. Model Comparison

It should be noted that a QP problem similar to (5) is formulated for the inverse kinematics problem with obstacle avoidance in [14]. That problem differs from (5) only in the inequality which represents the obstacle avoidance requirement. That is to say, the inequality L(θ)θ̇ ≥ c in (5) is replaced by

$$J_N(\theta)\dot{\theta} \le 0, \tag{6}$$
OC) ¦ JC (θ). The vector mawhere JN (θ) = −sgn(* trix multiplication operator ¦ is defined as u ¦ V = [u1 V1 , u2 V2 , · · · , up Vp ]T , where the column vector u = [u1 , u2 , · · · , up ]T and the row vector Vi denotes the ith row of matrix V . The geometric interpretation of (6) is that each component of r˙c should take the same sign as corresponding OC. In other words, the permissible directions component of * of r˙c constitute 1/8 of the whole space R3 in that r˙c ∈ R3 . However, from (3), the permissible directions of r˙c constitute one half of the whole space R3 in the proposed scheme of this paper. In this sense, the new scheme enlarges the feasible solution space of the QP problem compared to that in [14]. A simulation study in Section IV will validate the superiority of the new scheme over that in [14]. On the other hand, from (6), each critical point results in three inequalities because JN (θ) ∈ R3×n ; while in the new formulation (5), each critical point introduces only one inequality. It is wellknown in the optimization context that adding constraints to the problem implies adding complexity to the problem, and in the neural network context, implies adding complexity to the neural network architecture. As indicated above, compared to the equality formulations in [1], [2], [5] and the inequality formulation in [14], mathematically, the proposed inequality constraint in (5) has weakened the stringent requirements imposed by them on the feasible region of the QP problem to some extend. Nevertheless, it still does not assure that there always exists a solution to the problem. For example, too many obstacles in the workspace may result in an empty set of the feasible region of problem (5). By considering that such cases are encountered only in bad posed situations, where it is physically impossible to accomplish the motion task, for instance, because of the physical limits of the manipulator and environment, we hereafter assume there exists at least one solution to problem (5). Another weak assumption on the problem is that the rank of Je ∈ Rm×n is m, since otherwise for a well posed
manipulator, several equalities in J_e θ̇ = ṙ_e can be removed by a simple technique.

III. NEURAL NETWORK MODEL

The bottleneck of the method proposed in the last section for kinematic control of manipulators is the intensive computation required for solving the QP problem (5) (see the block "Optimizer" in Fig. 3). If conventional numerical algorithms are adopted, a processing unit with very high performance is required; otherwise the motion of the manipulator will be slow. Here we present a neural network for solving (5) specifically, which can be realized by a hardware system consisting of a dense interconnection of simple analog computational elements. Because of the analog nature of the circuit system, neural networks have great potential for handling online optimization problems [15]. Rewrite (5) as follows:

$$\begin{array}{ll}
\text{minimize} & \frac{1}{2}\|\dot{\theta}\|_2^2 \\
\text{subject to} & J_e(\theta)\dot{\theta} = \dot{r}_e, \\
& \xi^- \le F(\theta)\dot{\theta} \le \xi^+,
\end{array} \tag{7}$$

where

$$F(\theta) = \begin{bmatrix} L(\theta) \\ I \end{bmatrix}, \qquad \xi^- = \begin{bmatrix} c \\ l \end{bmatrix}, \qquad \xi^+ = \begin{bmatrix} \infty \\ u \end{bmatrix}.$$
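For concreteness, the data of problem (7) can be assembled directly from the quantities defining (5); a minimal sketch is given below. The function name, argument layout, and per-point data structures are assumptions introduced here for illustration, not notation from the paper.

```python
import numpy as np

def build_problem7(J_e, J_c_list, crit_pts, obs_pts, c, l, u):
    """Assemble F(theta), xi_minus, xi_plus of problem (7) from the data of (5).

    J_c_list : list of 3 x n critical-point Jacobians J_ci(theta)
    crit_pts : list of 3-vectors, critical point positions [x_ci, y_ci, z_ci]
    obs_pts  : list of 3-vectors, obstacle point positions [x_oi, y_oi, z_oi]
    c        : positive scalar of the escape constraint (4)
    l, u     : joint velocity bounds (n-vectors)
    """
    n = J_e.shape[1]
    # Each row of L(theta) is (OC_i)^T * J_ci(theta), as in (4).
    L_rows = [(np.asarray(pc) - np.asarray(po)) @ Jc
              for Jc, pc, po in zip(J_c_list, crit_pts, obs_pts)]
    L = np.vstack(L_rows) if L_rows else np.zeros((0, n))
    p = L.shape[0]

    F = np.vstack([L, np.eye(n)])                      # F(theta) = [L(theta); I]
    xi_minus = np.concatenate([np.full(p, c), l])      # xi^- = [c; l]
    xi_plus = np.concatenate([np.full(p, np.inf), u])  # xi^+ = [inf; u]
    return F, xi_minus, xi_plus
```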
We adopt the simplified dual neural network developed in [24] to solve the above problem:

• state equation
$$\epsilon \frac{du}{dt} = -FMF^T u + g(FMF^T u - u + Fs) - Fs; \tag{8a}$$

• output equation
$$v = MF^T u + s, \tag{8b}$$

where M = I − J_e^T(J_e J_e^T)^{-1} J_e, s = J_e^T(J_e J_e^T)^{-1} ṙ_e, u ∈ R^{n+p} is the state vector, v = θ̇ is the output vector, ε > 0 is a scaling parameter that controls the convergence rate of the neural network, and g(v) is a piecewise-linear function defined as g(v) = [g₁(v₁), g₂(v₂), …, g_{n+p}(v_{n+p})]^T with, for i = 1, …, n + p,

$$g_i(v_i) = \begin{cases} \xi_i^-, & \text{if } v_i < \xi_i^-, \\ v_i, & \text{if } \xi_i^- \le v_i \le \xi_i^+, \\ \xi_i^+, & \text{if } v_i > \xi_i^+. \end{cases}$$

Because analytic expressions of J_e and F can be obtained in the design stage, analytic expressions of M and s can be computed beforehand, which facilitates the circuit realization of the neural network. The architecture of the neural network is illustrated in Fig. 4. In the figure, the left part corresponds to the state equation, where λ = 1/ε, FMF^T = {w_ij}, Fs = {q_i}, u = {u_i}, i = 1, …, N, j = 1, …, N, and N = n + p. The right part corresponds to the output equation, where MF^T = {d_ij}, s = {s_i}, v = {v_i}, i = 1, …, n, j = 1, …, N. It can be seen from the figure that only n + p integrators are needed. Neural network (8) is simpler in structure than its predecessor, the dual neural network in [14].
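The dynamics (8) can also be checked numerically; the sketch below precomputes M and s, realizes g(·) with a clipping operation, and integrates the state equation by forward Euler. The function and variable names are illustrative assumptions made here; in particular, the step size must be much smaller than ε for the explicit integration to remain stable, whereas the intended analog circuit integrates (8a) continuously.

```python
import numpy as np

def sdnn_solve(J_e, r_e_dot, F, xi_minus, xi_plus,
               eps=1e-6, dt=1e-8, steps=200_000):
    """Forward-Euler simulation of the simplified dual neural network (8).

    Returns v = theta_dot, an approximation of the optimal solution of (7).
    The step size dt must be chosen much smaller than eps.
    """
    n = J_e.shape[1]
    N = F.shape[0]                                   # N = n + p
    # Precomputable quantities:
    # M = I - Je^T (Je Je^T)^{-1} Je,  s = Je^T (Je Je^T)^{-1} r_e_dot
    JJt_inv = np.linalg.inv(J_e @ J_e.T)
    M = np.eye(n) - J_e.T @ JJt_inv @ J_e
    s = J_e.T @ JJt_inv @ r_e_dot

    W = F @ M @ F.T                                  # recurrent weights F M F^T
    q = F @ s                                        # bias term F s
    g = lambda x: np.clip(x, xi_minus, xi_plus)      # piecewise-linear activation

    u = np.zeros(N)                                  # initial state u(t0)
    for _ in range(steps):
        du = (-W @ u + g(W @ u - u + q) - q) / eps   # state equation (8a)
        u = u + dt * du
    return M @ F.T @ u + s                           # output equation (8b)
```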
From [24], the state equation (8a) is stable in the Lyapunov sense and globally convergent to an equilibrium point, and the output v of the neural network defined in (8b) is globally convergent to an exact solution of problem (7). Moreover, the convergence rate increases as ε decreases.

IV. SIMULATION RESULTS

In this section, the validity of the proposed obstacle avoidance scheme and the real-time solution capability of the proposed simplified dual neural network are shown through simulations with the Mitsubishi 7-DOF PA10-7C manipulator. The coordinate setup, structural parameters, joint limits, and joint velocity limits of this manipulator can be found in [13]. In this study, only the positioning of the end-point is of concern, so m = 3 and n = 7. The parameter c is chosen as 5 × 10⁻⁴. The parameter that controls the convergence rate of the neural network is chosen as ε = 10⁻⁶. In this example, the desired motion of the end-effector is a circle of radius r = 20 cm in a plane rotated about the x axis by π/6. The task time of the motion is 10 s and the joint variables are initialized as θ(0) = [0; −π/4; 0; π/2; 0; −π/4; 0]. In the workspace, suppose that there are two obstacle points, [−0.0454 m, −0.0737 m, 0.8367 m] and [−0.1541 m, −0.0609 m, 0.5145 m]. (These obstacle points are denoted as black dots in the visualization figures.)

Fig. 5 illustrates the simulated motion of the PA10 manipulator in the 3D workspace, which is sufficiently close to the desired one, with the tracking error less than 0.6 mm, as shown in Fig. 6. Fig. 7 shows the comparison between the cases with and without the obstacle avoidance scheme. Without obstacle avoidance, the link collides with an obstacle, as shown by the dotted line in Fig. 7. When the proposed scheme is adopted, the link moves away from the obstacle once it enters the danger zone, as shown by the solid line in Fig. 7. The output of the neural network, θ̇ (i.e., the control signal), is shown in Fig. 8. From this figure, we can see that the control signals change rapidly when the manipulator links are near the obstacles, but more smoothly than in the scheme of [14]. Fig. 9 shows the comparison of the minimized objective function between the present scheme and the scheme in [14]. The optimization performance of the present scheme (shown by the solid line) is superior to that of the scheme in [14] (shown by the dotted line), with no abrupt performance deterioration and better performance over most of the operation period. This fact corroborates the argument stated at the end of Section II about the sizes of the feasible solution spaces of the two schemes.
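For reference, the desired end-effector velocity ṙ_e(t) that drives the tracking constraint can be generated analytically from the circular-path description above. In the sketch below, the circle center and the zero-phase convention are assumptions made purely for illustration, since they are not specified in the text.

```python
import numpy as np

R, T = 0.20, 10.0                       # radius (m) and task time (s)
tilt = np.pi / 6                        # rotation of the circle plane about the x axis
center = np.array([0.3, 0.0, 0.8])      # assumed circle center (not given in the paper)

# Rotation matrix about the x axis by `tilt`.
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(tilt), -np.sin(tilt)],
               [0.0, np.sin(tilt),  np.cos(tilt)]])

def desired_traj(t):
    """Desired end-effector position r_e(t) and velocity r_e_dot(t)."""
    w = 2 * np.pi / T                   # one full revolution over the task time
    p_local = np.array([R * np.cos(w * t), R * np.sin(w * t), 0.0])
    v_local = np.array([-R * w * np.sin(w * t), R * w * np.cos(w * t), 0.0])
    return center + Rx @ p_local, Rx @ v_local

# Example: sample the desired velocity fed to the optimizer at t = 1 s.
r_e, r_e_dot = desired_traj(1.0)
print(r_e, r_e_dot)
```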
Fig. 4. Architecture of the proposed neural network in (8).
V. CONCLUDING REMARKS

A new obstacle avoidance scheme for the control of kinematically redundant manipulators is proposed based on a quadratic programming (QP) formulation. Compared with the existing QP formulations in the literature, the new scheme has a larger feasible solution space and can thus yield better solutions in terms of the optimized objective function. An efficient neural network is proposed to solve the problem in real time. Compared with its existing counterparts for solving QP problems in the neural network community, the proposed neural network possesses lower hardware complexity but high performance. The solution of the time-varying QP problem using this neural network is simulated on the PA10 robot manipulator, which validates the effectiveness of the proposed approach.

REFERENCES

[1] A. A. Maciejewski and C. A. Klein, "Obstacle avoidance for kinematically redundant manipulators in dynamically varying environments," Intl. J. Robotics Research, vol. 4, pp. 109–117, 1985.
[2] F. T. Cheng, T. H. Chen, and Y. Y. Sun, "Obstacle avoidance for redundant manipulators using the compact QP method," Proc. IEEE Int. Conf. Robotics and Automation, pp. 262–269, 1993.
[3] L. Sciavicco and B. Siciliano, Modeling and Control of Robot Manipulators. London, U.K.: Springer-Verlag, 2000.
[4] K. Glass, R. Colbaugh, D. Lim, and H. Seraji, "Real-time collision avoidance for redundant manipulators," IEEE Trans. Robotics and Automation, vol. 11, pp. 448–456, 1995.
[5] H. Ding and S. P. Chan, "A real-time planning algorithm for obstacle avoidance of redundant robots," Journal of Intelligent and Robotic Systems, vol. 16, pp. 229–243, 1996.
[6] S. Chiaverini, "Singularity-robust task-priority redundancy resolution for real-time kinematic control of robot manipulators," IEEE Trans. Robotics Automat., vol. 13, pp. 398–410, June 1997.
[7] P. Chiacchio, S. Chiaverini, L. Sciavicco, and B. Siciliano, "Closed-loop inverse kinematics schemes for constrained redundant manipulators with task space augmentation and task priority strategy," Int. J. Robot. Res., vol. 10, no. 4, pp. 410–425, 1991.
[8] C. Klein and C. Huang, "Review of pseudoinverse control for use with kinematically redundant manipulators," IEEE Trans. Syst., Man, Cybern., vol. SMC-13, pp. 245–250, 1983.
[9] J. Guo and T. C. Hsia, "Joint trajectory generation for redundant robots in an environment with obstacles," J. of Robotics Systems, vol. 10, pp. 199–215, 1993.
[10] F. T. Cheng, Y. T. Lu, and Y. Y. Sun, "Window-shaped obstacle avoidance for a redundant manipulator," IEEE Trans. Systems, Man, Cybernetics, vol. 28, pp. 806–815, 1998.
[11] B. Allotta, V. Colla, and G. Bioli, "Kinematic control of robots with joint constraints," J. Dyn. Syst. Meas. Control-Trans. ASME, vol. 121, pp. 433–442, 1999.
[12] H. Ding and J. Wang, "Recurrent neural networks for minimum infinity-norm kinematic control of redundant manipulators," IEEE Trans. on Systems, Man and Cybernetics - Part A: Systems and Humans, vol. 30, pp. 269–276, 1999.
[13] J. Wang, Q. Hu, and D. Jiang, "A Lagrangian network for kinematic control of redundant robot manipulators," IEEE Trans. Neural Networks, vol. 10, pp. 1123–1132, 1999.
[14] Y. Zhang and J. Wang, "Obstacle avoidance for kinematically redundant manipulators using a dual neural network," IEEE Trans. Syst., Man, Cybern. - Part B: Cybernetics, vol. 34, pp. 752–759, 2004.
[15] D. W. Tank and J. J. Hopfield, "Simple neural optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit," IEEE Trans. on Circuits and Systems, vol. 33, pp. 533–541, 1986.
[16] Y. Xia and J. Wang, "A general methodology for designing globally convergent optimization neural networks," IEEE Trans. on Neural Networks, vol. 9, pp. 1331–1343, 1998.
[17] M. Forti, P. Nistri, and M. Quincampoix, "Generalized neural network for nonsmooth nonlinear programming problems," IEEE Trans. Circuits Syst. I-Regul. Pap., vol. 51, pp. 1741–1754, 2004.
[18] H. Ding and S. K. Tso, "Redundancy resolution of robotic manipulators with neural computation," IEEE Trans. on Industrial Electronics, vol. 46, pp. 199–202, 1999.
[19] Y. Xia, G. Feng, and J. Wang, "A primal-dual neural network for on-line resolving constrained kinematic redundancy in robot motion control," IEEE Trans. on Systems, Man and Cybernetics - Part B: Cybernetics, vol. 35, no. 1, pp. 54–64, 2005.
[20] E. A. Al-Gallaf, "Multi-fingered robot hand optimal task force distribution - Neural inverse kinematics approach," Robot. Auton. Syst., vol. 54, pp. 34–51, 2006.
[21] Z. Mao and T. C. Hsia, "Obstacle avoidance inverse kinematics solution of redundant robots by neural networks," Robotica, vol. 15, pp. 3–10, 1997.
[22] I. Ohya, A. Kosaka, and A. Kak, "Vision-based navigation by a mobile robot with obstacle avoidance using single-camera vision and ultrasonic sensing," IEEE Trans. Robotics Automat., vol. 14, pp. 969–978, Dec. 1998.
[23] S. Shoval and J. Borenstein, "Using coded signals to benefit from ultrasonic sensor crosstalk in mobile robot obstacle avoidance," Proc. IEEE Int. Conf. Robotics Automat., vol. 3, pp. 2879–2884, 2001.
[24] S. Liu and J. Wang, "A simplified dual neural network for quadratic programming with its KWTA application," IEEE Trans. Neural Networks, vol. 17, no. 6, pp. 1500–1510, 2006.
Fig. 5. The PA10 manipulator in a circular motion.

Fig. 6. Position error of the end-effector.

Fig. 7. Comparison of the minimum distance between obstacle 2 and link 2 with and without obstacle avoidance.

Fig. 8. Joint velocities.

Fig. 9. Comparison of the minimization of the objective function between the two schemes.