Robust Neural Force Control with Robot Dynamic Uncertainties under Totally Unknown Environment

Seul Jung and T. C. Hsia
Robotics Research Laboratory, Department of Electrical and Computer Engineering
University of California, Davis, Davis, CA 95616
e-mail: [email protected], [email protected]

Abstract

In this paper, a new robust robot force tracking impedance control scheme that uses a neural network as a compensator is proposed. The proposed neural compensator enables the robot to track a specified desired force while compensating for uncertainties in the environment location and stiffness as well as uncertainties in the robot dynamics. The neural compensator is trained separately for free-space motion and contact-space motion control, using two different training signals. The proposed training signal for force control can be used regardless of the environment profile in order to achieve desired force tracking. Simulation studies with a three-link rotary robot manipulator are carried out to demonstrate the robustness of the proposed scheme under uncertainties in robot dynamics, environment position, and environment stiffness. The results show that excellent force tracking is achieved by the neural network.

I Introduction

Recent developments in robot control have provided better productivity and efficiency in the manufacturing industry. The necessity of achieving these benefits has drawn special attention to developing sophisticated control algorithms for robot manipulators. Constrained motion control is one of the areas that has been studied extensively. One of the main approaches to constrained motion control is the well-known impedance force control technique [1]. More recent focuses of impedance control research have been twofold: one is to provide a force tracking capability that can follow the specified desired force without any knowledge of the environment location and stiffness; the other is to compensate for uncertainties in robot dynamics.

Within this framework, many control algorithms have been proposed to tackle these problems [2]. Lasky and Hsia [3] proposed an inner/outer loop control scheme that uses integral control of the error between the desired and actual forces to modify the reference trajectory. Seraji and Colbaugh [4] proposed adaptive techniques that use force tracking errors to deal with uncertainties in environment stiffness and location. The authors in [5] proposed a simple trajectory modification scheme with a robust position control algorithm [6, 7] that compensates for uncertainties in robot dynamics and environment stiffness. In [8], the authors also developed neural network control schemes, based on impedance force control, that compensate for uncertainties in robot dynamics and environment stiffness. Most recently, in [9], the authors proposed a nonlinear impedance function capable of dealing with all the uncertainties when the environment position inaccuracy falls within user-specified bounds. Here, as an extension of our previous research [5, 8, 9], we propose a robust impedance control scheme that uses a neural network as a compensator for all the uncertainties: an inexact robot dynamic model, unknown environment stiffness, and unknown environment location. There has been considerable research on compensating for uncertainties in robot dynamics using a neural network within a position control framework [10, 11, 12, 13, 14, 15, 16, 17].

In order for a single neural network to deal with all uncertainties in free space as well as in contact space, two separate training signals for the neural compensator are suggested to handle the transition from free-space control to contact-space control. In free space, the goal of the neural compensator is to compensate for uncertainties in robot dynamics and to ensure that the robot makes contact with the environment, so that accurate position control is achieved. In contact space, the neural compensator is required to deal with both position and force control. In the force-controlled direction, the contact force is regulated to follow the specified desired force. The neural compensator compensates for all the uncertainties, including environment stiffness, environment position, and robot dynamics, while maintaining the dynamic relation between the robot and the environment. The proposed training signal $f_d - f_e$ in the force-controlled direction eliminates the force tracking error that occurs when the environment does not have a flat surface. Simulation studies with a three-link rotary robot manipulator are carried out to demonstrate the proposed controller's robustness under uncertainties in robot dynamics, environment position, and environment stiffness.

II Impedance Force Control

The dynamic equation of an n degrees-of-freedom manipulator is given by

$$D\ddot{q} + h + \tau_f = \tau - \tau_e \qquad (1)$$

where $D$ is the $n \times n$ inertia matrix, $h$ is the $n \times 1$ vector of Coriolis, centrifugal, and gravity torques, $\tau_f$ is the $n \times 1$ joint friction torque, $\tau$ is the joint torque, and $\tau_e$ is the external joint torque. The robot dynamic equation in Cartesian space is

$$D^* \ddot{X} + h^* + F_f = F - F_e \qquad (2)$$

where $D^* = J^{-T} D J^{-1}$, $h^* = J^{-T} h - D^* \dot{J} J^{-1} \dot{X}$, $F_f = J^{-T} \tau_f$, $X$ is the end-point position vector in Cartesian space, $F$ is the Cartesian control force, and $F_e$ is the force exerted on the environment. The robot dynamic equation (2) represents a highly nonlinear, coupled, multi-input multi-output system. In most practical cases the model is not exactly known, so only nominal estimates of the model are available for controller design.

Impedance control allows us to specify the robot stiffness for a given task under contact with the environment: the interaction between the environment and the end-point of the robot is regulated in terms of a desired mechanical stiffness, called an impedance. Thus impedance control regulates the relationship between force and position by setting suitable impedance parameter gains. The control law $F$ for impedance force control is then

$$F = \hat{D}^* U + \hat{h}^* + F_e \qquad (3)$$

where $\hat{D}^*, \hat{h}^*$ are estimates of $D^*, h^*$, and the control input $U$ is given by

$$U = \ddot{X}_r + M^{-1}[B(\dot{X}_r - \dot{X}) + K(X_r - X) - F_e] \qquad (4)$$

where $M$, $B$, and $K$ are diagonal $n \times n$ symmetric positive definite matrices of desired inertia, damping, and stiffness gains, respectively, and $X_r$ is the reference end-point trajectory. The amount of force exerted, $F_e$, depends on the value of $X_r$. Combining (2), (3), and (4) yields the closed-loop tracking error equation

$$\ddot{E} + M^{-1}[B\dot{E} + KE - F_e] = \hat{D}^{*-1}[\Delta D^* \ddot{X} + \Delta h^* + F_f] \qquad (5)$$

where $\Delta D^* = D^* - \hat{D}^*$, $\Delta h^* = h^* - \hat{h}^*$, and $E = X_r - X$. In the ideal case where $\Delta D^* = \Delta h^* = 0$ and $F_f = 0$, the closed-loop robot satisfies the target impedance relationship

$$F_e = M\ddot{E} + B\dot{E} + KE \qquad (6)$$

In order to achieve a desired value of $F_e$, $X_r$ must be carefully specified based on the desired force $F_d$, the environment position $X_e$, and the environment stiffness $K_e$. Since there are always uncertainties in the robot and environment models, the ideal target impedance relationship (6) cannot be achieved in general; the impedance-based force control (3), (4) is not robust in practice, and the ideal impedance function (6) can be realized only when the environment is known exactly. To improve robustness, an NN controller can be introduced in (3) to compensate for the disturbances in (5) due to model uncertainties. These issues are addressed in the next two sections.
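When the environment model is exact, choosing $x_r = x_e + f_d/k_{eff}$ recovers $f_e = f_d$ at the static equilibrium of the target impedance (6). The following minimal static sketch illustrates this; the numeric values are illustrative (the function names are not from the paper), and a perfectly known spring environment $f_e = k_e(x - x_e)$ is assumed:

```python
# Static sketch of the force-tracking reference trajectory idea of Section II.
# Assumes exact knowledge of environment stiffness ke and location xe;
# all names and values are illustrative.

def reference_trajectory(xe, ke, k, fd):
    """x_r = x_e + f_d / k_eff with k_eff = k*ke / (k + ke)."""
    keff = k * ke / (k + ke)
    return xe + fd / keff

def steady_state_contact_force(xr, xe, ke, k):
    """At steady state of the target impedance (6), K*E = F_e.
    Combining f_e = k*(x_r - x) with the environment model
    f_e = k_e*(x - x_e) gives the equilibrium position and force."""
    x = (k * xr + ke * xe) / (k + ke)
    return ke * (x - xe)

ke = 50_000.0   # environment stiffness (N/m)
k = 100.0       # user-specified stiffness gain
xe = 0.6109     # environment location (m)
fd = 10.0       # desired contact force (N)

xr = reference_trajectory(xe, ke, k, fd)
fe = steady_state_contact_force(xr, xe, ke, k)
print(fe)  # recovers f_e = f_d exactly when the models are exact
```

Since $f_e = k_{eff}(x_r - x_e)$ at equilibrium, the recovered force equals $f_d$ identically; any error in $k_e$ or $x_e$ breaks this identity, which is the motivation for the compensator that follows.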

III Proposed Robust Neural Force Control Scheme

In this section we present an NN controller design for the impedance control system shown in Figure 1. The compensating signal $\Phi$ is added to the control input $U$, so that the control law becomes

$$F = \hat{D}^*(U + \Phi) + \hat{h}^* + F_e \qquad (7)$$

where $U$ is the same as (4). Combining (7) with (2) yields the corresponding closed-loop error system

$$\ddot{E} + M^{-1}[B\dot{E} + KE - F_e] = \Theta_M - \Phi \qquad (8)$$

where $\Theta_M = \hat{D}^{*-1}(\Delta D^* \ddot{X} + \Delta h^* + F_f)$.

Figure 1: NN Force Control Structure

In order to achieve the ideal impedance function, the output of the neural compensator is required to be $\Phi = \Theta_M$, so that the uncertainty in the robot dynamics is compensated. We now make the neural compensator compensate for the other uncertainties as well. For simplicity, let $f_d, f_e, m, b, k, e, \theta_m, \phi$ denote elements of $F_d, F_e, M, B, K, E, \Theta_M, \Phi$, respectively. Then (8) can be written as the one-dimensional equation

$$\ddot{e} + \frac{1}{m}(b\dot{e} + ke - f_e) = \theta_m - \phi \qquad (9)$$

where $e = x_r - x$. The reference trajectory $x_r$ is calculated from the environment stiffness, the environment location, the desired force, and the user-specified stiffness gain as $x_r = x_e + \frac{f_d}{k_{eff}}$, where $k_{eff} = \frac{k k_e}{k + k_e}$. Let $\delta x_r = \frac{f_d}{k_{eff}}$. Substituting $x_r = x_e + \delta x_r$, $\dot{x}_r = \dot{x}_e + \delta\dot{x}_r$, and $\ddot{x}_r = \ddot{x}_e + \delta\ddot{x}_r$ into (9) yields

$$\ddot{\varepsilon} + \frac{1}{m}[b\dot{\varepsilon} + k\varepsilon] + \delta\ddot{x}_r + \frac{1}{m}[b\,\delta\dot{x}_r + k\,\delta x_r - f_e] = \theta_m - \phi \qquad (10)$$

where $\varepsilon = x_e - x$ and $x_e$ is the environment trajectory. Note that the closed-loop error equation (9), based on $e$, has become equation (10), based on $\varepsilon$. Defining $\theta_f = \delta\ddot{x}_r + \frac{1}{m} b\,\delta\dot{x}_r$ for simplicity, equation (10) becomes

$$\ddot{\varepsilon} + \frac{1}{m}[b\dot{\varepsilon} + k\varepsilon] + \frac{1}{m}[k\,\delta x_r - f_e] = \theta_m - \theta_f - \phi \qquad (11)$$

Substituting $\delta x_r = \frac{f_d}{k_{eff}}$ into (11) yields

$$\ddot{\varepsilon} + \frac{1}{m}\left(b\dot{\varepsilon} + k\varepsilon + f_d\frac{k}{k_e} + f_d - f_e\right) = \theta_m - \theta_f - \phi \qquad (12)$$

The reason we carry $\delta x_r$ in (11) is to give a force tracking capability to the conventional impedance function (6). Since we assume that $k_e$ is unknown, $k_e$ can be estimated from the measured force $f_e$ and the positions $x$ and $x_e$ through the relationship $f_e = k_e(x - x_e)$ [5]. Replacing $k_e$ by $-\frac{f_e}{\varepsilon}$ in (12) yields

$$\ddot{\varepsilon} + \frac{1}{m}\left(b\dot{\varepsilon} + k\varepsilon\left(1 - \frac{f_d}{f_e}\right) + f_d - f_e\right) = \theta_m - \theta_f - \phi \qquad (13)$$

And the control input $u$ is

$$u = \ddot{x}_e + \frac{1}{m}\left(b\dot{\varepsilon} + k\varepsilon\left(1 - \frac{f_d}{f_e}\right) + f_d - f_e\right) \qquad (14)$$

Note that the input to the system in the force-controlled direction is changed from the reference trajectory $x_r$ to the environment trajectory $x_e$, while the reference position trajectories are unchanged, as shown in Figure 1. This is a force tracking impedance function.

When the robot is in free space, before contact with the environment is made, $f_e = 0$. Furthermore, $f_d = 0$ in the position-controlled directions and $f_d \neq 0$ in the force-controlled direction. The training signal $v$ in free space can be partitioned into two parts:

$$v_{free} = \begin{cases} v_f = \ddot{\varepsilon} + \frac{1}{m}(b\dot{\varepsilon} + k\varepsilon + f_d) & \text{force-controlled axis} \\ v_p = \ddot{e} + \frac{1}{m}(b\dot{e} + ke) & \text{position-controlled axes} \end{cases} \qquad (15)$$

where $v_f$ is used for the force-controlled direction and $v_p$ for the position-controlled directions. Here $v_{free}$ is a $3 \times 1$ vector, $v_{free} = [v_f\ v_{p1}\ v_{p2}]^T$. At convergence, $v_{free} \rightarrow 0$ and $\phi \rightarrow \theta_m - \theta_f$, which means that the NN completely compensates for the uncertainties $\theta_m, \theta_f$. We note that the desired force $f_d$ in $v_f$ of (15) serves as a driving force that ensures the robot makes contact with the environment from free-space motion.

In contact space, since the impedance function given in (13) is nonlinear, there are two solutions for the contact force at steady state [9]. Careful investigation of (13) suggests dropping the nonlinear term $(1 - \frac{f_d}{f_e})$ by setting $k = 0$ in order to obtain the unique steady-state solution $f_e = f_d$. Therefore we propose the following modified impedance function for the force-controlled direction in contact space:

m" + b"_ + fd ? fe = m(m ? f ? )

(16)

The NN output  is trained to cancel out the uncertainty terms m ? f . Since fe = ?ke ", (16) becomes m" + b"_ + ke" = ?fd

(17)

when m ? f ?  = 0. This means that impedance function (17) is stable and that fe = fd in steady state. Thus, in contact space, the training signal vf for the force controlled direction can be selected from 3
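As a sanity check on the steady-state claim, the second-order relation $m\ddot{\varepsilon} + b\dot{\varepsilon} + k_e\varepsilon = -f_d$ can be integrated numerically; a minimal one-dimensional sketch with illustrative gains (not the paper's exact values) shows the contact force settling at the desired value:

```python
# Numerical sketch of the modified contact-space impedance (17):
#   m*eps'' + b*eps' + ke*eps = -fd,   with contact force f_e = -ke*eps.
# When the NN has cancelled the uncertainties, eps settles at -fd/ke,
# so the contact force converges to fd. Gains here are illustrative.

m, b = 1.0, 60.0          # desired inertia and damping
ke = 50_000.0             # environment stiffness (N/m)
fd = 10.0                 # desired force (N)

eps, deps = 0.0, 0.0      # eps = x_e - x and its rate
dt = 1e-4                 # explicit-Euler step, small enough for stability
for _ in range(200_000):  # 20 s of simulated time
    ddeps = (-fd - b * deps - ke * eps) / m
    deps += ddeps * dt
    eps += deps * dt

fe = -ke * eps
print(round(fe, 3))       # settles toward fd
```

The fixed point of the iteration is exactly $\varepsilon = -f_d/k_e$, so the computed force approaches $f_d$; with an incorrect $k_e$ only the penetration depth changes, not the steady-state force, which is the robustness property claimed for (17).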

Thus, in contact space, the training signal $v_f$ for the force-controlled direction can be selected from (17), while the position-controlled direction training signal $v_p$ is unchanged from (15):

$$v_{contact} = \begin{cases} v_f = \ddot{\varepsilon} + \frac{1}{m}(b\dot{\varepsilon} + f_d - f_e) & \text{force-controlled axis} \\ v_p = \ddot{e} + \frac{1}{m}(b\dot{e} + ke) & \text{position-controlled axes} \end{cases} \qquad (18)$$

The transient effects of switching the learning law are minimal because the impedance function (17) is maintained at initial contact.

The training signal $v_f$ in (18) is based on the assumption that the environment position $x_e$ is known. In practice, however, it is not easy to know $x_e$ exactly, so approximations of $x_e$ have to be used. This environment position uncertainty must be compensated in order to achieve the desired force tracking. A solution is proposed in the following section.

IV Learning Algorithm for Environment Inaccuracy

In the position-controlled direction, the desired trajectory can easily be specified. In the force-controlled direction, however, there is always uncertainty arising from inaccurate estimation of the environment location. Let us define $x'_e = x_e + \delta x_e$ as the estimate of the environment position, with uncertainty $\delta x_e$. Then the proposed impedance function (16) becomes

$$\ddot{\varepsilon}' + \frac{1}{m}(b\dot{\varepsilon}' + f_d - f_e) = \theta_m - \theta_f - \phi \qquad (19)$$

where $\varepsilon' = \varepsilon + \delta x_e$, $\dot{\varepsilon}' = \dot{\varepsilon} + \delta\dot{x}_e$, and $\ddot{\varepsilon}' = \ddot{\varepsilon} + \delta\ddot{x}_e$. And the control input $u$ is

$$u = \ddot{x}'_e + \frac{1}{m}(b\dot{\varepsilon}' + f_d - f_e) \qquad (20)$$

Now the trajectory input to the robot control system in the force-controlled direction is the estimated environment trajectory $x'_e$ instead of $x_r$ in Figure 1. The reference input trajectories in the position-controlled directions remain $x_r$.

Since the environment location is not known exactly, the simplest, worst-case estimate of the environment position is to assume that the environment is a flat surface inside the actual environment and perpendicular to the force-controlled axis; that is, $x'_e = \text{constant} > x_e$ and $\dot{x}'_e = \ddot{x}'_e = 0$. Then, in contact space, equation (20) becomes

$$-\ddot{x} + \frac{1}{m}(-b\dot{x} + f_d - f_e) = \theta_m - \theta_f - \phi \qquad (21)$$

and the training signal $v_f$ in (18) becomes

$$v_f = -\ddot{x} + \frac{1}{m}(-b\dot{x} + f_d - f_e) \qquad (22)$$

However, the training signal given in (22) is valid only when the environment is flat and perpendicular to the force-controlled axis. If (22) is used directly when the real environment surface is not flat, i.e., $\ddot{x}_e \neq 0$, $\dot{x}_e \neq 0$, then the values of $\ddot{x}, \dot{x}$ that satisfy (22) will not vanish, so the force tracking will have an error of $(\ddot{x} + \frac{1}{m}b\dot{x})$ due to specifying $\dot{x}_e$ and $\ddot{x}_e$ incorrectly. To compensate for this force tracking error, we propose the following training signal instead of (22):

$$v'_f = f_d - f_e = \theta_m - \theta_f + \theta_p - \phi \qquad (23)$$

where $\theta_p = \ddot{x} + \frac{1}{m}b\dot{x}$. When $v'_f \rightarrow 0$, $f_d = f_e$ and $\phi = \theta_m - \theta_f + \theta_p$. Therefore, the training signals in contact space for any environment surface are

$$v_{contact} = \begin{cases} v'_f = f_d - f_e & \text{force-controlled axis} \\ v_p = \ddot{e} + \frac{1}{m}(b\dot{e} + ke) & \text{position-controlled axes} \end{cases} \qquad (24)$$

where $v_{contact} = [v'_f\ v_{p1}\ v_{p2}]^T$. A single neural compensator thus compensates for all the uncertainties $\theta_m - \theta_f + \theta_p$, associated with the environment position as well as the robot dynamics: one important feature of the proposed NN force controller is that one neural network can compensate for all the uncertainties. The proposed force control is stable, since the underlying impedance relationship is still governed by (20) even though the training signal $v'_f$ is selected.

V Neural Network Controller Design

A two-layer feedforward neural network is used as the controller. It is composed of an input buffer, a nonlinear hidden layer, and a linear output layer. As shown in Figure 2, the delayed inputs $X = [x(t)^T\ x(t-1)^T\ x(t-2)^T]^T$ are used [16]; they are multiplied by the weights $w^1_{ij}$ and summed at each hidden node (neuron). The summed signal at a hidden node then activates a nonlinear function $f(\cdot)$, called a sigmoid function, which is bounded in magnitude between $-1$ and $1$:

$$f(\cdot) = \frac{1 - \exp(-(\cdot))}{1 + \exp(-(\cdot))} \qquad (25)$$

The outputs of all $f(\cdot)$ are weighted by $w^2_{jk}$ and summed at each output node.
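As a concrete illustration of this structure, a minimal forward pass with the bipolar sigmoid of (25) might look as follows; the weight values are random placeholders, not trained parameters, and the dimensions (9 inputs, 6 hidden nodes, 3 outputs) follow the three-joint, three-delayed-sample setup described here:

```python
import math
import random

# Forward-pass sketch of the two-layer compensator of Section V:
# delayed inputs, a bipolar-sigmoid hidden layer (25), linear outputs.
# Weights are random placeholders, not trained parameters.

def sigmoid(a):
    # Bounded between -1 and 1, as in (25)
    return (1.0 - math.exp(-a)) / (1.0 + math.exp(-a))

def forward(x, w1, b1, w2, b2):
    """x: n_I inputs; returns the n linear outputs phi_k of (26)."""
    hidden = [sigmoid(sum(xi * w1[i][j] for i, xi in enumerate(x)) + b1[j])
              for j in range(len(b1))]
    return [sum(sj * w2[j][k] for j, sj in enumerate(hidden)) + b2[k]
            for k in range(len(b2))]

random.seed(0)
nI, nH, n = 9, 6, 3   # 3 Cartesian axes x 3 delayed samples, 6 hidden, 3 outputs
w1 = [[random.uniform(-0.1, 0.1) for _ in range(nH)] for _ in range(nI)]
b1 = [0.0] * nH
w2 = [[random.uniform(-0.1, 0.1) for _ in range(n)] for _ in range(nH)]
b2 = [0.0] * n

x = [0.6109] * nI      # X = [x(t), x(t-1), x(t-2)] stacked
phi = forward(x, w1, b1, w2, b2)
print(len(phi))        # one compensation signal per Cartesian direction
```

The hidden activations stay inside $(-1, 1)$ by construction, while the linear output layer leaves the compensation signal $\Phi$ unbounded, matching the structure described above.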


Figure 2: Neural Network Compensator Structure (B: buffer, s: sigmoid function, L: linear function)

Figure 3: Sine Wave Tracking on Flat Surface Environment ($x_e = 0.6109$ m)

The $k$th output $\phi_k$ at a linear output node is calculated from its inputs as

$$\phi_k = \sum_{j=1}^{n_H}\left[w^2_{jk}\,\frac{1 - \exp\!\left(-\left(\sum_{i=1}^{n_I} x_i w^1_{ij} + b^1_j\right)\right)}{1 + \exp\!\left(-\left(\sum_{i=1}^{n_I} x_i w^1_{ij} + b^1_j\right)\right)}\right] + b^2_k \qquad (26)$$

where $n_I$ is the number of inputs, $n_H$ is the number of hidden nodes, $x_i$ is the $i$th element of the input $X$, $w^1_{ij}$ is the first-layer weight between the $i$th input and the $j$th hidden node, $w^2_{jk}$ is the second-layer weight between the $j$th hidden node and the $k$th output node, $b^1_j$ is a bias weight for the $j$th hidden node, and $b^2_k$ is a bias weight for the $k$th output node. If $n$ is the number of output nodes, the total number of weights is $w_T = (n_I + 1)n_H + (n_H + 1)n$, and the total number of nodes is $n_T = n_H + n$.

The weight update law minimizes the objective function $J$, a quadratic function of the training signal $v$ derived above (either $v_{free}$ or $v_{contact}$):

$$J = \frac{1}{2} v^T v \qquad (27)$$

Making use of the definition of $v$ in (15) or (24), the gradient of $J$ is

$$\frac{\partial J}{\partial w} = \frac{\partial v^T}{\partial w}\, v = -\frac{\partial \Phi^T}{\partial w}\, v \qquad (28)$$

in which the fact $\frac{\partial v^T}{\partial w} = -\frac{\partial \Phi^T}{\partial w}$ is used. The backpropagation update rule for the weights with a momentum term is

$$\Delta w(t) = \eta\,\frac{\partial \Phi^T}{\partial w}\, v + \alpha\,\Delta w(t-1) \qquad (29)$$

where $\eta$ is the update rate and $\alpha$ is the momentum coefficient. Specifically, the following algorithm can be derived from (29):

$$\Delta w^1_{ij}(t) = \eta\,s_j(1 - s_j)\,x_i \left[\sum_{k=1}^{n} v_k w^2_{jk}\right] + \alpha\,\Delta w^1_{ij}(t-1)$$
$$\Delta w^2_{jk}(t) = \eta\,v_k s_j + \alpha\,\Delta w^2_{jk}(t-1)$$
$$\Delta b^1_j(t) = \eta\,s_j(1 - s_j)\left[\sum_{k=1}^{n} v_k w^2_{jk}\right] + \alpha\,\Delta b^1_j(t-1)$$
$$\Delta b^2_k(t) = \eta\,v_k + \alpha\,\Delta b^2_k(t-1)$$
$$s_j = \frac{1 - \exp\!\left(-\left(\sum_{i=1}^{n_I} x_i w^1_{ij} + b^1_j\right)\right)}{1 + \exp\!\left(-\left(\sum_{i=1}^{n_I} x_i w^1_{ij} + b^1_j\right)\right)}, \qquad s_k = \sum_{j=1}^{n_H} s_j w^2_{jk} + b^2_k \qquad (30)$$

where $s_j$ is the output of the $j$th hidden node and $s_k$ is the output of the $k$th output node.

VI Simulations

A comprehensive simulation study has been carried out using a three-link rotary robot manipulator whose parameters are taken from the first three links of the PUMA 560 arm. The nominal system parameters are used as the basis for forming the robot model $\hat{D}(q)$

and $\hat{H}(q, \dot{q})$. Model uncertainties include a 10 kg mechanical tool attached to the third link, and Coulomb and viscous friction forces $\tau_f(\dot{q}) = 0.2\,\mathrm{sgn}(\dot{q}) + 0.3\dot{q}$ added to each joint. For the NN controller we chose six hidden neurons ($n_H = 6$). The backpropagation algorithm parameters are $\eta = 0.01$ and $\alpha = 0.9$. Weights are initially selected at random and adjusted at every sampling time in an on-line fashion; at the transition from free space to contact space, the weights are adjusted without reinitialization.

The performance of the proposed scheme is tested by sine-wave tracking on a flat environment surface, as shown in Figure 3. The controller gains in free space are $K_D = \mathrm{diag}[60, 40, 40]$ and $K_P = \mathrm{diag}[100, 100, 100]$, which give over-damped motions at the three joints. In contact space, the controller gains are $K_D = \mathrm{diag}[400, 40, 40]$ and $K_P = \mathrm{diag}[0, 100, 100]$. To show the robustness of the proposed scheme to unknown environment stiffness, we tested the system under an abruptly changing stiffness profile

$$k_e = \begin{cases} 50{,}000\ \mathrm{N/m} & 0 \le t < 6 \\ 100{,}000\ \mathrm{N/m} & 6 \le t < 12 \end{cases} \qquad (31)$$

which is shown in Figure 4. Two tasks are performed: one in which the environment surface is flat, and another in which the surface is not flat but sinusoidal.

Figure 4: Time Varying Environment Stiffness Profile

VI.1 Flat Environment Surface

Task 1 uses a flat environment surface. The environment position is incorrectly estimated as $x'_e = x_e + 0.01$ m, which lies inside the environment; $x'_e$ is placed inside the environment to make sure the robot contacts it. In order to test the force tracking capability, the desired reference force is specified as the sinusoid

$$f_d = 10 + 2\sin\left(\frac{\pi}{4} t\right) \qquad (32)$$

Sample tracking results for the flat surface are plotted in Figures 5 and 6. The simulation data show that the force tracking is very effective and the corresponding position tracking is also good. The initial overshoot due to contact occurs after 3 s. Another overshoot occurs when the environment stiffness changes abruptly at 6 s, but the force settles within 0.5 s.

Figure 5: Flat Environment Surface: Force Tracking

For this task, the control algorithm is governed by equation (21), which does not require $\dot{x}_e, \ddot{x}_e$ information ($\dot{x}_e = \ddot{x}_e = 0$), since the environment is flat and parallel to the estimated environment position $x'_e$. As expected from the analysis above, the performance is excellent: the proposed neural controller works well even when the environment location is not estimated correctly.

VI.2 Sinusoidal Environment Surface

The more interesting task is when the environment surface is not flat but sinusoidally shaped. As in Task 1, only the estimated environment position $x'_e$, which

Figure 6: Flat Environment Surface: Position Tracking

Figure 7: Sinusoidal Environment Surface: Force Tracking

is specified inside the environment and perpendicular to the force-controlled axis, is available. The environment location is estimated as $x'_e = 0.661$ m, but the real environment is sinusoidal with a magnitude of 4 cm, as shown in Figure 8. Other information, such as $\dot{x}'_e, \ddot{x}'_e$, is set to zero, since we assume the environment surface is flat. Using the training signal defined in (24), the force tracking shown in Figure 7 is very good. Initial contact occurs just after 4 s, and the force settling time is less than 0.5 s. The corresponding position tracking plots are shown in Figures 8 and 9: the robot end-effector follows the sinusoidal environment very well while maintaining the desired force of 10 N.
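The on-line learning used in these simulations combines the elementwise updates of (30) with the abrupt stiffness profile (31). A minimal sketch of one update step, using the simulation's $\eta = 0.01$, $\alpha = 0.9$ and assuming the hidden-layer factor $s_j(1 - s_j)$ exactly as printed in (30), might look as follows; all variable and function names are illustrative:

```python
# Sketch of the on-line update law (29)-(30). The training signal v is
# v_free or v_contact depending on the switching law; here it is supplied
# directly. Names are illustrative, not from the paper's code.

eta, alpha = 0.01, 0.9    # update rate and momentum coefficient

def update_step(x, s, v, w1, w2, b1, b2, dw1, dw2, db1, db2):
    """x: inputs x_i; s: hidden outputs s_j; v: training signal v_k.
    Applies one momentum update (29) elementwise as in (30)."""
    nI, nH, n = len(x), len(s), len(v)
    for j in range(nH):
        back = sum(v[k] * w2[j][k] for k in range(n))  # sum_k v_k w2_jk
        g = s[j] * (1.0 - s[j])    # hidden-layer factor as printed in (30)
        for i in range(nI):
            dw1[i][j] = eta * g * x[i] * back + alpha * dw1[i][j]
            w1[i][j] += dw1[i][j]
        db1[j] = eta * g * back + alpha * db1[j]
        b1[j] += db1[j]
        for k in range(n):
            dw2[j][k] = eta * v[k] * s[j] + alpha * dw2[j][k]
            w2[j][k] += dw2[j][k]
    for k in range(n):
        db2[k] = eta * v[k] + alpha * db2[k]
        b2[k] += db2[k]

def environment_stiffness(t):
    # Abruptly changing profile (31)
    return 50_000.0 if t < 6.0 else 100_000.0

# One illustrative update on a 1-input, 1-hidden, 1-output toy network
x, s, v = [1.0], [0.5], [1.0]
w1, b1 = [[0.0]], [0.0]
w2, b2 = [[0.1]], [0.0]
dw1, db1 = [[0.0]], [0.0]
dw2, db2 = [[0.0]], [0.0]
update_step(x, s, v, w1, w2, b1, b2, dw1, dw2, db1, db2)
print(w2[0][0])  # weight moved in the direction of the training signal
```

Because the signs in (29) descend on $J = \frac{1}{2}v^T v$, a positive training signal increases the weights feeding the corresponding output, pushing $\phi$ toward the uncompensated uncertainty.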

VII Conclusions

A robust neural network scheme for position/force control of robot manipulators is presented in this paper. The proposed force control technique with a neural compensator has the capability of tracking a desired force under uncertainties in robot dynamics and in the environment. An important feature of the proposed force control is that a single NN takes care of all the uncertainties. The proposed neural network training algorithms for the transition from free space to contact space work very well while maintaining the dynamic relation between the robot and the environment. The development of training signals for the neural compensator is addressed: the training signal $v'_f = f_d - f_e$ in the force-controlled direction, which uses the force error directly, is proposed to eliminate the force tracking error that occurs due to the lack of an environment surface profile. Simulation results show that this training signal achieves good force tracking even when the environment is totally unknown. The proposed NN controller converges within half a second even when the environment stiffness changes abruptly. The system performance under the proposed control scheme is excellent; thus the proposed NN controllers are feasible for on-line robot force control.

References

[1] N. Hogan, "Impedance control: An approach to manipulation, part I, II, III", ASME Journal of Dynamic Systems, Measurement, and Control, vol. 3, pp. 1-24, 1985.

[2] Sukhan Lee and Hahk Sung Lee, "Intelligent control of manipulators interfacing with an uncertain environment based on generalized impedance", Proc. of IEEE Symposium on Intelligent Control, pp. 61-66, 1991.

[3] T. Lasky and T.C. Hsia, "On force-tracking impedance control of robot manipulators", Proc. of the IEEE International Conference on Robotics and Automation, pp. 274-280, 1991.

[4] H. Seraji and R. Colbaugh, "Force tracking in impedance control", Proc. of the IEEE International Conference on Robotics and Automation, pp. 499-506, 1993.

Figure 8: Sinusoidal Environment Surface: Position Tracking in x axis

Figure 9: Sinusoidal Environment Surface: Position Tracking in z axis

[5] S. Jung, T. C. Hsia, and R. G. Bonitz, "On force tracking impedance control with unknown environment stiffness", Proc. of IASTED International Conference on Robotics and Manufacturing, pp. 181-184, Cancun, June 1995.

[6] T.C. Hsia and L. S. Gao, "Robot manipulator control using decentralized linear time-invariant time-delayed joint controllers", IEEE Conference on Robotics and Automation, pp. 2070-2075, 1990.

[7] T.C. Hsia, "Simple robust schemes for Cartesian space control of robot manipulators", International Journal of Robotics and Automation, pp. 167-174, 1994.

[8] S. Jung and T. C. Hsia, "On neural network application to robust impedance control of robot manipulator", Proc. of IEEE International Conference on Robotics and Automation, pp. 869-874, Nagoya, 1995.

[9] S. Jung and T. C. Hsia, "Neural network techniques for robust force control of robot manipulators", Proc. of IEEE Symposium on Intelligent Control, pp. 111-116, Monterey, August 1995.

[10] H. Miyamoto, M. Kawato, T. Setoyama, and R. Suzuki, "Feedback error learning neural network for trajectory control of a robotic manipulator", Neural Networks, vol. 1, pp. 251-265, 1988.

[11] T. Yabuta and T. Yamada, "Learning control using neural networks", Proc. of the IEEE International Conference on Robotics and Automation, pp. 740-745, 1991.

[12] T. Fukuda and T. Shibata, "Theory and applications of neural networks for industrial control systems", IEEE Trans. on Industrial Electronics, vol. 39, pp. 472-489, 1992.

[13] A. Ishiguro, T. Furuhashi, S. Okuma, and Y. Uchikawa, "A neural network compensator for uncertainties of robot manipulator", IEEE Trans. on Industrial Electronics, vol. 39, pp. 61-66, December 1992.

[14] F.L. Lewis, K. Liu, and A. Yesildirek, "Neural net robot controller with guaranteed tracking performance", IEEE Symposium on Intelligent Control, pp. 225-231, 1993.

[15] J. Yuh, "A neural net controller for underwater robotic vehicles", IEEE Journal of Oceanic Engineering, vol. 15, pp. 161-166, 1990.

[16] S. Jung and T.C. Hsia, "A study on new neural network schemes for robot manipulator control", Robotica, vol. 14, pp. 7-16, 1996.

[17] S. Jung and T. C. Hsia, "A new neural network control technique for robot manipulators", Robotica, vol. 13, pp. 477-484, 1995.