2012 IEEE International Conference on Control Applications (CCA) Part of 2012 IEEE Multi-Conference on Systems and Control October 3-5, 2012. Dubrovnik, Croatia

Position and Orientation Control of an Omni-Directional Mobile Rehabilitation Robot

Dongfeng Luo1,3, Thomas Schauer2, Michael Roth2 and Jörg Raisch1,2

1 D. Luo and J. Raisch are with the Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany
2 T. Schauer, M. Roth and J. Raisch are with the Technische Universität Berlin, Control Systems Group, Berlin, Germany, [email protected]
3 D. Luo is with the Xi'an Institute of Optics and Precision Mechanics of CAS, Xi'an, China

Abstract— Position and orientation control for an omni-directional mobile robot is investigated. The table-placed robot shall be used for arm and shoulder rehabilitation of stroke patients. The position and orientation of the device are determined by means of a modified Kalman filter which encompasses a kinematic model of the robot. By fusing information from incremental encoders at the robot and from an infrared camera at the ceiling, accurate and reliable estimates of robot position and orientation can be obtained. A cascaded position controller is designed for the mobile robot to allow the tracking of arbitrary translational reference movements and the stabilisation of the robot orientation. In the inner control loop, individual angular velocity controllers have been implemented for the three wheels to achieve maximum actuator performance and to facilitate a simplified state-space description of the robot dynamics. For the outer loop, a multi-variable state-space controller is utilised in the form of a discrete-time linear quadratic regulator. The sensor fusion scheme and the control system are validated in experiments and the obtained results are discussed.

I. INTRODUCTION

Robot-aided neuro-rehabilitation has been widely studied in recent years. Robots are not only able to provide repetitive functional movement training, but can also monitor the movement objectively and quantitatively. A variety of upper limb rehabilitation robots has been developed to assist, enhance, evaluate, and document neurological and orthopaedic rehabilitation of movement, including MIT-Manus [1], [2], GENTLE/s [3], and ARMin [4], [5]. Some of these devices have been proven to reduce motor impairment in the hemiparetic upper limbs of stroke patients. However, these approaches focus exclusively on the clinical setting, which entails a lack of mobility, high acquisition costs and limited patient training times. To address some of these issues, concepts that allow stroke patients with upper limb impairments to continue rehabilitation therapy at home must be investigated [6]. In order for the technology to support therapist-independent home rehabilitation, the robot must be designed such that the system is simple enough for a patient to use without therapist input. These strong requirements, compact design and therapist independence, are currently not met by any active robotic device for training upper limb functions after stroke.

A first attempt is given by the Arm-Skate system [7], a semi-passive table-top device that is lightweight and less cumbersome. The ARMassist device [8] is another passive device that can be operated in a home environment on a standard table. Both devices are simple to set up and affordable for home-based rehabilitation. However, their capabilities are still limited by their lack of active movement support. The Reha-Maus, a novel upper limb rehabilitation system developed by the Control Systems Group at TU Berlin, represents the first concept of a portable rehabilitation robot that actively provides different levels of patient assistance. Its compact design and reduced complexity facilitate user-friendly home training as well as clinical use, and the implemented control strategy offers guidance and assistance for the patient's arm training. Furthermore, its estimated low price allows for group therapy with several robots in use. Prospective training applications include assisted or perturbed motion of shoulder and elbow induced by controlled robot motion. Moreover, the utilised sensors and the use of computer control allow for advanced training programs and continuous patient assessment and monitoring.

The design of the Reha-Maus is based on a mobile robot driven by omni-directional wheels. The table-placed robot guides the hand/lower arm and allows simultaneously and independently controlled rotational and translational motion in a plane. For an omni-directional mobile robot, motion control is the essential component. In [9], a simple resolved-acceleration control with PI and PD feedback has been developed to control the robot velocity and orientation angle. This design was sufficient for mobile service robots. In [10], an integrated control scheme has been developed for robot motion and internal forces based on extension modelling. The designed controller was only tested in simulation. Liu et al. [11] proposed a two-loop controller architecture based on trajectory linearisation. For the nonlinear controller design, a precise dynamic nonlinear model of the robot is required. A sensor fusion method, which combines the on-board sensors and the vision system data, was employed to provide accurate and reliable robot position and orientation estimates, thereby reducing the wheel-slippage-induced tracking error.

This contribution describes the motion control system of the Reha-Maus. In the first part of this paper, the concept of this robotic system, the mechanical design, the electronics and sensors as well as the software environment are presented.



In the second part, as a basis for the following sections, the kinematics of the mobile platform are described in detail. Third, a sensor fusion scheme, which combines global absolute position measurements and encoder data to provide an accurate and reliable estimation of robot position and orientation, is presented. In addition to the work in [11], a delay between the measurement systems is taken into account. Fourth, the paper describes the design of the cascaded multi-variable position control scheme for the tracking of arbitrary reference motions. In contrast to other studies, the design is based only on the kinematic model and an experimentally identified simple transfer function model of the wheel speed dynamics. Finally, experimental results of the sensor fusion and the position controller are given.

II. OMNI-DIRECTIONAL ROBOT

The Reha-Maus is designed to allow patients to train their hemiparetic arm. Figure 1 shows a prospective application scenario of the Reha-Maus. The lower right arm of a patient is pivoted on the robotic platform. The system is used to assist or resist the patient's arm and shoulder movements during the training. The patient's task during the treatment is to track a pre-specified set of trajectories. Human-device interaction forces/moments can be measured by a 6D force/torque sensor (JR3 multi-axis load cell, JR3, Inc., United States) underneath the arm support, and the movement is monitored by an infrared camera (Wii remote control, Nintendo, Japan) above the table and by incremental encoders at the motors.

Fig. 1. An application scenario of the Reha-Maus. The patient's arm is pivoted on the mobile platform and can move in the planar table plane.

Arbitrary translational and rotational motion on the table surface is facilitated by three DC-motor-driven omni-wheels. A compact design is achieved by mounting the drives along the edges of an equilateral triangle. The used omni-wheels have eight passive rollers aligned around their rims to enable low-friction rolling perpendicular to the attached motor shaft axis. Each omni-wheel is driven by a brushed DC motor (Amax 20 Watt, Maxon, Switzerland) with a nominal voltage of 24 V and a maximum continuous torque of 44.4 mNm. Furthermore, a planetary gear head (GP 32 A, Maxon, Switzerland) interconnects each wheel and motor shaft, and a 512-line incremental encoder (MRL, Maxon, Switzerland) is mounted on each motor shaft to record the wheel rotation.

In the current version of the Reha-Maus, separate external power electronics (4-Q-DC servo amplifier ADS, Maxon, Switzerland) provide analogue current controllers for each motor. The configuration is capable of producing a force of up to 51 N and a torque of up to 5 Nm. Velocities are limited to 0.6 m/s for translations and to 9 rad/s for rotations. The robot weighs 2.8 kg and has a diameter of 300 mm. The control system has been designed and implemented on an i686 processor platform under real-time Linux (RTAI, http://www.rtai.org/) using Scilab/Scicos (http://www.scilab.org/) with RTAI-Lab and the HART toolbox (http://hart.sourceforge.net/). QRtaiLab (http://qrtailab.sourceforge.net/) is used for interaction with the real-time process to update controller parameters and to monitor and record process signals. A data acquisition card (Model 626, Sensoray, United States) is used to read information from the wheel encoders and current monitors as well as to provide the command signals (reference current for each servo amplifier). Information from the infrared camera is gathered via a Bluetooth link.

III. KINEMATICS

The Reha-Maus is a 3-DoF mobile system and has a fixed body frame [xR, yR] aligned with the centre of mass (cf. Figure 2). The description of the kinematics and dynamics takes place in generalised coordinates q = [x, y, θ]^T, where x and y represent the position of the robot in the planar workspace and θ is the robot orientation. The kinematic model is derived based on the assumption that the wheels have no slippage in the direction of the traction force.

Fig. 2. The geometry and coordinate systems of the Reha-Maus.

The angular velocities of the three omni-wheels form the vector ω(t) = [ω1(t), ω2(t), ω3(t)]^T. In order to calculate the transformation from wheel level ω to system level q, a kinematic model is derived. The tangential velocities of the three wheels, v(t) = [v1(t), v2(t), v3(t)]^T, can be mapped to the robot coordinates. The translational velocities ẋR(t) and ẏR(t) of the robot are then obtained by


y˙ R (t) of the robot are then obtained by x˙ R (t) = y˙ R (t) =

3 X i=1 3 X

vi (t) cos(αi ) = vi (t) sin(αi ) =

i=1

3 X i=1 3 X

by odometry which is, however, subject to drift and lacks accuracy. Fast and accurate estimates of the generalised position vector q are obtained by a sensor fusion scheme. The utilised Kalman filter combines the absolute position measurement q m from the camera with wheel speed information derived from the incremental encoders. The wheel velocities are processed by a time-discrete kinematic model in order to get first rough positions estimates. The existence of differing sampling times has to be taken into account. In this work, the camera-based sensor system operates at a sampling time tsm = 50 ms and possesses a considerable time delay of tdm ≈ 70 ms whereas the wheel velocities are available at a much shorter sampling time of ts = 1 ms. The ratio between long and short sampling periods is given by km = tsm /ts = 50.

rω ωi (t) cos(αi )

rω ωi (t) sin(αi ).

i=1

with α1 = 0, α2 = 2/3π and α3 = 4/3π. The scalar rω is ˙ the radius of the omni-wheel. The orientation velocity θ(t) of robot is a function of the velocities vθi (t), i = {1, 2, 3}, as depicted in Fig. 2 and is given by 3 3 X 1 X ˙ = 1 vθi (t) = rω ωi (t) sin(ϕ), θ(t) rr i=1 rr i=1

where rr = 10.5 cm is the robot rotation radius, and ϕ = 39◦ is a construction specific angle. These transformations can also be written in vector/matrix form:     x˙ R (t) ω1 (t)  y˙ R (t)  = Γr ω2 (t) (1) ˙ ω3 (t) θ(t)   cos(α1 ) cos(α2 ) cos(α3 ) Γr =rω  sin(α1 ) sin(α2 ) sin(α3 )  sin(ϕ) rr

sin(ϕ) rr

A. Discrete-time kinematic model Because of this different sampling times and the presence of a time-delay the suggested sensor fusing algorithm differs from the classical Kalman filter. The following nonlinear state-space model builds the basis for the design of the modified Kalman filter: q(k) = q(k − 1) + Γ(θ(k − 1))ω(k − 1) + µ(k − 1) (5)

sin(ϕ) rr

where Γ(θ(k − 1)) is given by

In most cases, it is convenient to control the robot in generalised coordinates q. The transformation matrix from robot to generalised coordinates is given by   cos(θ(t)) − sin(θ(t)) 0 (2) Γq (θ(t)) =  sin(θ(t)) cos(θ(t)) 0 . 0 0 1

Γ(θ(k − 1)) = rω ts ·  cos(θ) cos(θ + 2π 3 )  sin(θ) sin(θ + 2π 3 ) sin(ϕ) rr

This model has been obtained from (3) by Euler discretisation for the sampling period ts . Inputs to the discretetime system are the process noise vector µ(k − 1) with the corresponding covariance matrix Q and wheel velocity vector ω(k) which is obtained from numerical differentiation of the encoder signals. Let k be the index of the fast sampling instants related to the sampling period ts . The output vector y∗ is available only every 50 ms and represents the time-delayed system state obtained from the camera measurement. The output equations is

Finally, the kinematic relation between the wheels’ angular ˙ velocity vector ω(t) and the generalised velocity vector q(t) is defined as ˙ q(t) = Γ(θ(t))ω(t) Γ(θ(t)) = Γq (θ(t))Γr  cos(θ) = rω  sin(θ) sin(ϕ) rr

(3) (4) cos(θ + 2π 3 ) sin(θ + 2π 3 ) sin(ϕ) rr

The inverse of Γ is given by  2 cos(θ) Γ−1 =

3 √ 1   − 3 sin(θ)+cos(θ) 3 rω  √3 sin(θ)−cos(θ) 3

 cos(θ + 4π 3 ) . sin(θ + 4π 3 ) sin(ϕ) rr

y∗ (k = ikm ) 2 sin(θ) 3√ − sin(θ)− 3 cos(θ) 3√ − sin(θ)+ 3 cos(θ) 3

sin(ϕ) rr

(6)  cos(θ + 4π 3 )  sin(θ + 4π 3 ) sin(ϕ) rr



rr 3 sin(ϕ)  rr  3 sin(ϕ)  rr 3 sin(ϕ)

and will be used in the position control of the mobile robot.

= q(ikm − dm ) + ν(ikm ) = q m (ikm ), i = 1, 2, . . .

where ν represents the measurement noise with the corresponding covariance matrix R. B. Case: missing absolute position measurement If no new camera position measurement is available, a pure model-based prediction is carried out to determine an ˜ (k): up-to-date a priori position estimate q
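As an illustration of the transformations (1)-(4), the following short numerical sketch evaluates Γ(θ) and maps a desired generalised velocity to wheel speeds. Python/NumPy is used here purely for illustration (the actual implementation runs under Scilab/Scicos with RTAI-Lab); the wheel radius r_ω is not quoted in the text, so the value below is an arbitrary placeholder, while r_r and ϕ are taken from this section.

```python
import numpy as np

# Geometry from Section III; the wheel radius r_omega is not stated in the
# text, so the value below is only a placeholder for illustration.
R_R = 0.105           # robot rotation radius r_r [m]
PHI = np.deg2rad(39)  # construction-specific angle phi [rad]
R_OMEGA = 0.03        # wheel radius r_omega [m] -- assumed placeholder

def gamma(theta: float) -> np.ndarray:
    """Kinematic matrix Gamma(theta) of Eq. (4): maps wheel speeds to q_dot."""
    angles = theta + np.array([0.0, 2*np.pi/3, 4*np.pi/3])
    return R_OMEGA * np.vstack([
        np.cos(angles),
        np.sin(angles),
        np.full(3, np.sin(PHI) / R_R),
    ])

def wheel_speeds_from_qdot(q_dot: np.ndarray, theta: float) -> np.ndarray:
    """Inverse kinematics: omega = Gamma(theta)^{-1} q_dot."""
    return np.linalg.solve(gamma(theta), q_dot)

# Example: pure translation along x with 0.1 m/s at theta = 0
omega = wheel_speeds_from_qdot(np.array([0.1, 0.0, 0.0]), theta=0.0)
print(omega)               # wheel 1 positive, wheels 2 and 3 negative
print(gamma(0.0) @ omega)  # reproduces [0.1, 0, 0]
```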

IV. SENSOR FUSION SCHEME

In the mobile robot design, as indicated above, two types of position sensors are used: an infrared camera and incremental encoders at the three motors. The infrared camera provides low-frequency and partly noise-corrupted measurements that contain the absolute position and orientation of the mobile robot. Encoder information provides a fast way to determine a high-frequency position and orientation estimate by odometry, which is, however, subject to drift and lacks accuracy. Fast and accurate estimates of the generalised position vector q are obtained by a sensor fusion scheme. The utilised Kalman filter combines the absolute position measurement q_m from the camera with wheel speed information derived from the incremental encoders. The wheel velocities are processed by a time-discrete kinematic model in order to obtain first rough position estimates. The existence of differing sampling times has to be taken into account. In this work, the camera-based sensor system operates at a sampling time t_sm = 50 ms and possesses a considerable time delay of t_dm ≈ 70 ms, whereas the wheel velocities are available at a much shorter sampling time of t_s = 1 ms. The ratio between the long and the short sampling period is given by k_m = t_sm/t_s = 50.

A. Discrete-time kinematic model

Because of these different sampling times and the presence of a time delay, the suggested sensor fusion algorithm differs from the classical Kalman filter. The following nonlinear state-space model builds the basis for the design of the modified Kalman filter:

\[
q(k) = q(k-1) + \Gamma(\theta(k-1))\,\omega(k-1) + \mu(k-1), \tag{5}
\]

where Γ(θ(k − 1)) is given by

\[
\Gamma(\theta(k-1)) = r_\omega t_s \begin{bmatrix}
\cos(\theta) & \cos(\theta+\tfrac{2\pi}{3}) & \cos(\theta+\tfrac{4\pi}{3}) \\
\sin(\theta) & \sin(\theta+\tfrac{2\pi}{3}) & \sin(\theta+\tfrac{4\pi}{3}) \\
\frac{\sin(\varphi)}{r_r} & \frac{\sin(\varphi)}{r_r} & \frac{\sin(\varphi)}{r_r}
\end{bmatrix}, \quad \theta = \theta(k-1). \tag{6}
\]

This model has been obtained from (3) by Euler discretisation for the sampling period t_s. Inputs to the discrete-time system are the process noise vector µ(k − 1) with the corresponding covariance matrix Q and the wheel velocity vector ω(k), which is obtained from numerical differentiation of the encoder signals. Let k be the index of the fast sampling instants related to the sampling period t_s. The output vector y* is available only every 50 ms and represents the time-delayed system state obtained from the camera measurement. The output equation is

\[
y^*(k = i k_m) = q(i k_m - d_m) + \nu(i k_m) = q_m(i k_m), \quad i = 1, 2, \ldots,
\]

where ν represents the measurement noise with the corresponding covariance matrix R.

B. Case: missing absolute position measurement

If no new camera position measurement is available, a pure model-based prediction is carried out to determine an up-to-date a priori position estimate q̃(k):

\[
\tilde{q}(k) = \tilde{q}(k-1) + \Gamma(\tilde{\theta}(k-1))\,\omega(k-1). \tag{7}
\]

The most recent angular velocity vector ω(k − 1) is multiplied with the matrix Γ(θ̃(k − 1)). This matrix is defined by (6) and depends on the robot orientation estimate θ̃(k − 1) of the last time step. Hence, Eq. (7) is computed for sampling instants where only ω is updated. The update equation for the a priori covariance matrix of the state estimate is given by

\[
\tilde{P}(k) = \tilde{P}(k-1) + Q. \tag{8}
\]

Please note that in Eqs. (7) and (8) a priori estimates also appear on the right-hand sides of the equations. Because of the time delay d_m > 1, a posteriori estimates do not exist for the time instant k − 1.

C. Case: present absolute position measurement

Once a camera measurement y*(k) = q_m(k) = q(k − d_m) + ν(k) is available, one can determine the Kalman gain K(k − d_m) as well as the a posteriori estimates q̂(k − d_m) and P̂(k − d_m) of the state and the covariance matrix, respectively, for the past time instant k − d_m:

\[
K(k-d_m) = \tilde{P}(k-d_m)\bigl(\tilde{P}(k-d_m) + R\bigr)^{-1}, \tag{9}
\]
\[
\hat{P}(k-d_m) = \bigl(I - K(k-d_m)\bigr)\tilde{P}(k-d_m), \tag{10}
\]
\[
\hat{q}(k-d_m) = \tilde{q}(k-d_m) + K(k-d_m)\bigl(y^*(k) - \tilde{q}(k-d_m)\bigr). \tag{11}
\]

These calculations require that old a priori estimates and wheel velocities are temporarily stored. Based on the obtained a posteriori estimates, new a priori estimates q̃ and P̃ can be iteratively calculated from the time instant k − d_m + 1 up to the current time instant k:

\[
\tilde{q}(k-d_m+1) = \hat{q}(k-d_m) + \Gamma(\hat{\theta}(k-d_m))\,\omega(k-d_m),
\]
\[
\tilde{P}(k-d_m+1) = \hat{P}(k-d_m) + Q,
\]
\[
\tilde{q}(k-d_m+i) = \tilde{q}(k-d_m+i-1) + \Gamma(\tilde{\theta}(k-d_m+i-1))\,\omega(k-d_m+i-1), \quad i = 2, \ldots, d_m,
\]
\[
\tilde{P}(k-d_m+i) = \tilde{P}(k-d_m+i-1) + Q, \quad i = 2, \ldots, d_m.
\]

The proposed algorithm accounts for a non-negligible transmission delay in the camera-based measurement system.
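The following sketch summarises one possible implementation of the described fusion cycle: model-based prediction (7)-(8) at every fast sample and, whenever a delayed camera sample arrives, the correction (9)-(11) for the past instant k − d_m followed by a re-propagation with the buffered wheel speeds. Python/NumPy, the covariance values, the wheel radius and the buffer handling are illustrative assumptions and not the authors' code; in particular, the interplay of successive camera samples (t_dm > t_sm) is kept deliberately simple here.

```python
import numpy as np
from collections import deque

# Geometry and timing as in Sections III/IV; r_omega and the covariances
# Q, R are illustrative placeholders.
R_R, PHI, R_OMEGA = 0.105, np.deg2rad(39), 0.03
TS, D_M = 0.001, 70                      # t_s = 1 ms, camera delay of about 70 samples
Q_COV = np.eye(3) * 1e-8                 # process noise covariance (assumed)
R_COV = np.eye(3) * 1e-4                 # measurement noise covariance (assumed)

def gamma_d(theta):
    """Discretised kinematic matrix of Eq. (6), including the factor t_s."""
    ang = theta + np.array([0.0, 2*np.pi/3, 4*np.pi/3])
    return R_OMEGA * TS * np.vstack([np.cos(ang), np.sin(ang),
                                     np.full(3, np.sin(PHI) / R_R)])

class DelayCompensatingKF:
    """Kalman filter with prediction (7)-(8) and delayed correction (9)-(11)."""

    def __init__(self, q0):
        self.q = np.asarray(q0, dtype=float)   # a priori estimate q~(k)
        self.P = np.eye(3) * 1e-3
        self.hist = deque(maxlen=D_M)          # stored (q~, P~, omega) of past steps

    def predict(self, omega):
        """One fast sampling step, Eqs. (7) and (8)."""
        self.hist.append((self.q.copy(), self.P.copy(), np.asarray(omega, float)))
        self.q = self.q + gamma_d(self.q[2]) @ omega
        self.P = self.P + Q_COV

    def correct(self, y_cam):
        """Camera sample referring to time k - d_m: Eqs. (9)-(11), then replay."""
        if len(self.hist) < D_M:
            return                              # not enough history yet
        q_old, P_old, _ = self.hist[0]          # stored a priori values at k - d_m
        K = P_old @ np.linalg.inv(P_old + R_COV)            # Eq. (9)
        P_hat = (np.eye(3) - K) @ P_old                     # Eq. (10)
        q_hat = q_old + K @ (np.asarray(y_cam, float) - q_old)   # Eq. (11)
        # re-propagate with the buffered wheel speeds up to the current instant
        q, P = q_hat, P_hat
        for _, _, omega in self.hist:
            q = q + gamma_d(q[2]) @ omega
            P = P + Q_COV
        self.q, self.P = q, P
```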

V. POSITION CONTROL

To execute arbitrary robot movements, a cascaded discrete-time position controller is developed while assuming negligible dynamics for the state observer, i.e. q(k) = q̃(k). The inner loop consists of three independent motor speed controllers to operate the system at maximum actuator capacity and to facilitate a simplified state-space description of the robot. For the outer loop, an optimal position controller is used based on a discrete-time linear quadratic regulator (LQR) design.

A. Inner loop: wheel speed control

Each DC motor is powered by individual power electronics offering analogue motor current control. Consequently, the reference motor currents I_ri, i ∈ {1, 2, 3}, represent the manipulable process input signals. Controlled signals at the inner-loop level are the motor wheel speeds ω_i(k), i ∈ {1, 2, 3}. The relation between the speed of a wheel and the corresponding reference current is approximately described by a simple pulse transfer function model

\[
\omega_i(k) = \frac{B(q^{-1})}{A(q^{-1})}\, I_{ri}(k) + \frac{1}{A(q^{-1})(1-q^{-1})}\, e(k) \tag{12}
\]

with the polynomials A(q^{-1}) = 1 + a_1 q^{-1} and B(q^{-1}) = b_0 q^{-1}. Here, k is the sample index, e(k) represents a noise input and q^{-1} is the back-shift operator (q^{-1} s(k) = s(k − 1)). A sampling time t_s of one millisecond is used, as before in the sensor fusion. The transfer function (12) can be experimentally identified [12]. Only one wheel of the robot, with the arm load, is activated by a square-wave reference current signal. This signal has to be chosen carefully so as not to operate the motor at maximal speed, in order to allow linear system identification. In order to track the demanded reference wheel speeds ω_ri, i ∈ {1, 2, 3} (generated by the outer control loop), the following digital controller in polynomial form is applied for each motor [13]:

\[
I_{ri}(k) = \frac{1}{R(q^{-1})}\bigl(-S(q^{-1})\,\omega_i(k) + T(q^{-1})\,\omega_{ri}(k)\bigr). \tag{13}
\]

The feedback part of the controller possesses a pole at one (to have integral action) and a zero at minus one (to eliminate the effect of noise at the Nyquist frequency):

\[
R(q^{-1}) = \bar{R}(q^{-1})(1 - q^{-1}), \tag{14}
\]
\[
S(q^{-1}) = \bar{S}(q^{-1})(1 + q^{-1}). \tag{15}
\]

The factors R̄ and S̄ of the controller polynomials are determined by pole placement. The characteristic polynomial A_cl(q^{-1}) of the closed-loop system (inner loop) is given by

\[
A_{cl} = A\,\bar{R}\,(1 - q^{-1}) + B\,\bar{S}\,(1 + q^{-1}) \triangleq A_{dcl}. \tag{16}
\]

The polynomials R̄ and S̄ have to be determined in such a way that a desired, pre-specified closed-loop polynomial A_dcl is obtained. For a minimal-degree controller we chose deg A_dcl = 3, deg S̄ = 1 and deg R̄ = 1. The polynomial A_dcl is factorised into a first- and a second-order polynomial:

\[
A_{dcl}(q^{-1}) = A_o(q^{-1})\,A_r(q^{-1}),
\]
\[
A_r(q^{-1}) = 1 - a_{r1} q^{-1} = 1 - e^{-t_s/t_{rr}}\, q^{-1},
\]
\[
A_o(q^{-1}) = 1 - 2 e^{-\omega_0 t_s} q^{-1} + e^{-2\omega_0 t_s} q^{-2}
\]

with ω_0 = 3.2/t_ro. Both polynomials, A_r and A_o, of course yield stable system behaviour and characterise the closed-loop dynamics. Tuning parameters of the wheel speed control loops are the rise times t_rr and t_ro associated with the polynomials A_r and A_o, respectively. The desired transfer function from the reference wheel speed to the actual wheel speed shall be of first order, with the polynomial A_r(q^{-1}) as denominator and a unity DC gain:

\[
T_\omega(q^{-1}) = \frac{\omega_i}{\omega_{ri}} = \frac{q^{-1} A_r(1)}{A_r(q^{-1})} = \frac{q^{-1}(1 - a_{r1})}{1 - a_{r1} q^{-1}}. \tag{17}
\]

To fulfil (17), the pre-filter polynomial T(q^{-1}) of the controller has to be chosen as

\[
T(q^{-1}) = \frac{A_o(q^{-1})\,A_r(1)}{B(1)}. \tag{18}
\]

Each speed controller has been implemented with an anti-windup scheme as proposed in [13].
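To make the pole-placement steps (12)-(18) concrete, the following sketch computes R, S and T numerically for one wheel. The plant coefficients a_1 and b_0 are not reported in the paper and are therefore placeholders; the rise times correspond to the values later used in Section VI, and Python/NumPy is assumed. This is a design-time calculation only, not the authors' implementation.

```python
import numpy as np

# Plant model (12): A(q^-1) = 1 + a1 q^-1, B(q^-1) = b0 q^-1.
# a1 and b0 must come from the identification experiment; the values below
# are placeholders only.
a1, b0 = -0.95, 0.4
ts, t_rr, t_ro = 0.001, 0.015, 0.010          # rise times as in Section VI

A = np.array([1.0, a1])                       # ascending powers of q^-1
B = np.array([0.0, b0])

# Desired closed-loop polynomial (16): A_dcl = A_o * A_r
a_r1 = np.exp(-ts / t_rr)
w0 = 3.2 / t_ro
Ar = np.array([1.0, -a_r1])
Ao = np.array([1.0, -2*np.exp(-w0*ts), np.exp(-2*w0*ts)])
Adcl = np.convolve(Ao, Ar)                    # degree 3

def closed_loop(x):
    """A*Rbar*(1 - q^-1) + B*Sbar*(1 + q^-1) for x = [r1, s0, s1], r0 = 1."""
    r1, s0, s1 = x
    t1 = np.convolve(np.convolve(A, [1.0, -1.0]), [1.0, r1])
    t2 = np.convolve(np.convolve(B, [1.0, 1.0]), [s0, s1])
    return t1 + t2

# The map x -> closed_loop(x) is affine; set up and solve the 3x3 system.
base = closed_loop(np.zeros(3))
M = np.column_stack([closed_loop(e) - base for e in np.eye(3)])
x = np.linalg.solve(M[1:], (Adcl - base)[1:])   # q^0 coefficient matches by construction

R_bar, S_bar = np.array([1.0, x[0]]), np.array([x[1], x[2]])
R = np.convolve(R_bar, [1.0, -1.0])           # Eq. (14): integral action
S = np.convolve(S_bar, [1.0, 1.0])            # Eq. (15): zero at the Nyquist frequency
T = Ao * (1.0 - a_r1) / b0                    # Eq. (18): pre-filter for unity DC gain

print("R =", R, "\nS =", S, "\nT =", T)
print("pole placement ok:", np.allclose(closed_loop(x), Adcl))
```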

B. Outer loop: state-space position controller

Fig. 3. Time-variant state-space model with the reference velocity as input and the actual position as output (both in generalised coordinates): inverse kinematic transform Γ(k)^{-1}, simplified ω control loop and kinematic model Γ(k).

The relationship between the reference wheel velocity vector ω_r and the actual wheel velocity vector ω can be expressed by a single state-difference equation

\[
\omega(k+1) = (1 - a_{r1})\,\omega_r(k) + a_{r1}\,\omega(k). \tag{19}
\]

Combining the kinematic model (7) (assuming negligible dynamics for the state observer, i.e. q(k) = q̃(k)) with the wheel speed dynamics (19) yields a linear time-varying model with the state

\[
x(k) = \begin{bmatrix} q(k) \\ \omega(k) \end{bmatrix}.
\]

Let I and O be the 3 × 3 identity and zero matrix, respectively. Then, the combined state-difference equation can be formulated as

\[
\begin{bmatrix} q(k+1) \\ \omega(k+1) \end{bmatrix}
= \begin{bmatrix} I & \Gamma(k) \\ O & a_{r1} I \end{bmatrix}
\begin{bmatrix} q(k) \\ \omega(k) \end{bmatrix}
+ \begin{bmatrix} O \\ (1-a_{r1})\, I \end{bmatrix} \omega_r(k). \tag{20}
\]

The whole state x(k) is accessible by measurement or estimation. The desired output is the generalised position. Hence, the output equation is given by

\[
y(k) = \begin{bmatrix} I & O \end{bmatrix}
\begin{bmatrix} q(k) \\ \omega(k) \end{bmatrix}. \tag{21}
\]

In order to improve disturbance rejection and facilitate the use of a linear state-feedback design, the plant is approximately linearised. The linearisation is accomplished by augmenting the existing model with an inverse of the kinematic transformation, ω_r(k) = Γ^{-1}(k) q̇_r(k). The corresponding state-space representation is then given by

\[
\begin{bmatrix} q(k+1) \\ \omega(k+1) \end{bmatrix}
= \begin{bmatrix} I & \Gamma(k) \\ O & a_{r1} I \end{bmatrix}
\begin{bmatrix} q(k) \\ \omega(k) \end{bmatrix}
+ \begin{bmatrix} O \\ (1-a_{r1})\,\Gamma^{-1}(k) \end{bmatrix} \dot{q}_r(k). \tag{22}
\]

The input signal for the augmented system is now the reference velocity q̇_r(k) in generalised coordinates. As shown in Figure 3, both time-varying transformation matrices Γ^{-1}(k) and Γ(k) are interconnected through the wheel velocity subsystem. The actual design procedure can now be carried out for one linearised plant, i.e. one pair of Γ(k) and its corresponding inverse, evaluated at a fixed robot orientation θ_0. This approach is valid if the desired orientation angle is equal to θ_0 and if the controlled orientation angle remains close to the desired one. In this work, an orientation of θ_0 = 0 rad is chosen as linearisation and reference angle. The corresponding matrix Γ_0 and its inverse Γ_0^{-1} are constant. The resulting linear time-invariant model for the controller design can now be formulated as

\[
\begin{bmatrix} q(k+1) \\ \omega(k+1) \end{bmatrix}
\approx \begin{bmatrix} I & \Gamma_0 \\ O & a_{r1} I \end{bmatrix}
\begin{bmatrix} q(k) \\ \omega(k) \end{bmatrix}
+ \begin{bmatrix} O \\ (1-a_{r1})\,\Gamma_0^{-1} \end{bmatrix} \dot{q}_r(k). \tag{23}
\]

The output equation (21) remains unchanged. The utilised state-space feedback control is based on a Linear Quadratic Regulator (LQR) problem [14]. The control design objective is to minimise the following cost function:

\[
J_u = \sum_{k=k_0}^{\infty} \bigl( x^T(k)\,Q_c\,x(k) + u^T(k)\,R_c\,u(k) \bigr), \tag{24}
\]

where Q_c and R_c are symmetric weighting matrices for the state x(k) and the control input u(k) = q̇_r(k). The matrix Q_c is assumed to be non-negative definite, whereas R_c must be positive definite. The resulting optimal control signal is obtained as a state-feedback law given by

\[
L = \bigl(B^T P_\infty B + R_c\bigr)^{-1} B^T P_\infty A, \tag{25}
\]
\[
u(k) = -L\,x(k) + F\,q_r(k). \tag{26}
\]

P_∞ is the solution of the following discrete-time Riccati equation:

\[
P_\infty = A^T\bigl(P_\infty - P_\infty B (B^T P_\infty B + R_c)^{-1} B^T P_\infty\bigr) A + Q_c. \tag{27}
\]

The linear system matrices for the controller design, A, B and C, can be extracted from the nominal model given by (21) and (23). Furthermore, a reference filter F is designed to provide a unity DC gain for the command response. As the regarded system has six states and three manipulable inputs, the resulting controller matrix L is of dimension 3 × 6. The reference filter F is a 3 × 3 square matrix. Figure 4 illustrates the cascaded control scheme.
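A possible numerical realisation of the outer-loop design (23)-(27) is sketched below: the LTI model is built for θ_0 = 0, the Riccati equation (27) is iterated to a fixed point, and L and a reference filter F with unity DC gain are computed. The wheel radius and the weighting matrices Q_c, R_c are illustrative placeholders (the paper only reports the resulting closed-loop bandwidths), and the unity-DC-gain formula used for F is one common construction, not necessarily the authors' exact one.

```python
import numpy as np

# Linearised outer-loop model (23) and LQR design (24)-(27).
# r_omega, Qc and Rc are assumed placeholders.
R_R, PHI, R_OMEGA, TS = 0.105, np.deg2rad(39), 0.03, 0.001
a_r1 = np.exp(-TS / 0.015)                    # inner-loop pole for t_rr = 15 ms

def gamma_d(theta):
    ang = theta + np.array([0.0, 2*np.pi/3, 4*np.pi/3])
    return R_OMEGA * TS * np.vstack([np.cos(ang), np.sin(ang),
                                     np.full(3, np.sin(PHI) / R_R)])

G0 = gamma_d(0.0)                             # Gamma_0 at theta_0 = 0
I3, O3 = np.eye(3), np.zeros((3, 3))
A = np.block([[I3, G0], [O3, a_r1 * I3]])                # 6 x 6
B = np.vstack([O3, (1.0 - a_r1) * np.linalg.inv(G0)])    # 6 x 3
C = np.hstack([I3, O3])                                  # 3 x 6, Eq. (21)

Qc = np.diag([1e4, 1e4, 1e3, 1e-2, 1e-2, 1e-2])          # assumed state weights
Rc = 0.1 * np.eye(3)                                     # assumed input weights

# Fixed-point iteration of the discrete-time Riccati equation (27);
# scipy.linalg.solve_discrete_are(A, B, Qc, Rc) would give P_inf directly.
P = Qc.copy()
for _ in range(50000):
    Pn = A.T @ (P - P @ B @ np.linalg.solve(B.T @ P @ B + Rc, B.T @ P)) @ A + Qc
    if np.max(np.abs(Pn - P)) < 1e-10 * np.max(np.abs(Pn)):
        P = Pn
        break
    P = Pn

L = np.linalg.solve(B.T @ P @ B + Rc, B.T @ P @ A)       # Eq. (25), 3 x 6
# Reference filter F giving unity DC gain from q_r to y (one common choice)
F = np.linalg.inv(C @ np.linalg.inv(np.eye(6) - (A - B @ L)) @ B)
print("closed-loop pole magnitudes:", np.sort(np.abs(np.linalg.eigvals(A - B @ L))))
```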

Fig. 4. Cascaded position control scheme of the Reha-Maus (reference filter F, state feedback L, inverse kinematics Γ^{-1}, three wheel speed controllers providing the reference currents I_r, and the Kalman filter supplying the state estimate (q, ω)).
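For clarity, one outer-loop update of the cascaded structure in Fig. 4 can be sketched as follows; L, F and Γ_0^{-1} are assumed to come from the LQR design sketch above, and the returned wheel-speed set points are then tracked by the three RST controllers of Section V-A.

```python
import numpy as np

def outer_loop_step(q_est, omega_est, q_ref, L, F, Gamma0_inv):
    """One update of the outer loop in Fig. 4 (illustrative sketch):
    state feedback (26) followed by the inverse kinematic transform."""
    x = np.concatenate([q_est, omega_est])   # state x = (q, omega), six entries
    u = F @ q_ref - L @ x                    # reference velocity q_dot_r, Eq. (26)
    omega_ref = Gamma0_inv @ u               # wheel-speed set points
    return omega_ref                         # handed to the RST controllers (V-A)
```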

VI. EXPERIMENTAL EVALUATION

The sensor fusion and the position/orientation control have been experimentally validated. During the experiments, a healthy test person had the arm attached to the Reha-Maus. The test person could not see the reference trajectories and was instructed to keep the arm muscles relaxed.

A. Sensor fusion

First, the function of the sensor fusion was investigated. For this purpose, arbitrary motor control signals were applied and the corresponding movement was monitored. Figure 5 shows the estimated position and orientation together with the raw camera measurements and the results obtained from pure odometry. Odometric position and orientation are based exclusively on wheel information and the kinematic model. The estimates of the Kalman filter and the camera measurements have a similar shape, while the odometric signals clearly diverge.

Fig. 5. Monitoring of the robot position/orientation during arbitrary motion (Kalman filter estimate, camera measurement and odometric estimate of x, y and θ over time).

Figure 6 grants a closer look at the camera measurement and the Kalman filter estimate of the x-position during arbitrary robot motion. The sensor fusion signal exhibits a definite advance in comparison to the delayed camera measurement.

Fig. 6. Comparison of measured and estimated x-position of the robot (Kalman filter estimate, camera measurement and delay-shifted measurement). The absolute measurement was obtained by an IR camera.

B. System Identification

Figure 7 shows the result of an experiment for identifying the wheel speed dynamics. In the upper part of the graph, the measured angular speed of the wheel under investigation is shown together with the simulated output of the estimated transfer function. The lower graph shows the applied input signal (reference motor current).

Fig. 7. System identification result for the wheel speed dynamics (upper: measured wheel speed ω_1 and model output; lower: reference current I_r1 over time).
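As a rough sketch of how the coefficients of (12) can be obtained from such an experiment, a plain least-squares ARX fit on logged wheel speed and reference current is shown below (Python/NumPy assumed). The specific noise model in (12) and the identification method of [12] actually used are not reproduced here, and the "true" parameter values in the synthetic check are arbitrary.

```python
import numpy as np

def fit_wheel_model(omega, i_ref):
    """Least-squares fit of a1, b0 in (12) from logged wheel speed and
    reference current (plain ARX regression; the noise model of (12) is
    ignored, so this is only a rough identification sketch)."""
    y = omega[1:]
    Phi = np.column_stack([-omega[:-1], i_ref[:-1]])
    a1, b0 = np.linalg.lstsq(Phi, y, rcond=None)[0]
    return a1, b0

# synthetic check with a square-wave excitation as described in the text
ts, n = 0.001, 4000
i_ref = 0.5 * np.sign(np.sin(2*np.pi*1.0*np.arange(n)*ts))   # 1 Hz square wave
true_a1, true_b0 = -0.95, 0.4                                # assumed values
omega = np.zeros(n)
for k in range(1, n):
    omega[k] = -true_a1*omega[k-1] + true_b0*i_ref[k-1] + 0.01*np.random.randn()
print(fit_wheel_model(omega, i_ref))   # should be close to (-0.95, 0.4)
```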

C. Position control

Based on the identified model of the wheel speed dynamics, the individual speed controllers for the three DC motors have been designed. For all motors the following rise times have been set: t_rr = 0.015 s and t_ro = 0.010 s. The resulting inner loop has a bandwidth of approximately 45 Hz. The weighting matrices of the LQR design for the outer loop have been chosen to obtain a bandwidth of approximately 1 Hz for the entire position control system. This performance is satisfactory for the rehabilitation application. Figure 8 shows the control performance for tracking a circular path q_r with constant robot orientation and a period of five seconds. The observed position error is in the millimetre range. For the orientation, an error smaller than five degrees was registered.

Fig. 8. Results for tracking a circular trajectory (reference and measured x, y and θ over time).

VII. CONCLUSIONS

A novel type of rehabilitation robot for the upper limb has been developed and presented. A sensor fusion scheme for position and orientation estimation as well as a cascaded position control scheme have been proposed. Both schemes were successfully validated in experiments. The basis of the design is a simple kinematic model of the robot as well as a transfer function model of the motor dynamics. The latter can easily be identified experimentally. Compared to other published control schemes (e.g. [10], [11]), no complex modelling of the robot dynamics is required. The design is straightforward and can easily be adapted to other omni-directional robots. The sensor fusion scheme also takes into account the time delay of the camera system and reduces the wheel-slippage-induced tracking error. The realised position/orientation control builds the basis for the development of haptic control schemes. A first step in this direction has already been performed [15]. Further work includes the replacement of the ceiling-mounted infrared camera with a robot-mounted absolute position measurement (e.g. a camera looking down at a coded mat on the table).

REFERENCES

[1] H. I. Krebs, M. Ferraro, S. P. Buerger, M. J. Newbery, A. Makiyama, M. Sandmann, D. Lynch, B. T. Volpe, and N. Hogan, "Rehabilitation robotics: pilot trial of a spatial extension for MIT-Manus," Journal of NeuroEngineering and Rehabilitation, vol. 1, no. 5, pp. 1–15, 2004.
[2] N. Hogan, H. I. Krebs, J. Charnnarong, P. Srikrishna, and A. Sharon, "MIT-MANUS: a workstation for manual therapy and training. I," in Proceedings of the IEEE International Workshop on Robot and Human Communication, 1992, pp. 161–165.
[3] R. Loureiro, F. Amirabdollahian, M. Topping, B. Driessen, and W. Harwin, "Upper limb robot mediated stroke therapy — GENTLE/s approach," Auton. Robots, vol. 15, no. 1, pp. 35–51, 2003.
[4] T. Nef and R. Riener, "ARMin - design of a novel arm rehabilitation robot," in IEEE 9th International Conference on Rehabilitation Robotics (ICORR), 2005, pp. 57–60.
[5] T. Nef, M. Mihelj, G. Kiefer, C. Perndl, R. Muller, and R. Riener, "ARMin - exoskeleton for arm therapy in stroke patients," in IEEE 10th International Conference on Rehabilitation Robotics (ICORR), 2007, pp. 68–74.
[6] W. S. Harwin, J. L. Patton, and V. R. Edgerton, "Challenges and opportunities for robot-mediated neurorehabilitation," Proceedings of the IEEE, vol. 94, no. 9, pp. 1717–1726, 2006.
[7] A. Peattie, A. Korevaar, J. Wilson, B. Sandilands, X. Q. Chen, and M. King, "Automated variable resistance system for upper limb rehabilitation," in Australasian Conference on Robotics and Automation (ACRA), Sydney, Australia, 2009, pp. 1–8.
[8] J. C. Perry, H. Zabaleta, A. Belloso, and T. Keller, "ARMassist: a low-cost device for telerehabilitation of post-stroke arm deficits," in World Congress on Medical Physics and Biomedical Engineering, Munich, Germany, 2009, pp. 64–67.
[9] K. Watanabe, Y. Shiraishi, S. G. Tzafestas, J. Tang, and T. Fukuda, "Feedback control of an omnidirectional autonomous platform for mobile service robots," J. Intell. Robotics Syst., vol. 22, no. 3-4, pp. 315–330, 1998.
[10] D. Zhao, X. Deng, and J. Yi, "Motion and internal force control for omnidirectional wheeled mobile robots," IEEE/ASME Transactions on Mechatronics, vol. 14, no. 3, pp. 382–387, 2009.
[11] Y. Liu, J. J. Zhu, R. L. Williams II, and J. Wu, "Omni-directional mobile robot controller based on trajectory linearization," Robot. Auton. Syst., vol. 56, no. 5, pp. 461–479, 2008.
[12] L. Ljung, System Identification: Theory for the User, 2nd ed. Prentice Hall, 1999.
[13] K. J. Åström and B. Wittenmark, Computer-Controlled Systems: Theory and Design. Prentice Hall, 1997.
[14] G. C. Goodwin, S. Graebe, and M. Salgado, Control System Design. Addison Wesley, 2000.
[15] L. Dongfeng, J. Carstens, T. Schauer, and J. Raisch, "Haptic control of a table-placed mobile robot for arm/shoulder rehabilitation," in Proceedings of the 3rd European Conference on Technically Assisted Rehabilitation - TAR, 2011.
