Paper
Combining force control and visual servoing for planar contour following*
Johan Baeten, Walter Verdonck, Herman Bruyninckx and Joris De Schutter
Abstract: The limited bandwidth of sensor-based feedback control restricts the execution speed of a force controlled planar contour following task if the shape and/or pose of the workpiece are unknown. This paper shows how feedforward control of the contour orientation, calculated on-line from an eye-in-hand camera image, results in a faster or more accurately executed task. However, keeping the contour in the camera field of view imposes an additional requirement on the controller, which already has to maintain a force controlled contact. This double control problem is specified in the Task Frame formalism and executed in a hybrid position/force control environment. It is solved using the redundancy for rotation in the plane, which exists for rotationally symmetric (tracking) tools. The orientations of task and end effector frames can then be controlled independently. Experimental results are presented to validate the approach.
Keywords: Integration Vision/Force, Vision Based Feedforward, Visual Servoing, Hybrid Position/Force Control
1. Introduction
In planar force controlled contour following, the robot holds a tool while following the contour of a workpiece. When pose and shape of the workpiece are unknown, the force sensor is used to modify, or even generate, the tool trajectory on-line. Due to the limited bandwidth of the sensor-based feedback control loop, the execution speed of the task is restricted in order to prevent loss of contact or excessive contact forces. This paper shows how the performance of a contour following task improves w.r.t. a purely force feedback controlled task by combining force control (tracking) with visual servoing and vision based feedforward. While maintaining the force controlled contact, the controller has to keep the camera, also mounted on the robot end effector, over the contour at all times. Then, from the on-line vision-based local model of the contour, appropriate feedforward control is calculated and added to the feedback control in order to reduce tracking errors. The approach presented in this paper can be applied to all actions that scan surfaces along planar paths with a rotationally symmetric tool: cleaning, polishing, etc. It is especially useful for one-off tasks in which accurate positioning or calibration of the workpiece is costly or impossible.
1. 1 Framework
Hybrid position/force control [3]: In case of a point contact between tool and workpiece, the force controlled contour following task can easily be defined in the task frame formalism [2], [4], [12]. In this formalism, desired actions are specified separately for each direction of an orthogonal frame attached to the tool: the "Task Frame". Each direction (x, y, z) is considered once as an axial direction (translation; linear force) and once as a polar direction (rotation; torque).
Received February 24, 2000; accepted April 20, 2000. Department of Mechanical Engineering, Katholieke Universiteit Leuven, Celestijnenlaan 300B, B-3001 Heverlee, Belgium. E-mail:
[email protected]
Basically, there are three types of directions: velocity, force and tracking directions. Velocity and force directions are specified by setting a desired velocity or contact force, respectively. They correspond with control loops (ii) and (i) of Figure 1. For a tracking direction (iii), however, no desired velocity is specified. The controller uses this direction to automatically adjust the pose of the task frame w.r.t. the workpiece. If a rotationally symmetric tool is used, the task frame and the robot end effector frame are not necessarily rigidly fixed to each other. Hence, a redundancy for the task frame orientation exists. For example, reorienting the task frame can be done either by rotating the robot end effector while keeping the task frame fixed to the end effector, or by redefining the relation between task frame and robot end effector.
Adding the vision system: The camera is mounted on the end effector ahead of the force sensor with the optical axis normal to the plane. Mounting the camera in this way results in local images, normally free from occlusion, and in a controllable camera position. The image feature of interest is placed on the optical axis (center of the image) by visual servoing, which makes the feature measurement less sensitive to calibration errors or distortions in the camera/lens system. Moreover, incorporating the vision information in the position control loop of the robot makes the control robust against kinematic or structural errors [10]. Instead of the vision system, a distance sensor could be mounted in the plane of the contour to measure the contour. This would require much less computing power. Compared to the vision system described above, however, such a sensor and mounting cannot directly measure the orientation of the contour, cannot look around the contour at outer curves or corners, and is more susceptible to collision with the contour at inner curves. Furthermore, the contour following task is often only a part of a complete robot task [1] in which the vision system can also be used to globally locate the contour and search for a starting position.
According to the taxonomy introduced in [14], the visual servoing type used here is dynamic look-and-move control.
Fig. 1 Hybrid force/position control scheme; (*) difference of corresponding pairs only. (Block diagram: a high level task specification interpreter commands the hybrid position/force velocity controller with tracking, which drives the robot with joint controller; feedback loops (i)-(iv) use the measured contact forces from the force sensor, the measured velocities, the robot/workpiece position and the relative position of the contour in the image; the vision system adds matching and vision based feedforward (v) to the task frame velocities.)
For the combined vision/force contour following, as presented in this paper, the camera needs to be calibrated with respect to the robot end effector. After all, with the camera looking ahead and not seeing the actual contact point, the calibration is essential to match the vision information to the contact situation at hand. Thanks to the calibration, the vision algorithms too will benefit from the combined vision/force setup. If the relative position of camera and force sensor is known and if the (calibrated) tool is in contact with the object, the depth from the image plane to the object plane is known. Hence, the 3D position of the image feature can be calculated easily (with mono vision). This makes the adopted approach position-based [10]. The calibration of the camera pose (position and orientation) and of the intrinsic camera parameters, such as focal length, lens distortion and center of image, according to a pin-hole model, is adapted from [16]. We use, however, only one fixed feature, which is first measured by pointing to it with the robot and afterwards observed from about 80 different camera positions.
Additional control issues: Two new control specification issues arise. First, the contour has to be kept in the camera field of view by visual servoing. The task frame direction used to this end is a vision (controlled) direction (iv). Second, feedforward signals (v), calculated from the local vision information of the contour, are added in order to reduce errors in both tracking and vision controlled directions. The better the feedforward velocity anticipates the motion of the task frame, the smaller the tracking error [5]. Since the camera is looking ahead, the delay times in the image capture and processing, and the low bandwidth of the vision system, can be compensated. Hence, they will not affect the stability of the tracking and vision control loops (iii), (iv). However, generating the correct feedforward signal at the correct time instant requires careful matching between the contour information, extracted in advance, and the actual tool position.
Referred to as Endpoint Open-Loop (EOL) in [10].
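To make the position-based depth computation described above concrete, the following minimal Python sketch back-projects an image feature to the contour plane with a pin-hole model, assuming the depth along the optical axis is known from the calibrated tool contact. All names and default values (focal length, pixel size, sub-image center, camera pose matrix) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pixel_to_plane(u, v, depth, f=6.15, pixel_size=0.011, center=(64.0, 64.0)):
    """Back-project an image point (u, v) [pix] onto the contour plane.

    Pin-hole model with known depth [mm] along the optical axis; returns the
    feature position in the camera frame [mm]. Default values only illustrate
    the set-up of Section 3 (6.15 mm lens, 0.011 mm pixels, 128x128 sub-image).
    """
    x = (u - center[0]) * pixel_size * depth / f   # lateral offset along x_cam
    y = (v - center[1]) * pixel_size * depth / f   # lateral offset along y_cam
    return np.array([x, y, depth])

def feature_in_base(p_cam, T_base_cam):
    """Transform the camera-frame point to the robot base frame.

    T_base_cam is a 4x4 homogeneous camera pose, assumed available from the
    eye-in-hand calibration combined with the measured robot pose.
    """
    return (T_base_cam @ np.append(p_cam, 1.0))[:3]
```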
1. 2 Related work
Unlike previously reported hybrid position/force control with force sensor and camera [8], [19], our work uses an eye-in-hand camera rather than a fixed one. We do, however, need a calibrated camera (pose and intrinsic camera parameters). Only workpiece and environment may be fully uncalibrated. As in [19] (which gives a good overview of recent work in hybrid position/force control), vision and force data are used in a complementary way instead of being fused as redundant data. [8] suggests the need for a common coordinate frame for multiple external sensor-based controllers. In our opinion, the task frame lends itself very well to this purpose for numerous tasks, especially when multiple sensors are mounted on the end effector. The use of the task frame formalism to merge vision and force sensing results by definition in orthogonal force and vision control spaces. This, in contrast to [8], [19], avoids the extra effort of decoupling these spaces. The use of a task frame also distinguishes our work from visual servoing such as [7], [10]: no identification of a vision Jacobian or optical flow is necessary. [13] introduces the concept of vision and force resolvability as a means to compare the ability of the two sensing modes to provide useful information during robotic tasks. In our approach, the high level task description determines the use of each sensor. Through the hybrid control scheme, it also determines the way in which the sensor data are fused, instead of using a probabilistic weighting method as e.g. [9]. Many researchers report on vision based feedforward control. In contrast to this work, however, they mostly use off-line generated models (e.g. [6]) or partially known paths or workpieces (e.g. [11]).
1. 3 Notation
Figure 2a) gives the global experimental set-up, defining the task, the camera and the robot end effector frames. They are denoted by subscripts t, cam and ee, respectively. The following notations are used: P indicates an absolute pose [mm] or [rad]; ΔP_f is the relative pose (or error) of the contour w.r.t. the frame f; F is a force [N] or torque [Nmm]; u is a generalized velocity [mm/sec] or [rad/sec], with superscripts c, m and d referring to commanded, measured and desired signals, respectively; superscript ff indicates the feedforward signal. K denotes a control gain [1/sec] and k_compl is the inverse stiffness or compliance of the robot tool [mm/N] or [rad/Nmm]. The concerned direction is indicated between brackets, with x, y and z for translations and θx, θy and θz for rotations. If, for simplicity, the subscript 'z' is omitted, θ indicates a rotation around the z-axis.
Fig. 2 a) Global overview (KUKA 361 robot, end effector with camera and force probe (tool), task frame, workpiece contour and camera field of view), b) Top view of the contour for two time instants (contact point A and camera point B), c) Top view of the contact variables in the task frame and d) Local variables relative to the camera (contour tangent, curvature radius r, curvature κ = -1/r and relative pose ΔP_cam).
Figure 2c) shows how the tracking error ΔP_t(θ) between the task frame and the workpiece tangent and normal is identified, based on the velocities u_t^c(x) and u_t^c(y). Figure 2d) defines the local contour variables as seen by the camera.
1. 4 Overview of the paper
Section 2 gives the details of the control approach. It deals with the problem of keeping the contour in the camera field of view in addition to maintaining the force controlled contact. It further explains how the vision data are matched to the actual position, and how the feedforward signal is calculated from them. Section 3 describes the experimental set-up and the image processing. It also presents the experimental results. Finally, Section 4 summarizes the major conclusions and contributions.
2. Detailed control approach
Complete task description: The task consists of following a planar contour with force feedback and vision based feedforward. The objectives are (i) to keep a constant normal contact force between tool and workpiece (direction y), (ii) to move along the workpiece contour with a given (tangential) velocity (direction x), (iii) to keep the task frame tangent to the contour (direction θz), (iv) to keep the contour in the camera field of view (direction θz of the end effector) and (v) to generate the correct feedforward to rotate the task frame in the plane in order to reduce tracking errors (direction θz). These desired actions, visualized in Figure 3, are specified in a high level task description as used in our experimental control environment COMRADE [17]. The syntax and units used in this specification clearly indicate the control type. As an example, the program for the above task, expressed in the task frame indicated in Figure 2a, looks like:

TASK EXAMPLE:
  with task frame: variable - visual servoing
    x  : velocity 25 mm/sec
    y  : force 30 N
    z  : velocity 0 mm/sec
    θx : velocity 0 rad/sec
    θy : velocity 0 rad/sec
    θz : track & feedforward
  until relative distance 550 mm
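As an illustration of how such a specification could be represented in software, the short Python sketch below encodes the same task frame directions and set-points as a simple data structure. The class and field names are assumptions for illustration only and do not reflect the COMRADE syntax.

```python
from dataclasses import dataclass

@dataclass
class DirectionSpec:
    mode: str        # "velocity", "force" or "track"
    setpoint: float  # desired velocity [mm/s or rad/s] or force [N]
    feedforward: bool = False

# Task frame specification for the contour following example (illustrative only).
contour_following_task = {
    "x":       DirectionSpec("velocity", 25.0),   # tangential velocity [mm/s]
    "y":       DirectionSpec("force",    30.0),   # normal contact force [N]
    "z":       DirectionSpec("velocity",  0.0),
    "theta_x": DirectionSpec("velocity",  0.0),
    "theta_y": DirectionSpec("velocity",  0.0),
    "theta_z": DirectionSpec("track",     0.0, feedforward=True),
}
termination = {"relative_distance_mm": 550.0}
```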
Fig. 3 Different control actions for the contour following task: to the force based tracking action (iii), vision based feedforward (v) is added (the other actions are force control (i), velocity control (ii) and visual servoing (iv)).

2. 1 Double contact control
Keeping the contour in the camera field of view corresponds to keeping a "virtual double contact". The first contact (point A in Figure 2b) is between workpiece and force probe, which must move at the specified velocity. The second (virtual) contact (point B in Figure 2b) coincides with
the intersection of the optical axis and the contour plane. To maintain this double contact, the following control actions are taken:
(i) Force control: Compensate the normal force error by applying a velocity in the y-direction of the task frame:
u_t^c(y) = K_f · k_compl · (F_t^d(y) − F_t^m(y))    (1)

with K_f the force control gain [1/sec] and k_compl the compliance (from object to robot) in the y-direction of the task frame.
(ii) Velocity control: Move the task frame (point A) with the desired tangential velocity:

u_t^c(x) = u_t^d(x)    (2)

(iii) Tracking: Keep the task frame tangent to the contour by rotating it around the z-direction proportionally to the tracking error ΔP_t(θ), identified from the commanded velocities as illustrated in Figure 2c:

u_t^c(θ) = K_θ · ΔP_t(θ)    (3)

with K_θ the proportional control gain for the tracking direction θz in [1/sec].
(iv) Visual servoing: Rotate the end effector around the z-direction by:
u_ee^c(θ) = u_cam^c(x) / d_AB    (4)

with

u_cam^c(x) = K_vs · ΔP_cam(x) + u_2    (5)

Component K_vs · ΔP_cam(x) controls point B towards the contour; d_AB is the fixed distance between points A and B [mm]. Component u_2 moves point B tangent to the contour with a velocity of known direction but unknown magnitude, while compensating for the velocity of point A. (This is in fact a feedforward signal on the position control of point B.) Its value follows from the composition of the velocity of point B out of the velocity of point A and the velocity of B relative to A:

u_B = u_A + u_B/A    (6)

According to the notations of Figure 4 and neglecting the velocity u_t^c(y), u_2 is solved as

u_2 = tan(α) · u_t^d(x)    (7)

with α the angle between the contour tangent at point B and the x-axis of the task frame.
From a practical point of view, the task frame is related to the end effector and not to some absolute world frame. Hence, moving or rotating the end effector frame will also move or rotate the task frame. To make the orientation of the task frame independent of the rotation of the end effector (the visual servoing action), we have to redefine the relation between task frame and end effector frame in order to compensate.
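A minimal Python sketch of one control cycle combining these actions is given below. It follows the structure of equations (1) to (5), but all gains, signal names and helper signals are illustrative assumptions rather than the paper's implementation.

```python
def control_cycle(F_y_meas, F_y_des, u_x_des, dP_t_theta, dP_cam_x, u2,
                  K_f=1.0, k_compl=0.09, K_theta=2.0, K_vs=2.0, d_AB=55.0):
    """One hybrid control cycle in the task frame (illustrative sketch).

    Returns the commanded task-frame velocities and the end-effector rotation
    rate. Units: mm, N, s, rad; dP_cam_x is the lateral contour offset seen by
    the camera, expressed in mm in the contour plane. Gains and the tangential
    compensation term u2 (equation (7)) are assumed values.
    """
    # (i) Force control: velocity along y_t proportional to the force error.
    u_y_cmd = K_f * k_compl * (F_y_des - F_y_meas)
    # (ii) Velocity control: desired tangential velocity along x_t.
    u_x_cmd = u_x_des
    # (iii) Tracking: rotate the task frame proportionally to the tracking error.
    u_theta_cmd = K_theta * dP_t_theta
    # (iv) Visual servoing: rotate the end effector so that the camera point B
    #      stays over the contour (lateral image error plus tangential term u2).
    u_cam_x = K_vs * dP_cam_x + u2
    u_ee_theta = u_cam_x / d_AB
    return u_x_cmd, u_y_cmd, u_theta_cmd, u_ee_theta
```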
2. 2 Matching the vision data to the actual tool position
The contour points measured by the camera are corrected and logged in absolute coordinates in a look-up table. Finally, the calculated predicted pose of the task frame for the next time instant is used as an (interpolating) pointer into the look-up table. From each image, the position and orientation of only one contour point is measured and logged. This measurement is unaffected by the camera/lens distortion, since the visual servoing control (equations (4) to (7)) keeps the optical axis of the camera close to the contour at all times.
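The following Python sketch illustrates one way such a look-up table and interpolating pointer could work: logged contour samples are stored against their traveled arc length, and the predicted task frame position selects, by interpolation, the matching contour orientation. The data layout and function names are assumptions for illustration only.

```python
import bisect

class ContourLog:
    """Look-up table of logged contour samples, indexed by arc length [mm]."""

    def __init__(self):
        self.s = []      # traveled arc length of each logged sample
        self.theta = []  # absolute contour orientation [rad] at that sample

    def log(self, s, theta):
        self.s.append(s)
        self.theta.append(theta)

    def orientation_at(self, s_query):
        """Linearly interpolate the contour orientation at the predicted arc length."""
        i = bisect.bisect_left(self.s, s_query)
        if i <= 0:
            return self.theta[0]
        if i >= len(self.s):
            return self.theta[-1]
        w = (s_query - self.s[i - 1]) / (self.s[i] - self.s[i - 1])
        return (1 - w) * self.theta[i - 1] + w * self.theta[i]
```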
2. 3 Calculating the feedforward signal
The last step implements the feedforward ((v) in Figure 1) to reduce/eliminate the tracking angle error ΔP_t(θ), using the following feedforward control signal:

u_t^ff(θ) = κ · u_t^d(x)    (8)
This equation uses the curvature κ of the contour at the current contact point. An obvious way to calculate κ is to use a fitted contour model. This poses, however, some problems: for a start, the calculation of the curvature from the fitted contour is very noise sensitive due to the second derivative. Furthermore, the fitted contour may differ from the real contour, especially for simple models, or may be computationally expensive for more complicated models. Finally, what is needed is not the curvature in one point but the mean curvature over the arc length traveled during one time interval. In order to avoid all of the previously mentioned problems, the curvature is calculated as the mean change in orientation of the contour over the traveled arc length:

κ = Δθ / Δs    (9)

κ results from the Total Least Squares [18] solution of a set of first order equations Δθ = κ · Δs in nine (s, θ) pairs, lying symmetrically around the matched position (i.e. the position indicated in Figure 5). The feedforward velocity, calculated according to equations (8) and (9), is then added to the feedback control actions described in Section 2. 1.
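As an illustration of equations (8) and (9), the sketch below estimates the mean curvature as the Total Least Squares slope of nine (arc length, orientation) pairs and turns it into the feedforward rotation velocity. Function and variable names are assumptions, not the DSP implementation.

```python
import numpy as np

def mean_curvature_tls(s, theta):
    """Mean curvature kappa = d(theta)/d(s) from nine (s, theta) pairs (eq. (9)).

    Total Least Squares slope: the dominant direction of the centered data
    cloud gives the orthogonal-distance line fit.
    """
    s = np.asarray(s, dtype=float) - np.mean(s)
    theta = np.asarray(theta, dtype=float) - np.mean(theta)
    # First right singular vector = direction of the fitted line.
    _, _, vt = np.linalg.svd(np.column_stack((s, theta)))
    ds, dtheta = vt[0]
    return dtheta / ds

def feedforward_rotation(kappa, u_x_des):
    """Feedforward angular velocity of the task frame (eq. (8))."""
    return kappa * u_x_des
```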
3. Experiments
First, the experimental set-up and the image processing software are described briefly. Then the results are presented.
3. 1 Experimental set-up
The experimental set-up consists of a KUKA 361 robot with a SCHUNK force sensor, together with an eye-in-hand SONY CCD XC77 camera with a 6.15 mm lens. The CCD camera has 756 by 581 square pixels of 0.011 mm width, of which only a sub-image of 128 by 128 pixels is used. Instead of the commercial controller, our own software environment COMRADE [17] is used, running on a T801 transputer board. The robot controller and force acquisition run at 100 Hz. The image processing, the calculation of the camera position control signals (equations (5) and (7)), the matching and logging (subsection 2. 2) and the feedforward calculation (subsection 2. 3) are implemented on a TI-C40 DSP unit with frame grabber and transputer link. The image processing and calculations are limited by the non-interlaced frame rate of 25 Hz. Since the robot controller and image acquisition rates are different, the robot controller uses the most recent DSP calculations 4 times in a row. The force probe is about 600 mm long with compliance k_compl = 0.09 mm/N. The distance between points A and B is 55 mm. The camera is placed about 200 mm above the workpiece, resulting in a camera resolution of about 3 pix/mm. Figure 6 gives an overview of the set-up.
Fig. 6 Experimental set-up (KUKA 361 robot with 6 DOF force sensor, camera, force probe and task frame; VSP-DSP module with frame grabber and digital processor, host PC and video monitor; COMRADE robot controller on transputer.)
3. 2 Implementation of image processing
The image processing algorithm determines the contour position in four steps, starting from a 256 grey level 128 by 128 image. First, an Infinite Symmetric Exponential Filter (ISEF), proposed by [15] as an optimal linear operator for step edge detection, extracts some local contour points. This operator works on a row or column of grey values. It combines an infinite smoothing filter, which efficiently suppresses noise, with a differential operator. The accuracy of the edge detection is in the order of 1/3 of a pixel. After the first two scanned lines (or from a previous image), the orientation of the contour is roughly known. This information is used to make the scanning of the next contour points more robust and faster by applying a narrowed search window. In total, nine contour points are extracted, lying symmetrically around the center (horizontal line) of the image. Second, a Total Least Squares fit through these nine extracted contour points determines one single measurement of the relative position and orientation ΔP_cam of the contour w.r.t. the center of the image. The third step corrects and logs this single measurement in absolute coordinates. With a tangential velocity of 25 mm/sec, the mean distance between the logged contour points is 1 mm. The fourth and last step calculates the feedforward according to equations (8) and (9), as explained in Sections 2. 2 and 2. 3. Note that this calculation uses image measurements from nine consecutive images.
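The second step above, the straight-line fit through the nine extracted edge points, could be sketched as follows: a Total Least Squares fit yields the lateral offset and orientation of the contour relative to the image center. Names and frame conventions are illustrative assumptions.

```python
import numpy as np

def contour_pose_in_image(points_px, center=(64.0, 64.0)):
    """Total Least Squares line fit through nine edge points (x, y) [pix].

    Returns the signed lateral offset [pix] of the fitted contour line from
    the image center and its orientation [rad] w.r.t. the image x-axis.
    Conventions (sub-image center, sign of the offset) are illustrative.
    """
    pts = np.asarray(points_px, dtype=float) - np.asarray(center)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                                 # fitted line direction
    theta = np.arctan2(direction[1], direction[0])    # contour orientation
    normal = np.array([-direction[1], direction[0]])  # unit normal to the line
    offset = float(np.dot(-centroid, normal))         # image center to line
    return offset, theta
```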
3. 3 Results
Figure 7 shows the (uncorrected) paths traveled by camera and task frame. The plotted task frame and end effector frame directions illustrate the variable relation between them during task execution. The maximum error between the logged vision path and the traveled tool path (after corrections) is about 1 mm. This validates the used matching method. Figure 8 (top) compares the measured contact forces in the y direction for two experiments with the tangential velocity set to 25 mm/sec: the desired normal contact force of 30 N is maintained very well when feedforward is used; without feedforward, both contact loss and contact forces of about 60 N occur. The tangential velocity is limited to 25 mm/sec when only feedback is used, but can be increased to 75 mm/sec without loss of contact for the given contour when applying vision based feedforward. Figure 8 (bottom) shows the good correspondence between the actually calculated and the ideal feedforward signals for the given contour.
Fig. 7 Paths travelled by camera and task frame (x_abs [mm] versus y_abs [mm]; camera and task frame positions are marked at equidistant time steps; curves 1, 2 and 3 of the contour and the x_ee and y_t directions are indicated).
Fig. 8 Top: Measured normal contact forces F_t^m(y) [N] versus time, without and with vision based feedforward on the tracking direction (desired force F_t^d(y) = 30 N). Bottom: Actual and ideal feedforward signals u_t^ff(θ) [rad/sec] for the given contour (curves 1, 2 and 3).
The method used for the curvature calculation, however, levels out peaks in the curvature profile. This gives a less noisy feedforward signal, but also causes small remaining contact force errors at positions of high curvature.
4. Conclusions
This paper presents a combined force tracking/visual servoing task. It shows how the quality of force controlled planar contour following improves significantly by adding vision based feedforward on the force tracking direction. This reduces tracking errors, resulting in a more accurately or faster executed task. The feedforward signal is calculated from an on-line generated local data set of the contour. Keeping the contour in the camera field of view, while
maintaining a force controlled contact, however, imposes additional requirements on the controller. This double control problem is solved using the redundancy for rotation in the plane, which exists for rotationally symmetric tools. The key in this solution is redefining the relation of the task frame with respect to the end effector in order to keep the task frame tangent to the contour, while rotating the end effector to position the camera above the contour. Hence, the orientations of task and end effector frames are controlled independently. The experimental results validate the used approach.
Acknowledgments
This work is partly supported by the Belgian Interuniversity Poles of Attraction Programme, initiated by the Belgian State, Prime Minister's Office, Science Programming, and partly by Concerted Research Actions GOA/99/04. The scientific responsibility is assumed by its authors. H. Bruyninckx is a Postdoctoral Fellow of the F.W.O.-Vlaanderen.
References
[1] J. Baeten, H. Bruyninckx and J. De Schutter, "Combining eye-in-hand visual servoing and force control in robotic tasks using the Task Frame," in Proc. of the 1999 IEEE Int. Conf. on Multisensor Fusion and Integration for Intelligent Systems, Taipei, Taiwan, August 1999, pp. 141–146.
[2] H. Bruyninckx and J. De Schutter, "Specification of force-controlled actions in the "Task Frame Formalism" - A synthesis," IEEE Trans. on Robotics and Automation, vol. 12, no. 4, pp. 581–589, 1996.
[3] C. Canudas de Wit et al., Theory of Robot Control, London: Springer, pp. 150–170, 1996.
[4] J. De Schutter and H. Van Brussel, "Compliant robot motion. I. A formalism for specifying compliant motion tasks," Int. Journal of Robotics Research, vol. 7, no. 4, pp. 3–17, 1988.
[5] J. De Schutter and H. Van Brussel, "Compliant robot motion. II. A control approach based on external control loops," Int. Journal of Robotics Research, vol. 7, no. 4, pp. 18–33, 1988.
[6] S. Demey, S. Dutre, W. Persoons, P. Van De Poel, W. Witvrouw, J. De Schutter, and H. Van Brussel, "Model based and sensor based programming of compliant motion tasks," in Proc. Int. Symposium on Industrial Robots, Tokyo, Japan, 1993, pp. 393–400.
[7] B. Espiau, F. Chaumette, and P. Rives, "A new approach to visual servoing in robotics," IEEE Trans. on Robotics and Automation, vol. 8, no. 3, pp. 313–326, 1992.
[8] K. Hosoda, K. Igarashi, and M. Asada, "Adaptive hybrid control for visual and force servoing in an unknown environment," IEEE Robotics and Automation Magazine, vol. 5, no. 4, pp. 39–43, 1998.
[9] G. D. Hager, Task-Directed Sensor Fusion and Planning - A Computational Approach, Norwell, MA: Kluwer, 1990.
[10] S. Hutchinson, G. Hager, and P. Corke, "A tutorial on visual servo control," IEEE Trans. on Robotics and Automation, vol. 12, no. 5, pp. 651–670, 1996.
[11] F. Lange, P. Wunsch, and G. Hirzinger, "Predictive vision based control of high speed industrial robot paths," in Proc. of the IEEE Int. Conference on Robotics and Automation, Leuven, Belgium, May 1998, pp. 2646–2651.
[12] M. Mason, "Compliance and force control for computer controlled manipulators," IEEE Trans. on Systems, Man and Cybernetics, vol. 11, pp. 418–432, 1981.
[13] B. J. Nelson and P. K. Khosla, "Force and vision resolvability for assimilating disparate sensory feedback," IEEE Trans. on Robotics and Automation, vol. 12, pp. 714–731, 1996.
[14] A. C. Sanderson and L. E. Weiss, "Image-based visual servo control using relational graph error signals," Proc. IEEE, pp. 1074–1077, 1980.
[15] J. Shen and S. Castan, "An optimal linear operator for step edge detection," Graphical Models and Image Processing, vol. 54, no. 2, pp. 112–133, 1992.
[16] R. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE Journal of Robotics and Automation, vol. 3, no. 4, pp. 323–344, 1987.
[17] P. Van De Poel, W. Witvrouw, H. Bruyninckx, and J. De Schutter, "An environment for developing and optimising compliant robot motion tasks," in Proc. 6th Int. IEEE Conference on Advanced Robotics, Atlanta, 1993, pp. 112–126.
[18] S. Van Huffel and J. Vandewalle, The Total Least Squares Problem: Computational Aspects and Analysis, SIAM, Philadelphia, PA, 1991.
[19] D. Xiao, B. K. Gosh, N. Xi, and T. J. Tarn, "Intelligent robotic manipulation with hybrid position/force control in an uncalibrated workspace," in Proc. of the IEEE Int. Conference on Robotics and Automation, Leuven, Belgium, May 1998, pp. 1671–1676.
Biographies
Johan Baeten obtained the degrees of Burgerlijk Ingenieur ("Master") in Mechanical Engineering - Mechatronics (1991) and in Business Administration (1992), all from the Katholieke Universiteit Leuven, Belgium. Since September 1992 he teaches control theory, measurement systems and robotics at the Katholieke Hogeschool Limburg, Department Industrial Sciences and Technology. In 1996 he started (part time) at the Department of Mechanical Engineering of the Katholieke Universiteit Leuven as a Doctoral Student in Applied Sciences in the area of combined vision/force control in robotics.
Walter Verdonck obtained the degree of Burgerlijk Ingenieur ("Master") in Mechanical Engineering - Mechatronics (1999) from the Katholieke Universiteit Leuven, Belgium. Since September 1999 he is a PhD Student in Applied Sciences in the area of experimental robot identification at the Department of Mechanical Engineering of the Katholieke Universiteit Leuven, sponsored by a specialisation grant of the Flemish Institute for Support of Scientific and Technological Research in Industry (IWT).
Herman Bruyninckx obtained the degrees of Licentiate ("Bachelor") in Mathematics (1984), Burgerlijk Ingenieur ("Master") in Computer Science (1987) and in Mechatronics (1988), all from the Katholieke Universiteit Leuven, Belgium. In 1995 he obtained his Doctoral Degree in Applied Sciences from the same university, with a thesis entitled "Kinematic Models for Robot Compliant Motion with Identification of Uncertainties." Since October 1994 he is a Post-Doctoral Researcher with the Fund for Scientific Research-Flanders (F.W.O.) in Belgium, and since 1998 he is Assistant Professor at the K.U.Leuven. He held visiting research positions at the Grasp Lab of the University of Pennsylvania, Philadelphia (1996), and the Robotics Lab of Stanford University (1999). His research includes on-line estimation of model uncertainties in sensor-based robot tasks, kinematics of serial and parallel manipulators, geometric foundations of robotics, and Bayesian probability for robotics.
Joris De Schutter obtained the degree of Burgerlijk Ingenieur ("Master") in Mechanical Engineering from the Katholieke Universiteit (K.U.) Leuven, Belgium, in 1980, the M.S. degree from the Massachusetts Institute of Technology in 1981, and the Ph.D. degree in Mechanical Engineering, also from the K.U.Leuven, in 1986. Following work as a control systems engineer in industry, he became, in 1986, a Lecturer in the Department of Mechanical Engineering, Division PMA (Production Engineering, Machine Design and Automation), K.U.Leuven, where he has been Full Professor since 1995. He teaches courses in kinematics and dynamics of machinery, control, and robotics, and is the coordinator of the study program in mechatronics, established in 1986. He has published papers on sensor based robot control (in particular force control), position control of flexible systems and drive systems, and on robot programming.