IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 12, NO. 5, OCTOBER 1996
Dynamic Effects in Visual Closed-Loop Systems
Peter I. Corke, Member, IEEE, and Malcolm C. Good
Abstract-In this paper we argue that the focus of much of the visual servoing literature has been on the kinematics of visual control and has ignored a number of fundamental and significant dynamic control issues. To this end the concept of visual dynamic control is introduced which is concerned with dynamic effects due to the manipulator and machine vision sensor which limit performance. These must be explicitly addressed in order to achieve high-performance control. The paper uses simulation and experiment to investigate feedback control issues such as choice of compensator, and the use of axis position, velocity or torque controlled inner-loops within the visual servo system. The limitations of visual feedback control lead to the investigation of target velocity feedforward control, which combined with axis velocity control, is shown to result in robust and high performance tracking. Index Terms-Robot control, visual servoing, feedforward control.
I. INTRODUCTION
Robots today can perform assembly and material handling jobs with speed and precision yet, compared to human workers, robots are hampered by their lack of sensory perception. To address this deficiency considerable research into force, tactile and visual perception has been conducted over the past two decades. Much of this vision research work has concentrated on the 'high level' vision problem where an image of a scene is interpreted, a plan formulated and then executed. Capabilities of this sort are now available in off-the-shelf vision systems available from major robot vendors. However the feature in common with all these systems is that they are static, with image processing times in the order of 0.1 to 1 s. Visual sensing and manipulation are combined in an open-loop fashion, 'looking' then 'moving'. The accuracy of the 'look-then-move' approach depends directly on the accuracy of the visual sensor and the robot manipulator. An alternative to increasing the accuracy of these sub-systems is to use a visual-feedback control loop which will increase the overall accuracy of the system. Visual servoing is no more than the use of vision at the lowest level in the control hierarchy, using fast but simple image processing to provide reactive or reflexive behavior.¹

Manuscript received March 1, 1995; revised January 8, 1996. This paper was recommended for publication by Associate Editor S. Hutchinson and Editor S. E. Salcudean upon evaluation of reviewers' comments. P. I. Corke is with the CSIRO Division of Manufacturing Technology, Kenmore, 4069 Australia. M. C. Good is with the Department of Mechanical and Manufacturing Engineering, University of Melbourne, Parkville, Australia. Publisher Item Identifier S 1042-296X(96)07247-3.

¹Note that not all 'reactive' vision-based systems are visual servo systems. Anderson's impressive ping-pong playing robot [1], although fast, is based on a real-time expert system for robot path planning using ball trajectory estimation and considerable domain knowledge. It is a highly optimized version of the look, plan, then move architecture.

The great benefit of feedback
control is that the accuracy of the closed-loop performance can be made relatively insensitive to calibration errors and nonlinearities in the open-loop system. However the inevitable downside is that introducing feedback admits the possibility of closed-loop instability. Issues of stability, performance and control system design are the main topics of this paper and are introduced in the next section. The remainder of the paper presents a brief recapitulation of results from [2] that are relevant to this theme. Simulation and experiment are used to illustrate the arguments, and these are based on the characteristics of a testbed system, described in Section II, which comprises a Puma 560 robot with an eye-in-hand camera (acting as a pan/tilt head) and a high performance vision system. Sections III and IV are concerned with systems based on visual feedback. Section III defines the feedback control problem and introduces performance metrics, and then compares the performance of different types of compensator such as PID and pole-placement. Section IV investigates the performance of visual servo systems based on position, velocity or torque controlled actuators, and concludes that velocity controlled actuators offer a number of advantages. Visual feedforward control is introduced in Section V as a means of overcoming the limitations of pure feedback control. It is shown to offer markedly improved tracking performance as well as great robustness to parameter variation. The paper concludes with a summary of the key points relevant to achieving high-performance closed-loop visual control.

A. Dynamic vs Kinematic Visual Control
In this paper we introduce a distinction between visual kinematic and visual dynamic control. The former is concerned with how the manipulator should move in response to perceived visual features. In the latter, dynamic effects due to the manipulator and machine vision sensor which limit performance are explicitly addressed in order to achieve high-performance control. To achieve such control it is necessary to have accurate dynamic models of the system to be controlled (the robot) and the sensor (the camera and vision system). The literature on robotic visual servoing has grown substantially in recent years from its beginnings over two decades ago (see [3] for a recent review), and the kinematics of visual control have been well covered. In particular, much has been written about position-based and image-based visual servo systems. The issue of instability in visual closed-loop systems was first reported in the late 1970s [4] and attributed to open-loop latency in the system, caused by long image processing delays. However these same effects are present in many contemporary reports (papers and videotapes) of visual servo systems which often appear slow and shaky.
Fig. 1. System block diagram of a visual feedback control system.
Reported results and comments such as "slight instability" are clear indications of closed-loop systems close to their stability limit. Another common comment is of "noticeable lag" in the observed performance, which is indicative of a poorly designed control system, an inappropriate control architecture, or both. The demonstrated performance of most visual servo systems is far less than would be expected from the sample rates achieved. Franklin [5] suggests that the sample rate of a digital control system be between 4 and 20 times the desired closed-loop bandwidth. For the case of a 50 Hz vision system this implies that a closed-loop bandwidth between 2.5 Hz and 12 Hz is achievable. These observations lead us to conclude that the whole area of dynamic modeling and control design has been overlooked. In many respects the performance of visually closed-loop robot systems is similar to early force-controlled robot systems in the 1960s; viz. the motion is slow and close to stability limits. Researchers in robotic force control learned that in order to achieve high-bandwidth closed-loop control it was necessary to have accurate dynamic models of the robot actuators and structure as well as the sensor [6]. By contrast, the controllers implemented to date in visual servoing research have generally been simplistic; stability has been ensured by detuning at the expense of performance. It is true that as a feedback sensor, machine vision has a number of significant disadvantages which include: a relatively low sample rate, significant latency (one or more sample intervals) and coarse quantization. While these characteristics present a challenge for the design of high-performance motion control systems they are not insurmountable. Latency is the most significant dynamic characteristic and has many sources which include: transport delay of pixels from camera to vision system, image processing algorithms, communications between vision system and control computer, control algorithm software, and communications with the robot. Unfortunately, most 'off-the-shelf' robot controllers have low-bandwidth communications facilities. In many reported visual servo systems the communications path between vision system and robot is a serial data communications link [7], [8]. Higher communications bandwidth can be achieved by means of transputer links [9]-[11], Ethernet [12], or a common computer backplane [2], [12], [13]. If the target motion is constant then prediction can be used to compensate for latency, but combined with a low sample rate this results in poor disturbance rejection and long reaction time to target 'maneuvers', that is, unmodeled motion. Grasping objects on a conveyor belt [14] or a moving train [15] are however ideal applications for prediction.
The effect of system dynamics on stability is generic to all visually controlled machines no matter what approach is taken to feature extraction or solving the visual kinematic problem. Increasing sample rates or speeding up a communications link are only partial solutions. The best approach is to firstly reduce latency as far as possible, then on the basis of modeled dynamics synthesise a controller with the desired dynamic characteristics.
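As a rough illustration of these constraints, the sketch below applies Franklin's rule of thumb to a 50 Hz vision system and totals an assumed latency budget. The individual latency figures are illustrative assumptions, not measurements from this work.

```python
# Illustrative bandwidth and latency budget for a 50 Hz visual servo loop.
# The individual latency figures below are assumed for illustration only.

SAMPLE_RATE_HZ = 50.0          # CCIR field-rate vision system
T = 1.0 / SAMPLE_RATE_HZ       # sample interval, 20 ms

# Franklin's rule of thumb: sample rate should be 4-20x the desired
# closed-loop bandwidth, so the achievable bandwidth is bounded by:
bw_lo = SAMPLE_RATE_HZ / 20.0  # 2.5 Hz
bw_hi = SAMPLE_RATE_HZ / 4.0   # 12.5 Hz
print(f"achievable closed-loop bandwidth: {bw_lo:.1f} - {bw_hi:.1f} Hz")

# Assumed latency contributions (seconds) -- hypothetical values.
latency = {
    "pixel transport (one field)": T,
    "feature extraction":          5e-3,
    "control computation":         1e-3,
    "robot communications":        5e-3,
}
total = sum(latency.values())
print(f"total open-loop latency: {total * 1e3:.0f} ms "
      f"({total / T:.1f} sample intervals)")
```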
II. EXPERIMENTAL SETUP AND MODELLING
The robot controller and 50 Hz vision system used in this work have been described in detail elsewhere [2], [16]. Briefly, it comprises a VMEbus rack holding a single CPU (25 MHz 68030) running VxWorks, a Datacube digitizer and APA-512+ binary feature extractor, and a custom interface to the Puma robot's arm-interface board. The significant feature is that while all components are individually capable of high performance, they share a common backplane providing a high-bandwidth visual feedback path. The vision system processes image data 'on the fly', overlapping the serial pixel transfer and processing in order to reduce latency to the minimum possible. Centroid data is available during the frame time, as soon as the region has been completely raster scanned into the vision system. This contrasts with many reports where image processing does not commence until the frame has been completely loaded. A block diagram of a simple visual feedback system is shown in Fig. 1. The output of the vision subsystem is the image plane pixel error of the target centroid, iX, which is input to a compensator, D(z), whose output is the task-space (Cartesian) velocity command, ẋ_d. This allows various target tracking strategies, such as camera translation or camera rotation, to be readily implemented. Robot and target 'position', respectively x_r and x_t, in this work can be considered to be one- or two-dimensional translation or rotation depending on the tracking strategy used. The arrangement of the camera with respect to the robot's wrist joints is shown in Fig. 2. The robot controller is such that the robot appears as a Cartesian velocity controlled device, for reasons discussed further in Section IV. Previous work [2], [17] has reported on modeling the closed-loop dynamics of this visual feedback control system, for both single- and multi-axis robot control configurations. Using simple proportional control, D(z) = K_p, on a Puma wrist axis with the gain set empirically results in a settling time of 400 ms, see Fig. 3.
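The run-time structure of this loop can be sketched as follows: one iteration per video field, with the vision system supplying the centroid error and the compensator producing a Cartesian velocity demand. The interface functions here are hypothetical stand-ins for the vision system and robot interfaces; only the proportional gain value, taken from the Fig. 3 caption, comes from the text.

```python
# Minimal sketch of the per-field visual feedback loop of Fig. 1.
# grab_centroid_error() and send_cartesian_velocity() are hypothetical
# stand-ins for the vision and robot interfaces.
import time

KP = 1.3e-4   # proportional gain, D(z) = Kp (value from Fig. 3 caption)
T = 0.020     # 50 Hz CCIR field interval [s]

def control_loop(grab_centroid_error, send_cartesian_velocity, n_fields=50):
    """Run the Fig. 1 loop for n_fields video fields."""
    for _ in range(n_fields):
        ix = grab_centroid_error()       # image-plane pixel error from vision system
        xdot_d = KP * ix                 # compensator output: Cartesian velocity demand
        send_cartesian_velocity(xdot_d)  # inner axis velocity loop tracks this demand
        time.sleep(T)                    # stand-in for field-synchronous timing

# Stub interfaces so the sketch runs stand-alone.
if __name__ == "__main__":
    control_loop(lambda: 10.0, lambda v: None, n_fields=5)
```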
Fig. 2. Relevant coordinate systems. The camera is mounted such that joints 5 and 6 control camera tilt (pitch) and pan (yaw) respectively.
Fig. 3. Response of visual feedback system with proportional control to a step target change (K_lens = 708, K_p = 1.3 × 10^-4).
This corresponds to a bandwidth of approximately 2.5 Hz, the lower limit mentioned above for a 50 Hz sample rate. The controller is in fact a multi-rate system: the sampling interval of the Unimate position servos is 14 ms while the video feature update period is 20 ms, the CCIR field interval. To simplify analysis the system has been approximated by a single-rate model operating at the video sampling interval, T. The multiple rates can be considered as introducing a nonconstant latency, the distribution of which can be computed [2]. The single-rate model, multi-rate simulation and experimental results have been previously shown to compare well [2], [17]. The single-rate closed-loop transfer function between the target position, x_t, and robot position, x_r, is of the form

$$\frac{x_r(z)}{x_t(z)} = \frac{K}{z^2(z-1) + K} \qquad (1)$$
where the loop gain K = K_p K_lens, and K_lens is a scalar image Jacobian relating target error to image plane error. This model is consistent with V(z) = 1/z, a unit delay, and R(z) = 1/(z-1), an integrator (from velocity demand to position output), plus a unit delay. Since the camera is mounted on the robot end-effector, it always 'sees' the target with respect to the robot 'position', that is

$${}^iX = K_{lens}(x_t - x_r). \qquad (2)$$
The overall delay is Ord(denominator) - Ord(numerator), in this case three sample periods or 60 ms. This delay can be seen clearly in Fig. 3 between the visual step input and the robot commencing to move. The single-rate model can be used for stability analysis as well as state-estimator and controller design using classical digital control theory.
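For example, the difference equation implied by the model (V(z) = 1/z, an integrator and a further unit delay) can be simulated to reproduce the three-sample delay and to locate the gain at which the closed loop becomes unstable. The sketch below assumes the gains quoted in the Fig. 3 caption and illustrates the single-rate model, not the multi-rate hardware.

```python
# Numerical check of the single-rate model (1): V(z) = 1/z, integrator
# 1/(z-1), a further unit delay and proportional gain Kp, giving
#   x_r(z)/x_t(z) = K / (z^2 (z-1) + K),  K = Kp * Klens.
import numpy as np

T = 0.020                  # video sample interval [s]
Klens, Kp = 708.0, 1.3e-4  # lens gain and proportional gain (Fig. 3 caption)
K = Kp * Klens

# Closed-loop poles are the roots of z^3 - z^2 + K.
poles = np.roots([1.0, -1.0, 0.0, K])
print("closed-loop poles:", np.round(poles, 3),
      " max |pole| =", round(max(abs(poles)), 3))
print(f"pure delay: {3 * T * 1e3:.0f} ms (three sample periods)")

# Step response via the difference equation
#   x_r[k] = x_r[k-1] - K*x_r[k-3] + K*x_t[k-3]
n = 50
xt = np.ones(n)            # unit step in target position
xr = np.zeros(n)
for k in range(3, n):
    xr[k] = xr[k-1] - K * xr[k-3] + K * xt[k-3]
print("first samples of x_r:", np.round(xr[:6], 3))   # zero for 3 samples

# Loop gain at which a closed-loop pole reaches the unit circle.
for gain in np.linspace(0.01, 1.0, 100):
    if max(abs(np.roots([1.0, -1.0, 0.0, gain]))) >= 1.0:
        print("loop gain at stability limit ~", round(gain, 2))
        break
```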
A root-locus diagram of the system (1) is given in Fig. 4 and clearly indicates instability as the loop gain is increased. The measured and simulated transfer functions are shown in Fig. 5 and indicate that the system will exhibit considerable phase error, for example 45° when tracking a target moving at 1 Hz.

Fig. 4. Root-locus of single-rate model (for varying loop gain). Closed-loop poles for gain K_p = 1.3 × 10^-4 are marked (K_lens = 708).

III. FEEDBACK CONTROL

This section is concerned with the dynamics of the general visual feedback system shown in the block diagram of Fig. 1.

A. Control Formulation

Two useful transfer functions can be written for the system of Fig. 1. iX(z)/iX_d(z) describes the response of the image plane centroid to the demanded centroid, and is important if the task is to be expressed in terms of image plane feature trajectories [18]. For a fixation task, where iX_d is constant, iX(z)/x_t(z) describes the image plane error response to target motion, and is given by

$$\frac{{}^iX(z)}{x_t(z)} = \frac{K_{lens}\,z(z-1)\,D_D(z)}{z^2(z-1)\,D_D(z) + K_{lens}\,D_N(z)} \qquad (3)$$

where D(z) = D_N(z)/D_D(z), the image plane error is iX_d - iX, and iX_d is assumed, for convenience, to be zero. The motion of the target, x_t, can be considered a disturbance input and ideally |iX(z)/x_t(z)| would be zero, but with the previously identified dynamics it can be shown to rise with frequency. To obtain better tracking performance we need to reduce |iX(z)/x_t(z)| over the target motion's frequency range. Classical approaches to achieving this include:
1) Increasing the loop gain, K. However for the system (1) this is not feasible without additional compensation, because the system rapidly becomes unstable as the loop gain is increased.
2) Increasing the Type of the system [5], by adding open-loop integrators. Type 1 and 2 systems will have zero steady-state error to a step and ramp demand respectively.
3) Introducing feedforward of the signal to be tracked, a common practice in machine-tool control.
Options 1 and 2 are explored in Section III-C and option 3 is discussed in Section V.

B. Performance Metrics

It is currently difficult to compare the temporal performance of various visual servo systems due to the lack of any agreed performance measures. At best only qualitative assessments can be made from examining videotape presentations and the time axis scaling of published results. Traditionally the performance of a closed-loop system is measured in terms of bandwidth. However in this work we have chosen to use the magnitude of the image plane error, either peak-to-peak or RMS, in order to quantitatively measure performance. This is appropriate since the tracking task is defined in terms of image plane error, and is ideally zero. Image plane error is computed by the controller and can easily be logged, and this is considerably simpler than estimating closed-loop bandwidth. It should be noted that plotting robot and target paths, rather than image plane error, can often mask effects due to phase lag. The image plane target motion will of course depend upon the target motion and this is application specific. We have chosen to use sinusoidal motion since it is non-polynomial, it has significant acceleration, and it can be easily generated in the laboratory using a pendulum or turntable. Sinusoidal excitation clearly reveals phase error, which is a consequence of open-loop latency, and is relevant to applications such as grasping a swinging object.

C. Compensator Design and Evaluation

This section summarizes results from [2] regarding the performance of several feedback compensators, D(z), designed to increase closed-loop performance. The image plane error for each compensator, to the same sinusoidally moving target, is given in Fig. 6. The simulations are based on detailed multi-rate nonlinear models of the position and velocity loops of the Unimate Puma servo system. The proportional controller described earlier has the lowest performance and would be unable to keep the target within the ±256 pixel bounds of the image plane. The PID controller

$$D(z) = K_p\,\frac{(z-0.3)(z-0.9)}{z(z-1)}$$

raises the system Type to 2 and was tuned so as to place the closed-loop poles at z = 0.85 ± 0.20j, equivalent damping factor ζ = 0.52. The pole-placement [19] controller

$$D(z) = K_p\,\frac{\cdots}{z^3 - 0.6z^2 + 0.42z - 0.8} \qquad (6)$$
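A compensator of this rational form is executed in the 20 ms visual loop as a difference equation; the sketch below shows one way this might be done. The PID zero locations follow the (partly illegible) compensator quoted above and the gain value is an assumption, so the numbers should be treated as illustrative only.

```python
# Running a rational compensator D(z) = num(z)/den(z) as a difference
# equation, one update per 20 ms video field.  The PID zero locations and
# the gain below are illustrative assumptions, not the paper's tuning.
import numpy as np

class DigitalCompensator:
    """Direct-form implementation of a proper D(z) = num(z)/den(z), den monic."""
    def __init__(self, num, den):
        self.num = np.asarray(num, float)
        self.den = np.asarray(den, float)        # den[0] must be 1
        self.x = np.zeros(len(num))              # past inputs  e[k], e[k-1], ...
        self.y = np.zeros(len(den) - 1)          # past outputs u[k-1], u[k-2], ...

    def step(self, e):
        """Pixel error in, velocity demand out (one video field)."""
        self.x = np.roll(self.x, 1); self.x[0] = e
        u = self.num @ self.x - self.den[1:] @ self.y
        self.y = np.roll(self.y, 1); self.y[0] = u
        return u

# PID compensator D(z) = Kp (z - 0.3)(z - 0.9) / (z (z - 1)):
Kp = 1.3e-4                                    # assumed gain
num = Kp * np.poly([0.3, 0.9])                 # Kp (z^2 - 1.2 z + 0.27)
den = np.poly([0.0, 1.0])                      # z^2 - z
pid = DigitalCompensator(num, den)
print([round(pid.step(e), 6) for e in [10, 10, 10, 10]])
```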
Fig. 14. Measured tracking velocity for target on turntable revolving at 4.2 rad/s, showing axis velocity (solid) and estimated target velocity (dashed).
to a delayed target position estimate, requiring prediction in order to estimate the current target position and velocity. In addition target position estimates (17) will be contaminated by spatial quantization noise in iX. Both α-β and Kalman filters have been used successfully for estimating the target velocity, based on an assumption of constant velocity target motion. However this assumption is seriously challenged by evaluating tracking performance for sinusoidal target motion. Tuning the estimators requires some experimentation in order to trade off phase lag and 'smoothness' of the velocity estimate. For the α-β tracking filter the Benedict-Bordner parameterization was found to yield the best results. The Kalman filter offers no advantage for long duration tracking since it rapidly converges to fixed gains and then becomes a computationally more expensive version of an α-β or Wiener filter.
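A minimal α-β estimator of the kind described above might look as follows. The Benedict-Bordner relation ties β to α, but the particular value of α used here is an assumption rather than the tuning used in the experiments.

```python
# A minimal alpha-beta tracking filter for estimating target velocity from
# the delayed, quantized centroid-derived position estimates.  The
# Benedict-Bordner relation beta = alpha^2 / (2 - alpha) ties the gains;
# the value of alpha here is an assumption.
import math

class AlphaBetaFilter:
    def __init__(self, alpha, T):
        self.a = alpha
        self.b = alpha * alpha / (2.0 - alpha)   # Benedict-Bordner parameterization
        self.T = T
        self.x = 0.0        # position estimate
        self.v = 0.0        # velocity estimate (the feedforward signal)

    def update(self, z):
        """One measurement z per video field; returns (position, velocity)."""
        x_pred = self.x + self.T * self.v        # constant-velocity prediction
        r = z - x_pred                           # innovation
        self.x = x_pred + self.a * r
        self.v = self.v + (self.b / self.T) * r
        return self.x, self.v

# Example: sinusoidal target motion (the constant-velocity assumption is
# violated, so the velocity estimate lags, as discussed in the text).
f = AlphaBetaFilter(alpha=0.5, T=0.020)
for k in range(10):
    t = k * 0.020
    xt = 0.3 * math.sin(2 * math.pi * 1.0 * t)   # 1 Hz target motion
    print(round(f.update(xt)[1], 3))
```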
C. Experimental Results

A number of experiments have been conducted to investigate the performance of the visual feedforward controller. To achieve a particularly challenging motion for the tracking controller a turntable has been built whose rotational velocity can be set at up to 40 rpm, or to follow a reversing trapezoidal velocity profile. The latter results in target motion with a complex image plane acceleration profile.
Fig. 15. Measured tracking performance for flying ping-pong ball. Shown are centroid error, link velocity and observed area of target.
Fig. 13 shows the image plane coordinate error in pixels when the camera is fixated on a target rotating on the turntable at 4.2 rad/s. It can be seen that the target is kept within ±15 pixels of the reference. The performance of the earlier, feedback-only strategy is compared with the feedforward controller in Fig. 6. Fig. 14 shows the measured and feedforward joint velocity for the pan and tilt axes. The joint velocity of up to 2 rad/s is close to the limit of the Unimate velocity loop, 2.3 rad/s. The feedforward velocity signal has the same magnitude as the velocity demand but is slightly lagging. The remainder of the velocity demand is provided by the PI feedback law, which is necessary to achieve zero position-error tracking, since matching target and robot velocity still allows for an arbitrary constant position error, and to overcome lags in the velocity feedforward estimate and the velocity loop itself. The controller is quite robust to errors in the lens gain estimate used in (17), since during fixation the centroid error iX is small, thus minimizing the significance of errors in K_lens. A disturbance may be generated by using the teach pendant to move the first three robot joints. The fixation controller is able to counter this disturbance but the rejection dynamics have not been investigated. Another experiment serves to vividly demonstrate the achievable closed-loop tracking performance: using the same controller as above, the robot is able to fixate on a ping-pong ball thrown across the system's field of view. The results are shown in Fig. 15. The recorded event is very brief, lasting approximately 700 ms. From the centroid error plot it can be seen that the robot achieves an approximately constant centroid error of less than 30 pixels in the vertical, Y, direction in which the ball has constant acceleration. In the horizontal, X, direction the robot is seen to have overshot the target. The measured joint velocities show a peak of over 4 rad/s on the tilt axis, which has a limit due to voltage saturation of 5 rad/s. The apparent size of the ball (area) can be seen to vary with its distance from the camera. The tracking filter in this case is a Kalman filter.
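The control law used in these fixation experiments, as described above, sums the estimated target velocity (feedforward) with a low-gain PI term on the image-plane error. A minimal sketch is given below; the gain values and the scaling of pixel error by the lens gain estimate are assumptions for illustration.

```python
# Sketch of the fixation control law of Section V: axis velocity demand is
# the estimated target velocity (feedforward) plus a low-gain PI term on
# the image-plane error.  Gains and the 1/Klens scaling are assumptions.
T = 0.020          # video field interval [s]
KLENS = 708.0      # estimated lens gain (pixels per unit of relative motion)
KP, KI = 0.5, 2.0  # assumed PI gains on the task-space error

class FeedforwardFixation:
    def __init__(self):
        self.integral = 0.0

    def demand(self, ix_error_pixels, target_velocity_estimate):
        """Velocity demand for one video field."""
        e = ix_error_pixels / KLENS              # pixel error -> task-space error
        self.integral += e * T
        feedback = KP * e + KI * self.integral   # low-gain PI term
        return target_velocity_estimate + feedback   # feedforward dominates

ctrl = FeedforwardFixation()
print(ctrl.demand(ix_error_pixels=12.0, target_velocity_estimate=1.8))
```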
D. Summary

The advantage of the feedforward controller is that the problem is transformed from a control synthesis problem to a motion estimation problem. A good velocity estimate provides the majority of the demand to the velocity controlled axes. This reduces reliance on the feedback signal, which can then be provided by a simple low-gain PI compensator. Similar performance was obtained using a feedback-only pole-placement controller, but this required a detailed dynamic model and was found in practice to lack robustness.
VI. CONCLUSION
This paper has established a distinction between visual servo kinematics and visual servo dynamics. The former is well addressed in the literature and is concerned with how the manipulator should move in response to perceived visual features. The latter is concerned with dynamic effects due to the manipulator and machine vision sensor which must be explicitly addressed in order to achieve high-performance control. This problem is generic to all visually controlled machines no matter what approach is taken to feature extraction or solving the visual kinematic problem. We have examined the dynamics of visual servo systems based on actuators controlled in torque, velocity or position modes, and conclude that velocity mode has a number of advantages. Visual control of actuator torque leads to poor control due to the low sample rate and nonlinear axis dynamics, but closing a velocity loop around the actuator linearises the axis and considerably reduces the uncertainty
in axis parameters. This supports the intuitive argument that positioning be considered as a steering problem. The key conclusions are that in order to achieve high-performance visual servoing it is necessary to minimize open-loop latency, have an accurate dynamic model of the system, and to employ a feedforward type of control strategy. Prediction can be used to overcome latency but at the expense of reduced high-frequency disturbance rejection. Open-loop latency is reduced by choice of a suitable control architecture. Feedback-only controllers have a loop gain limit due to the significant delay in pixel transport and processing. Simple feedback controllers have significant phase lag characteristics which lead to poor tracking. More sophisticated feedback controllers can overcome this but the solution space becomes very constrained and the controllers are difficult to design and not robust with respect to plant parameter variation. By contrast, feedforward control results in a robust system with excellent tracking performance.

REFERENCES

[1] R. L. Andersson, Real Time Expert System to Control a Robot Ping-Pong Player, Ph.D. dissertation, University of Pennsylvania, Philadelphia, June 1987.
[2] P. I. Corke, High-Performance Visual Closed-Loop Robot Control, Ph.D. dissertation, Dept. Mechanical and Manufacturing Engineering, University of Melbourne, Australia, July 1994.
[3] P. I. Corke, "Visual control of robot manipulators: A review," in Visual Servoing, K. Hashimoto, Ed., Robotics and Automated Systems, vol. 7. New York: World Scientific, 1993, pp. 1-31.
[4] J. Hill and W. T. Park, "Real time control of a robot with a mobile camera," in Proc. 9th ISIR, Washington, D.C., Mar. 1979, pp. 233-246.
[5] G. F. Franklin and J. D. Powell, Digital Control of Dynamic Systems. Reading, MA: Addison-Wesley, 1980.
[6] D. E. Whitney, "Historical perspective and state of the art in robot force control," in Proc. IEEE Int. Conf. Robotics and Automation, 1985, pp. 262-268.
[7] N. Houshangi, "Control of a robotic manipulator to grasp a moving target using vision," in Proc. IEEE Int. Conf. Robotics and Automation, 1990, pp. 604-609.
[8] J. T. Feddema, Real Time Visual Feedback Control for Hand-Eye Coordinated Robotic Systems, Ph.D. dissertation, Purdue University, West Lafayette, IN, 1989.
[9] K. Hashimoto, T. Kimoto, T. Ebine, and H. Kimura, "Manipulator control with image-based visual servo," in Proc. IEEE Int. Conf. Robotics and Automation, 1991, pp. 2267-2272.
[10] A. A. Rizzi and D. E. Koditschek, "Preliminary experiments in spatial robot juggling," in Proc. 2nd Int. Symp. Experimental Robotics, Toulouse, France, June 1991.
[11] D. B. Westmore and W. J. Wilson, "Direct dynamic control of a robot using an end-point mounted camera and Kalman filter position estimation," in Proc. IEEE Int. Conf. Robotics and Automation, 1991, pp. 2376-2384.
[12] P. I. Corke and R. P. Paul, "Video-rate visual servoing for robots," in Experimental Robotics I, V. Hayward and O. Khatib, Eds. Berlin, Germany: Springer-Verlag, 1989, vol. 139, pp. 429-451.
[13] N. Papanikolopoulos, P. K. Khosla, and T. Kanade, "Vision and control techniques for robotic visual tracking," in Proc. IEEE Int. Conf. Robotics and Automation, 1991, pp. 857-864.
[14] D. B. Zhang, L. Van Gool, and A. Oosterlinck, "Stochastic predictive control of robot tracking systems with dynamic visual feedback," in Proc. IEEE Int. Conf. Robotics and Automation, 1990, pp. 610-615.
[15] P. K. Allen, A. Timcenko, B. Yoshimi, and P. Michelman, "Real-time visual servoing," in Proc. IEEE Int. Conf. Robotics and Automation, 1992, pp. 1850-1856.
[16] P. I. Corke, "An experimental facility for robotic visual servoing," in Proc. IEEE Region 10 Int. Conf., Melbourne, Australia, 1992, pp. 252-256.
[17] P. I. Corke and M. C. Good, "Dynamic effects in high-performance visual servoing," in Proc. IEEE Int. Conf. Robotics and Automation, Nice, France, May 1992, pp. 1838-1843.
[18] W. Jang and Z. Bien, "Feature-based visual servoing of an eye-in-hand robot with improved tracking performance," in Proc. IEEE Int. Conf. Robotics and Automation, 1991, pp. 2254-2260.
[19] K. J. Åström and B. Wittenmark, Computer Controlled Systems: Theory and Design. Englewood Cliffs, NJ: Prentice-Hall, 1984.
[20] P. K. Allen, B. Yoshimi, and A. Timcenko, "Real-time visual servoing," in Proc. IEEE Int. Conf. Robotics and Automation, 1991, pp. 851-856.
[21] P. M. Sharkey, I. D. Reid, P. F. McLauchlan, and D. W. Murray, "Real-time control of a reactive stereo head/eye platform," in Proc. 29th CDC, 1992, pp. C0.1.2.1-C0.1.2.5.
[22] A. J. Wavering, J. C. Fiala, K. J. Roberts, and R. Lumia, "Triclops: A high-performance trinocular active vision system," in Proc. IEEE Int. Conf. Robotics and Automation, 1993, pp. 410-417.
[23] M. V. Roberts, "Control of resonant robotic systems," M.S. thesis, University of Newcastle, Australia, Mar. 1991.
[24] B. S. Armstrong, Dynamics for Robot Control: Friction Modeling and Ensuring Excitation During Parameter Identification, Ph.D. dissertation, Stanford University, Stanford, CA, 1988.
[25] T. A. Pool, Motion Control of a Citrus-Picking Robot, Ph.D. dissertation, University of Florida, Gainesville, 1989.
[26] P. I. Corke and M. C. Good, "Controller design for high-performance visual servoing," in Proc. IFAC 12th World Cong., Sydney, 1993, pp. 9-395 to 9-398.
[27] P. I. Corke, "Experiments in high-performance robotic visual servoing," in Proc. Int. Symp. Experimental Robotics, Kyoto, Oct. 1993, pp. 194-200.
[28] L. E. Weiss, Dynamic Visual Servo Control of Robots: An Adaptive Image-Based Approach, Ph.D. dissertation, Carnegie Mellon University, Pittsburgh, PA, 1984.
[29] J. De Schutter, "Improved force control laws for advanced tracking applications," in Proc. IEEE Int. Conf. Robotics and Automation, 1988, pp. 1497-1502.
Peter I. Corke (S’82-M’83) for a photograph and biography, see p. 670 of this TRANSACTIONS.
Malcolm C. Good received the B.E., M.Eng.Sc., and Ph.D. degrees from the University of Melbourne, Australia. He is Professor and Head of the Department of Mechanical and Manufacturing Engineering, University of Melbourne. Prior to his current appointment, he was Program Leader for Integrated Manufacture, CSIRO Division of Manufacturing Technology. He has held visiting appointments at the University of Southampton, U.K., The University of Michigan, Ann Arbor, and the Corporate Research and Development Center of the General Electric Company, Schenectady, NY. His research has been in the fields of fluid mechanics, vehicle and machine dynamics, highway geometrics, human factors of automobile and motorcycle control, vehicular impacts with roadside structures, dynamics and control of industrial robots and, most recently, the whole field of computer-integrated manufacture. His current research includes force- and vision-controlled robotic manipulation, machine-tool dynamics and control, computer aids for concurrent engineering and scheduling, tailor-welded blank technology, and vibration isolation and control. Dr. Good was President of the Australian Robot Association (1994-1995) and Australian Contact Person for the International Advanced Robotics Program (1988-1994).