Proceedings of the 1998 IEEE International Conference on Robotics & Automation, Leuven, Belgium, May 1998
Visual Impedance Using 1ms Visual Feedback System

Yoshihiro Nakabo and Masatoshi Ishikawa

Department of Mathematical Engineering and Information Physics, University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
[email protected]  [email protected]
Abstract  We introduce visual impedance, a new scheme for vision-based control which realizes task-level dynamical robot control using a 1ms visual feedback system. This method is simply described as applying image features to the impedance equation, so that integration of a visual servo and a conventional servo system can be naturally accomplished. With visual impedance, adaptive motion is obtained for real robot tasks in dynamically changing or unknown environments, based on the framework of impedance control. In such cases, very high rate visual feedback is necessary to control robot dynamics, but most conventional vision systems using CCD cameras can never satisfy this condition because their sampling rate is limited by the video signal. To solve this problem, we developed a general-purpose vision chip, the SPE, and the 1ms visual feedback system, which can achieve an adequate servo rate to control dynamics. In this paper, we first illustrate the concept of visual impedance. Then our 1ms visual feedback system for a robot control system is described. Last, we show some experimental results with real robot tasks.
1 Introduction

Task-level visual feedback is most effective when robots are to work adequately in dynamically changing or unknown environments. Much research has been made based on this idea, and recently realized direct vision-based control has been called visual servoing [1, 2]. However, the robot systems developed in this research have not yet attained this aim. The problems are caused by the CCD camera used for capturing images in most systems. With CCD cameras, images are scanned pixel by pixel and transmitted in the video signal, so the video rate limits the image sampling rate to the video field rate (60 Hz in NTSC, 50 Hz in PAL) even if fast image processing can be carried out. On the other hand, it is generally accepted that a servo rate of around 1 kHz is needed to control robot dynamics. Compared to the robot dynamics, the sampling rate of conventional vision systems is too slow.
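The gap between the two rates is easy to quantify. The following illustrative calculation (not from the paper; the 1 m/s target speed is an arbitrary assumption) shows how far a moving target drifts between two successive samples at the video field rate versus a 1 kHz servo rate:

```python
# Illustrative arithmetic: distance a target moving at a given speed
# travels during one sampling interval of each feedback rate.
def travel_per_sample(speed_m_per_s, rate_hz):
    """Distance (in meters) the target moves between successive samples."""
    return speed_m_per_s / rate_hz

video_field = travel_per_sample(1.0, 60.0)    # NTSC field rate
servo = travel_per_sample(1.0, 1000.0)        # ~1 kHz servo rate

print(f"60 Hz video field: {video_field * 1000:.2f} mm per sample")
print(f"1 kHz servo:       {servo * 1000:.2f} mm per sample")
```

At video rate the target has drifted more than 16 mm before the next measurement arrives, which is far too coarse for closing a dynamics-level control loop.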
This limitation of the sampling rate leads most research attention to focus only on how the visual servo itself can be designed, using, for example, prediction, known models, and so on. It is rarely considered how the visual servo can be integrated into and fused with the conventional robot system to apply it to real tasks.

For this problem, Castano and Hutchinson [3] proposed the concept of visual compliance, based on a framework of hybrid vision/position control which lends itself to task-level specification of manipulation goals. However, in this method, the directions of the vision and position controls are restricted to strictly orthogonal directions, and the visual servo is used not for dynamical control but only for position control in the image. Tsuji et al. [4] realized a non-contact impedance control using visual information, and they have controlled the robot dynamics. However, they use a position sensitive detector (PSD) as the vision sensor to obtain a high feedback rate, so the pattern information included in the image cannot be used.

As mentioned above, the limitation of the feedback rate in vision systems is the core problem which restricts the application of the visual servo to real robot tasks and its integration into conventional sensor feedback systems. To solve this problem, we developed a vision chip, the SPE (Sensory Processing Elements), in which all photo-detectors are directly connected to processing elements. These pixels are integrated into one chip so that the bottleneck of image transmission does not occur. In addition, we have developed the 1ms visual feedback system and demonstrated high-speed visual tracking with a 1 kHz feedback rate [5, 6]. Using this SPE chip, we can implement various kinds of image processing algorithms with far higher performance than conventional vision systems using CCD cameras.
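This section does not list the SPE's specific operations, but massively parallel vision chips of this class typically reduce each frame to a few low-order image moments, from which a target position can be read out at the full feedback rate. The sketch below is a hypothetical illustration of that idea; the function name and test image are invented for the example and are not from the paper:

```python
import numpy as np

def centroid_from_moments(binary_image):
    """Zeroth and first image moments -> target centroid (x, y) in pixels.

    Illustrative only: on a parallel vision chip these sums would be
    accumulated per pixel in hardware rather than computed serially.
    """
    ys, xs = np.nonzero(binary_image)   # rows (y) and columns (x) of set pixels
    m00 = len(xs)                       # zeroth moment: object area
    if m00 == 0:
        return None                     # no target visible
    return xs.sum() / m00, ys.sum() / m00   # first moments divided by area

img = np.zeros((8, 8), dtype=np.uint8)
img[2:5, 3:6] = 1                       # a 3x3 blob
print(centroid_from_moments(img))       # -> (4.0, 3.0)
```

Because the reduction happens on-chip, only two or three scalars leave the sensor each cycle, which is what removes the image-transmission bottleneck described above.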
In this paper, we propose the concept of visual impedance, which realizes task-level visual feedback and dynamical robot control using the 1ms visual feedback system. With visual impedance, adaptive motion of the robot to the environment can be achieved through visually realized virtual contact, based on the framework of virtual impedance control.

2 Visual Impedance

2.1 Concept of visual impedance

Virtual impedance control is a method to regulate the mechanical impedance of a manipulator to a desired value according to a given task [7]. In this method, the control law applied to the manipulator specifies not only the desired trajectory but also adequate dynamics with which compliant motion with an environment can be achieved. For example, when a manipulator needs to contact an object in its environment, even small errors in motion or models cause serious disturbances. Using impedance control, on the other hand, task-level feedback information from the force sensor is used to control the dynamics of the manipulator so that these errors are absorbed by compliant motion. To extend this method, we propose visual impedance, in which a feature value extracted from the image is simply used in the equation of impedance control. Thus it can be applied to visually realized virtual contact instead of real contact. In this method, we assume an imaginary virtual surface on a real object, calculated by real-time image processing. Inside this surface an adequate impedance is set such that, if the manipulator contacts the virtual surface, an interaction occurs that is similar to real contact motion.

Let P = (X, Y) be any point in the image plane except the outer regions of the object, where (X, Y) are the coordinates of the point expressed in the image plane coordinate frame. The distance between P and the nearest point x on the object edge C_O can be expressed as follows:

    φ(P) = min |P − x|   (∀x ∈ C_O)   (1)

This φ(P) describes a potential field whose value equals the distance from the object surface in the image plane. We now call the set of points C_V a virtual surface, where

    C_V = {P_V | φ(P_V) = c}   (c : const)   (2)

which has a constant distance c from the real surface of the object. When the manipulator, in the image, comes inside the virtual surface, we consider that a virtual contact has occurred. During the virtual contact, the vision system extracts the point P_r which describes the position of the manipulator, and the contact vector ξ is defined as follows:

    ξ = −∇φ (c − φ)   if P_r is inside the virtual surface
    ξ = 0             otherwise
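The potential field and contact vector above can be sketched numerically. The following is a minimal illustration, assuming the edge C_O is approximated by a finite set of sampled points and ∇φ is taken by central finite differences; these discretization choices are the example's assumptions, not prescriptions from the paper:

```python
import numpy as np

def phi(P, edge_points):
    """Eq. (1): distance from image point P to the nearest sampled edge point."""
    return np.linalg.norm(edge_points - P, axis=1).min()

def contact_vector(P, edge_points, c, eps=1e-3):
    """Contact vector: xi = -grad(phi) * (c - phi) inside the virtual surface, else 0."""
    p = phi(P, edge_points)
    if p >= c:                    # outside the virtual surface C_V: no virtual contact
        return np.zeros(2)
    # central-difference approximation of the gradient of phi at P
    gx = (phi(P + [eps, 0], edge_points) - phi(P - [eps, 0], edge_points)) / (2 * eps)
    gy = (phi(P + [0, eps], edge_points) - phi(P - [0, eps], edge_points)) / (2 * eps)
    return -np.array([gx, gy]) * (c - p)

# Example: a circular object edge of radius 10 px centered at the origin,
# with the virtual surface at distance c = 5 px from the real edge.
theta = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
edge = 10.0 * np.column_stack([np.cos(theta), np.sin(theta)])

# Manipulator point at (13, 0): phi = 3 < c = 5, so virtual contact has occurred.
print(contact_vector(np.array([13.0, 0.0]), edge, c=5.0))  # -> approximately [-2, 0]
```

The magnitude of ξ grows with the penetration depth c − φ, so in the virtual impedance framework ξ can play the role that a measured contact displacement plays in a real-contact impedance law.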