Calibration-Free Visual Control Using Projective Invariance

Gregory D. Hager
Department of Computer Science
Yale University, P.O. Box 208285
New Haven, CT 06520

Abstract

Much of the previous work on hand-eye coordination has emphasized the reconstructive aspects of vision. Recently, techniques have been developed that avoid explicit reconstruction by placing visual feedback into a control loop. When properly defined, these methods lead to calibration-insensitive hand-eye coordination. In this article, recent work on projective geometry as applied to vision is used to extend this paradigm in two ways. First, it is shown how results from projective geometry can be used to perform online calibration. Second, results on projective invariance are used to define setpoints for visual control that are independent of viewing location. These ideas are illustrated through a number of examples and have been tested on an implemented system.

1 Introduction

The goal of hand-eye coordination research is to provide a general-purpose mechanism for controlling a robotic mechanism from visual inputs. A natural paradigm to follow in this quest is to emphasize the reconstructive aspects of vision. The primary disadvantage of the reconstructive approach is that the positioning accuracy of the system is highly sensitive to errors in estimates of the camera intrinsic and extrinsic parameters, henceforth referred to as the hand-eye calibration. It has long been argued that proper use of visual measurements within a feedback loop can improve the accuracy and response of a hand-eye system [3]. Several authors have exhibited systems that employ visual feedback from a single end-effector-mounted camera [5, 13, 2]; however, the use of only one camera places strong limitations on their capabilities. Systems employing feedback from stereo vision have been exhibited, but have focused on using reconstruction as the basis of the feedback system [1, 9]. As with all position-based systems, it is possible to exhibit cases where the accuracy of the system is affected by camera calibration errors.

Recent work has shown that feedback-based approaches employing stereo vision which avoid reconstruction can perform hand-eye coordination tasks with a positioning accuracy that is independent of hand-eye calibration errors [11, 6, 7]. The key idea is to define a visual error between the manipulator in its current and desired position in image coordinates. This error must have the property that zero error in two images implies the desired end-effector position has been reached regardless of camera location. Since the error function is independent of the hand-eye calibration, calibration errors can only affect the trajectories the system follows to a setpoint, not final positioning accuracy, as the sketch at the end of this section illustrates.

Although such methods are less sensitive to calibration, they still rely on an offline calibration process to supply an estimate of the hand-eye relationship. This requires the cameras to be (nearly) static during performance of a task. Also, the range of positioning operations is limited to those for which an error function of the appropriate type can be defined; to date, this has been a fairly limited set of positioning and alignment operations.

This article uses ideas from projective geometry to overcome these obstacles. In particular, recent results on projective geometry applied to vision are explored in two contexts: the computation of projective imaging models from visual information for offline and online camera calibration; and projective invariance as a means of specifying visual setpoints and motions without reference to the absolute coordinates of features or objects.

In the next section, the basic ideas of feedback control applied to visual servoing are defined, several positioning skills are developed, and methods to perform online calibration are described. In Section 3, projective geometry is used to develop methods for specifying positioning setpoints. Section 4 describes several examples of hand-eye coordination tasks that can be defined with these setpoint constructions. Section 5 suggests some future research directions.
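To illustrate the claim above, the following minimal sketch (not from the implemented system; all gains, Jacobians, and coordinates are hypothetical) simulates a proportional image-based control loop whose controller uses a mis-scaled image Jacobian in place of an accurate hand-eye calibration. The image error still converges to zero; the miscalibration changes only the transient trajectory.

    import numpy as np

    # Hypothetical sketch: proportional image-based servoing with a
    # mis-scaled Jacobian standing in for hand-eye calibration error.
    J_true = np.array([[1.0, 0.2],
                       [-0.1, 0.8]])  # true motion-to-image map
    J_est = 1.5 * J_true              # estimate with a 50% scale error

    x = np.array([2.0, -1.0])         # current feature location (image)
    x_goal = np.array([0.5, 0.5])     # desired feature location (image)

    for _ in range(100):
        error = x - x_goal
        # Command motion opposite the error through the *estimated*
        # Jacobian; the plant responds through the *true* Jacobian.
        v = -0.2 * np.linalg.solve(J_est, error)
        x = x + J_true @ v

    print(np.linalg.norm(x - x_goal))  # ~1e-6: setpoint reached anyway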

2 A Framework for Visual Control

Points in $\Re^3$ are denoted $P_i$; the corresponding homogeneous coordinates are written $\tilde{P}_i$. For a matrix $M$ of rank $m$, define $M^+$ to be the right inverse $M^+ = M^T (M M^T)^{-1}$. The projection of a point $P_i$ to a homogeneous vector $p_i = (u, v, 1)^T$ is given by

$$p_i' = (u', v', s')^T = M \tilde{P}_i, \qquad p_i = (u, v, 1)^T = \frac{1}{s'}\, p_i' \qquad (1)$$

where $M$ is a $3 \times 4$ projection matrix. Given the image coordinates of two points $p_i$ and $q_i$, $l_i = p_i \times q_i$ parameterizes the line joining $p_i$ and $q_i$ in the image. For any homogeneous vector $p_i$ and line projection $l_i$, it is easy to show that $p_i \cdot l_i$ is proportional to the perpendicular distance between the point and the projection of the line in the image plane. It follows that $p_i$ is on $l_i$ if and only if $p_i \cdot l_i = 0$. When two cameras are involved, quantities related to the "right" camera are distinguished from those related to the "left" camera by placing a bar over them. For example, the projection matrix for the right camera is $\bar{M}$ and the projection of $P_i$ is $\bar{p}_i$.
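The following numpy sketch assembles these definitions; the projection matrix used here is a hypothetical placeholder rather than a calibrated camera model.

    import numpy as np

    def right_inverse(M):
        """M+ = M^T (M M^T)^{-1}, valid when M has full row rank."""
        return M.T @ np.linalg.inv(M @ M.T)

    def project(M, P):
        """Eq. (1): project a 3-D point P to image coordinates (u, v, 1)."""
        u, v, s = M @ np.append(P, 1.0)   # p' = (u', v', s')^T = M ~P
        return np.array([u / s, v / s, 1.0])

    M = np.hstack([np.eye(3), np.zeros((3, 1))])   # hypothetical 3x4 camera
    print(np.allclose(M @ right_inverse(M), np.eye(3)))  # True

    p = project(M, np.array([1.0, 2.0, 4.0]))
    q = project(M, np.array([-1.0, 0.5, 2.0]))
    l = np.cross(p, q)    # l = p x q: the image line joining p and q

    # A third point on the 3-D line through the first two projects onto l,
    # so its dot product with l vanishes:
    r = project(M, np.array([0.0, 1.25, 3.0]))
    print(np.dot(r, l))   # 0.0 (up to rounding)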

2.1 Image-Based Control of Position

The following is a brief description of the general framework for image-based control of position. Let $C_1$ and $C_2$ denote the configuration spaces of points attached to a robot end-effector and a target object in the world, respectively. A feature-based relative positioning problem is represented by an error function $E : C_1 \times C_2 \rightarrow \Re^n$.
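As a concrete, hypothetical instance of such an error function, a point-to-point positioning task can be encoded by stacking the image-plane errors observed in the left and right cameras; the function and variable names below are invented for illustration, with barred (right-camera) quantities carrying the suffix _bar.

    import numpy as np

    def E_point_to_point(p_robot, p_target, p_robot_bar, p_target_bar):
        """Each argument is a homogeneous image point (u, v, 1)."""
        left = p_robot[:2] - p_target[:2]            # left-image error
        right = p_robot_bar[:2] - p_target_bar[:2]   # right-image error
        return np.concatenate([left, right])         # zero iff coincident in both views

For cameras in general position, this error vanishes exactly when the two points coincide in space, which is the property that makes final positioning accuracy independent of the hand-eye calibration.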
