
Vision Based Tracking and Navigation of Mobile Robots in Pioneer-2 Platforms

Kuntal Roy, Amit Konar and Ajit K. Mandal
Electronics and Tele-communication Engineering Department, Jadavpur University
{ [email protected], [email protected], [email protected] }

Abstract - The paper addresses the classical problem of target tracking and provides a solution to the problem using two Pioneer-2 mobile robots. The mobile robots used in the proposed application include facilities for online control of the pan angle, tilt angle and zoom of the camera. These facilities jointly assist the tracker in localizing the moving target. A specialized software platform, ARIA [5], which supports the implementation of simple behaviors such as obstacle avoidance, motion planning and motor status checking, has been utilized to develop complex behaviors such as object localization, controlled path-planning and prediction of the target position. The tracking system has been designed and thoroughly tested to work in a factory environment.

Index Terms - Visual Tracking, Pioneer-2 Mobile Robots, Back-Propagation Neural Networks, Extended Kalman Filter.

I. INTRODUCTION

The paper provides a novel scheme for target tracking and interception by two mobile robots. The tracker robot identifies the location of the moving target robot by employing a color-tracking scheme and online zoom control of the camera mounted on the robot. The motion of the target robot is controlled online through a keyboard attached to a desktop computer. The desktop computer receives video and sonar packets from the moving tracker and generates control commands for its motion planning. The proposed scheme employs a synergism of neural networks and the Extended Kalman filter [1] to generate a route for the tracker based on the predicted direction of motion of the target. In one of our previous works, it has already been reported that, among the classical neural algorithms for machine learning, the back-propagation algorithm outperforms the rest, at least in connection with the motion planning of a robot [4]. The use of the back-propagation algorithm in the present paper is therefore justified. The Extended Kalman filter model provides an additional framework for online detection of the target position with very good accuracy.

The work reported in the paper consists of five major sections. In Section II, we present the methodology for determining the necessary shift (both turn and forward movement) of the tracker toward the target. One important experimental aspect of this section is the determination of the necessary forward movement of the tracker from the approximate camera-zoom change needed to keep the target area (in pixel² units) approximately constant. This is significant because it facilitates target localization in the grabbed image frame. Section III of the paper is concerned with the prediction of the moving target's position using Extended Kalman filtering. In Section IV, we present a scheme for path-planning of the tracker using the back-propagation algorithm. Section V provides the integration of the three schemes outlined earlier so as to design a complete target tracking system.
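The details of the Extended Kalman filter appear in Section III, which is not reproduced in this excerpt. As a minimal illustration of the underlying idea, the sketch below implements a generic constant-velocity prediction step in Python; the state layout, the 400 ms step and the noise values are assumptions made for illustration, not parameters taken from the paper.

```python
import numpy as np

# Minimal constant-velocity Kalman prediction step (illustrative sketch only).
# State x = [px, py, vx, vy]; dt matches the ~400 ms frame-grab interval
# mentioned in Section II. Noise covariances are placeholder values.

def predict(x, P, dt=0.4, q=0.05):
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
    Q = q * np.eye(4)                            # assumed process-noise covariance
    x_pred = F @ x                               # predicted target state
    P_pred = F @ P @ F.T + Q                     # predicted state covariance
    return x_pred, P_pred

# Example: target at (100 cm, 50 cm) moving at (20, 5) cm/s
x0 = np.array([100.0, 50.0, 20.0, 5.0])
P0 = np.eye(4)
x1, P1 = predict(x0, P0)
print(x1[:2])   # predicted position after one 400 ms step
```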

II. NAVIGATION OF TRACKER FROM ITS GRABBED IMAGE

The tracker robot grabs images successively with a sampling interval of about 400 ms (a sampling interval of less than 400 ms will affect the accuracy of prediction by the Kalman filter) and attempts to locate the target robot in its grabbed image frame. The tracker detects any shift of the target robot in the grabbed image frame. The necessary shift in the camera position of the tracker, and also the position of the tracker itself, can be determined from the known shift of the center of mass of the target robot (Δx, Δy), as described below. If the center of mass of the grabbed image is shifted in comparison to that of the previous image (Fig. 1), we can determine the shift of the current center of mass (xc, yc) from its last value (xp, yp) by the following definition:

Δx = xc - xp and Δy = yc - yp.

Fig. 1: Change in the center of mass between the previous and current locations of the target robot.

We can easily observe from Fig. 1 that the center of mass of the target robot shifts depending on the current position of the target robot and the area of the target robot viewed by the tracker. This area shrinks as the target moves farther from the tracker. The necessary changes in the camera pan and tilt of the tracker can now be determined from the above changes in the x- and y-positions of the center of mass. The scale factors relating the pan- and tilt-angle measures to the x- and y-shifts of the center of mass have been determined offline experimentally. The experimental results show that for a 1° shift in camera pan-angle and tilt-angle, the x-shift and the y-shift of the center of mass of the target robot between successive image frames are approximately 5 pixels and 4 pixels respectively. Consequently, the change in pan-angle is Δpan-angle = Δx/5, and the change in tilt-angle is Δtilt-angle = Δy/4.

Now, for efficient tracking, the area of the target robot viewed by the tracker gradually shrinks as the tracker approaches the target. The speed of the tracker thus can be evaluated from the rate of shrinkage of the image area of the moving target. It is important to note that in the present application, we do not generate any command for tracker movement in case the target approaches the tracker, i.e., when the image area of the target increases. In order to obtain a measure of the target's distance from the tracker, we adopt the following scheme. First, we go on decreasing the camera zoom until the area of the target in the grabbed image of the current frame becomes equal to that of the previous frame. Typically, the area of the target should be kept constant in the range 1400-1500 pixel². An empirical relation between the target-to-tracker distance (dist) and the camera zoom (zoom) of the tracker has been developed experimentally as follows:

dist = 74 + 9.0 * (zoom/50) + 0.1 * (zoom/50)²

where dist is measured in centimeters (cm). For small values of zoom (
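As a compact illustration of the navigation rules above, the sketch below computes the centroid shift, the corresponding pan/tilt corrections (using the experimentally determined scale factors of 5 pixels per degree of pan and 4 pixels per degree of tilt), and the distance estimate from the empirical zoom relation. It is a minimal sketch; the function and variable names are our own, and the centroid values would in practice come from the color-tracking module.

```python
# Minimal sketch of the Section II navigation rules (names are illustrative).

def pan_tilt_correction(curr, prev):
    """Pan/tilt change (degrees) from the shift of the target's center of mass.

    Uses the experimentally determined scale factors: a 1 degree pan shift
    corresponds to ~5 pixels of x-shift, and a 1 degree tilt shift to ~4
    pixels of y-shift, between successive image frames.
    """
    xc, yc = curr
    xp, yp = prev
    dx, dy = xc - xp, yc - yp          # centroid shift in pixels
    d_pan = dx / 5.0                   # degrees of pan correction
    d_tilt = dy / 4.0                  # degrees of tilt correction
    return d_pan, d_tilt

def target_distance_cm(zoom):
    """Empirical target-to-tracker distance (cm) from the camera zoom setting."""
    z = zoom / 50.0
    return 74 + 9.0 * z + 0.1 * z ** 2

# Example: centroid moved from (320, 240) to (345, 228); zoom setting of 600
print(pan_tilt_correction((345, 228), (320, 240)))   # -> (5.0, -3.0) degrees
print(target_distance_cm(600))                       # -> 196.4 cm
```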
