J Sign Process Syst DOI 10.1007/s11265-010-0504-7

Implementation of a Moving Target Tracking Algorithm Using Eye-RIS Vision System on a Mobile Robot

Fethullah Karabiber & Paolo Arena & Luigi Fortuna & Sebastiano De Fiore & Guido Vagliasindi & Sabri Arik

Received: 20 October 2008 / Revised: 24 June 2010 / Accepted: 24 June 2010
© Springer Science+Business Media, LLC 2010

F. Karabiber (*) · S. Arik
Computer Engineering Department, Istanbul University, Istanbul, Turkey
e-mail: [email protected]

P. Arena · L. Fortuna · S. De Fiore · G. Vagliasindi
Dipartimento di Ingegneria Elettrica Elettronica e dei Sistemi, Università degli Studi di Catania, Catania, Italy

Abstract A moving target tracking algorithm is proposed here and implemented on the AnaFocus Eye-RIS vision system, a compact and modular platform for developing real-time image processing applications. The algorithm combines moving-object detection with feature extraction in order to identify a specific target in the environment. The algorithm was tested in a mobile robotics experiment in which a robot, with the Eye-RIS mounted on it, pursued another robot representing the moving target, demonstrating the algorithm's performance and capabilities.

Keywords Target tracking · Robotic · Analog system · Segmentation · Motion detection

1 Introduction

Moving target tracking is an important area among image processing applications such as robotics, video surveillance and traffic monitoring. In general, there are two different approaches to object tracking: recognition-based tracking and motion-based tracking [1]. In a recognition-based tracking system, objects are identified by extracting their features, whereas motion-based tracking approaches rely on motion detection. In both cases, the tracker should be able to detect all new targets automatically in a computationally simple way that can be implemented in real time.

In the last decade, many approaches have been proposed in the literature for tracking moving objects. In [2, 3], moving-object detection and tracking are presented for traffic monitoring and video surveillance applications; in these applications, the images taken from a stationary camera are processed. Tracking objects in image sequences acquired from a mobile camera is more complicated, because the camera's own motion induces an apparent motion in the scene. A number of methods have been proposed for detecting moving targets with a mobile camera. Jung and Sukhatme [4] developed a real-time moving-object detection algorithm using a probabilistic approach and an adaptive particle filter for an outdoor robot carrying a single camera. A method using background motion estimation and the difference of two consecutive frames for detection and tracking of moving objects is proposed in [5]. In addition, other techniques focusing on detecting and tracking moving objects have been proposed for different applications [6–8]: cars in front are tracked using a camera mounted on a moving vehicle in [7], and a single object in forward-looking infrared imagery taken from an airborne or a moving platform is tracked using the approach presented in [8]. In [9], object tracking methods are reviewed and classified into different categories.

In this study, a system combining both motion-based and recognition-based tracking approaches is developed. The proposed algorithm is based on image-processing techniques such as segmentation, motion detection and feature extraction. Segmentation, which detects the objects in the image sequence, is the main part of the algorithm. The proposed segmentation algorithm is based on edge detection and morphologic operations. Motion detection is performed using the difference of successive image frames.


Finally, tracking the detected moving objects is carried out using their positional information in the image. Feature-extraction techniques are used to extract information about the segmented and moving objects, and the mobile robot then tracks the moving object using the centroid points of the target. In order to implement the proposed algorithm in real time, we have developed a computationally simplified version of the algorithm, implemented using the capabilities of the Eye-RIS vision system [10] so that it executes in a very short time. The Eye-RIS vision system is designed for developing real-time vision applications. To evaluate the performance of the proposed approach in real time, we tested the algorithm on the Rover II robot [13], carrying the Eye-RIS v1.2 vision system, tracking the Minihex robot [14].

The remainder of this paper is organized as follows. Section 2 presents general information about the Eye-RIS vision system. Section 3 describes the proposed moving-target tracking algorithm; segmentation, motion detection and feature extraction of the detected objects are described in its subsections. Section 4 presents experimental results and discussion. Finally, concluding remarks are reported in Section 5.

2 Eye-RIS Vision System

The Eye-RIS vision systems are conceived to implement real-time image acquisition and processing in a single chip using CMOS technologies. A large variety of applications, such as video surveillance, industrial production systems, automotive and military systems, can be developed using the Eye-RIS. Details of the system are given in [10]; a brief description follows.

The Eye-RIS system employs a bio-inspired architecture. Indeed, a key component of the Eye-RIS vision system is a retina-like front-end which combines signal acquisition and processing embedded in the same physical structure. It is represented by the Q-Eye chip [10], an evolution of the previously adopted Analogic Cellular Engines (ACE) [11], a family of stand-alone chips developed in the last decade and capable of performing analogue and logic operations on the same architecture. The Q-Eye was devised to overcome the main drawbacks of the ACE chips, such as lack of robustness and large power consumption.

The Eye-RIS vision system comprises three boards, or levels, each performing specific functions. The first level contains the Focal Plane Processor, which performs the image acquisition and pre-processing tasks. The second level contains the digital microprocessor, program and data SDRAM memory, flash memory and I/O connectors. The third level includes the debugging and communications circuitry. In general, only the first two boards are needed for running vision applications; the program can be stored in flash memory on the Nios II board.

The Eye-RIS is a multiprocessor system with two different microprocessors: AnaFocus' Q-Eye Focal Plane Processor on the first board and Altera's Nios II digital microprocessor on the second board. The AnaFocus Q-Eye Focal Plane Processor (FPP) acts as an image coprocessor. It acquires and processes images, extracting the relevant information from the scene being analyzed, usually with no intervention of the Nios II processor. The Q-Eye is massively parallel, performing operations simultaneously in all of its cells in the analogue domain. Its basic analog processing operations among pixels are linear convolutions with programmable masks. The size of the acquired and processed image is the Quarter Common Intermediate Format (QCIF) standard, 176×144 pixels.

The Altera Nios II digital microprocessor is an FPGA-synthesizable digital microprocessor (a 32-bit RISC μP at 70 MHz, realized on an FPGA). It controls the execution flow and processes the information provided by the FPP. Generally, this information is not an image but characteristics of the images analyzed by the Q-Eye; thus, no image transfers are usually needed in the Eye-RIS.

The Eye-RIS Application and Development Kit (ADK) is an Eclipse-based software development environment required to write, compile, execute and debug image-processing applications on the Eye-RIS vision system. The Eye-RIS ADK is integrated into the Altera Nios II Integrated Development Environment (Nios II IDE). A specific programming language, FPP code, was developed to program the Q-Eye, while the Nios II is programmed using the C++ programming language. In addition, the Eye-RIS ADK includes two function libraries to ease application development. The FPP Image Processing Library provides functions implementing basic image processing operations such as arithmetic, logic and morphologic operations, spatio-temporal filters, and thresholding. The Eye-RIS Basic Library provides C/C++ functions to execute and debug FPP code and to display images.

All of the above features allow the Eye-RIS vision system to process images at ultra-high speed while retaining very low power consumption, offering a great opportunity to develop real-time vision applications.
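Although the Q-Eye executes its linear convolutions in the analogue domain, simultaneously in all cells, the operation itself can be illustrated with a short sequential model. The following C++ sketch is purely illustrative and does not use the Eye-RIS ADK API: the type and function names are ours, and the 3×3 binomial mask is only one possible programmable mask (chosen here because it approximates Gaussian smoothing).

```cpp
// Illustrative sequential model of a convolution with a programmable mask,
// the basic pixel-level operation of the Q-Eye (not the Eye-RIS ADK API).
#include <algorithm>
#include <vector>

constexpr int W = 176, H = 144;           // QCIF resolution used by the Q-Eye
using Frame = std::vector<float>;         // one value per pixel, row-major

// Convolve `in` with a 3x3 mask; border pixels are replicated (clamped).
Frame convolve3x3(const Frame& in, const float mask[3][3]) {
    Frame out(W * H, 0.0f);
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float acc = 0.0f;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int sx = std::clamp(x + dx, 0, W - 1);
                    int sy = std::clamp(y + dy, 0, H - 1);
                    acc += mask[dy + 1][dx + 1] * in[sy * W + sx];
                }
            }
            out[y * W + x] = acc;
        }
    }
    return out;
}

int main() {
    Frame img(W * H, 0.0f);               // placeholder for an acquired frame
    // A 3x3 binomial mask approximating a Gaussian smoothing step.
    const float gauss[3][3] = {{1/16.f, 2/16.f, 1/16.f},
                               {2/16.f, 4/16.f, 2/16.f},
                               {1/16.f, 2/16.f, 1/16.f}};
    Frame smoothed = convolve3x3(img, gauss);
    (void)smoothed;
}
```

On the real chip the same mask is applied to all 176×144 cells at once, which is what makes frame-rate pre-processing possible before any data reach the Nios II.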

3 Moving Target Tracking Algorithm

A block diagram of the algorithm is shown in Fig. 1. The algorithm is divided into three main parts. The first part, and the most important one, is segmentation. In the second part, a motion-detection algorithm is performed to obtain a motion-detection mask using a difference operation between successive frames of the image sequence. In the third part, the results of the first two parts are merged to obtain the centroid of the target through feature extraction, which drives the robot action.

Figure 1 Block diagram of the moving target tracking algorithm.

The images are acquired through the Sense function of the Eye-RIS ADK, which performs an optical acquisition with linear integration. The integration time is provided by the user as an input parameter, and an analog gain can also be applied to the sensed image. After the images are obtained, a Gaussian diffusion function is performed using the resistive grid module available in the Q-Eye to remove noise. The bandwidth of the filter is specified by means of the rgsigma parameter, whose value is related to the standard deviation of an equivalent Gaussian filter. An example raw acquired image and the corresponding output of the Gaussian filter are shown in Fig. 2.
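The second and third parts of the algorithm, frame differencing and centroid extraction, can be summarized with a minimal software sketch. The version below assumes 8-bit grayscale QCIF frames; the function names and the threshold value are illustrative, not taken from the paper or the Eye-RIS ADK.

```cpp
// Minimal sketch of the frame-difference idea described above: the motion
// mask is the thresholded absolute difference of two successive frames, and
// the target centroid is the mean position of the moving pixels.
#include <cstdint>
#include <cstdlib>
#include <vector>

constexpr int W = 176, H = 144;                    // QCIF frame size
using Gray = std::vector<uint8_t>;                 // row-major grayscale frame

// Threshold |current - previous| to obtain a binary motion-detection mask.
std::vector<bool> motionMask(const Gray& prev, const Gray& cur, int thr) {
    std::vector<bool> mask(W * H);
    for (int i = 0; i < W * H; ++i)
        mask[i] = std::abs(int(cur[i]) - int(prev[i])) > thr;
    return mask;
}

// Centroid of the moving pixels; returns false if nothing moved.
bool centroid(const std::vector<bool>& mask, float& cx, float& cy) {
    long sx = 0, sy = 0, n = 0;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            if (mask[y * W + x]) { sx += x; sy += y; ++n; }
    if (n == 0) return false;
    cx = float(sx) / n;
    cy = float(sy) / n;
    return true;
}
```

On the Eye-RIS itself, the difference and threshold steps would run on the Q-Eye, so that only the centroid coordinates, rather than images, need to reach the Nios II.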

Figure 2 a Acquired image b Output of Gaussian filter.

3.1 Segmentation

Segmentation is the process of dividing a digital image into multiple meaningful regions for easier analysis, and it is the most crucial part of the moving target tracking algorithm. A new segmentation algorithm using the capabilities of the Eye-RIS vision system was presented in [12]; a summary of the segmentation algorithm is given here. The segmentation algorithm is implemented mainly in three steps. In the first step, a Sobel-operator-based edge detection approach is implemented on the system. Then, morphologic operations are used to obtain the segmented image; a sketch of typical such operations is given after the figure captions below.

Figure 3 Sobel-based edge detection algorithm.

Figure 4 a Output of Sobel vertical filter b Absolute difference c Threshold d Edge detection.

Figure 5 Block diagram of morphologic operations.

Figure 7 Segmentation results for a the acquired image, b an image loaded from the computer.
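The morphologic operations of Fig. 5 consolidate the binary edge map into filled object regions. The exact chain used in [12] is not reproduced in this excerpt; as a hedged example, the following C++ sketch implements a plain 3×3 dilation and erosion, whose composition (a closing) is one typical way to fuse edge fragments into solid regions.

```cpp
// Hedged sketch of typical binary morphologic operations (3x3 square
// structuring element); the actual chain of Fig. 5 may differ. Pixels
// outside the image are treated as background.
#include <vector>

constexpr int W = 176, H = 144;
using Bin = std::vector<bool>;                 // binary image, row-major

static bool on(const Bin& b, int x, int y) {
    return x >= 0 && x < W && y >= 0 && y < H && b[y * W + x];
}

Bin dilate(const Bin& in) {                    // set if any 3x3 neighbor is set
    Bin out(W * H, false);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            for (int dy = -1; dy <= 1 && !out[y * W + x]; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (on(in, x + dx, y + dy)) { out[y * W + x] = true; break; }
    return out;
}

Bin erode(const Bin& in) {                     // survives only if all neighbors are set
    Bin out(W * H, true);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (!on(in, x + dx, y + dy)) out[y * W + x] = false;
    return out;
}

Bin close3x3(const Bin& edges) { return erode(dilate(edges)); }   // closing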

3.1.1 Edge Detection

Since an edge essentially demarcates two different regions, detecting edges is a very critical step for segmentation algorithms. A Sobel-operator [1] based edge detection algorithm is implemented using the functions permitted by the hardware structure. The block diagram of the Sobel-based edge detection algorithm is shown in Fig. 3. In the first step of the proposed algorithm, the Sobel convolution masks [1] are applied in four directions (horizontal SF_h, vertical SF_v, left-diagonal SF_ld, right-diagonal SF_rd) using the templates in Eq. 1. The output of the Sobel filter in the vertical direction is shown in Fig. 4a.

$$
SF_h = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad
SF_v = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix},
$$
$$
SF_{ld} = \begin{bmatrix} -2 & -1 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & 2 \end{bmatrix}, \qquad
SF_{rd} = \begin{bmatrix} 0 & 1 & 2 \\ -1 & 0 & 1 \\ -2 & -1 & 0 \end{bmatrix} \tag{1}
$$
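Assuming the masks above are applied as plain 3×3 convolutions, the pipeline of Fig. 3 (filtering, absolute value, threshold, combination; cf. Fig. 4) can be sketched as follows. The threshold value is a placeholder, not the paper's; the function name is ours.

```cpp
// Sketch of the Sobel-based edge detection pipeline of Fig. 3: each mask of
// Eq. 1 is applied by convolution, the absolute responses are thresholded,
// and the four binary results are OR-combined into the edge map (Fig. 4).
#include <cmath>
#include <vector>

constexpr int W = 176, H = 144;
using Frame = std::vector<float>;
using Bin   = std::vector<bool>;

Bin sobelEdges(const Frame& img, float thr) {
    const float masks[4][3][3] = {
        {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}},   // SF_h  (horizontal)
        {{-1,-2,-1}, { 0, 0, 0}, { 1, 2, 1}},   // SF_v  (vertical)
        {{-2,-1, 0}, {-1, 0, 1}, { 0, 1, 2}},   // SF_ld (left diagonal)
        {{ 0, 1, 2}, {-1, 0, 1}, {-2,-1, 0}},   // SF_rd (right diagonal)
    };
    Bin edges(W * H, false);
    for (int y = 1; y < H - 1; ++y) {           // skip the 1-pixel border
        for (int x = 1; x < W - 1; ++x) {
            for (const auto& m : masks) {
                float acc = 0.0f;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                        acc += m[dy + 1][dx + 1] * img[(y + dy) * W + (x + dx)];
                if (std::fabs(acc) > thr) { edges[y * W + x] = true; break; }
            }
        }
    }
    return edges;
}
```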
