Scientific Cooperations International Workshops on Electrical and Computer Engineering Subfields 22-23 August 2014, Koc University, ISTANBUL/TURKEY

Detecting and Tracking Moving Objects in Video Sequences Using Moving Edge Features

Aziz Karamiani
Faculty of IT and Computer Engineering
Azarbaijan Shahid Madani University
Tabriz, Iran
[email protected]

Nacer Farajzadeh
Faculty of IT and Computer Engineering
Azarbaijan Shahid Madani University
Tabriz, Iran
[email protected]

Abstract—Detecting and tracking moving objects in a sequence of video images is an important application in the field of computer vision, with uses in surveillance systems, human-computer interaction, robotics, and other areas. Since these systems require real-time processing, providing an efficient method with low computational complexity is a challenge. In this paper, a fast and robust method for detecting and tracking moving objects is presented. The method is based on separating moving edges from fixed edges. The results show that the proposed method, in addition to being efficient, is able to overcome challenges such as brightness variations and background changes over time.

Keywords—Object tracking, edge detection, moving object.

I. INTRODUCTION

Detecting and tracking moving objects has many applications in the field of machine vision, such as video compression, monitoring systems, industrial control, and gesture-based computer interaction. Yilmaz et al. evaluated and classified moving object tracking methods [1]. According to their classification, tracking methods fall into three categories: point-based tracking, kernel-based tracking, and silhouette-based tracking. Point-based methods are further divided into deterministic and statistical approaches. Kernel-based methods are likewise divided into two groups, pattern matching and classifier-based approaches. The last category, silhouette-based tracking, uses the shape of objects and evaluates object contours.

Lim et al. [2] proposed a method for tracking pedestrians with a moving camera using color histograms. The problem of pedestrians overlapping was handled by considering the histogram of each person's head in the overlapping region. Lee [3] introduced a method for tracking one mobile robot with another: the tracker adjusts the view angle of its camera toward the desired position of the target robot using position and motion information about both robots. However, this approach may face problems such as obstacles in front of the tracker's camera, overlap with the target robot, and the target's disappearance. Yokoyama [4] used a contour-based method with gradient features to detect and track objects based on optical flow and edges. Zhang [5] used a background-difference method for moving-object detection based on an adaptively updated background model.

A noticeable improvement to background models is the use of statistical models of pixel colors. For example, Stauffer and Grimson [6] modeled each pixel's color with a mixture of Gaussians. In this method, each pixel in a new frame is compared against the Gaussian background model. If a match is found, the mean and variance of the matched Gaussian are updated; otherwise, a Gaussian's mean is set to the pixel's color value with an initial variance. Jepson et al. [7] presented an object tracker that combines three components: stable appearance features, transient features, and a noise process. The stable component identifies the most reliable appearance for estimating object motion, covering regions of the object that do not change rapidly over time; the transient component captures rapidly changing pixels; and the noise component handles outlier points created by noise.

According to Yilmaz et al., moving object tracking methods face problems such as overlapping moving objects, changes in brightness, slight background motion, unstable backgrounds, and camera movement. Each of these problems requires an appropriate solution [1].

In this paper, a fast and robust yet simple method for detecting and tracking moving objects is presented. The method is based on separating moving edges from fixed edges. The results show that the proposed method, in addition to being efficient, is able to overcome challenges such as brightness variations and background changes over time. The rest of this paper is organized as follows. The proposed method is explained in Section II, Section III presents experimental results, and Section IV concludes our work.

Fig. 1: Block diagram of the proposed system.

Fig. 2: Background image without moving objects.

II. PROPOSED METHOD

In this paper, we use edge features to track moving objects in a video sequence. Edges are used because they are less sensitive to lighting changes, so the system can operate properly under different environmental conditions. The block diagram of the system is shown in Fig. 1.

A. Excluding the background

In this step, an image of the background is created in the desired scene without the presence of moving objects. The image obtained in this step is used to remove fixed edges from all subsequent frames. Fig. 2 shows a sample background image without moving objects.

B. Edge detection in the background image

The edges of the background image, with no moving objects present, are detected using the Canny algorithm, one of the most common edge detectors, used in many applications. Before detecting edges, we smooth the image with a Gaussian filter. Fig. 3 shows the result of the Canny algorithm applied to Fig. 2.

Fig. 3: Canny edge detection for the background image.

C. Processing new frames

In this step, the proposed method takes two successive frames of the incoming stream (t and t + 1) for the next stage. Here, we again apply Gaussian smoothing
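As a sketch of this smoothing-plus-edge-detection step: the paper's implementation uses Canny (via EmguCV), but the stand-in below uses plain NumPy with a separable Gaussian blur followed by gradient-magnitude thresholding. The sigma and threshold values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    # Separable Gaussian smoothing (row pass, then column pass).
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    out = np.apply_along_axis(np.convolve, 1, img.astype(float), kernel, mode="same")
    out = np.apply_along_axis(np.convolve, 0, out, kernel, mode="same")
    return out

def edge_map(img, thresh=30.0):
    # Smooth first, then mark pixels whose gradient magnitude exceeds
    # a threshold; output is a binary 0/255 edge map.
    smoothed = gaussian_blur(img)
    gy, gx = np.gradient(smoothed)
    magnitude = np.hypot(gx, gy)
    return np.where(magnitude > thresh, 255, 0).astype(np.uint8)
```

Calling `edge_map(background_frame)` would give the fixed-edge map used in the later steps; the same function applies to frames t and t + 1.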



and the Canny edge detection algorithm on the two new frames. In this way, all edges of the objects in the scene, including stationary objects and moving objects, are extracted. For example, Fig. 4 shows the resulting edges for frames 376 and 377.

Fig. 4: Canny algorithm result on frames 376 (left) and 377 (right).

D. Removing the background fixed edges

In this step, the edges of the background image (without moving objects) are subtracted from the edge maps of frames t and t + 1 to eliminate fixed edges. This can be seen in Fig. 5.

Fig. 5: Removing background fixed edges from frames t (left) and t + 1 (right).

E. Thresholding

In this step, the proposed method eliminates minimal movement as background noise, e.g., slight movement such as moving leaves or a person making a permanent or temporary stop in the scene. To this end, we use the difference between successive frames, which removes edges that show no movement between consecutive frames (Eq. 1):

f(x, y) = 255 if f_t(x, y) ≠ f_{t+1}(x, y), and 0 otherwise,   (1)

where f(x, y) represents the pixel intensity at (x, y).

F. Marking the moving edges

In this step, the remaining edges of the previous stage are marked. The marked edges are obtained by eliminating repeated edges in consecutive frames and the background edges. These marked edges are used in the next step to identify moving objects. The results for frames 94, 202, and 250 are shown in Fig. 6, where the obtained moving pixels are displayed in red.

Fig. 6: Moving edges marked for frames 94 (left), 202 (middle), and 250 (right).

G. Tracking moving objects

The final step of the proposed method is to cluster moving edges into moving objects, i.e., to attribute them to particular objects. To this end, we use the following heuristic. We scan the image marked in the previous step from top-left to bottom-right. When a pixel marked as a moving edge is met, we search within a window of 80 × 160 pixels, with the marked pixel (the red dot in Fig. 7) located in the middle of the window's upper side. In this window, we count the number of marked pixels. If this number is greater than a threshold, the target object is assumed to lie within this window, and a rectangular box of 80 × 40 pixels is accordingly considered as the moving object. Then, the counted pixels in the gray area,
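Steps D and E can be sketched together in NumPy: assuming `edges_t`, `edges_t1`, and `background_edges` are 0/255 edge maps of equal size, fixed background edges are masked out of both frames and Eq. (1) keeps only pixels that differ between consecutive frames. The function name and the 0/255 convention are illustrative assumptions.

```python
import numpy as np

def moving_edges(edges_t, edges_t1, background_edges):
    # Step D: suppress fixed edges that also appear in the background map.
    fg_t  = np.where(background_edges > 0, 0, edges_t)
    fg_t1 = np.where(background_edges > 0, 0, edges_t1)
    # Step E (Eq. 1): keep pixels whose value differs between frames t and t+1.
    return np.where(fg_t != fg_t1, 255, 0).astype(np.uint8)
```

The result is the map of candidate moving-edge pixels that the marking step (F) colors red.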



Fig. 7: Particular rectangular area for the moving human-like objects.

Fig. 8: Samples of tracking results by the proposed method (left to right): (a) frames 94 to 97, (b) frames 203 to 206, (c) frames 334 to 337.

as shown in Fig. 7, are removed to avoid recounting them for other objects. Note that this particular rectangular area is assumed for human-like objects and may be changed according to the shape of the object being tracked.
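A minimal sketch of this scan-and-window clustering, assuming a binary 0/255 marked map: the window and box sizes are parameters so the 80 × 160 window and 80 × 40 box from the text can be passed in, and the pixel-count threshold `min_pixels` is not given in the paper, so its value here is an illustrative assumption.

```python
import numpy as np

def cluster_moving_edges(marked, win_w=80, win_h=160,
                         box_w=80, box_h=40, min_pixels=50):
    # Scan top-left to bottom-right; when a marked pixel is met, count
    # marked pixels in a win_w x win_h window whose upper side is centered
    # on that pixel.  If enough pixels fall inside, report an object box
    # and clear the window so its pixels are not counted again.
    m = (marked > 0).astype(np.uint8)
    h, w = m.shape
    boxes = []
    for y in range(h):
        for x in range(w):
            if not m[y, x]:
                continue
            x0 = max(0, x - win_w // 2)
            x1 = min(w, x + win_w // 2)
            y1 = min(h, y + win_h)
            window = m[y:y1, x0:x1]
            if window.sum() >= min_pixels:
                boxes.append((x0, y, min(x0 + box_w, w), min(y + box_h, h)))
                window[:] = 0  # avoid recounting these pixels
    return boxes
```

Each returned tuple is (left, top, right, bottom) in pixel coordinates; clearing the window plays the role of removing the gray-area pixels described above.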

III. EXPERIMENTAL RESULTS

In the testing phase, we consider a video taken at a college [8]. This low-resolution video (288 × 384 pixels) captures a scene of human movement with passers-by, including various types of motion, changes in global lighting conditions, and occlusion of people by other people or by fixed objects in the scene [8]. The proposed method is implemented using the EmguCV library and run on a Lenovo B590 with 4 GB of memory, an NVIDIA GeForce video card with 1 GB of memory, and an Intel Core i3-3120M processor. Tracking results are shown in Fig. 8 for the different frame sequences.

The total number of moving objects in the video, counted manually, is 400. The proposed method correctly detects 374 of them, which corresponds to an accuracy of 93.5%. We also average the run time over 25 experiments to evaluate the processing time of the proposed algorithm. The average processing time for the test video of 2294 frames is 60,020 ms, i.e., about 26 ms per frame, so roughly 38 frames per second can be processed. Since real-time processing requires 30 frames per second [9], the proposed algorithm achieves the desired running time.

IV. CONCLUSION

Detecting and tracking moving objects in videos is an important application in the field of computer vision. In recent years, many methods have been proposed in the literature, most of which are sensitive to changes in brightness and background. In this paper, we proposed a method based on edge features, background subtraction, and frame differencing for detecting and tracking moving objects. The results showed that the proposed method performs comparably to competing methods, has desirable computational complexity, and is robust to changes in illumination and background.

REFERENCES

[1] A. Yilmaz, O. Javed, and M. Shah, "Object tracking: a survey", ACM Computing Surveys, vol. 38, no. 4, pp. 1-45, 2006.
[2] J.S. Lim and W.H. Kim, "Detection and tracking multiple pedestrians from a moving camera", International Symposium on Visual Computing, pp. 527-534, 2005.
[3] C. Lee, "Vision tracking of a moving robot from a second moving robot using both relative and absolute position referencing methods", 37th Annual Conference of the IEEE Industrial Electronics Society, pp. 325-330, 2011.
[4] M. Yokoyama, "A contour-based moving object detection and tracking", Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, pp. 271-276, 2005.
[5] R. Zhang, "Object tracking and detecting based on adaptive background subtraction", International Workshop on Information and Electronics Engineering, pp. 1351-1355, 2012.
[6] C. Stauffer and W.E.L. Grimson, "Learning patterns of activity using real-time tracking", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 747-757, 2000.
[7] A.D. Jepson, D.J. Fleet, and T.F. El-Maraghi, "Robust online appearance models for visual tracking", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 10, pp. 1296-1311, 2003.

[8] G. Doretto, T. Sebastian, P.H. Tu, and J. Rittscher, "Gait-based identification of people in urban surveillance video", Journal of Ambient Intelligence and Humanized Computing, vol. 2, no. 2, pp. 127-151, 2011.
[9] "Video surveillance trade-offs, a question of balance: finding the right combination of image quality, frame rate and bandwidth". Available: http://www.motorolasolutions.com/web/Business/Documents/staticfiles/VideoSurveillance WP 3 keywords.pdf
