Available online at www.sciencedirect.com

Procedia Engineering 15 (2011) 2201–2205
www.elsevier.com/locate/procedia

Advanced in Control Engineering and Information Science

Detection of the Mobile Object with Camouflage Color Under Dynamic Background Based on Optical Flow

Jianqin Yin, Yanbin Han, Wendi Hou, Jinping Li∗

Shandong Provincial Key Laboratory of Network Based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan, 250022, China

Abstract

In order to detect a mobile object with camouflage color, a scheme based on an optical flow model was put forward. First, the optical flow model was used to describe the motion patterns of the object and the background. Second, the magnitude and location of the optical flow vectors were used to cluster the motion patterns, giving an initial detection result. Finally, with the location and scale of the object as state variables, a Kalman filter was used to improve the detection, giving the final result. Experimental results show that the algorithm solves the detection of such mobile objects satisfactorily.

© 2011 Published by Elsevier Ltd. Open access under CC BY-NC-ND license. Selection and/or peer-review under responsibility of CEIS 2011.

Keywords: Mobile Object Detection; Optical Flow; Camouflage Color; Kalman Filter

1. Introduction

Mobile object detection is a key problem in computer vision and plays an important role in visual surveillance and human-computer interaction. With the development of visual surveillance, however, more and more criminals use camouflage to engage in illegal activities; at the same time,

∗ Corresponding author. Tel.: +86-136-5861-3338. E-mail address: [email protected].

1877-7058 © 2011 Published by Elsevier Ltd. Open access under CC BY-NC-ND license. doi:10.1016/j.proeng.2011.08.412


camouflage also has important applications in the military field. Most current video surveillance systems use color or gray information as the basis for object detection, tracking, and behavior analysis; detecting a moving target with camouflage color therefore has significant research and application value.

Common motion detection algorithms fall into three types. The first is the frame difference method [1], which uses two or more adjacent frames to detect a moving object. It adapts well to dynamic backgrounds and needs no background model, but for slowly moving targets the result tends to contain cavities, and when the target and background have similar color or gray values it is difficult to segment the target. The second is background subtraction [2-3], in which the difference between the current image and a background model is used to detect motion. It performs well when the background model is good, so for these algorithms the background model is the key; the often-used models are Gaussian Mixture Models (GMM) [4] and codebook models [5]. Since gray or color information is usually used to build the background model, the method loses its effect when the target has camouflage color. The third method is optical flow [6-7], in which the velocity field is used to segment the object: areas with a motion field are regarded as object and the rest as background. This method needs no knowledge of the scene, but it has disadvantages such as high computational cost and vulnerability to noise.

In order to effectively segment a mobile object with camouflage color, an approach based on the characteristics of the optical flow field was put forward. To reduce the computational cost, corner detection was applied first, and a pyramid implementation was then used to calculate the optical flow vectors.
The magnitudes of the optical flow vectors and the locations of the feature points were then used as clustering features, and a cluster analysis algorithm was used to segment the moving target. Finally, in order to improve the accuracy of target detection, a Kalman filter was used for smoothing, giving the final detection results.

2. Target motion detection based on optical flow

2.1 Motion pattern model based on optical flow

There are many methods to calculate optical flow, and algorithms based on local gradients achieve the best overall performance [7]. Therefore, the gradient-based Lucas-Kanade (LK) model was used to model the movement patterns. In order to reduce the computational cost, corners were detected first, and a pyramid implementation was then used to obtain their optical flow.

(1) Corner detection

Extracting feature points is the key to tracking the object effectively. The often-used corner detection schemes include Harris corner detection [8] and KLT detection [9]. Since the Harris scheme is sensitive to noise, KLT detection was used to detect corners. Suppose the current image is A; the second-order moment matrix G is constructed by the KLT algorithm as follows:

$$G = \sum_{x=u_x-\omega_x}^{u_x+\omega_x} \; \sum_{y=u_y-\omega_y}^{u_y+\omega_y} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \qquad (1)$$

where $\omega_x$ and $\omega_y$ are the window-size parameters, so according to formula (1) the window size is $(2\omega_x+1) \times (2\omega_y+1)$, and $(x, y)$ is the pixel location. $I_x$ and $I_y$ can be computed as follows:


$$I_x = \frac{A(x+1, y) - A(x-1, y)}{2}, \qquad I_y = \frac{A(x, y+1) - A(x, y-1)}{2} \qquad (2)$$

To find the corners, $\lambda_{\min}$, the minimum eigenvalue of the second-order moment matrix, is calculated at every pixel and regarded as the feature value $\lambda$ of that pixel; from these values the maximum over the whole image is obtained. Points whose $\lambda$ is a sufficiently large fraction of this maximum are retained, as are points whose $\lambda$ is a local maximum.

(2) Computation of optical flow

After corner detection, the Lucas-Kanade optical flow algorithm with a pyramid implementation [10] was used to compute the optical flow quickly. The basic idea is as follows. Suppose I is the previous frame and J the current frame, and I(x, y) denotes the gray or color value of I at location (x, y). Let $u = [u_x, u_y]^T$ be a pixel of image I. Computing the optical flow means finding the pixel of the current frame J corresponding to u. Suppose this pixel is $v = [u_x + d_x, u_y + d_y]^T$; then $d = [d_x, d_y]^T$ is the optical flow at location u, and the match error function is defined as follows:

$$\varepsilon(d) = \varepsilon(d_x, d_y) = \sum_{x=u_x-\omega_x}^{u_x+\omega_x} \; \sum_{y=u_y-\omega_y}^{u_y+\omega_y} \bigl( I(x, y) - J(x+d_x, y+d_y) \bigr)^2 \qquad (3)$$

where $\omega_x$ and $\omega_y$ are the window-size parameters as in formula (1), so the window size is $(2\omega_x+1) \times (2\omega_y+1)$. Solving the optical flow model means finding the value of d that minimizes the match error. A pyramid decomposition of the image is used: suppose the image is decomposed into $L_m$ levels and $d^L$ is the flow minimizing the match error at pyramid level L. The final optical flow vector is

$$d = \sum_{L=0}^{L_m} 2^L d^L \qquad (4)$$

2.2 Motion target detection

After the optical flow was obtained, the motion patterns it represents were clustered. Each flow vector was represented by the triple Z = (x, y, ρ), where x and y are the pixel location parameters and ρ is the magnitude of the optical flow. Based on the optical flow computed by formula (4), the motion object detection method proceeds as follows:

S1: Compute the histogram of the optical flow magnitudes, where the horizontal axis is the magnitude and the vertical axis is the number of points with that magnitude.

S2: Take the valley point of the histogram as the segmentation value, dividing the points into two groups: those on the side with the larger count are classified as background flow, and the others as target points.

S3: Based on this segmentation, use Euclidean distance as the evaluation criterion: segmented points with fewer neighbors than a specified threshold are regarded as noise points, and the rest as target points. The target area is then obtained.

2.3 Results smoothing by Kalman filter


In order to improve the accuracy of motion detection, a Kalman filter was used to smooth the detection result. Because cluster analysis establishes the final target, its size and location are often noisy, so the Kalman filter was used to smooth the size and location separately. Since an exact motion model is difficult to obtain and the time interval between two consecutive frames is short, the following linear model was used:

$$X(k+1) = A X(k) + W(k), \qquad Y(k) = H X(k) + V(k) \qquad (5)$$

where X is the state vector and Y the observation vector; A is the system matrix, H the observation matrix, and W and V are the process noise and observation noise respectively, both Gaussian with zero mean. For smoothing the object location, $X_l = [x, y, \dot{x}, \dot{y}]^T$, where $(x, y)^T$ is the location of the target; for the size, $X_s = [h, w, \dot{h}, \dot{w}]^T$, where $(h, w)^T$ is the height and width of the object. The system and observation matrices are:

$$A_l = A_s = \begin{bmatrix} 1 & 0 & T_d & 0 \\ 0 & 1 & 0 & T_d \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad H_l = H_s = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \qquad (6)$$
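A minimal numpy sketch of the constant-velocity model (5) with the matrices of (6), applied here to the object location; the frame interval $T_d$, the noise covariances Q and R, the initial covariance, and the measurement sequence are illustrative assumptions, since the paper does not specify them.

```python
# Sketch of the linear Kalman smoother of equations (5)-(6).
import numpy as np

Td = 1.0  # inter-frame interval (assumed)
A = np.array([[1, 0, Td, 0],
              [0, 1, 0, Td],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)   # system matrix A_l = A_s of (6)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)   # observation matrix H_l = H_s of (6)
Q = np.eye(4) * 1e-2                  # process noise covariance (assumed)
R = np.eye(2) * 1.0                   # observation noise covariance (assumed)

x = np.zeros(4)                       # state [x, y, vx, vy]
P = np.eye(4) * 10.0                  # state covariance

def kalman_step(x, P, z):
    """One predict/update cycle of the standard Kalman filter equations."""
    x = A @ x                         # predict state via (5)
    P = A @ P @ A.T + Q               # predict covariance
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # update with measurement z
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Noisy centroid observations of a target moving +2 px/frame along x.
rng = np.random.default_rng(1)
for k in range(30):
    z = np.array([2.0 * k, 50.0]) + rng.normal(0.0, 1.0, 2)
    x, P = kalman_step(x, P, z)
# x now holds a smoothed location and an estimated velocity near (2, 0)
```

The same step, with $(h, w)$ in place of $(x, y)$, smooths the object size, exactly as the paper filters size and location separately.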

After the system models were established, the standard Kalman filter equations were applied to our problem [11].

3. Experimental results

Because standard test videos for detecting camouflaged moving objects are lacking, videos recorded by ourselves were used. The optical flow results are shown in Fig. 2, where the green points represent the corners and the red line segments represent the optical flow vectors, magnified 5 times. Although the color of the target is the same as the background, the motion patterns clearly differ. The cluster threshold was 4 and the distance threshold was 20. The final localization results are shown in Fig. 3; the algorithm realizes motion detection of the camouflaged target.
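The clustering procedure of Section 2.2 (steps S1-S3) might be sketched as follows with the thresholds quoted above, interpreting the cluster threshold (4) as the minimum neighbor count and the distance threshold (20) as the neighborhood radius of step S3; the flow data and histogram bin count are illustrative assumptions.

```python
# Sketch of steps S1-S3: histogram valley split + neighbor-count filter.
import numpy as np

def segment_flow(points, magnitudes, bins=32, min_neighbors=4, radius=20.0):
    """points: (N, 2) feature locations; magnitudes: (N,) optical-flow norms."""
    # S1: histogram of the optical flow magnitudes.
    hist, edges = np.histogram(magnitudes, bins=bins)
    # S2: split at an interior valley; the larger group is background flow.
    valley = 1 + int(np.argmin(hist[1:-1]))
    thresh = edges[valley]
    is_target = magnitudes > thresh
    if is_target.sum() > (~is_target).sum():
        is_target = ~is_target
    # S3: drop target points with too few neighbors (Euclidean distance).
    tgt = points[is_target]
    d = np.linalg.norm(tgt[:, None, :] - tgt[None, :, :], axis=-1)
    keep = (d < radius).sum(axis=1) - 1 >= min_neighbors
    return tgt[keep]

# Illustrative data: slow background flow plus a fast, compact target cluster.
rng = np.random.default_rng(0)
pts = np.vstack([rng.uniform(0, 200, (300, 2)),    # background corners
                 rng.uniform(90, 110, (30, 2))])   # target corners
mag = np.concatenate([rng.uniform(0.0, 0.5, 300),  # background magnitudes
                      rng.uniform(3.0, 4.0, 30)])  # target magnitudes
target = segment_flow(pts, mag)  # recovers (most of) the 30 target points
```

A bounding box of `target` then gives the location and size fed to the Kalman smoother of Section 2.3.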

Fig. 2 The results of optical flow

Fig. 3 The results of object detection


4. Conclusions and discussions

This paper proposed a scheme for detecting a moving target with camouflage color based on optical flow. Because a camouflaged target has color or gray values similar to the background, traditional detection algorithms have difficulty with such targets; but because the movement patterns of the target and the background differ, the optical flow was first computed to represent the movement pattern, and the flow field was then clustered. In order to improve detection accuracy, Kalman filtering was used to smooth the detection results. Experimental results verified the validity of our algorithm. However, optical flow is easily influenced by noise, so future work will focus on improving the optical flow algorithm so as to better meet the requirements of camouflaged moving target detection and improve the detection results.

Acknowledgements

This work is supported by the Scientific Research Development Plan of Universities in Shandong Province (J11LG01) and the Independent Innovation Plan of Universities in Jinan City (JNK1005).

References

[1] A. Lipton, H. Fujiyoshi, R. Patil. Moving target detection and classification from real-time video. In: Proceedings of the IEEE Workshop on Applications of Computer Vision, 1998: 8-14.
[2] C. Stauffer, W. Grimson. Adaptive background mixture models for real-time tracking. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, USA, 1999: 246-252.
[3] M. Kilger. A shadow handler in a video-based real-time traffic monitoring system. In: Proceedings of the IEEE Workshop on Applications of Computer Vision, USA, 1992: 11-18.
[4] C. Stauffer, W. Grimson. Learning patterns of activity using real-time tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(8): 747-757.
[5] K. Kim, T. H. Chalidabhongse, et al. Background modeling and subtraction by codebook construction. In: International Conference on Image Processing, 2004: 3061-3064.
[6] A. Verri, S. Uras, E. DeMicheli. Motion segmentation from optical flow. In: Proceedings of the 5th Alvey Vision Conference, UK, 1989: 209-214.
[7] J. Barron, D. Fleet, S. Beauchemin. Performance of optical flow techniques. International Journal of Computer Vision, 1994, 12(1): 43-77.
[8] C. Harris, M. Stephens. A combined corner and edge detector. In: Proceedings of the 4th Alvey Vision Conference, 1988: 147-151.
[9] J. Shi, C. Tomasi. Good features to track. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1994: 593-600.
[10] J. Bouguet. Pyramidal implementation of the Lucas-Kanade feature tracker: description of the algorithm. Intel Corporation Microprocessor Research Labs, 2000: 1-8.
[11] M. S. Grewal, A. P. Andrews. Kalman Filtering. John Wiley & Sons, Inc., 2001.

