2015 International Conference on Computing Communication Control and Automation

Moving Object Tracking Application: FPGA and Model Based Implementation Using Image Processing Algorithms
Sofia Nayak, Shashank Sekhar Pujari
P.G. Department of Embedded System Design, Sambalpur University Institute of Information Technology, Jyoti Vihar, Burla-768019, Odisha, India
[email protected], [email protected]

Abstract— With increased resource size, powerful DSP blocks and large on-chip memory, Field Programmable Gate Array (FPGA) devices play a major role as hardware platforms for implementing compute-intensive video image processing applications. In this paper, image processing algorithms are used for tracking a moving video object. The image processing algorithms used are (a) noisy video generation with random motion, (b) video image median filter, (c) video image background removal, (d) video image thresholding, (e) video image edge detection, (f) video image height and width calculation, (g) video image center computation, and (h) video image and center image overlay. The image processing algorithms are developed initially by a model based design approach using Simulink models from MathWorks' MATLAB tool. These algorithms are then implemented on an ALTERA CYCLONE-II FPGA device using the TERASIC DE2 FPGA hardware kit and the ALTERA QUARTUS-II software tool. The input video image is taken from an NTSC/PAL camera and processed in real time by the algorithms on the FPGA, and the resulting tracked video output is displayed on a VGA monitor.

Keywords—fpga; simulink; video image processing; tracking

978-1-4799-6892-3/15 $31.00 © 2015 IEEE  DOI 10.1109/ICCUBEA.2015.185

I. INTRODUCTION

Moving object tracking is one of the fundamental components of computer vision; it can be very beneficial in applications such as unmanned aerial vehicles, surveillance, automated traffic control, biomedical image analysis, intelligent robots etc. The problem of object tracking is of considerable interest in the scientific community and is still an open and active field of research. This paper is therefore a good starting point for a beginner moving towards moving object tracking applications.

Image processing is one of the major applications in the embedded domain and requires a large computational effort. In today's world most sensing applications require some form of digital signal processing. The two major contenders for signal processing hardware platforms are the Digital Signal Processing (DSP) processor and the Field Programmable Gate Array (FPGA). A DSP processor offers compute-intensive, serial processing for complete System on a Chip (SoC) embedded product development, whereas an FPGA offers highly flexible, parallel processing for System on Programmable Chip (SoPC) development: a proof of concept at the formative stage of the system design, leading to a manufacturable prototype at a later stage before the final Application Specific Integrated Circuit (ASIC) implementation. The FPGA contains logic components that can be programmed to perform complex mathematical functions, making it highly suitable for the implementation of matrix algorithms. FPGAs are therefore an ideal choice for implementing real time image processing algorithms.

For a beginner it is advisable to approach the problem at hand through a model based approach using modeling tools such as MATLAB, LabVIEW and Scilab. MATLAB offers drag-and-drop Simulink modules to translate DSP algorithms into logical hardware entities and to understand how signal and image processing algorithms work. A Simulink model implementation is a good first step for a learner in this domain. This paper covers the implementation of image processing algorithms for a moving object tracking application using a Simulink model and an FPGA. An intermediate stage between Simulink modeling and final FPGA implementation could be System Generator modeling for Xilinx FPGAs, to arrive at ready-made HDL (VHDL or Verilog) code generation for Xilinx-specific FPGA devices and hardware kits.

In the present work the Simulink logic entities are translated into image processing modules and introduced into the video chain established on the TERASIC DE2 FPGA hardware evaluation kit [7]. The video source is a PAL/NTSC compatible camera and the output is displayed on a 640x480 resolution VGA monitor. The functional implementation of all processes is done using the ALTERA QUARTUS-II tool.

Section II describes Related Works, Section III the Moving Object Tracking Algorithms, Section IV the Implementation of the Algorithms, Section V the Experimental Results and Section VI the Conclusion and Future Scope.


II. RELATED WORKS

Several implementations of object tracking on FPGA platforms exist. Manoj Pandey et al. [1] implemented a kernel based mean shift algorithm on a Xilinx Spartan-6 FPGA board using EDK for tracking a moving object. They simulated in MATLAB first and then implemented on a MicroBlaze soft-processor based FPGA board. Tracking is observed for two similar objects crossing each other while moving with uniform speed, both in a stored video and in a real time video. V M Sandeep Rao et al. [2] implemented object tracking in a live video stream using a 32-bit RISC soft-core processor embedded on an FPGA. The HSV color model is used to make the algorithm robust to changing lighting scenarios; in addition, the compute-expensive color-space transformation module is also implemented. The algorithm tracks a moving object using averaging, dynamic thresholding and a center-of-mass model for updating the current location of the target object. Jung Uk Cho et al. [3] described a real time visual tracking circuit using adaptive color histograms, based on pattern matching algorithms in which the appearance of the target is compared with a reference model in successive images and the position of the target is estimated; this is implemented on an FPGA. Shashank Pujari et al. [6] described a cost effective FPGA based implementation of UAV (Unmanned Aerial Vehicle) flight control and object tracking. The designed model is useful for UAV development and is of importance in aerospace engineering, computing and communication. The results obtained so far justify the applicability of hardware/software co-design, which can substitute the conventional design approach of system development and leads to a very compact system with minimum resource requirements.

III. MOVING OBJECT TRACKING ALGORITHM

The image processing algorithms discussed in this paper can be summarized as follows:
1. Draw an internal image pattern of a square block (100 pixels by 100 pixels) with the parameters:
   i. starting coordinate x/y;
   ii. length h (100 pixels) and depth v (100 pixels);
   iii. shift the position of the square block randomly by changing the starting point x/y;
   iv. add white Gaussian or salt & pepper noise.
2. Median filtering for video noise removal.
3. Background removal.
4. Thresholding to remove the static background image.
5. Edge detection of the moving object.
6. Computation of length h and depth v of the edge-detected image.
7. Computation of the center at h/2 and v/2.
8. Overlay of a block/cross-hair cursor image at the center.

Fig.1. Functional Modules of Moving Object Tracking: Video Generation with Random Object Motion & Noise, Video Image Median Filter, Video Background Removal, Video Image Thresholding, Video Image Edge Detection, Video Image Size (Height & Width) Calculation, Video Image Center Calculation, Video Image and Center Image Overlay.

A. Noisy Video Generation with Random Motion

Image noise usually appears as discrete isolated pixel variations that are not spatially correlated. Pixels that are in error often appear to be widely different from their neighbors. Keeping this in mind, salt and pepper noise is generated and added to the input video stream as single-pixel or double-pixel noise for experimental purposes. These two types of noise are selected by switches on the DE2 board.
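For illustration, this step can be sketched in Python/NumPy as follows (a software reference model only, not the Simulink/HDL implementation described in this paper). The 640x480 frame size matches the VGA output; the noise density and the function name are assumed for the example.

```python
import numpy as np

def make_noisy_frame(height=480, width=640, block=100,
                     noise_density=0.01, seed=None):
    """Draw a filled 100x100 square at a random position and add
    salt & pepper noise (software stand-in for the test-pattern model)."""
    rng = np.random.default_rng(seed)
    frame = np.zeros((height, width), dtype=np.uint8)

    # Random starting coordinate x/y so the whole block stays on screen.
    y0 = rng.integers(0, height - block)
    x0 = rng.integers(0, width - block)
    frame[y0:y0 + block, x0:x0 + block] = 255   # filled square object

    # Salt & pepper noise: flip a small fraction of pixels to 0 or 255.
    mask = rng.random((height, width)) < noise_density
    frame[mask] = rng.choice([0, 255], size=mask.sum())
    return frame

frame = make_noisy_frame()
```

Calling make_noisy_frame() once per frame reproduces the random shift of the internal test pattern described above.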

B. Video Image Median Filter

In video and image processing it is common practice to perform some kind of noise reduction in order to produce a more suitable sample and obtain better results. The median filter is a nonlinear digital filtering technique, often used to remove noise [11]. The main idea of the median filter is to run a 3x3 pixel window across the image, replacing the center pixel with the median of the 9 neighboring pixels. It is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise. Single-pixel noise can be removed by a 3x3 median filter and double-pixel noise by a 5x5 median filter.
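A software reference model of the 3x3 median filter, assuming simple border replication at the image edges, can be sketched as follows; the same windowing idea extends to a 5x5 kernel for double-pixel noise.

```python
import numpy as np

def median_filter_3x3(img):
    """Replace each pixel with the median of its 3x3 neighbourhood.
    Image borders are handled by replicating the edge pixels."""
    padded = np.pad(img, 1, mode="edge")
    # Collect the nine shifted views of the image and take their median.
    windows = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)]
    return np.median(np.stack(windows), axis=0).astype(img.dtype)
```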

C. Background Removal

This algorithm is used to remove all stationary objects, leaving only objects which change from frame to frame along with some details. The latter are usually influenced by image noise and background factors such as swinging trees, moving grass, light shadows, clouds, rain and snow. In real conditions, time, season, weather and some other factors must also be taken into account.
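One simple software model of this step, assuming a stored reference frame of the static background is available, is frame differencing; the sketch below is an illustration only and is not the HDL module used on the FPGA.

```python
import numpy as np

def remove_background(current, background):
    """Absolute difference against a stored background frame: stationary
    pixels become (near) zero, moving-object pixels keep a large value."""
    diff = current.astype(np.int16) - background.astype(np.int16)
    return np.abs(diff).astype(np.uint8)
```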

D. Video Image Thresholding

In many applications of image processing, the gray levels of pixels belonging to the object are quite different from the gray levels of the pixels belonging to the background. Thresholding then becomes a simple but effective tool to separate objects from the background. Thresholding is one of the first low-level image processing techniques used, before the image analysis step, to obtain a binary image from a gray-scale one. Improper thresholding causes blotches, streaks and erasures in the image, confounding segmentation and recognition tasks.
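A fixed global threshold then converts the background-removed image into a binary image; the threshold value of 50 in the sketch below is an assumed, illustrative setting.

```python
import numpy as np

def threshold(img, level=50):
    """Binary image: 255 where the pixel exceeds the threshold, else 0."""
    return np.where(img > level, 255, 0).astype(np.uint8)
```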

E. Video Image Edge Detection

Edge detection is a fundamental tool used in image processing applications to obtain information from the frames. This process detects the outlines of an object and the boundaries between objects and the background in the image. Here the Prewitt edge detection filter is used [5].
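A software reference model of the Prewitt filter, using the standard horizontal and vertical 3x3 kernels and an assumed magnitude threshold, is sketched below; the FPGA implementation is structured differently, so this is a functional model only.

```python
import numpy as np

PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]])
PREWITT_Y = PREWITT_X.T

def apply_3x3(img, kernel):
    """Slide a 3x3 kernel over the image (correlation form; the sign
    convention does not matter here because the magnitude is taken)."""
    padded = np.pad(img.astype(np.int32), 1, mode="edge")
    out = np.zeros(img.shape, dtype=np.int32)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return out

def prewitt_edges(img, level=100):
    gx = apply_3x3(img, PREWITT_X)
    gy = apply_3x3(img, PREWITT_Y)
    magnitude = np.abs(gx) + np.abs(gy)   # cheap |gx| + |gy| approximation
    return np.where(magnitude > level, 255, 0).astype(np.uint8)
```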

F. Video Image Center Calculation

To track an object in a video it is necessary to mark the center of the image in some way. One way is to mark the object's geometric center, which is calculated as

X_c = (1/n) \sum_{j=1}^{n} X_j ,   Y_c = (1/n) \sum_{j=1}^{n} Y_j

where Xc and Yc are the object center coordinates and Xj, Yj are the coordinates of one of the n image points from the area limited by the object's external contour.
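A functional sketch of the size, center and overlay steps is given below; it computes the centroid of the non-zero pixels exactly as in the formula above, derives the object height and width from its bounding box, and draws a cross-hair cursor whose size is an assumed value.

```python
import numpy as np

def object_center(binary):
    """Centroid (Xc, Yc) of the non-zero pixels, plus the object's
    width (length h) and height (depth v) from its bounding box."""
    ys, xs = np.nonzero(binary)
    if xs.size == 0:
        return None                              # no object in this frame
    xc, yc = int(xs.mean()), int(ys.mean())      # Xc = sum(Xj)/n, Yc = sum(Yj)/n
    width = xs.max() - xs.min() + 1              # length h
    height = ys.max() - ys.min() + 1             # depth v
    return xc, yc, width, height

def overlay_crosshair(img, xc, yc, size=10, value=255):
    """Draw a simple cross-hair cursor centred on (xc, yc)."""
    out = img.copy()
    out[max(0, yc - size):yc + size + 1, xc] = value
    out[yc, max(0, xc - size):xc + size + 1] = value
    return out
```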

IV. IMPLEMENTATION OF ALGORITHM

The image processing algorithm for the moving object was implemented using the model based design approach and an Altera FPGA. For the implementation on these two platforms one should be familiar with software components such as MATLAB, Simulink, the VHDL/Verilog languages, the Altera Quartus-II tool and the TERASIC DE2 board hardware.

A. Model Based Implementation

In the Simulink model implementation a moving square object is generated. The logic entities used to create this square object are translated into image processing modules and introduced into the video chain established on the TERASIC DE2 evaluation kit. Here the Simulink model for a moving object is implemented, where a single square object randomly changes its position. The blocks used to design this model are as follows:

a. Image From Workspace
b. Constant
c. Gain
d. Create [x, y, width, height] vector for a rectangle
e. Draw Object
f. To Video Display
g. Frame Rate Display

The Image From Workspace block is used to import an image from the MATLAB workspace. If the image is an M-by-N workspace array, the block outputs a binary or intensity image, where M and N are the number of rows and columns in the image. If the image is an M-by-N-by-P workspace array, the block outputs a color image, where M and N are the number of rows and columns in each of the P color planes.

The Gain block multiplies the input by a constant value (gain). The input and the gain can each be a scalar, vector or matrix; the block specifies the value by which to multiply the input.

The Constant block generates a real or complex constant value.

The To Video Display block shows the video output. Create [x, y, width, height] vector for a rectangle and Draw Object are two designed masks for generating the moving filled-color square object.

Fig.2. Simulink Model for Moving Object.

B. FPGA Based Implementation

The real time object tracking is realized on the TERASIC DE2 kit, which carries a Cyclone-II FPGA. Altera Cyclone-II FPGAs extend the low cost FPGA density range to 68,416 logic elements and provide up to 622 usable I/O pins and up to 1.1 Mbits of embedded memory. Unlike other FPGA vendors who compromise power consumption and performance for low cost, Altera's latest generation of low cost Cyclone-II FPGAs offers 60% higher performance and half the power consumption of competing 90-nm FPGAs. The Cyclone-II device family offers features such as a high density architecture with 4,608 to 68,416 LEs, embedded multipliers, advanced I/O support, flexible clock management circuitry, device configuration options, intellectual property and Nios-II embedded processor support [7].

The video source is a PAL/NTSC compatible camera and the output is displayed on a 640x480 resolution VGA monitor. Fig.3 shows the test set up of the moving object tracking application. The video processing HDL modules are implemented in the Cyclone-II FPGA as shown in Fig.4. The tasks of coding, synthesis, simulation, implementation and testing are carried out in that order.

Fig.3. Test Set Up for moving object tracking application.

In this video processing chain the video data source is an NTSC camera, which is connected to the TV Decoder, which digitizes the analog signal. The digitized output is given as input to the ITU-656 decoder, which extracts the signals from the TV decoder and performs serial-to-parallel conversion of the digitized input video signal. This output in ITU-656 format is given as input to the YCrCb converter. The YCrCb output is given to a dual buffer, which converts the interlaced signal into a de-interlaced signal. The de-interlaced signal is given to the YCrCb to RGB converter. The luminance Y is applied to the video image processing chain for motion tracking. The processed Y output of this chain, including the additional overlaid center cursor, is recombined with the CrCb colour signal and given to the DAC, which converts the digital input into analog form. The output of the DAC is fed to the VGA monitor, which has a resolution of 640x480. The tracked moving object is then visible on the VGA monitor.
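For reference, the YCrCb-to-RGB conversion performed in hardware by the YCrCb2RGB block can be modelled in software as below; full-range ITU-R BT.601 coefficients are assumed, and the exact coefficients and rounding of the DE2 reference design may differ.

```python
import numpy as np

def ycbcr_to_rgb(y, cb, cr):
    """Full-range BT.601 YCbCr -> RGB conversion (assumed coefficients)."""
    y = y.astype(np.float32)
    cb = cb.astype(np.float32) - 128.0
    cr = cr.astype(np.float32) - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```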

Fig.4 shows the data flow path. The major blocks in this methodology consist of the ITU-656 decoder, Dual Port Line Buffer, HsyncX2, YCrCb2RGB, VGA Timing Generator and the motion tracking modules. The figure also shows the TV Decoder (ADV7181) and VGA DAC (ADV7123) chips used. The register values of the TV Decoder chip are used to configure the TV decoder via the I2C_AV_Config block, which uses the I2C protocol to communicate with the TV Decoder chip. The ITU-656 decoder block extracts YCrCb (4:4:4) video signals from the 4:2:2 data source sent from the TV Decoder. It also generates a 13.5 MHz pixel clock with blanking signals indicating the valid period of data output. Because the video signal from the TV Decoder is interlaced, de-interlacing must be performed on the data source. The Dual Port Line Buffer block and the HsyncX2 block perform the de-interlacing operation, where the pixel clock is changed from 13.5 MHz to 27 MHz and the Hsync is changed from 15.7 kHz to 31.4 kHz. Internally, the Dual Port Line Buffer uses a 1 Kbyte dual port SRAM to double the YCrCb data rate (Y x 2, Cr x 2, Cb x 2 signals in the block diagram). Finally, the YCrCb2RGB block converts the YCrCb data into RGB output, and the VGA Timing Generator block generates the standard VGA sync signals VGA_HS and VGA_VS to enable the display on a VGA monitor.

Fig.4. Video data path for moving object tracking application on FPGA.

Fig.5 shows the selection mode for the moving object tracking application, which is used to select the display pattern: external video or internal pattern, filtered or unfiltered video, edge detected video, center-of-object detected video, etc.

Fig.5. Selection of switches for moving object tracking application.

V. EXPERIMENTAL RESULTS

Fig.6.1. Filled square object randomly changing its position.

Fig.6.2. Filled square object randomly changing its position.

Fig.6. [1-2] Simulink result of the moving square object.

Fig.7.1. A square object randomly changing its position, with noise introduced by selecting SW-2=1 and SW-3=1 on the TERASIC DE2 board.

Fig.7.2. The noise on the screen is reduced by selecting SW-7=0, SW-8=0 and SW-9=1. This is the filtered output, where the object is randomly changing its position.


Fig.7.3. Edge detection of the moving object by selecting SW-7=1, SW-8=0 and SW-9=1.

Fig.7.4. Center detection of the object, overlaid on the edge-detected moving object, by selecting SW-7=1, SW-8=1 and SW-9=1.

Fig.7. [1-4] Experimental results of the moving object tracking application on the Altera FPGA.

TABLE I. DEVICE UTILIZATION SUMMARY

Name                                  Used      Available   Percentage
Total Logic Elements                  7484      33216       23%
Total Combinational Functions         6582      33216       20%
Dedicated Logic Registers             6212      33216       19%
Total Registers                       6212      -           -
Total Pins                            426       475         90%
Total Virtual Pins                    0         -           -
Total Memory Bits                     211872    483840      44%
Embedded Multiplier 9-bit Elements    30        70          43%
Total PLLs                            1         4           25%

From Table I it can be seen that the moving object tracking application takes up only a small fraction of the rich hardware resources of the FPGA, so the system can also accommodate more complex video image processing algorithms.

VI. CONCLUSION AND FUTURE SCOPE

This work has been carried out successfully based on a design with Altera Quartus-II and the Simulink model environment included in the MATLAB tool. It explains how various image processing algorithms for the purpose of object tracking, such as noisy video image generation, image filtering, edge detection, image center detection and video image center object overlay, are computed and implemented successfully on the FPGA.

This work could potentially be extended to more demanding object tracking tasks, for example tracking the motion of a human face and displaying a projected cube on a VGA monitor that changes according to the motion of the user's face, i.e. the projection should be displayed as if the user is actually looking at the cube from the right, and if the user gets closer to the camera the cube should get larger. The projected cube may be 2D or 3D. At the next level this work could be extended to the Digilent ZedBoard and Xilinx's Zynq-7000 All Programmable SoC ZC702 Video and Imaging Kit.

ACKNOWLEDGMENT

The authors wish to thank SOA University, Khandgiri, BBSR and SUIIT, Jyoti Vihar, Burla for providing the development tools, kits and laboratory infrastructure used in carrying out this work. The authors would also like to thank Mr. Debesh Nayak, Mr. Prafulla Kumar Nayak, Mr. Lalit Mohan Patel and Mr. Chitresh Bhargava for their assistance, support and constant encouragement.

REFERENCES

[1] Manoj Pandey, Dorothi Borgohain, Gargi Baruah, J.S. Ubhi and Kota Solomon Raju, "Real Time Object Tracking: Simulation and Implementation on FPGA Based Soft Processor", ICSSITE, 2013.
[2] V M Sandeep Rao, Aravind Natarajan, S. Moorthi and M.P. Selvan, "Real-Time Object Tracking in a Video Stream using Field Programmable Gate Array", IEEE, 2012.
[3] Jung Uk Cho, Seung Hun Jin, Xuan Dai Pham, Dong Kyun Kim, and Jae Wook Jeon, "FPGA-Based Real-Time Visual Tracking System Using Adaptive Color Histograms", IEEE, 2007.
[4] Mohammad I. AlAli, Khaldoon M. Mhaidat, and Inad A. Aljarrah, "Implementing Image Processing Algorithms in FPGA Hardware", AEECT, IEEE, 2013.
[5] P.K. Dash, S.S. Pujari and Sofia Nayak, "Implementation of Edge Detection Using FPGA and Model Based Approach", ICICES, IEEE, 2014, in press.
[6] Shashank Pujari, Sheetal Bhandari, Sudarsan Chandak, "FPGA Controlled Vision System for Surveillance Robot [UAV]", CSI Communication, Robotics, Nov. 2008, Vol. 32.
[7] ALTERA DE2 KIT user manual, Cyclone-II User Guide, www.altera.com.
[8] Daniel Chillet, Michael Hübner, "Special Issue on Design and Architecture of Real Time Image Processing in Embedded Systems", Springer, 2014.
[9] R.C. Gonzalez and R.E. Woods, "Digital Image Processing", 3rd Edn., Prentice Hall, New Jersey, USA, ISBN: 9780131687288, 2008, pp. 954.
[10] Yahia Said, Taoufik Saidani, Fethi Smach, Mohamed Atri and Hichem Snoussi, "Embedded Real-Time Video Processing System on FPGA", A. Elmoataz et al. (Eds.): ICISP 2012, LNCS 7340, pp. 85-92, 2012. © Springer-Verlag Berlin Heidelberg 2012.
[11] B.S.S.V. Ramesh Babu, K. Sita, K. Akhilesh, Chetan Deokar, Manish Patil, Srikant Thiagarajan, S.S. Pujari, Sheetal Bhandari, Ketan Raut, "Realtime Video Filter Implementation on FPGA", Proceedings of the National Conference on Emerging Trends in Signal Processing and Communication (ETSPC), Pune, India, Dec. 27-29, 2007, pp. 175-177.
[12] Maimun Huja Husin, Fauziliana Osman, Mohamad Faizrizwan Mohd Sabri, Wan Azlan Wan Zainal Abidin, Al-Khalid Othman, Ade Syaheda Wani Marzuki, "Development of Shape Pattern Recognition for FPGA Based Object Tracking System", ICCAIE, IEEE, 2010.

