Implementation of an FPGA-Based Vision Localization

Wen-Yo Lee, Chen Bo-Jhih, Chieh-Tsai Wu, Ching-Long Shih, Ya-Hui Tsai, Yi-Chih Fan, Chiou-Yng Lee and Ti-Hung Chen*

Abstract Robot vision has been widely used in various industrial motion control applications, such as object identification, target tracking, and environment monitoring. This paper focuses on the real-time FPGA-based implementation of object tracking for a three-axis robot. In this work, a unified FPGA implementation for both object identification and target tracking, including basic image processing, image display, and target tracking control, is proposed. In addition, a target tracking control method is developed using a Sobel filter for edge detection, a region of interest, and motion control. Experimental results show the effectiveness and versatile applicability of the implemented algorithm in target tracking control. Due to the flexibility and speed of FPGA hardware, the tracking commands can be generated with very high precision and at very high frequency.

Keywords FPGA · Target tracking · Object identification · Robot vision

W.-Y. Lee() · C. Bo-Jhih · C.-Y. Lee · T.-H. Chen Department of Computer Network and Engineering, Lunghwa University of Science and Technology, Taoyuan, Taiwan e-mail: [email protected] C.-L. Shih Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan C.-T. Wu Chang Gung Memorial Hospital, Linkou, Taiwan Y.-H. Tsai · Y.-C. Fan Mechanical and System Research Laboratory, Industrial Technology Research Institute, Hsinchu, Taiwan © Springer International Publishing Switzerland 2016 T.T. Zin et al. (eds.), Genetic and Evolutionary Computing, Advances in Intelligent Systems and Computing 388, DOI: 10.1007/978-3-319-23207-2_23

1 Introduction

Vision tracking in a manufacturing system, for both pick-and-place motion control and target tracking, is as important as other aspects of robot system design, such as fast image processing, high-performance object identification, and precision motion control. The precision of image localization and the speed of image processing are important factors in designing an object tracking controller, in order to increase manufacturing yield and to reduce production cost and system settling time.

Due to the advanced development of very-large-scale integration technology, the field-programmable gate array (FPGA) has been widely used to implement image processing and motion control systems because of its simplicity, programmability, short design cycle, fast time-to-market, low power consumption, and high density. The computing time of an FPGA-based controller can be relatively short regardless of the complexity of the control algorithm because of its parallel processing architecture. A motion controller can be implemented on a single FPGA chip; therefore, a compact system with low power and a simple circuit is possible.

Nowadays, real-time processing is important in an object tracking system, especially in a vision-based motion control system. Chiuchisan implemented a new FPGA-based real-time configurable system for medical image processing [1]. Rodriguez-Araujo et al. implemented a low-cost system-on-chip for localization of UGVs in an indoor iSpace [2]. Chen et al. presented a real-time FPGA-based template matching module for visual inspection of LED defects [3]. Hsu et al. described an FPGA implementation of a real-time image tracking system [4]. Marin et al. discussed remote programming of network robots within the UJI industrial robotics telelaboratory [5]. Amanatiadis et al. designed a fuzzy area-based image-scaling technique for dynamic neighborhood average image processing [6]. Chinnaiah et al. showed how to implement a shortest-path planning algorithm without a track on an FPGA robot [7]. Ghorbel et al. introduced a hardware/software implementation on FPGA of a robot localization algorithm [8]. Hagiwara et al. used an FPGA to build a real-time image processing system for service robots [9]. Saeed et al. showed how to implement FPGA-based real-time target tracking on a mobile platform [10]. Singh et al. implemented a real-time FPGA-based color image edge detection module based on the Sobel algorithm [11]. Saqui et al. applied mathematical morphology to object tracking in position-based visual servoing [12].

Because of the prosperous development of advanced vision-based control technology and its simplicity, image processing algorithms and visual servoing are now commonly used in industrial machine motion control applications. Precision positioning machines are required to run at higher speed and higher accuracy. A typical vision servo based on a microprocessor or microcontroller suffers in both speed and precision. Thus, complex programmable logic devices, such as the field-programmable gate array (FPGA), the application-specific integrated circuit (ASIC), and the system on programmable chip (SoPC), have been stimulating demand among researchers to develop vision servos capable of very high frequency and very high precision.

This paper focuses on an FPGA-based vision tracking and target localization robot. In this work, a unified FPGA implementation for both image processing and robot tracking models, including a basic image processing model, an edge


detection model, and an object tracking model, is proposed. Experimental results show the effectiveness and versatile applicability of the implemented algorithm in image servoing. Due to the flexibility and speed of FPGA hardware, the tracking commands can be generated with very high precision and at very high frequency. The rest of this paper is organized as follows. Section 2 describes the system design and image processing methods, including basic image processing and target identification. Section 3 presents the FPGA implementation and the experimental results of the robot control models. Finally, Section 4 summarizes the outcome of the paper and discusses possible directions for future work.

2 System Design

The experimental set-up of the FPGA-based three-axis vision tracking system, shown in Fig. 1, includes a three-axis robot manipulator with a CMOS sensor at the end-effector. The Altera DE2-115 Development Board is used to implement the tracking system. The three-axis robot is driven by stepping motors, and the motion commands are generated by a pulse-generation technique realized in Verilog code. A CMOS sensor with a resolution of 2,752×2,004 pixels is used to detect the moving target.

Fig. 1 The FPGA-based three-axis vision tracking system.

2.1 The SOC System

Nios II version 14.1 is used to implement the SOC system, which comprises basic image processing, Sobel edge detection, region of interest selection, high-efficiency memory access, motion control, etc. The SOC-based image processing module is shown in Fig. 2.


Fig. 2 The SOC-based image processing module.

For the moving-average filter, the image masking process is handled by the LineBuffer module, which is a RAM-based shift register. It provides high-efficiency memory access for the image filtering process. The simulation result of the LineBuffer is shown in Fig. 3. The operation is similar to a pipeline: once the 6th clock arrives, the mask values are all available and the mask can be computed simultaneously.
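To make the LineBuffer's behavior concrete, the following minimal C sketch models the RAM-based shift register in software; the line width, function name, and streaming interface are our own illustrative assumptions, not taken from the paper's Verilog.

```c
#include <stdint.h>

#define WIDTH 800                      /* pixels per image line (assumed) */

static uint8_t rows[3][WIDTH];         /* circular store of the last 3 lines */

/* Push one streaming pixel (x, y), as if one pixel arrived per clock.
 * Returns 1 and fills win[3][3] with the 3x3 neighborhood centered at
 * (x-1, y-1) once that window is complete. */
int linebuffer_push(int x, int y, uint8_t pixel, uint8_t win[3][3])
{
    rows[y % 3][x] = pixel;            /* RAM write, like the shift register */
    if (y < 2 || x < 2)                /* need 2 full lines + 3 pixels first */
        return 0;
    for (int i = 0; i < 3; i++)        /* window rows y-2 .. y */
        for (int j = 0; j < 3; j++)    /* window cols x-2 .. x */
            win[i][j] = rows[(y - 2 + i) % 3][x - 2 + j];
    return 1;
}
```

Each call stores one pixel and, once two full lines plus three pixels have streamed in, yields a complete 3×3 mask on every subsequent call, mirroring the pipeline behavior seen in the Fig. 3 simulation.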

Fig. 3 LineBuffer simulation

The average filter is defined as

$$g(x,y) = \frac{1}{9}\sum_{i=-1}^{1}\sum_{j=-1}^{1} f(x+i,\, y+j). \tag{1}$$
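Given a valid window from the sketch above, equation (1) is a straightforward accumulation. The helper below (our naming) shows the computation; the hardware may replace the division by shift-and-add logic.

```c
#include <stdint.h>

/* 3x3 moving average of equation (1): g(x,y) = sum of f(x+i, y+j) / 9 */
uint8_t average3x3(const uint8_t win[3][3])
{
    unsigned sum = 0;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            sum += win[i][j];
    return (uint8_t)(sum / 9);   /* exact division; HW may approximate */
}
```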

The major functions of the SOC are the CMOS image controller, the SDRAM controller for the LineBuffer module, image preprocessing and post-processing, and display. The image preprocessing module is responsible for the basic image processes, such as RAW-to-RGB conversion, grayscale conversion, Sobel edge detection, average filtering, region of interest selection, and centroid computation. The image data preprocessing block diagram is shown in Fig. 4.
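As an example of one preprocessing stage, a generic Sobel edge detector over the same 3×3 window might look as follows; this uses the common |Gx| + |Gy| gradient approximation with a binary threshold and is a sketch under our assumptions, not the paper's exact fixed-point Verilog.

```c
#include <stdint.h>

/* Sobel edge detection on a 3x3 grayscale window: |Gx| + |Gy| vs. threshold */
uint8_t sobel3x3(const uint8_t w[3][3], unsigned threshold)
{
    int gx = (w[0][2] + 2*w[1][2] + w[2][2])   /* right column */
           - (w[0][0] + 2*w[1][0] + w[2][0]);  /* left column  */
    int gy = (w[2][0] + 2*w[2][1] + w[2][2])   /* bottom row   */
           - (w[0][0] + 2*w[0][1] + w[0][2]);  /* top row      */
    unsigned mag = (unsigned)(gx < 0 ? -gx : gx)
                 + (unsigned)(gy < 0 ? -gy : gy);
    return mag > threshold ? 255 : 0;          /* binary edge map */
}
```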


Fig. 4 The image data preprocessing.

One important parameter that should be mentioned is the region of interest (ROI). The ROI is defined by equations (2) and (3); it makes it easy to remove noise blocks and isolate the target in the captured image. For a target centroid $(x_c, y_c)$ in the frame ($0 \le x_c \le 799$, $0 \le y_c \le 599$), the ROI window is

$$x_{ROI} \in [\,\max(x_c - 100,\ 10),\ \min(x_c + 250,\ 789)\,], \tag{2}$$

$$y_{ROI} \in [\,\max(y_c - 100,\ 10),\ \min(y_c + 200,\ 569)\,]. \tag{3}$$
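Under this reading of equations (2) and (3), the ROI bounds reduce to clamped offsets around the centroid; the C sketch below is ours, with the struct and function names assumed for illustration.

```c
/* ROI window around the target centroid (xc, yc), clamped to the frame.
 * Offsets 100/250/200 and limits 10..789 / 10..569 follow eqs. (2)-(3). */
typedef struct { int x0, x1, y0, y1; } roi_t;

static int clampi(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

roi_t roi_from_centroid(int xc, int yc)
{
    roi_t r;
    r.x0 = clampi(xc - 100, 10, 789);
    r.x1 = clampi(xc + 250, 10, 789);
    r.y0 = clampi(yc - 100, 10, 569);
    r.y1 = clampi(yc + 200, 10, 569);
    return r;
}
```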

2.2 The Target Localization

In a vision-based robot system, the vision sensor captures the image of the target and sends it to the image processing core to calculate the target position. The manipulator then receives the command to track the target and feeds back the target position. This technique is widely applied in pick-and-place applications. To demonstrate this idea, we use a post-image-processing module to rotate and centralize the tracked target. The target image is shown in the middle of the screen with the same orientation as the reference object. The four corners of a rectangular target are used to calculate the rotation angle. It is important to find the long side of the rectangle before rotating the robot vision, since the rotation angle calculation algorithm is based on the information of the long side. A simple algorithm is shown in Fig. 5.


Fig. 5 The algorithm of searching the long side of a rectangle.
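A software rendering of the long-side search in Fig. 5 can compare the squared lengths of adjacent corner pairs; the corner ordering (perimeter order) and all names below are our assumptions.

```c
/* Find the long side of a rectangle given its 4 corners in perimeter order.
 * Writes the indices (i, (i+1)%4) of the longest side's endpoints. */
typedef struct { int x, y; } pt_t;

void longest_side(const pt_t c[4], int *a, int *b)
{
    long best = -1;
    for (int i = 0; i < 4; i++) {
        int j = (i + 1) % 4;                 /* adjacent corner */
        long dx = c[j].x - c[i].x;
        long dy = c[j].y - c[i].y;
        long d2 = dx * dx + dy * dy;         /* squared length: no sqrt */
        if (d2 > best) { best = d2; *a = i; *b = j; }
    }
}
```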

According to Table 1, the vision rotation angle can be found from the coordinates $(x_1, y_1)$ and $(x_2, y_2)$. The result of the vision rotation experiment is shown in Fig. 6.

Table 1 The vision rotation formula

$$\cos\theta = \frac{x_2}{r}, \qquad \sin\theta = \frac{y_2}{r}$$

$$x_1 = r\cos(\theta - \phi) = r\cos\theta\cos\phi + r\sin\theta\sin\phi = x_2\cos\phi + y_2\sin\phi$$

$$y_1 = r\sin(\theta - \phi) = r\sin\theta\cos\phi - r\cos\theta\sin\phi = -x_2\sin\phi + y_2\cos\phi$$
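The relations in Table 1 amount to rotating the point $(x_2, y_2)$ by the angle $\phi$; the sketch below pairs that rotation with an atan2-based estimate of the long side's angle. Floating-point math is used for clarity; the FPGA implementation would use fixed-point or CORDIC arithmetic, which we do not reproduce here.

```c
#include <math.h>

/* Rotation angle of the long side (x1,y1)-(x2,y2) w.r.t. the horizontal. */
double side_angle(double x1, double y1, double x2, double y2)
{
    return atan2(y2 - y1, x2 - x1);
}

/* Table 1: rotate (x2, y2) by -phi, giving (x1, y1):
 *   x1 =  x2*cos(phi) + y2*sin(phi)
 *   y1 = -x2*sin(phi) + y2*cos(phi) */
void rotate_point(double phi, double x2, double y2, double *x1, double *y1)
{
    double c = cos(phi), s = sin(phi);
    *x1 =  x2 * c + y2 * s;
    *y1 = -x2 * s + y2 * c;
}
```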

Fig. 6 The centralization and rotation result of the target: (a) original image; (b) vision rotated by the computed angle.

3 System Level Design and Experimental Results

The proposed vision system includes three major modules, which are the system control module, the image processing module, and the motion control module. Involving the operating system in the original design offers considerable convenience in system design. The Nios II is the highest-level controller and serves as the user interface. The internal bus delivers the Nios II commands between the image processing module and the motion control module. Every module runs in parallel and shares data in the SDRAM, which increases image processing efficiency. Since each module runs as an individual process, the image processing introduces no extra latency. The system level design block diagram is shown in Fig. 7.

Fig. 7 The system level design block diagram

Seventeen function blocks have been implemented for this study. To speed up processing and reduce latency, the function blocks are coded in Verilog HDL. The major difference from a high-level language, such as C++, is that each function block runs on its own clock, so every function block executes its job simultaneously. Fig. 8 shows the system function blocks.


Fig. 8 The system function blocks.

3.1 Experimental Results

A target is placed in front of the CMOS sensor on a stick holder. When the target is moved with the stick, the robot tracks it in real time. The tracking angle of each joint is shown in Fig. 9. The tracking angle holds at a stable point when the target stops moving. The angles from left to right are $\theta_1$, $\theta_2$, and $\theta_3$, respectively.

Fig. 9 The tracking angles of each joint.

The experimental result is shown in Fig. 10. It should be mentioned that after the robot locks onto the target, it rotates its vision angle so that the bottom side of the target image is parallel to the screen.

Fig. 10 Robot tracking result on FPGA: (a) initial state; (b) target moves into view; (c) target locked; (d) vision angle rotated.

4 Conclusion

This paper proposes FPGA-based vision tracking algorithms to implement a vision localization robot. The proposed FPGA-based module processes three functions that together construct a high-speed vision-based robot. The vision-based robot is a precision robot; it integrates three modules, the system control module, the image processing module, and the motion control module, which form the fundamental technique for a vision-based robot. Moreover, this work offers a complete view of the design techniques for implementing an FPGA-based vision tracking robot, and it can help researchers understand parallel module processing at every step. The proposed implementation method may be applied in the industrial and remote monitoring markets. Future work includes implementing more function blocks to make the vision tracking robot more accurate. In the meantime, we will pay more attention to the message transmission latency issue in the robot control.

Acknowledgment The work of W.-Y. Lee was supported by the Taiwan National Science Council (NSC) under Grant NSC102-2221-E-262-018 and by Chang Gung Memorial Hospital under Grant CMRPD2C0063.

References

1. Chiuchisan, I.: A new FPGA-based real-time configurable system for medical image processing. In: 2013 IEEE International Conference on E-Health and Bioengineering (EHB), pp. 1–4 (November 2013)
2. Rodriguez-Araujo, J., Rodríguez-Andina, J.J., Fariña, J., Chow, M.Y.: Field-Programmable System-on-Chip for Localization of UGVs in an Indoor iSpace. IEEE Transactions on Industrial Informatics 10(2), 1033–1043 (2014)
3. Chen, J.Y., Hung, K.F., Lin, H.Y., Chang, Y.C., Hwang, Y.T., Yu, C.K., Chang, Y.J.: Real-time FPGA-based template matching module for visual inspection application. In: 2012 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), pp. 1072–1076 (July 2012)
4. Hsu, Y.P., Miao, H.C., Tsai, C.C.: FPGA implementation of a real-time image tracking system. In: Proceedings of IEEE SICE Annual Conference 2010, pp. 2878–2884 (August 2010)
5. Marin, R., León, G., Wirz, R., Sales, J., Claver, J.M., Sanz, P.J., Fernández, J.: Remote programming of network robots within the UJI industrial robotics telelaboratory: FPGA vision and SNRP network protocol. IEEE Transactions on Industrial Electronics 56(12), 4806–4816 (2009)
6. Amanatiadis, A., Andreadis, I., Konstantinidis, K.: Design and implementation of a fuzzy area-based image-scaling technique. IEEE Transactions on Instrumentation and Measurement 57(8), 1504–1513 (2008)


7. Chinnaiah, M.C., DivyaVani, G., SatyaSavithri, T., Rajeshkumar, P.: Implementation of shortest path planning algorithm without track using FPGA robot: A new approach. In: 2014 International Conference on Advances in Electrical Engineering (ICAEE), pp. 1–4 (January 2014)
8. Ghorbel, A., Amor, N.B., Jallouli, M., Amouri, L.: A HW/SW implementation on FPGA of a robot localization algorithm. In: Systems, Signals and Devices (SSD), pp. 1–7 (March 2012)
9. Hagiwara, H., Asami, K., Komori, M.: Real-time image processing system by using FPGA for service robots. In: 2012 IEEE 1st Global Conference on Consumer Electronics (GCCE), pp. 720–723 (October 2012)
10. Saeed, A., Amin, A., Saleem, S.: FPGA based real-time target tracking on a mobile platform. In: 2010 International Conference on Computational Intelligence and Communication Networks (CICN), pp. 560–564 (November 2010)
11. Singh, S., Saini, A.K., Saini, R.: Real-time FPGA based implementation of color image edge detection. International Journal of Image, Graphics and Signal Processing (IJIGSP) 4(12), 19–25 (2012)
12. Saqui, D., Sato, F.C., Kato, E.R., Pedrino, E.C., Tsunaki, R.H.: Mathematical morphology applied in object tracking on position-based visual servoing. In: 2013 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 4030–4035 (October 2013)