Vision-based UAV Landing on the Moving Vehicle

Hanseob Lee, Seokwoo Jung and David Hyunchul Shim
Department of Aerospace Engineering
Korea Advanced Institute of Science and Technology
Daejeon, South Korea
hslee89, tjrdnzz123, [email protected]

Abstract — This paper describes a tracking guidance law for autonomous UAV landing and a vision-based detection of a marker on a moving vehicle using a real-time, on-board image processing system. The marker detection is based on a color detection algorithm with a morphological filter, and the tracking guidance is based on the relative distance between the UAV and the vehicle. In order to track and land on the moving vehicle, we built an integrated system on the UAV platform that conducts the landing mission with the on-board image processing system. Flight tests were performed with a moving vehicle in an outdoor environment, and we validated the algorithm for vision-based autonomous landing.


Keywords — Unmanned Aerial Vehicle (UAV), Vision-based Detection, Morphological Filter, Target Tracking, Trajectory Guidance, On-board Image Processing.


I. INTRODUCTION

UAV research has been performed in many areas such as navigation, path planning, sensor fusion, and task allocation. This research has matured considerably, raising the technology readiness level (TRL) of UAVs, so many practical and applied studies have appeared recently. In particular, many studies aim to perform vision-based missions with unmanned systems: for example, vision-based indoor localization, navigation using geo-referenced information, detection and tracking of obstacles, and automatic landing [1-6]. However, it is not easy to perform outdoor tests with a vision-based system because of the uncertainty of the outdoor environment. To overcome this uncertainty, one study improved trajectory-tracking performance in uncertain environments by using an adaptive controller [7]. There is also a study of vision-based precise landing on a target [8], but it did not carry out an outdoor automatic landing test due to the uncertainty of the wind. Some vision-based landing studies were performed indoors using an indoor positioning system (IPS) such as VICON [8-9], and some were conducted in simulation to check the feasibility of automatic landing in a real environment [4, 10]. Another study performed landing on a moving vehicle with geolocation estimation of the target [5], but the image processing was carried out on a ground PC, not on the UAV.

In this paper, we aim to build an integrated system with on-board vision detection and target tracking guidance for landing on a moving vehicle in an outdoor environment. The detection algorithm is developed around a color marker on the vehicle, using a morphological filter that reduces background noise in the image. The target tracking system is based on the relative distance between the UAV and the vehicle, and the quadrotor guidance algorithm is developed with reference to the algorithm suggested in [7]. In this paper, the moving vehicle is considered the target.

II. HARDWARE SPECIFICATION

In this section, we introduce the two hardware systems: the flight control system for autonomous flight based on waypoint navigation, and the vision system for detecting the target in order to perform the mission.

A. Flight Control System

In many studies, quadrotor-type UAVs are widely used because of their high stability, good control performance, and simple design, which makes it easy to install a variety of sensors. The flight platform used in this research is also a quadrotor-type UAV, as shown in Fig. 1. The platform is a DJI S500 frame, in which the motor-to-motor distance is 500 mm, and it carries 12×4.7 inch propellers. The total weight, including the electric power plant, the flight control system, the camera sensor and the embedded computer, is about 3 kg. The flight avionics are built into the flight control box on top of the UAV, which includes a flight control computer (FCC), a GPS module, a 72 MHz receiver, an inertial measurement unit (IMU), and a servo controller function board, as shown in Fig. 2.

Fig. 1 Vision-based detection quadrotor-type UAV

Fig. 2 Inner components in the flight control box

Fig. 3 Chameleon3 USB camera (left) and 1.4 mm fish-eye lens (right)

TABLE 1 Detailed specification of the flight system

Size: Diameter 0.5 m, Height 0.33 m
Weight: Empty weight 2.0 kg (with battery); takeoff gross weight 3.0 kg (with payload)
Power plant: DYS D4215-650KV (1.62 kg per EA)
Duration: Max. 10 min (battery: 4600 mAh, 14.8 V, 30C)
Flight control system: Gumstix Verdex Pro (600 MHz, 128 MB); PWM generation board (small custom board); u-blox LEA-6H GPS; MicroStrain 3DM-GX3-25 IMU; GPS-aided navigation; 2.4 GHz Wi-Fi access point network
Position accuracy: Horizontal 2.0 m, Vertical about 3.0 m
Autonomy: Attitude, altitude, heading, speed hold/command; waypoint/path following/hovering

The flight control computer is a Verdex Pro made by Gumstix, which offers good performance: a 600 MHz CPU, 128 MB DDR RAM, and a built-in 2.4 GHz Wi-Fi module. Its dimensions are 100 mm × 20 mm including the servo controller function board, which drives the multiple motors on the platform with PWM signals. The processor is powerful enough to run high-level algorithms in flight, such as task allocation and simple image processing. The navigation system combines GPS and IMU into a GPS-aided INS: an extended Kalman filter (EKF) estimates the attitude and position of the UAV, compensating the IMU measurements with GPS updates at 5 Hz. Communication with the ground control system (GCS) is performed through the 2.4 GHz Wi-Fi antenna, which lets the operator check the current flight state and change the flight mode for the autonomous mission. The detailed specification of the flight system is given in TABLE 1.

B. Vision Detection System

In order to achieve the objective, a camera sensor has to be mounted on the UAV. In the past, most studies used a standard field-of-view (FOV) camera lens. Nowadays, a fish-eye lens is sometimes used so that the target can be detected continuously whether it is far away or very close. Hence, the fish-eye lens plays an important role in vision-based missions.

TABLE 2 Detailed specification of the 1.4 mm fish-eye lens

Focal length: 1.4 mm
Weight: 135 g
FOV (1/2'' sensor): 185° × 185°

In order to detect the target over a wide view, we use the fish-eye lens shown in Fig. 3. A fish-eye lens is an ultra-wide-angle lens that produces strong visual distortion intended to create a wide panoramic or hemispherical image; its detailed specification is given in TABLE 2. The camera is as important as the lens. Cameras have recently been developed with a USB interface, so we can obtain digital image data without an analog-to-digital device such as a frame grabber. Considering the constraints for a camera installed on a UAV, we selected the Chameleon3 USB camera made by Point Grey, a lightweight, high-resolution camera also shown in Fig. 3; its detailed specification is given in TABLE 3. In order to process the raw images from the camera, we need an embedded computer that can be carried on the UAV. We selected one of the best available embedded computers, the Jetson TK1 board from NVIDIA. This board has a single chip containing a 2.33 GHz quad-core ARM Cortex-A15 CPU and NVIDIA's 192-core Kepler GK20a GPU, capable of delivering over 300 gigaflops. Besides the high-performance processor, it also includes 2 GB of DDR3L 933 MHz DRAM and 16 GB of fast eMMC storage.

TABLE 3 Detailed specification of the Chameleon3 camera

Size: 44 × 35 × 19.5 mm
Weight: 55 g (without mount)
Resolution: 1288 × 964
Frame rate: 30 FPS

Fig. 4 NVIDIA Jetson TK1 board

III. INTEGRATED VISION-BASED TRACKING SYSTEM

In order to develop a system for vision-based landing on a moving vehicle, an integrated system is required: one part is the flight control system and the other is the image processing system. In this section, we define the work done in each system and how we integrate the two systems on one UAV platform.

A. Image Processing System

The objective of this research is landing on a moving vehicle. In order to track the moving target, we need an image processing algorithm to detect the marker on the target. In this section, we introduce the image processing procedure for identifying the color-based marker in the image using OpenCV functions [11]. When a raw image is received from the camera sensor, the first step is resizing it to reduce the computational load. Next, we reduce the noise in the image with a 5×5 Gaussian kernel; we call this the Gaussian smoothing filter. We then transform the image from RGB to HSV for image processing in the outdoor environment. HSV is the most common cylindrical-coordinate representation of points in an RGB color model; the RGB image and its HSV transform are shown in Fig. 5. We analyzed HSV images of the red marker in the outdoor environment under different times of day, light intensities, amounts of cloud cover, and so on, and obtained the ranges of hue, saturation and value that detect the marker in any of these conditions. If the parameter ranges are too loose, the image contains a lot of noise, so we build the detection algorithm with the tuned ranges to find the red marker. The next step is binarization, to extract the region of interest (ROI) around the red marker. Among the many binarization methods, the simplest is thresholding with the ranges of the three HSV parameters:

$$B(x, y) = \begin{cases} 255 & \text{if } D(x, y) > \text{threshold} \\ 0 & \text{otherwise} \end{cases} \tag{1}$$
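As a concrete illustration, the following is a minimal Python/OpenCV sketch of this preprocessing chain (resize, 5×5 Gaussian smoothing, RGB-to-HSV conversion, HSV thresholding as in Eq. (1)). The resize factor and HSV bounds are assumptions for illustration; the tuned outdoor ranges used in our tests are not listed here.

```python
import cv2
import numpy as np

def binarize_red_marker(raw_bgr: np.ndarray) -> np.ndarray:
    """Return a binary image: 255 where the red marker is, 0 elsewhere."""
    small = cv2.resize(raw_bgr, None, fx=0.5, fy=0.5)   # reduce calculation load (factor assumed)
    smooth = cv2.GaussianBlur(small, (5, 5), 0)          # 5x5 Gaussian smoothing filter
    hsv = cv2.cvtColor(smooth, cv2.COLOR_BGR2HSV)        # RGB(BGR) -> HSV for outdoor robustness
    # Red wraps around hue 0 in HSV, so threshold two hue bands and combine.
    # These bounds are placeholder values, not the tuned ranges from the paper.
    band1 = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    band2 = cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    return cv2.bitwise_or(band1, band2)                  # Eq. (1): 255 in range, 0 otherwise
```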

Fig. 6 The noise reduction by morphological filter

After the binarization process, noise still remains in the processed image. This noise can be critical because it shifts the detected target position in the image plane, so the background noise must be removed for precise detection. That is why we use a morphological filter in the detection algorithm. Morphological filtering is widely used to reduce noise in binary images [11]. It is a non-linear operation related to the 'morphology' (shape) of features in an image, and it uses a structuring element (kernel), a small binary matrix in which each pixel has a value of zero or one. The result of the morphological filter is shown in Fig. 6. The camera geometry for the fish-eye lens is shown in Fig. 7; the fish-eye lens distorts the image heavily, which is why the raw image shows a circular world. After the image processing, the contour of the ROI is drawn on the image plane surrounding the red marker. We can then calculate the moments of the marker from this contour information; the mass center is obtained by dividing the first-order spatial moments by the zeroth-order moment. Finally, the vision detection system obtains the target position in the image as the mass center of the ROI.
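A minimal sketch of these noise-removal and localization steps follows: a morphological opening suppresses isolated background pixels, and the largest contour's spatial moments give the mass center used as the target image position. The kernel size is an assumption, and the two-value `findContours` return assumes OpenCV 4.

```python
import cv2
import numpy as np

def locate_marker(binary: np.ndarray):
    """Return the marker mass center (x, y) in pixels, or None if not found."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))   # size assumed
    clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)        # remove background specks
    contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                   # marker not detected in this frame
    largest = max(contours, key=cv2.contourArea)      # assume the marker is the biggest blob
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    # Mass center: first-order spatial moments divided by the zeroth-order moment (area).
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```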


Fig. 5 The RGB image (left) and the HSV image (right)

Fig. 7 The camera geometry using the fish-eye lens

B. Target Tracking Control System

The tracking algorithm is part of the flight control system that makes the UAV perform autonomous flight. The flight control system of the quadrotor has an attitude controller, a velocity controller and a position profile, and the guidance law relies on waypoint-based navigation. The velocity commands are calculated from the relative distance between the current position and the target position; this guidance algorithm follows [7]. Fig. 8 shows the flight trajectory geometry, where the current position is $P(x_{cur}, y_{cur}, z_{cur})$ and the destination position is $P(x_{des}, y_{des}, z_{des})$. We define $P_{lon}$ as the longitudinal position error, $P_{lat}$ as the lateral position error, and $H_e$ as the altitude error; these errors are computed from the x, y and z components in Cartesian coordinates. The relative distance geometry can be written as

$$\begin{bmatrix} P_{lat}(t) \\ P_{lon}(t) \end{bmatrix} = \begin{bmatrix} -\sin(\psi_{LOS}) & \cos(\psi_{LOS}) \\ \cos(\psi_{LOS}) & \sin(\psi_{LOS}) \end{bmatrix} \begin{bmatrix} x_e \\ y_e \end{bmatrix}, \qquad H_e(t) = z_d - z \tag{2}$$

When we calculate the x and y position errors, we use the local image position of the target detected through the camera, so the destination position can be represented with the result of the vision detection algorithm:

$$\begin{bmatrix} x_{des} \\ y_{des} \end{bmatrix} = \begin{bmatrix} x_{cur} \\ y_{cur} \end{bmatrix} + \begin{bmatrix} K_x & 0 \\ 0 & K_y \end{bmatrix} \begin{bmatrix} \cos(\psi_{head}) & -\sin(\psi_{head}) \\ \sin(\psi_{head}) & \cos(\psi_{head}) \end{bmatrix} \begin{bmatrix} x_{img} \\ y_{img} \end{bmatrix} \tag{3}$$

Here $K_x$ and $K_y$ are constants that map the image plane to the local NED coordinate frame. We can rewrite the above equations as

$$\begin{bmatrix} x_e \\ y_e \end{bmatrix} = \begin{bmatrix} K_x & 0 \\ 0 & K_y \end{bmatrix} \begin{bmatrix} \cos(\psi_{head}) & -\sin(\psi_{head}) \\ \sin(\psi_{head}) & \cos(\psi_{head}) \end{bmatrix} \begin{bmatrix} x_{img} \\ y_{img} \end{bmatrix} \tag{4}$$
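As a sketch of Eqs. (2) and (4), the detected image position can be scaled and rotated by the heading into local NED position errors, then resolved across and along the line of sight. $K_x$ and $K_y$ are left as parameters, since their values are not given here.

```python
import numpy as np

def position_errors(x_img, y_img, psi_head, psi_los, Kx, Ky):
    """Map an image-plane detection to lateral/longitudinal position errors."""
    # Eq. (4): image position -> NED position error (x_e, y_e)
    scale = np.diag([Kx, Ky])                                 # image-to-NED constants
    rot_head = np.array([[np.cos(psi_head), -np.sin(psi_head)],
                         [np.sin(psi_head),  np.cos(psi_head)]])
    xe, ye = scale @ rot_head @ np.array([x_img, y_img])
    # Eq. (2): resolve the NED error across/along the line of sight
    rot_los = np.array([[-np.sin(psi_los), np.cos(psi_los)],
                        [ np.cos(psi_los), np.sin(psi_los)]])
    p_lat, p_lon = rot_los @ np.array([xe, ye])
    return p_lat, p_lon
```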

So we can calculate $P_{lon}$ and $P_{lat}$ from the local image position of the target. Next, a damping term is required for more precise trajectory tracking, as suggested in [1]. If the target is static, the damping equation depends only on the velocity of the UAV. The relative velocity geometry is



$$\begin{bmatrix} \dot{P}_{lat}(t) \\ \dot{P}_{lon}(t) \end{bmatrix} = \begin{bmatrix} -\sin(\psi_{LOS}) & \cos(\psi_{LOS}) \\ \cos(\psi_{LOS}) & \sin(\psi_{LOS}) \end{bmatrix} \begin{bmatrix} \dot{x}_e \\ \dot{y}_e \end{bmatrix}, \qquad \dot{H}_e(t) = -\dot{z} \tag{5}$$

where $\dot{x}_e = -v_x$ and $\dot{y}_e = -v_y$.

In the case of a moving target, however, the target velocity reduces the trajectory damping term. In the equations below, $v_x'$ and $v_y'$ stand for the target speed:

$$\dot{x}_e = v_x' - v_x, \qquad \dot{y}_e = v_y' - v_y \tag{6}$$

We can estimate the target velocity from the current velocity of the UAV and the history of the target's local positions on the image plane. The relative velocity in the image frame is computed as the slope of successive image positions and then passed through a low pass filter (LPF) to reduce noise. The target velocity can be expressed as

$$\begin{bmatrix} v_x' \\ v_y' \end{bmatrix} = \begin{bmatrix} v_x \\ v_y \end{bmatrix} + \begin{bmatrix} K_x & 0 \\ 0 & K_y \end{bmatrix} \begin{bmatrix} \cos(\psi_{head}) & -\sin(\psi_{head}) \\ \sin(\psi_{head}) & \cos(\psi_{head}) \end{bmatrix} \begin{bmatrix} \dot{x}_{img} \\ \dot{y}_{img} \end{bmatrix} \tag{7}$$
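A minimal sketch of the slope-plus-LPF step in Eq. (7) follows. The first-order filter constant is an assumption; the filtered image-plane velocity would then be mapped through $K_x$, $K_y$ and the heading rotation and added to the UAV velocity as in Eq. (7).

```python
import numpy as np

class ImageVelocityEstimator:
    """Estimate target image-plane velocity from successive detections."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha            # LPF smoothing factor (assumed value)
        self.prev = None              # last sample: (t, x_img, y_img)
        self.vel = np.zeros(2)        # filtered image-plane velocity

    def update(self, t, x_img, y_img):
        if self.prev is not None and t > self.prev[0]:
            t0, x0, y0 = self.prev
            # Raw velocity = slope of the image positions over time
            raw = np.array([x_img - x0, y_img - y0]) / (t - t0)
            # First-order low pass filter to reduce detection noise
            self.vel = self.alpha * raw + (1.0 - self.alpha) * self.vel
        self.prev = (t, x_img, y_img)
        return self.vel
```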

Fig. 8 LOS-based flight trajectory tracking control geometry

Now we can calculate the trajectory damping term including the effect of the target velocity. In the guidance algorithm suggested in [7], the command accelerations are derived from the position error and the position error rate as a second-order system:

$$a_{lon}(t) = \omega_{lon}^2 P_{lon}(t) + 2\zeta_{lon}\omega_{lon}\dot{P}_{lon}(t)$$
$$a_{lat}(t) = \omega_{lat}^2 P_{lat}(t) + 2\zeta_{lat}\omega_{lat}\dot{P}_{lat}(t)$$
$$a_h(t) = \omega_h^2 P_h(t) + 2\zeta_h\omega_h\dot{P}_h(t) \tag{8}$$

In these equations, $\omega$ represents the natural frequency and $\zeta$ the damping ratio of the trajectory dynamics. The command acceleration is integrated into the command velocity as shown below:

$$V_{lon}(t) = V_{lon}(t-1) + a_{lon}(t)\Delta t$$
$$V_{lat}(t) = V_{lat}(t-1) + a_{lat}(t)\Delta t$$
$$V_h(t) = V_h(t-1) + a_h(t)\Delta t \tag{9}$$
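The following sketch shows one update of Eqs. (8)-(9) for the longitudinal channel; the lateral and height channels are analogous. The gains and control period are illustrative assumptions, not the values used in the flight tests.

```python
W_LON = 0.8    # natural frequency [rad/s] (assumed)
Z_LON = 0.9    # damping ratio (assumed)
DT = 0.02      # control period [s] (assumed)

v_lon_cmd = 0.0   # integrated longitudinal velocity command

def guidance_step(p_lon: float, p_lon_dot: float) -> float:
    """One longitudinal guidance update per control cycle."""
    global v_lon_cmd
    # Eq. (8): second-order command acceleration from error and error rate
    a_lon = W_LON**2 * p_lon + 2.0 * Z_LON * W_LON * p_lon_dot
    # Eq. (9): integrate acceleration into the velocity command
    v_lon_cmd += a_lon * DT
    return v_lon_cmd
```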

The UAV follows the target under the suggested tracking guidance. Once the UAV is located directly above the target, it starts to descend. There are two landing conditions: the distance of the target from the center of the image frame, and the current altitude of the UAV.
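A sketch of this descent logic, assuming a step-down altitude reference as described in Section IV, might look as follows. The pixel radius, altitude step, and altitude tolerance are all hypothetical thresholds.

```python
CENTER_RADIUS_PX = 40.0   # max marker offset from image center [px] (assumed)
ALT_STEP = 0.5            # discrete altitude reference step [m] (assumed)
ALT_TOL = 0.2             # tolerance for reaching the current step [m] (assumed)

def update_altitude_ref(alt_ref, alt, x_off_px, y_off_px):
    """Step the altitude reference down only while the marker stays centered."""
    centered = (x_off_px**2 + y_off_px**2) ** 0.5 < CENTER_RADIUS_PX
    reached = alt <= alt_ref + ALT_TOL          # UAV has settled at this step
    if centered and reached:
        alt_ref -= ALT_STEP                     # command the next step down
    return alt_ref
```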

C. Integrating the Two Systems

In the previous sections, we built the vision-based detection system and the target tracking control system. In order to achieve the goal of this research, the two must be integrated: the UAV tracks a reference position, and that reference position must be updated by the detection system whenever the vehicle moves. The detection system is therefore connected to the flight control system as shown in Fig. 9.

Fig. 9 Vision-based detection drone system diagram

The target image information computed on the Jetson TK1 board is transferred to the flight control system over UDP through the wired LAN port. With both systems installed on the UAV platform, the integrated system can control the UAV autonomously while carrying out the vision-based missions.
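A minimal sketch of this Jetson-to-FCC link follows. The address, port, and packet layout are assumptions; the paper only states that the protocol is UDP over the wired LAN.

```python
import socket
import struct

FCC_ADDR = ("192.168.1.10", 5005)   # hypothetical FCC address and port

def send_target(sock, x_img, y_img, vx_img, vy_img, detected):
    """Pack the detected target state into a small UDP datagram."""
    # Little-endian: 4 floats (image position and velocity) + 1 detection flag byte
    pkt = struct.pack("<4fB", x_img, y_img, vx_img, vy_img, detected)
    sock.sendto(pkt, FCC_ADDR)

# Usage: one datagram per processed frame from the image processing loop.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_target(sock, 12.0, -8.5, 0.3, 0.0, True)
```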

IV. FLIGHT TEST RESULTS

We built the integrated system from the target tracking control system and the vision-based detection system. After installing these systems on the quadrotor testbed, we carried out flight tests to verify the performance of the landing algorithm on the moving vehicle.

A. Demonstration of the Tracking and Detection Algorithm

Before performing the flight test for landing on the moving vehicle, we first verified the system on a static target. The test was conducted on a lawn at KAIST with the vision-based UAV platform and the color marker for detection; the static flight test setup is shown in Fig. 10. This experiment verifies how well the detection and tracking algorithms work in a real environment. Fig. 11 shows the raw image from the camera sensor and the post-processed image from the detection algorithm. When the vision-based UAV detects the marker, it starts to track it. If the target position in the image is located at the center of the image plane, the UAV is on the vertical axis of the marker; it then decreases its altitude and lands on the ground. The 3D trajectory recorded during the flight test is shown in Fig. 12.

Fig. 10 The vision-based landing test in the static environment


Fig. 11 The Point Grey camera raw image (left) and the post-processed image (right)

Fig. 12 3D trajectory of the flight test in local NED coordinates

Fig. 13 The local NED frame position graph (left) and velocity graph (right)

The 3D trajectory was recorded from the moment the UAV flight mode was switched to autonomous during the flight. It shows that the UAV, starting far from the marker, tracks it and then begins to land once it is located directly over the marker. The graphs in Fig. 13 show the position and velocity data from this test.

B. Moving Target Tracking Test

Tracking a moving target is a challenging task, because the vision-based UAV does not know the state of the moving target and the outdoor environment is uncertain. We prepared a truck as the moving target, with the red marker placed on the landing site. The flight test environment is shown in Fig. 14.

Fig. 14 The moving vehicle tracking flight test

The target moves at a constant speed in the East direction. The 3D trajectory graph shows that the vision-based detection UAV follows the target on the moving vehicle (as shown in Fig. 15), while its altitude gradually decreases during the flight. The landing algorithm is a discretized procedure in which the reference altitude is updated when certain conditions are satisfied, namely the current altitude of the UAV and the image position of the target, as mentioned before. This is why a step-like motion appears in the 3D trajectory. The same step-like motion appears in the position graph in Fig. 16, which is expressed in the NED coordinate frame; the red line is the reference position, and the blue line is the current position. The UAV holds its X position and changes its Y position continuously during the flight. As mentioned above, the Z reference is a step-like signal; the discontinuity in the reference near the middle of the graph is due to wind, which made the UAV sway. The velocity graph is shown in Fig. 17, where the red and blue lines have the same meaning as in the position graph. The Z velocity oscillates because of the discrete altitude reference. The Y velocity also shows some vibration, but its average is toward the moving target; the vibration in the velocity response is caused by the wind, as seen in the position graph.


Fig. 15 3D trajectory of the moving target tracking flight test


Fig. 16 The NED coordinate position graph


Fig. 17 The NED coordinate velocity graph

V. CONCLUSION

In this paper, we designed a vision-based detection UAV for landing on a moving vehicle. Achieving this objective required developing the hardware system, the target tracking control system, and the vision-based detection system. We use a small embedded computer for flight control, together with the GPS module, IMU sensor and receiver, to perform autonomous flight. The image processing hardware is the NVIDIA Jetson TK1 board, a high-performance embedded system with a 192-core GPU. We installed these systems on the S500 platform to carry out the landing mission with a fully on-board system. The vision-based detection algorithm was developed for a color-based marker. We adopted an HSV threshold image processing method because the operating environment is outdoors: the RGB image from the camera sensor is transformed into an HSV image, and a morphological filter then removes the background noise from the thresholded image. The target tracking algorithm is based on the relative distance between the UAV and the target, using the target information from the image processing computer; the relative information generates acceleration commands as in a second-order system. Finally, we built the integrated system and validated it with an outdoor flight test of autonomous landing on the moving vehicle.

ACKNOWLEDGMENT

This work was supported by a National Research Foundation of Korea (NRF) grant (No. 2012033464) funded by the Korea government (Ministry of Science, ICT and Future Planning).

REFERENCES

[1] J. O. Lee, T. S. Kang, K. H. Lee, S. K. Im, and J. K. Park, "Vision-Based Indoor Localization for Unmanned Aerial Vehicles," Journal of Aerospace Engineering, vol. 24, issue 3, pp. 373-377, Jul. 2011.
[2] G. Conte and P. Doherty, "Vision-Based Unmanned Aerial Vehicle Navigation Using Geo-Referenced Information," EURASIP Journal on Advances in Signal Processing, 387308, Jun. 2009.
[3] S. W. Cho, S. S. Huh, D. H. Shim, and H. S. Choi, "Vision-Based Detection and Tracking of Airborne Obstacles in a Cluttered Environment," Journal of Intelligent & Robotic Systems, vol. 69, issue 1-4, pp. 475-488, Jan. 2013.
[4] Y. Y. Jung, D. G. Lee, and H. C. Bang, "Close-Range Vision Navigation and Guidance for Rotary UAV," IEEE International Conference on Automation Science and Engineering, pp. 342-347, Aug. 2015.
[5] J. W. Kim, Y. D. Jung, D. S. Lee, and D. H. Shim, "Outdoor Autonomous Landing on a Moving Platform for Quadrotors using an Omnidirectional Camera," International Conference on Unmanned Aircraft Systems, pp. 1243-1256, May 2014.
[6] S. W. Cho, S. S. Huh, and D. H. Shim, "Visual Detection and Servoing for Automated Docking of Unmanned Spacecraft," Trans. Japan Society for Aeronautical and Space Sciences, Aerospace Technology Japan, vol. 12, no. APISAT-2013, pp. a107-a116, Dec. 2014.
[7] Y. D. Jung, S. W. Cho, and D. H. Shim, "A Trajectory-Tracking Controller Design Using L1 Adaptive Control for Multi-Rotor UAVs," International Conference on Unmanned Aircraft Systems, pp. 132-138, Denver, CO, Jun. 2015.
[8] K. Ling, "Precision Landing of a Quadrotor UAV on a Moving Target Using Low-cost Sensors," Master's thesis, University of Waterloo, 2014.
[9] R. Brockers, P. Bouffard, J. Ma, L. Matthies, and C. Tomlin, "Autonomous landing and ingress of micro-air-vehicles in urban environments based on monocular vision," Proc. SPIE 8031, 803111, 2011.
[10] T. G. Carreira, "Quadcopter Automatic Landing on a Docking Station," Master's thesis, Instituto Superior Tecnico (IST), Oct. 2013.
[11] R. Laganière, "OpenCV 2 Computer Vision Application Programming Cookbook," Packt Publishing, pp. 85-88, 118-124, May 2011.