
A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes

Jingxuan Sun, Boyang Li, Yifan Jiang and Chih-yung Wen *

Department of Mechanical Engineering, The Hong Kong Polytechnic University, Hong Kong, China; [email protected] (J.S.); [email protected] (B.L.); [email protected] (Y.J.)
* Correspondence: [email protected]; Tel.: +852-2766-6644

Academic Editors: Felipe Gonzalez Toro and Antonios Tsourdos
Received: 30 August 2016; Accepted: 19 October 2016; Published: 25 October 2016

Sensors 2016, 16, 1778; doi:10.3390/s16111778

Abstract: Wilderness search and rescue entails performing a wide range of work in complex environments and over large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency.

Keywords: unmanned aerial vehicle (UAV); wilderness search and rescue; target detection

1. Introduction

Wilderness search and rescue (SAR) is challenging, as it involves searching large areas with complex terrain within a limited time. Common wilderness search and rescue missions include searching for and rescuing injured humans and finding broken-down and lost cars in deserts, forests or mountains. Incidents of commercial aircraft disappearing from radar, such as the case in Indonesia in 2014 [1–3], also entail a huge search radius, and search timeliness is critical to "the probability of finding and successfully aiding the victim" [4–7]. This research focuses on applications common in eastern Asian locations such as Hong Kong, Taiwan, the southeastern provinces of mainland China, Japan and the Philippines, where typhoons and earthquakes happen a few times annually, causing landslides and river flooding that result in significant damage to houses, roads and human lives. Immediate assessment of the degree of damage and searching for survivors are critical requirements for constructing a rescue and revival plan.

UAV-based remote image sensing can play an important role in large-scale SAR missions [4–6,8,9]. With the development of micro-electro-mechanical system (MEMS) sensors, small UAVs (with a wingspan of under 10 m) have become a promising platform for conducting search, rescue and environmental surveillance missions. UAVs can be equipped with various remote sensing systems that serve as powerful tools for disaster mitigation, including rapid all-weather flood and earthquake damage assessment. Today, low-price drones allow people to quickly develop small UAVs, which have the following specific advantages:

•	Can loiter for lengthy periods at preferred altitudes;
•	Produce remote sensor data with better resolution than satellites, particularly in terms of image quality;
•	Low cost, rapid response;
•	Capable of flying below normal air traffic height;
•	Can get closer to areas of interest.

Applying UAV technology and remote sensing to search, rescue and environmental surveillance is not a new idea. Habib et al. stated the advantages of applying UAV technologies to surveillance, security and mission planning, compared with the normal use of satellites, and various technologies and applications have been integrated and tested in UAV-assisted operations [9–13]. A fact that cannot be ignored when applying UAV-assisted SAR is the number of required operators. It is claimed that at least two roles are required: one pilot who flies, monitors, plans and controls the UAV, and a second pilot who operates the sensors and information flow [14]. Practically, these two roles can be filled by a single operator, yet studies on ground robots have also suggested that a third person is recommended to monitor and protect the operator(s). Researchers have also studied the human behavior involved in managing multiple UAVs, and have found that "the span of the human control is limited" [4,14,15]. As a result, a critical challenge of applying multiple UAVs in SAR is simultaneously monitoring information-rich data streams, including flight data and aerial video. The possibility of simplifying the human roles by optimizing information presentation and automating information acquisition was explored in [4], in which a fixed-wing UAV was used as a platform and three computer vision algorithms were analyzed and compared to improve the presentation.

To automate the information acquisition, it has been suggested that UAV systems integrate target-detection technologies for detecting people, cars or aircraft. A common method of observing people is the detection of heat features, which can be achieved by applying infrared camera technology and specifically developed algorithms. In 2005, a two-stage method based on a generalized template was presented [16]. In the first stage, a fast screening procedure is conducted to locate the potential person. Then, the hypothesized location of the person is examined by an ensemble classifier. In contrast, human detection based on color imagery has also been studied for many years. One human detection method uses background subtraction, but pre-processing is required before a search mission [17]. Another method of human detection uses color images, models the human as flexible parts and then detects the parts separately [18]. A combination of both thermal and color imagery for human detection was also studied in [19].

To enhance information presentation and support humanitarian action, geo-referenced data from disaster-affected areas are expected to be produced. Numerous technologies and algorithms for generating geo-referenced data via UAV have been studied and developed. A self-adaptive, image-matching technique to process UAV video in real-time for quick natural disaster response was presented in [20]. A prototype UAV and a geographical information system (GIS) applying the stereo-matching method to construct a three-dimensional hazard map were also developed [21]. The Scale Invariant Feature Transform (SIFT) algorithm was improved in [22] by applying a simplified Förstner operator. Rectifying images on pseudo center points of auxiliary data was proposed in [23].

The aim of this study is to build an all-in-one camera-based target detection and positioning system that integrates the necessary remote sensors for wilderness SAR missions into a fixed-wing UAV.
Identification and search algorithms were also developed. The UAV system can autonomously conduct a mission, including auto-takeoff and auto-landing. The on-board searching algorithm can report victims or cars with GPS coordinates in real-time. After the mission, a map of the hazard area can be generated to facilitate further logistics decisions and rescue troop action. Despite their importance, the algorithms for producing the hazard map are beyond the scope of this paper. In this work, we focus on the possibility of using a UAV to simultaneously collect geo-referenced data and detect victims. A hazard map and point clouds are generated by the commercial software Pix4Dmapper (Pix4Dmapper Discovery version 2.0.83, Pix4D SA, Lausanne, Switzerland).

Figure 1 provides a mission flowchart.


Once a wilderness SAR mission is requested to the Ground Control System (GCS), the GCS operator designs a flight path that covers the search area and sends the UAV into the air to conduct the mission. During the flight, the on-board image processing system is designed to identify targets such as cars or victims, and to report possible targets with the corresponding GPS coordinates to the GCS within 60 m accuracy. These real-time images and approximate GPS coordinates support immediate rescue actions, such as directing the victim to wait for rescue at the current location and delivering emergency medicine, food and water. Meanwhile, the UAV transmits real-time video to the GCS and records high-resolution aerial video that can be used, once the UAV lands, in post-processing tasks such as target identification and mapping the affected area. The post-target identification is designed to report victims' accurate locations within 15 m, and the map of the affected area can be used to construct a rescue plan.

Figure 1. Flowchart of a wilderness SAR mission using the all-in-one UAV.

The remainder of this paper is organized as follows. Section 2 describes the details of the UAV system. Section 3 presents the algorithm and the implementation. Section 4 presents the tests and results, and Section 5 concludes the paper.

2. Experimental Design

The all-in-one camera-based target detection and positioning UAV system integrates the UAV platform, the communication system, the image system and the GCS. The detailed hardware construction of the UAV is introduced in this section.

2.1. System Architecture

The purpose of the UAV system developed in this study was to find targets' GPS coordinates within a limited amount of time. To achieve this, a suitable type of aircraft frame was needed. The aircraft had to have enough fuselage space to accommodate the necessary payload for the task. The vehicle configuration and material had to exhibit the good aerodynamic performance and reliable structural strength needed for long-range missions. The propulsion system for the aircraft was calculated and selected once the UAV's configuration and requirements were known. Next, a communication system, including a telemetry system, was used to connect the ground station to the UAV. After adding the flight control system, the aircraft could take off and follow the designed route autonomously.

Finally, with the help of the mission system (auto antenna tracker (AAT), cameras, the on-board processing board Odroid and a gimbal), the targets and their GPS coordinates could be found. Figure 2 shows the UAV system's systematic framework, the details of which are explained in the following sub-sections. The whole system weighs 3.35 kg and takes off via hand launching.

Figure 2. Systematic framework of the UAV system.

2.2. Airframe of the UAV System

The project objective was to develop a highly integrated system capable of large-area SAR missions. Thus, the flight vehicle, as the basic platform of the whole system, was chosen first. Given the prerequisites of quick response and immediate assessment capabilities, a fixed-wing aircraft was chosen for its high-speed cruising ability, long range and flexibility in complex climatic conditions. To shorten the development cycle and improve system maintenance, an off-the-shelf commercial UAV platform, the "Talon" from the X-UAV company, was used (Figure 3). The wingspan of the Talon is 1718 mm and the wing area is 0.06 m². The take-off weight of this airframe can reach 3.8 kg.


Figure 3. Overall view of the X-UAV Talon [24].

2.3. Propulsion System

The UAV uses a Sunnysky X-2820-5 motor working in conjunction with an APC 11X5.5EP propeller. A 10,000 mAh 4-cell 20 C LiPo battery was used, and this propulsion system provides a maximum cruise time of approximately 40 min at an airspeed of 18 m/s.

2.4. Navigation System

The main component of the navigation system is the Pixhawk flight controller running the free ArduPilot Plane firmware, equipped with a GPS and compass kit, an airspeed sensor and a sonar for measuring heights below 7 m. With this navigation system, the airplane can conduct a fully autonomous mission, including auto take-off, cruise via waypoints, return to home position and auto landing, with enhanced fail-safe protection.

2.5. GCS and Data Link

The GCS works via a data link that enables the researcher to monitor or interfere with the UAV during an auto mission. Mission Planner, an open-source ground station application compatible with Windows, was installed on the GCS laptop for mission design and monitoring. An HKPilot 433 MHz 500 mW radio transmitter and receiver were installed on the GCS laptop, along with a Pixhawk flight controller. An auto antenna tracker (AAT) worked in conjunction with a 9 dBi patch antenna to provide a reliable data link within a 5-km range.

2.6. Post-Imaging Processing and Video Transmission System

The UAV system is designed with a fixed-wing aircraft flying at airspeeds ranging from 15 to 25 m/s for quicker response times on SAR missions. The ground speed may reach 40 m/s in extreme weather conditions. A GoPro HERO 4 was installed in the vehicle after considering the balance between its weight and image quality capabilities. In a searching and mapping mission, the aerial image always faces the ground. During flight, actions such as rolling, pitching or other unexpected vibrations can disrupt the camera's stability, which may lead to unclear video. A Mini 2D camera gimbal produced by Feiyu Tech Co., Ltd. (Guilin, China), powered by two brushless motors, was used to stabilize the camera (Figure 4). The camera (GoPro HERO 4, GoPro, Inc., San Mateo, CA, USA) was set to video mode with a 1920 × 1080 pixel resolution in a narrow field of view (FOV) at 25 frames per second. During the flight, an analog image signal is sent to an on-screen display (OSD) and video transmitter. With a frequency of 5.8 GHz, the aerial video can be visualized by the GCS in real-time as the high-resolution video is recorded for use during post-processing.

Figure 4. GoPro HERO 4 attached to the camera gimbal.

2.7. On-Board, Real-Time Imaging Process and Transmission System

A real-time imaging process and transmission system was set up on the UAV. The "oCam" (shown in Figure 5a), a 5-megapixel charge-coupled device (CCD) camera, was chosen as the image source for the on-board target identification system. The focal length of the camera is 3.6 mm and it has a field of view of 65°. It weighs 37 g and has a 1920 × 1080 pixel resolution at 30 frames per second. The development of the on-board image processing was based on the Odroid XU4 (Hardkernel Co., Ltd., GyeongGi, South Korea) (Figure 5b), which is a light, small, powerful computing device equipped with a 2-GHz CPU and 2 GB of LPDDR3 Random-Access Memory (RAM). It also provides USB 3.0 interfaces that increase transfer speeds for high-resolution images. The Odroid XU4 used on the UAV in this system runs Ubuntu 14.04. The details of the algorithm and implementation are discussed in Section 3. The Odroid board was connected to a 4th Generation (4G) cellular network via a HUAWEI (Shenzhen, China) E3372 USB dongle. Once the target is identified by the Odroid XU4, that particular image is transmitted through the 4G cellular network to the GCS.

Figure 5. (a) oCam [25] and (b) Odroid XU4.

3. Algorithm for and Implementation of Target Identification and Mapping

The target identification program was implemented using an on-board micro-computer (Odroid XU4) and the ground control station. The program can automatically identify and report cars, people and other specific targets.

3.1. Target Identification Algorithm

The mission is to find victims who need to be rescued, and crashed cars or aircraft. The algorithm approaches these reconnaissance problems by using the color signature. These targets create a good contrast with their backgrounds due to their artificial colors. Figure 6 shows the flowchart of the reconnaissance algorithm. The aerial images are processed in the YUV rather than the RGB color space to identify the color signatures [26]. This process can be achieved by calling the functions provided by the OpenCV libraries. Both blue and red signatures are examined.

Figure 6. Flowchart of the identification algorithm.


The crucial step of the algorithm is to find an appropriate value of Threadl. A self-adapting method was applied to the reconnaissance program. The identification includes the following steps.

Step 1: Read the blue and red chrominance values (Cb and Cr layers) of the image, and determine the maximum, minimum and mean values of the chrominance matrix. These values are then used to adapt the threshold.

Step 2: Distinguish whether existing objects are in great contrast. The distinction is processed by comparing the maximum/minimum and mean values of the chrominance. Introducing this step improves the efficiency with which the aerial video is processed, because the subsequent identification is skipped if the criteria are not met. The criteria are expressed in Equation (1):

max − mean > 30
mean − min < 30   (1)

Step 3: Determine the appropriate value of the threshold, which is given by Equation (2), where the thresholds with subscripts b and r denote blue and red, respectively. Ks is the sensitivity factor, and the program becomes more sensitive as it increases. Ks also changes with different cameras, and was set as 0.1 for the GoPro HERO 4 and 0.15 for the oCam in this study.

Threadl_b = max − (max − mean) × Ks
Threadl_r = (mean − min) × Ks + min   (2)

Step 4: Binarize the image with the threshold:

f(p) = 0 for p < Threadl; f(p) = 255 for p > Threadl   (3)

where 0 represents the black color and 255 represents the white color.

Step 5: Examine the number of targets and their sizes. The results are abandoned if there are too many targets (over 20) in a single image, because such results are typically caused by noise at the flight height of 80 m. This criterion is used because it is rare for a UAV to capture over 20 victims or cars in a single image in the wilderness. When examining the size of the targets, the results are abandoned if the suspected target has only a few or too many pixels. The criterion for the number of pixels is determined by the height of the UAV and the size of the target.

Step 6: The targets are marked with blue or red circles on the original image and reported to the GCS.
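A minimal Python/OpenCV sketch of Steps 1–4 (with the blob-counting part of Step 5) is given below, following Equations (1)–(3) as reconstructed above. The function name, default Ks value and blob handling are our own illustrative choices and not the authors' implementation.

```python
import cv2
import numpy as np

def detect_color_targets(bgr_frame, ks=0.15):
    """Self-adapting chrominance thresholding sketch (Steps 1-4, part of Step 5)."""
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)                      # Cr for red, Cb for blue

    detections = []
    for layer, tag in ((cb, "blue"), (cr, "red")):
        layer = layer.astype(np.float32)
        vmax, vmin, vmean = layer.max(), layer.min(), layer.mean()

        # Step 2: skip the layer if no strong chrominance outlier exists (Equation (1)).
        if not (vmax - vmean > 30 and vmean - vmin < 30):
            continue

        # Step 3: self-adapting threshold (Equation (2)).
        if tag == "blue":
            threadl = vmax - (vmax - vmean) * ks
        else:
            threadl = (vmean - vmin) * ks + vmin

        # Step 4: binarize (Equation (3)): 255 above the threshold, 0 below.
        binary = np.where(layer > threadl, 255, 0).astype(np.uint8)

        # Step 5 (partly): count connected blobs and reject noisy results (>20 blobs).
        n_labels, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
        blobs = [(tuple(centroids[i]), int(stats[i, cv2.CC_STAT_AREA]))
                 for i in range(1, n_labels)]
        if 0 < len(blobs) <= 20:
            detections.append((tag, blobs))
    return detections
```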

Figure 7. (a) The original image with red target in RGB color space; (b) the Cr layer of the YCbCr color space and (c) the binarized image with threshold.


Figure 7 demonstrates a test of the target identification algorithm using an aerial image with a tiny red target. Figure 7a is the original image captured from the aerial video with the target circled for easy identification. The Cr data were loaded for the red color, as shown in Figure 7b. Figure 7c shows the results of the binarized image with a threshold of 0.44 (the white spot in the upper left quadrant).

for easy identification. Cr data were loaded red color, astargets, shown inthe Figure 7b. Figure 7cto shows Before developing theThe on-board system forfor identifying method used report the the binarized image with a threshold of 0.44 (theConsidering white spot in the quadrant). on the targets the andresults theiroflocations to the GCS must be determined. all upper of theleft subsystems vehicle and the frequencies used for the data link (433 MHz), live video transmission (5.8 GHz) and 3.2. On-Board Target Identification Implementation remote controller (2.4 GHz), the on-board target identification system is designed to connect to the developing the on-board system targets, the method used Kong to report base stationBefore of a cellular network, 800–900 MHzfor in identifying the proposed testing area (Hong andtheTaiwan). targets and their locations to the GCS must be determined. Considering all of the subsystems on the The results are then uploaded to the Dropbox server. Consequently, the on-board target identification vehicle and the frequencies used for the data link (433 MHz), live video transmission (5.8 GHz) and system remote consists of four (2.4 modules: Odroid as the core hardware,system an oCam CCD camera, a GPS controller GHz), the on-board target identification is designed to connect to themodule and a dongle that connects to the 4G cellular network and provides it for the Odroid. The workflow base station of a cellular network, 800–900 MHz in the proposed testing area (Hong Kong and Taiwan). The results are then uploaded to designed the Dropbox the on-board target of the on-board target identification system, as server. shownConsequently, in Figure 8, includes three functions: identification system consists of fourreporting. modules: Odroid as the core hardware, an oCam CCD camera, Self-starting, identification and target a GPS module andisaachieved dongle thatvia connects to the 4G cellular and provides for the Odroid. when The self-starting a Linux shell script.network The program runsitautomatically The workflow of the on-board target identification system, designed as shown in Figure 8, includes Odroid is powered on. The statuses of the camera, the Internet and the GPS module are checked. After three functions: Self-starting, identification and target reporting. successfullyThe connecting allisofachieved the modules, the identification program runs a loop untilwhen the Odroid self-starting via a Linux shell script. The program runsonautomatically is powered off.is The identification program usually four frames in module a second. Odroid powered on. The statuses of the camera,conducts the Internet and the GPS are checked. After successfully all of the modules, identification program runs onas a loop until the of the During the flight, connecting the GPS coordinates of thethe aircraft are directly treated the location is powered off. The identification program usually fouraframes a second.report during targets,Odroid because the rapid report is preferable to taking the conducts time to get highlyinaccurate During thelocations flight, theofGPS the aircraft are directly treated as the location of the aerial flight. The accurate thecoordinates targets areofdiscovered post-flight using the high-resolution targets, because the rapid report is preferable to taking the time to get a highly accurate report during video taken by the GoPro camera. flight. 
The accurate locations of the targets are discovered post-flight using the high-resolution aerial When the system scans the resulting files every 30 s and packs the new results, which videoreporting, taken by the GoPro camera. are uploaded as areporting, package the instead ofscans as frames to limit time consumption, the Dropbox When system the resulting files every 30 s and packsbecause the new results, which server requiresare verification fora each file.instead The testing resultstoshow a package every 30 s is faster uploaded as package of as frames limit that time uploading consumption, because the Dropbox server requires verification for each file. The testing results show that uploading a package than uploading frame by frame. The reporting results include the images of the markedevery target and 30 sof is the faster thancoordinates. uploading frame by frame. results include the images of the marked a text file GPS These files The are reporting then stored in an external SD card that allows the target and a text file of the GPS coordinates. These files are then stored in an external SD card that GCS to quickly check the results post-flight. Figure 9 shows a truck reported by the on-board target allows the GCS to quickly check the results post-flight. Figure 9 shows a truck reported by the onidentification system. board target identification system.

Figure 8. Flowchart of the on-board target identification system.



Figure 9. A blue truck reported by the on-board target identification system, marked by the identification program with a white circle.

3.3. Post-Target Identification Implementation via Aerial Video and Flight Log

Post-target identification is conducted using the high-resolution aerial video taken by the GoPro camera and stored in the SD card, and the flight data log from the flight controller, to capture all possible targets to be rescued and obtain their accurate locations. In this section, the technical details of post-target identification are discussed.

3.3.1. Target Identification

The altitude of the flight path is carefully determined during the flight tests via the inertial-measurement unit and GPS data in the flight controller. Any targets coated with artificial colors or larger than the estimated image size (15 × 15 pixels), calculated according to the height of the UAV and the target's physical dimensions, should be reported.

Figure 10 shows an aerial image of a 0.8 m × 0.8 m blue board with a letter 'Y' on it from flight heights of 50 m, 80 m and 100 m. The height of the flight path for the later field test was accordingly determined to be lower than 80 m; otherwise, the targets would only be several pixels in the image and might be treated as noise.
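As a rough check when choosing the flight altitude, the expected pixel size of a target can be estimated from the linear image-scale relation of Section 3.3.3 (Equations (6)–(7)). The sketch below is our own helper; the FOV default is an illustrative assumption.

```python
import math

def target_pixels(target_size_m, height_m, fov_deg=65.0, pixels_across=1920):
    """Estimate how many pixels a target of a given physical size spans."""
    ground_coverage = 2.0 * height_m / math.cos(math.radians(fov_deg) / 2.0)  # Equation (6)
    meters_per_pixel = ground_coverage / pixels_across                        # Equation (7)
    return target_size_m / meters_per_pixel

# e.g. target_pixels(0.8, 80) estimates the pixel width of the 0.8 m test board at 80 m.
```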



Figure 10. The results of altitude tests with the vehicle cruising at (a) 50 m; (b) 80 m and (c) 100 m.

The main loop of the post-identification program was developed in the OpenCV environment. Similar to on-board target identification, the post-identification program loads the aerial video file and runs the algorithm in a loop with each frame. The targets are marked for the GCS operator, who engages in efficient confirmation. The flight data log and the aerial video are simultaneously synchronized to determine the reference frame number and reference shutter time. The technical details of this step are discussed in Section 3.3.2. The target image is saved as a JPEG file named with its frame number. Figure 11 shows a red target board and a green agricultural net reported by the post-identification program. This JPEG file is sent to the GPS transformation program discussed in Section 3.3.3 to better position the target.
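A sketch of this main loop is shown below. Here detector stands for any per-frame detection function (for example, the detect_color_targets sketch in Section 3.1); the output paths and drawing parameters are illustrative.

```python
import cv2

def post_identify(video_path, detector, out_dir="detections"):
    """Run the detector over every frame and save marked frames named by frame number."""
    cap = cv2.VideoCapture(video_path)
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hits = detector(frame)
        if hits:
            for tag, blobs in hits:
                color = (255, 0, 0) if tag == "blue" else (0, 0, 255)
                for (cx, cy), _area in blobs:
                    cv2.circle(frame, (int(cx), int(cy)), 30, color, 3)
            # The JPEG name carries the frame number so it can later be matched
            # against the flight log during GPS transformation.
            cv2.imwrite(f"{out_dir}/{frame_no}.jpg", frame)
        frame_no += 1
    cap.release()
```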

Figure 11. A red target board and a green agricultural net reported by the post-identification program, with both the red and blue targets marked with circles in corresponding colors.

To determine the image's frame number, we assume that the GoPro HERO 4 camera records the video with a fixed frame rate of 25 frames per second (FPS) in this study. Thus, the time interval (TI) between the target frame F in the aerial video and the reference frame can be determined by

TI = (Frame Number − Reference Frame No.) × 40 ms   (4)

and the GPS time of F is

GPS Time = Reference GPS Time + TI   (5)

where the Reference Frame No. and Reference GPS Time are determined during synchronization, as discussed in Section 3.3.2.

Once the GPS time of the target frame is determined, the altitude and GPS coordinates of the camera are determined. The yaw angle Ψ is recorded as part of the Attitude messages in the flight data log, and the corresponding Attitude message can be searched via GPS time. The Attitude messages come from an inertial-measurement unit (IMU) sensor, and the update frequencies of the Attitude and GPS messages are different. These two types of messages cannot be recorded simultaneously due to the control logic of the flight board. However, the updating frequency of the Attitude messages is much higher than that of the GPS messages, thus the Attitude message that is closest to the GPS time is treated as the vehicle's current attitude.
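A small sketch of Equations (4)–(5) and the nearest-Attitude lookup is given below, assuming the flight log has already been parsed into time-sorted (GPS time, yaw) pairs; the parsing itself is not shown.

```python
from bisect import bisect_left

FRAME_PERIOD_MS = 40  # 25 FPS video

def frame_gps_time(frame_no, ref_frame_no, ref_gps_time_ms):
    """Equations (4)-(5): GPS time of a given video frame."""
    ti = (frame_no - ref_frame_no) * FRAME_PERIOD_MS   # Equation (4)
    return ref_gps_time_ms + ti                        # Equation (5)

def nearest_attitude(gps_time_ms, attitude_log):
    """Return the yaw of the Attitude message closest in time to gps_time_ms."""
    times = [t for t, _ in attitude_log]
    i = bisect_left(times, gps_time_ms)
    candidates = attitude_log[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda tv: abs(tv[0] - gps_time_ms))[1]
```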


3.3.2. Synchronization of the Flight Data and Aerial Video

During the flight, the aerial video and flight data are recorded by the GoPro HERO 4 camera and the flight controller, respectively. It is crucial to synchronize the flight data and the aerial video to obtain the targets' geo-information for the identification and mapping of the affected areas in a rescue mission.

Camera trigger distance (DO_SET_CAM_TRIGG_DIST), a camera control command provided by the ArduPlane firmware, was introduced to synchronize the aerial video and the flight data log. DO_SET_CAM_TRIGG_DIST sets the distance in meters between camera triggers, and the flight control board logs the camera messages, including GPS time, GPS location and aircraft altitude, when the camera is triggered. Compared with commercial quad-copters, fixed-wing UAVs fly at higher airspeeds. The time interval between two consecutive images should be small enough to meet the overlapping requirement for further mapping. However, the normal GoPro HERO 4 cannot achieve continuous photo capturing at a high frequency (5 Hz or 10 Hz) for longer than 30 s [27]. Thus, the GoPro was set to work in video recording mode with a frame rate of 25 FPS. The mode and shutter buttons were modified with a pulse width modulation (PWM)-controlled relay switch, as shown in Figure 12, so that the camera can be controlled by the flight controller. The shutter and its duration are configured in the flight controller.
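For illustration, the trigger distance could also be set from a companion-computer or GCS script via MAVLink, as in the hedged pymavlink sketch below. It assumes the autopilot accepts MAV_CMD_DO_SET_CAM_TRIGG_DIST as a COMMAND_LONG; the connection string and the 50 m value are example placeholders, not the authors' configuration.

```python
from pymavlink import mavutil

# Connect to the autopilot (connection string is illustrative).
master = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
master.wait_heartbeat()

# Ask the autopilot to trigger the camera every 50 m of travelled distance.
master.mav.command_long_send(
    master.target_system,
    master.target_component,
    mavutil.mavlink.MAV_CMD_DO_SET_CAM_TRIGG_DIST,
    0,            # confirmation
    50,           # param1: trigger distance in meters (example value)
    0, 0, 0, 0, 0, 0)
```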

Figure 12. Modification of the GoPro buttons to PWM-controlled relay switch.

The camera trigger distance can be set to any distance that will not affect the GoPro's video recording; a high-frequency photo capturing command will lead to video file damage. In this study, the flight controller sends a PWM signal to trigger the camera and records the shutter times and positions in the camera messages. However, the Pixhawk records the time that the control signal is sent out, and there is a delay between the image's recorded time and its real shutter time. This shutter delay was measured to be 40 ms and was introduced to the synchronization process.

The synchronization process shown in Figure 13 is conducted after the flight. The comparison process starts with reading the aerial video and the photograph saved in the GoPro's SD card. The original captured photo was resized to 1920 × 1080 pixels because the GoPro photograph was of a nonstandard size of 2016 × 1128 pixels. During the comparison process, both the video frames and the photograph were treated as matrices with a size of 1920 × 1080 × 3, where the number 3 denotes the 3 layers of the RGB color space. The difference ε between the video frame and the photo was determined by the mean-square deviation value of (Matrix_photo − Matrix_frame). The video frame with the minimum value of ε was considered the same as the original aerial photo (Figure 14) and the number of this video frame was recorded as the Reference Frame No. (RFN). The recorded GPS time of sending the aerial photo triggering command was named the Reference GPS Time (RGT). Considering the above-mentioned 40 ms delay between sending out the command and capturing the photo, the frame at RFN was taken at the time of (RGT + 40 ms delay time). Therefore, the video is combined with the flight log.
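A sketch of the frame-matching search is shown below. File paths are illustrative, and the video frames are assumed to already be 1920 × 1080; only the reference photo is resized.

```python
import cv2
import numpy as np

def find_reference_frame(video_path, photo_path):
    """Return the frame number whose mean-square deviation from the photo is minimal (RFN)."""
    photo = cv2.imread(photo_path)
    photo = cv2.resize(photo, (1920, 1080)).astype(np.float32)

    cap = cv2.VideoCapture(video_path)
    best_frame, best_eps = -1, float("inf")
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        diff = frame.astype(np.float32) - photo
        eps = float(np.mean(diff ** 2))        # mean-square deviation
        if eps < best_eps:
            best_eps, best_frame = eps, frame_no
        frame_no += 1
    cap.release()
    return best_frame
```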


Figure 13. Flowchart for the synchronization of the aerial video and the flight data log.

Figure 14. Comparison results in the synchronization process: (a) the original photo taken by the GoPro camera and (b) the video frame captured by the synchronization program.

3.3.3. GPS Transformation to Locate Targets

Once a target with its current aircraft position is reported to the GCS, an in-house MatLab locating program is used to report the target's GPS coordinates. In this study, the position of the aircraft is assumed to be at the center of the image, because the GPS module is placed above the camera.

The coverage of an image can be estimated using the camera's field of view (FOV) [28], as shown in Figure 15. The distances in the x and y directions are estimated using Equation (6):

a = 2h / cos(FOV_X / 2)
b = 2h / cos(FOV_Y / 2)   (6)

The resolution of the video frame is set to 1920 × 1080 pixels. The scale between the distance and pixels is assumed to be a linear relationship, and is presented in Equation (7) as:

scale_x = a / 1920 = 2h / (1920 · cos(FOV_X / 2))
scale_y = b / 1080 = 2h / (1080 · cos(FOV_Y / 2))   (7)

Figure 15. Camera and world coordinates.

As Figure 16 shows, a target is assumed to be located on the ( , ) pixel in the photo, and the Asof Figure 16 shows, a target to be offset the target from the centerisofassumed the picture is located on the ( x, y) pixel in the photo, and the offset of the target from the center of the picture is ∙ # "= (m) (8) scalex · x∙ offsettarget = (8) (m) scaley · y For the transformation of a north-east (NE) world-to-camera frame with the angle of the , the rotation matrix is defined as For the transformation of a north-east (NE) world-to-camera frame with the angle of the Ψ, the rotation matrix is defined as # " cos( cos sin (Ψ )) (Ψ ) ) −−sin( C = (9) RW = (9) sin( cos( sin (Ψ ) ) cos (Ψ ))

where yawyaw angel of the aircraft. Thus, the position offsetoffset in theinworld frameframe can becan solved with whereΨ is the is the angel of the aircraft. Thus, the position the world be solved with " # PE CT P = RW offsettarget = (10) = = PN (10) Therefore, Therefore,the thetarget’s target’sGPS GPScoordinates coordinatescan canbebedetermined determinedusing using " / # = +PE / f x / GPStarget = GPScam + PN / f y


where f_x and f_y denote the distances represented by one degree of longitude and latitude, respectively.

A graphical user interface was designed and implemented in the MatLab environment to transform the coordinates with a simple 'click and run' function (Figure 17). The first step is opening the image containing the targets. The program automatically loads the necessary information for the image, including the frame number (also the image's file name), current location, camera attitude and yaw angle of the plane. The second step is to click the 'GET XY' button and use the mouse to click the target in the image. The program shows the coordinates of the target in this image. Finally, clicking the 'GET GPS' button provides the GPS coordinates reported by the program.
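The chain from pixel offset to GPS coordinates in Equations (6)–(11) can be summarized in a short MatLab sketch. The FOV values, the frame geometry and the metres-per-degree factors f_x and f_y below are illustrative assumptions (the paper does not list the calibrated values), so this is a sketch of the method rather than the in-house locating program itself.

```matlab
% Sketch of the pixel-to-GPS chain of Equations (6)-(11); not the in-house
% locating program. gps_cam and the returned gps_target are [lat lon] pairs.
function gps_target = locate_target(px, py, h, yaw_deg, gps_cam, fov_x, fov_y)
    % px, py  : target offset from the image centre (pixels)
    % h       : flight altitude above ground (m)
    % yaw_deg : aircraft yaw angle Psi (deg)
    % fov_x/y : camera field of view in the x and y directions (deg)

    % Equation (6): ground coverage of one 1920 x 1080 frame
    a = 2 * h / cosd(fov_x / 2);
    b = 2 * h / cosd(fov_y / 2);

    % Equation (7): metres per pixel
    scale_x = a / 1920;
    scale_y = b / 1080;

    % Equation (8): metric offset of the target in the camera frame
    offset = [scale_x * px; scale_y * py];

    % Equations (9)-(10): rotate the offset into the north-east world frame
    R = [cosd(yaw_deg), -sind(yaw_deg);
         sind(yaw_deg),  cosd(yaw_deg)];
    P = R' * offset;                      % P(1) = east, P(2) = north (m)

    % Equation (11): metres-per-degree factors (approximate values, assumed)
    f_y = 111320;                         % m per degree of latitude
    f_x = 111320 * cosd(gps_cam(1));      % m per degree of longitude
    gps_target = gps_cam + [P(2) / f_y, P(1) / f_x];
end
```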


Figure 16. Coordinates of the camera and world frames.

Figure 17. Graphical user interface for the GPS transformation that allows end users to access a target's GPS coordinates using simple buttons.

3.4. Mapping the Searched Area

During rescue missions following landslides or floods, the terrain features can change significantly. After target identification, the local map must be re-built to guarantee the rescue team's safety and shorten the rescue time. In this study, we provide a preliminary demonstration of a fixed-wing UAV used to assist in post-disaster surveillance. Mapping algorithms are not discussed in this paper. The commercial software Pix4D was used to generate orthomosaic models and point clouds.

To map the disaster area, a set of aerial photos and their geo-information are applied to the commercial software, Pix4D. There should be at least 65% overlap between consecutive pictures, but aiming for 80% or higher is recommended. The distance between two flight paths should be smaller than a, whose estimation in Equation (6) can be found in Section 3.3.3. A mapping image capture program is shown in Figure 18.
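As a rough illustration of how these constraints follow from Equation (6), the sketch below turns an overlap target into a maximum path spacing and a capture interval. The altitude, FOV values and ground speed are placeholders, and applying the same overlap ratio in both directions is an assumption made for this example, not a rule stated in the paper.

```matlab
% Survey-planning sketch based on the footprint of Equation (6); all
% numerical values are placeholders for illustration only.
h       = 80;     % flight altitude (m)
fov_x   = 90;     % FOV across the flight path (deg), placeholder
fov_y   = 60;     % FOV along the flight path (deg), placeholder
overlap = 0.80;   % desired overlap between neighbouring images
v       = 18;     % ground speed (m/s), placeholder

a = 2 * h / cosd(fov_x / 2);          % footprint across track (m)
b = 2 * h / cosd(fov_y / 2);          % footprint along track (m)

path_spacing  = (1 - overlap) * a;    % spacing between flight paths (< a)
photo_spacing = (1 - overlap) * b;    % spacing between consecutive photos
photo_period  = photo_spacing / v;    % required capture interval (s)

fprintf('path spacing <= %.1f m, capture a frame every %.2f s\n', ...
        path_spacing, photo_period);
```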


Figure 18. Flowchart of the mapping image capture program.

The mapping image capture program starts with GPS messages from the flight data log, with reference frame numbers and shutter times generated by the synchronization step discussed in Section 3.2. The program loads the GPS times of all of the GPS messages in the loop and calculates the corresponding frame number N in the aerial video, which equals

$$N = \frac{\mathrm{GPS\ Time} - \mathrm{Reference\ GPS\ Time}}{40\ \mathrm{ms}} + \mathrm{Reference\ Frame\ No.}$$

Then, the mapping image capture program loads the Nth frame of the aerial video and saves it to the image file. Once the mapping image capture program is complete, a series of photos and a text file containing the file names, longitude, latitude, altitude, roll, pitch and yaw are generated. The Pix4D then produces the orthomosaic model and point clouds using these two file types.
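A minimal MatLab sketch of this frame-extraction loop is given below. The log column names, file names and the reference time/frame pair are placeholders; the published synchronization and capture code (see the Supplementary Materials) is the authoritative version.

```matlab
% Sketch of the mapping image capture loop (Figure 18): map each GPS message
% to a video frame via the 40 ms frame period and save it with its geo-tags.
video     = VideoReader('aerial_video.mp4');   % 25 fps video -> 40 ms per frame
flightLog = readtable('flight_log.csv');       % assumed columns: GPSTime (ms),
                                               % Lat, Lon, Alt, Roll, Pitch, Yaw
refTime   = 123456789;                         % GPS time of the reference frame (ms), placeholder
refNo     = 1;                                 % frame number matched to refTime
numFrames = floor(video.Duration * video.FrameRate);

fid = fopen('geotags.txt', 'w');
for k = 1:height(flightLog)
    % N = (GPS Time - Reference GPS Time) / 40 ms + Reference Frame No.
    N = round((flightLog.GPSTime(k) - refTime) / 40) + refNo;
    if N < 1 || N > numFrames
        continue;                              % GPS message outside the video
    end
    frame = read(video, N);                    % load the Nth frame
    name  = sprintf('frame_%06d.jpg', N);
    imwrite(frame, name);
    fprintf(fid, '%s %.7f %.7f %.2f %.2f %.2f %.2f\n', name, ...
            flightLog.Lon(k), flightLog.Lat(k), flightLog.Alt(k), ...
            flightLog.Roll(k), flightLog.Pitch(k), flightLog.Yaw(k));
end
fclose(fid);
```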


4. Blind Tests and Results

To test the all-in-one camera-based target detection and positioning system, a blind field test was designed. A drone, a 2 m × 2 m blue or red square board and a 0.8 m × 0.8 m blue or red square board were used to simulate a crashed airplane, broken cars and injured people, respectively (Figure 19a–c). The flight tests were conducted at two test sites, the International Model Aviation Center (22°24′58.1″N 114°02′35.4″E) of the Hong Kong Model Engineering Club, Ltd. in Yuen Long town, Hong Kong and the Zengwun River (23°7′18.03″N 120°13′53.86″E) in the Xigang District of Tainan city, Taiwan. Given concerns with the limited flying area in Hong Kong, the preliminary in-sight tests were conducted in Hong Kong and the main blind out-of-sight tests were conducted in Taiwan. The flight test information is listed in Table 1. Only post-identification tests were conducted in Hong Kong. In Taiwan, no after-flight mapping was done for the first two tests (Tests 3 and 4).



Figure 19. (a) The drone simulated a crashed airplane, (b) the 2 m × 2 m blue or red target boards represented broken cars and (c) the 0.8 m × 0.8 m blue or red target boards represented injured people to be rescued.

Table 1. Basic information for flight tests.

Flight Test   Test Site    Flight Time (min)   Real-Time Identification   Post-Identification   Mapping
Test 1        Hong Kong    15:36               ×                          √                     ×
Test 2        Hong Kong    3:05                ×                          √                     ×
Test 3        Taiwan       13:23               √                          √                     ×
Test 4        Taiwan       17:41               √                          √                     ×
Test 5        Taiwan       17:26               √                          √                     √
Test 6        Taiwan       16:08               √                          √                     √
Test 7        Taiwan       16:23               √                          √                     √
Test 8        Taiwan       17:56               √                          √                     √

Figure 20a,b shows the search site and its schematic in Hong Kong. The search path repeated the square route due to the limited flight area. The yellow path in Figure 20a is the designed mission path and the purple line indicates the real flight path of the vehicle. For the tests in Taiwan, there were two main search areas (A and B) along the bank of the Zengwun River in the Xigang District of Tainan city, Taiwan, as shown in Figure 20c. The schematics of the designed search route and areas are depicted in Figure 20d. The maximum communication distance was 3 km and the width of the flight corridor was 30 m. This width was intended to test the stability of the UAV and the geo-fencing function of the flight controller. If the UAV flies outside the corridor, it is considered to have crashed. After the flight performance tests, the UAV flew inside the corridor and was proven stable. An unknown number of targets were placed in search areas A and B by an independent volunteer before every test. The search team then conducted the field tests and tried to find the targets. The test results are discussed in the following sections.



Figure 20. (a) Test route in Hong Kong; (b) schematics of the designed route in Hong Kong; (c) search areas A and B for blind tests in Taiwan and (d) schematics of the designed search route and areas in Taiwan.

4.1. Target Identification and Location

Post-target identification processing was conducted in all eight flight tests to assess the identification algorithm. The post-identification program ran on a laptop equipped with an Intel Core i5-2430M CPU and 8 GB RAM. The testing results are shown in Table 2. Note that the post-identification program missed only two targets across all of the tests.

Table 2. Post-target identification results.

Flight Test   Resolution    Flying Altitude   Flight Time (min)   Targets   Identified Targets   Total Post-Target Identification Time (min)
Test 1        1920 × 1080   80                15:36               3         2                    11:08.6
Test 2        1920 × 1080   80                3:05                2         2                    02:46.3
Test 3        1920 × 1080   80                13:23               3         3                    12:57.1
Test 4        1920 × 1080   80                17:41               3         2                    14:04.6
Test 5        1920 × 1080   80                17:26               3         3                    13:23.9
Test 6        1920 × 1080   80                16:08               3         3                    11:45.1
Test 7        1920 × 1080   80                16:23               6         6                    12:16.9
Test 8        1920 × 1080   75                17:56               6         6                    14:18.3

Taking test 7 as an example, 6/6 targets were found by the identification system, as shown in Figure 21, including a crashed aircraft, two crashed cars and three injured people. Note that in Figure 21g the target board, representing the injured people, was folded by gusts of wind to the extent that it is barely recognizable. Nevertheless, the identification system still reported this target, confirming its reliability. The locating error of 5 targets was less than 15 m, as shown in Table 3 (having met the requirements discussed in Section 1). The targets and their locations were reported in 15 min.

Table 3. Locating results of flight test 7.

Target      Latitude (N)   Longitude (E)   Error
Red Z       23.114536°     120.213111°     2.8 m
Red Plane   23.111577°     120.211898°     13.9 m
Blue I      23.110889°     120.210819°     1.6 m
Blue V      23.113637°     120.210463°     0.8 m
Blue J      23.122189°     120.223206°     11.3 m
Red Q       23.117840°     120.225225°     4.8 m
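The locating error in Table 3 is, in essence, the ground distance between the reported coordinates and the surveyed position of the board. The paper does not show its evaluation code, but a comparison of that kind, reusing the metres-per-degree factors of Equation (11), could be sketched as follows (the ground-truth coordinates here are placeholders):

```matlab
% Sketch only: ground distance between a reported and a surveyed position.
reported = [23.114536, 120.213111];   % [lat lon] reported for target Red Z
truth    = [23.114561, 120.213105];   % placeholder ground-truth coordinates

f_lat = 111320;                        % m per degree of latitude (approx.)
f_lon = 111320 * cosd(truth(1));       % m per degree of longitude (approx.)

dn  = (reported(1) - truth(1)) * f_lat;   % north error (m)
de  = (reported(2) - truth(2)) * f_lon;   % east error (m)
err = hypot(dn, de);                      % locating error (m)
fprintf('locating error: %.1f m\n', err);
```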



Figure 21. (a) The locations of six simulated targets; (b) the original image saved by the identification program with target drone; (c) designed target (blue board with letter V) represents an injured person; (d) designed target (blue board with letter J) represents an injured person; (e) designed target (red board with letter Q) represents a crashed car; (f) designed target (red board with letter Z) represents a crashed car and (g) designed target (small blue board) represents an injured person. The board was blown over by the wind.

In addition to the designed targets, the identification program reported real cars/tracks, people, boats and other objects. The percentages of each type of target are shown in Figure 22. The large amount of other targets is due to the nature of the search area. The testing site is a large area of cropland near a river, and the local farmers use a type of fertilizer that is stored in blue buckets and they use green nets to fence in their crops. These two item types were reported, as shown in Figure 23. However, these results can be quickly sifted through by the GCS operator. The identification program still reduces the operator's work load, and the search mission was successfully completed in 40 min, beginning when the UAV took off and ending when all of the targets had been reported.


(Figure 22a pie chart: Other 67%, Cars/Tracks 15%, Simulated Targets 10%, Ships/Boats 6%, People 2%.)

Figure 22. (a) Composition of reporting targets; (b) a person on the road; (c) a red car and (d) a red boat reported by the identification program.

Figure 23. The other reporting targets: (a) a blue bucket and (b) green nets.

In tests 3–8, both on-board real-time processing and post-processing were conducted, and the results are shown in Figure 24. Note that the performance of the post-target identification is better than that of real-time onboard target identification, due to the higher resolution of the image source. Nevertheless, the on-board target identification system still reported more than 60% of the targets and provided an efficient real-time supplementary tool for the all-in-one rescue mission. A future study will be conducted to improve the success rates of on-board target identification systems.

4.2. Mapping

To cover the whole search area, the flight plan was designed as shown in Figure 25. The distance between two adjacent flight paths is 80 m. The total distance of the flight plan is 20.5 km, with a flight time of 18 min. The turning radius of the UAV was calculated to be 50 m for bank angles no larger than 35°. Thus, as shown in Figure 25b, the flight plan was designed with a 160-m turning diameter, while the gap between the two flight paths remained 80 m to ensure overlapping and complete coverage.

After the flight, the mapping image capture program developed in this study was applied to capture the images from the high-resolution video and process the flight data log. A total of 2200 photos were generated and applied to Pix4D, and the resulting orthomosaic model and point clouds are shown in Figure 26. The missing part is due to the strong reflection on the water's surface resulting in mismatched features.


Figure 24. Target identification results of real-time processing and post-processing.

Figure 25. (a) Overall flight plan for the search mission, (b) flight plan for search area B (the turning diameter reaches 160 m to ensure the flight performance while the distance between the two flight paths remains 80 m, guaranteeing full coverage and overlap) and (c) flight plan for search area A.


Figure 26. (a) Orthomosaic model of the testing area and (b) point clouds of the search area.

5. Conclusions

In this study, a UAV system was developed, and its ability to assist in SAR missions after disasters was demonstrated. The UAV system is a data acquisition system equipped with various sensors to realize searching and geo-information acquisition in a single flight. The system can reduce the cost of large-scale searches, improve the efficiency and reduce end-users' workloads.

In this paper, we presented a target identification algorithm with a self-adapting threshold that can be applied to a UAV system. Based on this algorithm, a set of programs was developed and tested in a simulated search mission. The test results demonstrated the reliability and efficiency of this new UAV system.

A further study will be conducted to improve the image processing in both onboard and post target identification, focusing on reducing the unexpected reporting targets. A proposed optimization method is to add an extra filtration process to the GCS to further identify the shape of the targets. This proposed method will not increase the computational time of the onboard device significantly. It is a simple but effective method concerning the limited CPU capability of an on-board processor. Generally speaking, most commercial software is too comprehensive to be used in the on-board device. Notably, the limitation of the computing power becomes a minor consideration during post-processing, since powerful computing devices can be used at this stage. To evaluate and improve the performance of the targets' identification algorithm in post-processing, a further study will be conducted, including the application of parallel computing technology and comparison with advanced commercial software.

In this study, the scales of the camera and world coordinates were assumed to be linear. This assumption can result in target location errors. We tried to reduce the error by selecting the image with the target near the image center. Although the error of the current system is acceptable for a search mission, we will conduct a further study to improve the location accuracy. Lidar will be installed to replace the sonar, and a more accurate relative vehicle height will be provided for auto-landing. Also, in the future, the vehicle will be further integrated to realize the 'Ready-to-Fly' stage for quick responses in real applications.


Supplementary Materials: The following is available online at https://www.youtube.com/watch?v=19_-RyPp93M. Video S1: A Camera-Based Target Detection and Positioning System for Wilderness Search and Rescue using a UAV. https://github.com/jingego/UAS_system/tree/master/Image%20Processing. Source Code 1: MatLab Code of targets identification. https://github.com/jingego/UAS_system/blob/master/Mapping_preprocess/CAM_clock_paper_version.m. Source Code 2: MatLab Code of synchronization.

Acknowledgments: This work is sponsored by the Innovation and Technology Commission, Hong Kong under Contract No. ITS/334/15FP. Special thanks to Jieming Li for his help in building the image identification algorithm of this work.

Author Contributions: Jingxuan Sun and Boyang Li designed the overall system. In addition, Boyang Li developed the vehicle platform and Jingxuan Sun developed the identification algorithms, locating algorithms and post image processing system. Yifan Jiang developed the on-board targets identification. Jingxuan Sun and Boyang Li designed and performed the experiments. Jingxuan Sun analyzed the experiment results and wrote the paper. Chih-yung Wen is in charge of the whole project management.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:

UAV: Unmanned Aerial Vehicle
SAR: Search and Rescue
GCS: Ground Control System
AAT: Auto Antenna Tracker
OSD: On Screen Display
FOV: Field of View
FPS: Frame Per Second

References

1. Indonesia AirAsia Flight 8501. Available online: https://en.wikipedia.org/wiki/Indonesia_AirAsia_Flight_8501 (accessed on 20 October 2016).
2. QZ8501: Body of First Victim Identified. Available online: http://english.astroawani.com/airasia-qz8501news/qz8501-body-first-victim-identified-51357 (accessed on 20 October 2016).
3. AirAsia Crash Caused by Faulty Rudder System, Pilot Response, Indonesia Says. Available online: https://www.thestar.com/news/world/2015/12/01/airasia-crash-caused-by-faulty-rudder-system-pilot-response-indonesia-says.html (accessed on 20 October 2016).
4. Goodrich, M.A.; Morse, B.S.; Gerhardt, D.; Cooper, J.L.; Quigley, M.; Adams, J.A.; Humphrey, C. Supporting wilderness search and rescue using a camera-equipped mini UAV. J. Field Robot. 2008, 25, 89–110. [CrossRef]
5. Goodrich, M.A.; Cooper, J.L.; Adams, J.A.; Humphrey, C.; Zeeman, R.; Buss, B.G. Using a mini-UAV to support wilderness search and rescue: Practices for human-robot teaming. In Proceedings of the 2007 IEEE International Workshop on Safety, Security and Rescue Robotics, Rome, Italy, 27–29 September 2007.
6. Goodrich, M.A.; Morse, B.S.; Engh, C.; Cooper, J.L.; Adams, J.A. Towards using unmanned aerial vehicles (UAVs) in wilderness search and rescue: Lessons from field trials. Interact. Stud. 2009, 10, 453–478.
7. Morse, B.S.; Engh, C.H.; Goodrich, M.A. UAV video coverage quality maps and prioritized indexing for wilderness search and rescue. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction, Osaka, Japan, 2–5 March 2010.
8. Doherty, P.; Rudol, P. A UAV search and rescue scenario with human body detection and geolocalization. In Proceedings of the Australasian Joint Conference on Artificial Intelligence, Gold Coast, Australia, 2–6 December 2007.
9. Habib, M.K.; Baudoin, Y. Robot-assisted risky intervention, search, rescue and environmental surveillance. Int. J. Adv. Robot. Syst. 2010, 7, 1–8.
10. Tomic, T.; Schmid, K.; Lutz, P.; Domel, A.; Kassecker, M.; Mair, E.; Grixa, I.L.; Ruess, F.; Suppa, M.; Burschka, D. Toward a fully autonomous UAV: Research platform for indoor and outdoor urban search and rescue. IEEE Robot. Autom. Mag. 2012, 19, 46–56. [CrossRef]
11. Waharte, S.; Trigoni, N. Supporting search and rescue operations with UAVs. In Proceedings of the IEEE 2010 International Conference on Emerging Security Technologies (EST), Canterbury, UK, 6–7 September 2010.
12. Naidoo, Y.; Stopforth, R.; Bright, G. Development of an UAV for search & rescue applications. In Proceedings of the IEEE AFRICON 2011, Livingstone, Zambia, 13–15 September 2011.
13. Bernard, M.; Kondak, K.; Maza, I.; Ollero, A. Autonomous transportation and deployment with aerial robots for search and rescue missions. J. Field Robot. 2011, 28, 914–931. [CrossRef]
14. Cummings, M. Designing Decision Support Systems for Revolutionary Command and Control Domains. Ph.D. Thesis, University of Virginia, Charlottesville, VA, USA, 2004.
15. Olsen, D.R., Jr.; Wood, S.B. Fan-out: Measuring human control of multiple robots. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vienna, Austria, 24–29 April 2004.
16. Davis, J.W.; Keck, M.A. A two-stage template approach to person detection in thermal imagery. WACV/MOTION 2005, 5, 364–369.
17. Lee, D.J.; Zhan, P.; Thomas, A.; Schoenberger, R.B. Shape-based human detection for threat assessment. In Proceedings of the SPIE 5438, Visual Information Processing XIII, Orlando, FL, USA, 15 July 2004.
18. Mikolajczyk, K.; Schmid, C.; Zisserman, A. Human detection based on a probabilistic assembly of robust part detectors. In European Conference on Computer Vision, Proceedings of the 8th European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; Springer: Berlin/Heidelberg, Germany; pp. 69–82.
19. Rudol, P.; Doherty, P. Human body detection and geolocalization for UAV search and rescue missions using color and thermal imagery. In Proceedings of the 2008 IEEE Aerospace Conference, Montana, MT, USA, 1–8 March 2008.
20. Wu, J.; Zhou, G. Real-time UAV video processing for quick-response to natural disaster. In Proceedings of the 2006 IEEE International Conference on Geoscience and Remote Sensing Symposium, Denver, CO, USA, 31 July–4 August 2006.
21. Suzuki, T.; Meguro, J.; Amano, Y.; Hashizume, T.; Hirokawa, R.; Tatsumi, K.; Sato, K.; Takiguchi, J.-I. Information collecting system based on aerial images obtained by a small UAV for disaster prevention. In Proceedings of the 2007 International Workshop and Conference on Photonics and Nanotechnology, Pattaya, Thailand, 16–18 December 2007.
22. Xi, C.; Guo, S. Image target identification of UAV based on SIFT. Proced. Eng. 2011, 15, 3205–3209.
23. Li, C.; Zhang, G.; Lei, T.; Gong, A. Quick image-processing method of UAV without control points data in earthquake disaster area. Trans. Nonferrous Metals Soc. China 2011, 21, s523–s528. [CrossRef]
24. United Eagle Talon Day Fatso FPV Carrier. Available online: http://www.x-uav.cn/en/content/?463.html (accessed on 20 October 2016).
25. Hardkernel Co., Ltd. oCam: 5MP USB 3.0 Camera. Available online: http://www.hardkernel.com/main/products/prdt_info.php?g_code=G145231889365 (accessed on 20 October 2016).
26. Chen, Y.; Hsiao, F.; Shen, J.; Hung, F.; Lin, S. Application of MATLAB to the vision-based navigation of UAVs. In Proceedings of the 2010 8th IEEE International Conference on Control and Automation (ICCA), Xiamen, China, 9–11 June 2010.
27. GoPro Hero4 Silver. Available online: http://shop.gopro.com/APAC/cameras/hero4-silver/CHDHY-401EU.html (accessed on 20 October 2016).
28. Hero3+ Black Edition Field of View (FOV) Information. Available online: https://gopro.com/support/articles/hero3-field-of-view-fov-information (accessed on 20 October 2016).

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).