Advanced 3D Photogrammetric Surface Reconstruction of Extensive Objects by UAV Camera Image Acquisition

Michele Calì 1,* and Rita Ambu 2

1 Department of Electric, Electronics and Computer Engineering, University of Catania, V.le A. Doria, 6, 95125 Catania, Italy
2 Department of Mechanical, Chemical and Materials Engineering, University of Cagliari, via Marengo 2, 09123 Cagliari, Italy; [email protected]
* Correspondence: [email protected]; Tel.: +39-095-738-2400

Sensors 2018, 18, 2815; doi:10.3390/s18092815

Received: 10 August 2018; Accepted: 24 August 2018; Published: 26 August 2018

 

Abstract: This paper proposes a replicable methodology to enhance the accuracy of the photogrammetric reconstruction of large-scale objects based on the optimization of the procedures for Unmanned Aerial Vehicle (UAV) camera image acquisition. The relationships between the acquisition grid shapes, the acquisition grid geometric parameters (pitches, image rates, camera framing, flight heights), and the 3D photogrammetric surface reconstruction accuracy were studied. Ground Sampling Distance (GSD), the number of photos necessary to assure the desired overlapping, and the surface reconstruction accuracy were related to grid shapes, image rate, and camera framing at different flight heights. The established relationships allow one to choose the best combination of grid shape and acquisition grid geometric parameters to obtain the desired accuracy for the required GSD. This outcome was assessed by means of a case study related to the ancient arched brick Bridge of the Saracens in Adrano (Sicily, Italy). The reconstruction of the three-dimensional surfaces of this structure, obtained by the efficient Structure-from-Motion (SfM) algorithms of the commercial software Pix4Dmapper, supported the study by validating it with experimental data. A comparison between the surface reconstructions with different acquisition grids at different flight heights and the measurements obtained with a 3D terrestrial laser scanner and total station-theodolites allowed the accuracy to be evaluated in terms of Euclidean distances.

Keywords: accuracy; Ground Sampling Distance; Structure-from-Motion algorithms; acquisition grid optimization; Digital Surfaces Models

1. Introduction

Automated photogrammetry using UAV image acquisition for digital surface reconstruction has become more widespread in recent years. This can be attributed to the enhanced performance of UAVs [1–3] and to the development of different computer vision algorithms [4] and computational techniques, which have greatly sped up the processing and improved the quality of the reconstruction [5–7]. These techniques have been used for different purposes, including shape detection [8,9] and 3D surface reconstruction of large-scale elements, where a high number of photos is necessary, such as natural environments and geographical configurations [10–12], buildings and urban textures [13–15], archaeological sites [16,17], and industrial installations [18,19]. In many of these applications, there is an urgent need to reconstruct 3D structures from the 2D images collected by a UAV camera quickly and with great accuracy.

When a UAV is used simply as a platform to acquire images along a pre-programmed, GPS-enabled grid trajectory at a predetermined frame rate [20], it is likely that the acquisition of more images and/or their integration with other images may be necessary to obtain the required accuracy in the three-dimensional reconstruction [21]. When both the dimensions of the object being reconstructed and the accuracy increase, the computational time of the algorithms also increases significantly, thus limiting their use in high-speed reconstruction applications. Researchers have proposed improved algorithms for different situations based on early SfM algorithms [22]. A variety of SfM strategies have emerged, including incremental [23,24], hierarchical [25], and global [26–28] approaches. Actually, the procedures developed in the computer vision community have focused more on the speed of the implemented procedures and on the success in image orientation. To the knowledge of the authors, no scientific work up to now has presented systematic data concerning the improvement of results in a photogrammetric reconstruction against different acquisition grid shapes and acquisition grid geometric parameters (pitches, image rates, camera framing, flight heights).

The goal of the research was to determine the relationship between acquisition grid shapes, acquisition grid geometric parameters, and the accuracy obtained in the 3D reconstruction of large objects' surfaces. The parameters investigated were therefore the grid geometric parameters: pitches, image rates, camera framing, and flight heights. The three simplest types of grid shapes (rectangular, elliptical, and cylindrical) were compared with each other by studying, for each of them, the influence of pitches, image rates, camera framing, and flight heights on the accuracy of the surface reconstruction. The study highlighted how much Ground Sampling Distance (GSD), the number of photos necessary to assure the desired overlapping, and the surface reconstruction accuracy were related to grid shapes, image rate, and camera framing at different flight heights (15 m, 20 m, 30 m, 40 m, and 50 m).

The case study of the Bridge of the Saracens in Adrano (Sicily), a valuable example of Roman architecture characterized by an elongated longitudinal shape, geometric singularities, multiple and variously inclined features, and different lighting levels, offered relevant results referring to the examined relationships, supporting the methodology by validating it with experimental data. Images, obtained by a CMOS 12 MP sensor, were used in the Pix4Dmapper version 3 commercial software to generate dense point clouds. Function Based Method (FBM) [29] and Area Based Matching (ABM) [30,31] algorithms were employed to evaluate the degree of overlapping in the images acquired. Some cutting-edge matching techniques with binary descriptors were used to quickly and accurately match keypoints. The quality of the Digital Surfaces Models (DSM) obtained from UAV image acquisitions was evaluated by comparing the photogrammetric reconstructions to the data acquired by a 3D laser scanner and total station-theodolites.

The paper is structured into five further sections, excluding this introduction. Section 2 is devoted to a description of the materials and methods used. In Section 3, the process of image acquisition and the surface reconstruction models are illustrated. Section 4 describes the evaluation of the reconstruction accuracy. In Section 5, by means of a statistical synthesis of the surface reconstruction data, the relationship between acquisition grid shapes, acquisition grid parameters, and accuracy in 3D reconstruction is analyzed and discussed.
Final considerations and conclusions are drawn in Sections 5 and 6, respectively.

2. Materials and Methods

The fieldwork involved the following steps: acquisition, reconstruction, and analysis sessions. Initially, the aerial photo shooting was performed with different grid geometric parameters and configurations. For the aerial photogrammetric acquisition, an amateur UAV hexacopter with LiPo 4S cells (4000 mAh, 14.4 V, 57.6 Wh; over 20 min autonomy) was used (Figure 1a,b).

Figure 1. (a) UAV platform; (b) batteries and camera; (c) gimbal and camera; (d) LCD screen on the radio control; (e) laser scanner Konica Minolta 9v-I; (f) Geodimeter 480 total station-theodolites.

The control board was an Ardupilot APM 2.6 with Arducopter 3.1.5 flying software and a PC Mission Planner ground station. The board was equipped with a gyroscope, an accelerometer, a magnetometer, and a barometer, which supplied the processor with 3D data concerning position and acceleration, and with an electric speed controller (ESC) HobbyKing F30 (HK, Hong Kong, China) for the brushless motor.

An action camera GoPro Hero 4 (Woodman Labs, San Francisco, CA, USA) Black Edition (Figure 1c) was positioned beneath the drone on a 'Gimbal' support. This support allowed the camera to rotate along three axes (pitch, roll, yaw), controlled by a digital board and manoeuvred by brushless motors which could dampen any drone shift, keeping the camera motionless. The camera had a CMOS 12 MP 1/2.9" solid-state sensor which could even sense electromagnetic data, so it was also possible to check the radiometric data in the image pixels. A video transmitter provided real time shots on a 7" LCD monitor incorporated into the radio control unit (Figure 1d). The UAV platform and camera characteristics are summarized in Tables 1 and 2, respectively.

Table 1. UAV Platform.

Technical Specification    Value/Typology
Frame                      Hexacopter
Engine                     T-Motors 2216 (RoHS, Hong Kong, China)
Engine size (mm)           Φ27.8 × 34
Engine weight (g)          75
Idle current @10 V (A)     0.04
Batteries                  LiPo 4S—4000 mAh
Max power (Wh)             57.6
Rotors                     Nylon 10 × 5 pitch
GPS                        uBlox LEA6H
Flying software            Arducopter 3.1.5
UAV weight (g)             1120

Table 2. GoPro Hero 4 Black Edition.

Parameter               Value
Sensor                  CMOS 12 MP 1/2.9"
Focal length Fl (mm)    15.5
Sensor width Sw (mm)    16.8
Sensor length Sl (mm)   22.4
ISO sensitivity         80–6400
Lens range              f/2.0–f/5.9
Burst shooting (fps)    2.3
Weight (g)              198

GPS LEA6H (uBlox, Thalwil, Switzerland) with Ground Station PC Mission Planner software (version 1) converted a discrete number of points into geo-referenced coordinates (GRC) by generating waypoint acquisition grids. In this way, it was possible to define different configurations of 3D grids. From each GRC, an image was acquired [32]. Figure 2a shows an example of a rectangular waypoint acquisition grid at a flight height of 40 m. The numbers in the figure indicate the points in which the drone started, finished, and changed its trajectory. Figure 2b shows, instead, an elliptical waypoint acquisition grid at a flight height (hv) of 30 m.
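As an illustration of how such a waypoint grid can be assembled from the acquisition pitches, the sketch below builds a rectangular boustrophedon ("lawn-mower") grid in local coordinates; it is a minimal example under assumed conventions, not the Mission Planner implementation, and uses the 15 m RGVC values that appear later in Table 4.

```python
# Minimal sketch of a rectangular waypoint grid (names illustrative).
import numpy as np

def rectangular_grid(width, length, pt, pl, hv):
    """Waypoints (x, y, z) covering a width x length area at height hv,
    with transversal pitch pt and longitudinal pitch pl (all in metres,
    local East-North-Up coordinates)."""
    xs = np.arange(0.0, width + pt / 2, pt)    # strip positions
    ys = np.arange(0.0, length + pl / 2, pl)   # shots along each strip
    waypoints = []
    for i, x in enumerate(xs):
        row = ys if i % 2 == 0 else ys[::-1]   # alternate direction per strip
        waypoints += [(x, y, hv) for y in row]
    return waypoints

wps = rectangular_grid(width=14.8, length=73.5, pt=3.7, pl=4.9, hv=15.0)
print(len(wps), "waypoints")  # -> 80, matching n. GRC for RGVC at 15 m
```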

Figure 2. Waypoint acquisition grids: (a) Rectangular at hv = 40 m; (b) Elliptical at hv = 30 m; (c) Cylindrical at hv = 40 m.

The commercial software Pix4Dmapper was used to create a polygonal mesh of the surface from the image collections captured by UAVs. The method employed for mosaicking images was Bundle Block Adjustment (BBA) with SfM algorithms [33]. Each image has six exterior parameters: the camera's position along with its roll, pitch, and yaw angles, which mapped each scene point (x, y, z) to the corresponding image point (x', y'). The parameters of these equations were as follows: (xc, yc, zc) was the camera's position and (mij) was the 3 × 3 rotation matrix defined by the roll, pitch, and yaw angles of the camera. There were more fixed parameters: (xp, yp) was the principal point, whereas fx and fy were focal length ratios. The focal points and focal lengths, along with the distortion and radius of the radial lens, were determined by a calibration procedure before each acquisition flight.
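Written out with the parameters just listed, this mapping corresponds to the standard collinearity equations (a textbook form shown here for reference, omitting the lens distortion terms; it is not quoted from the Pix4Dmapper documentation):

$$x' = x_p - f_x\,\frac{m_{11}(x-x_c)+m_{12}(y-y_c)+m_{13}(z-z_c)}{m_{31}(x-x_c)+m_{32}(y-y_c)+m_{33}(z-z_c)}, \qquad y' = y_p - f_y\,\frac{m_{21}(x-x_c)+m_{22}(y-y_c)+m_{23}(z-z_c)}{m_{31}(x-x_c)+m_{32}(y-y_c)+m_{33}(z-z_c)}$$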
By means of the described method, 20 different polygonal meshes were generated in correspondence of the 20 different waypoint acquisition grids obtained by combining three different acquisition grid shapes (rectangular, elliptical, and cylindrical), five flight heights (15 m, 20 m, 30 m, 40 m, and 50 m), and different camera framing. The relationship between acquisition grid shapes and acquisition grid geometric parameters, as well as GSD and 3D reconstruction accuracy at a certain image overlap value, was determined.

For rectangular camera grid acquisitions, different camera framing orientations were used, obtaining two different image collection sets. The first set of images was acquired by vertically oriented aerial photo shooting, which will hereinafter be referred to as the "Rectangular Grid with Vertical Camera" (RGVC). The RGVC was capable of generating orthomosaic image collections. The second image collection set was acquired by a camera oriented differently for each image using a gimbal and action camera, which will hereinafter be referred to as the "Rectangular Grid with Oscillating Camera" (RGOC).

The image collection sets in elliptical and cylindrical camera grid acquisitions were made exclusively by a camera oriented differently for each image, and will hereinafter be referred to as the "Elliptical Grid" (EG), referring to the first acquisition grid, and the "Cylindrical Grid" (CG), referring to the second acquisition grid.

In the case study of the bridge surface reconstruction, the accuracy of the twenty surface reconstructions obtained was evaluated against the data acquired using a 3D laser scanning device Konica Minolta 9v-I (Konica Minolta, Ramsey, NJ, USA), precision ≤2 mm (Figure 1e), and a total station Geodimeter 480 (Geoglobex, Monza, Italy), distance accuracy ≤3 mm (Figure 1f).

3. Image Acquisition and Surface Reconstructions

3.1. Image Acquisition

In the case study of the bridge surface reconstruction, the aerial image acquisition phases are illustrated as follows. Aerial image acquisitions were made with RGVC, RGOC, EG, and CG at five flight heights: 15 m, 20 m, 30 m, 40 m, and 50 m. The acquisition pitches p (m) in the grids forming the waypoints (Figure 3) were a function of the required Ground Sampling Distance (GSD) (cm/pixel) and overlap. They were calculated according to the following Equation (1) [34]:

$$p = \frac{ImW_p \times GSD}{100} \times (1 - \mathrm{overlap}) \qquad (1)$$

where ImWp was the image width (pixels). When the acquired images had a different length and width (ImL and ImW), it was necessary to define two different pitches to obtain a constant value of the overlap along the two orthogonal directions in the acquisition grids (Width Overlap and Length Overlap). In the RGVC, considering the longitudinal extension as the reference and positioning the grids as shown in Figure 2a, it was possible to determine the longitudinal pitch (pl) and, orthogonal to the first, a transversal pitch (pt), which ensured a constant overlap value of 66% (Figure 3). Figure 3 highlights three areas, represented in blue, orange, and green, corresponding to different acquired images and to the zone of their overlap, which defines the length and width of overlapping. The dots represent the subsequent positions of the UAV from which the three images were acquired. In the present study, in which the image dimensions in pixels (ImLp and ImWp) were 4000 pixels and 3000 pixels, respectively, the following acquisition pitch values were established:

$$p_l = \frac{ImL_p \times GSD}{100} \times (1 - \mathrm{overlap}) = \frac{4000 \times GSD}{100} \times 0.34 \qquad (2)$$

$$p_t = \frac{ImW_p \times GSD}{100} \times (1 - \mathrm{overlap}) = \frac{3000 \times GSD}{100} \times 0.34 \qquad (3)$$

where GSD (cm/pixel) was calculated according to the following Equation (4) as a function of hv:

$$GSD = \frac{h_v \times S_w \times 100}{F_l \times ImW} \qquad (4)$$

In this equation, Sw was the sensor width and Fl was the focal length (see Table 2). Therefore, in RGVC, an exactly constant overlap value of 66% along the two orthogonal directions was obtained (Table 3a).
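A minimal numeric sketch of Equations (1)–(4) follows, with the Table 2 sensor values and the stated image size plugged in; since the absolute GSD values depend on the unit conventions adopted by the authors, the snippet is illustrative rather than a reproduction of Table 3.

```python
# Minimal sketch of Equations (1)-(4); constants from Table 2 and the text.
SW, FL = 16.8, 15.5        # sensor width and focal length (mm), Table 2
IMW_P, IML_P = 3000, 4000  # image width and length (pixels)
OVERLAP = 0.66             # target overlap along both directions

def gsd(hv, imw=IMW_P):
    """Eq. (4): Ground Sampling Distance (cm/pixel) at flight height hv (m)."""
    return hv * SW * 100 / (FL * imw)

def pitches(g):
    """Eqs. (2)-(3): transversal and longitudinal pitches (m) for a GSD g."""
    pt = (IMW_P * g / 100) * (1 - OVERLAP)
    pl = (IML_P * g / 100) * (1 - OVERLAP)
    return pt, pl

# With the tabulated GSD of 0.36 cm/pixel (RGVC at 15 m), this reproduces
# the Table 3a pitches:
print([round(p, 1) for p in pitches(0.36)])   # -> [3.7, 4.9]
```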

Table 3. (a) GSD, overlaps, and pitches in RGVC; (b) GSD, overlaps, and pitches in RGOC; (c) GSD, overlaps, and pitches in EG; (d) GSD, overlaps, and pitches in CG.

(a) RGVC

hv (m)   GSD (cm/pixel)   ImW (m)   ImL (m)   W. Over. (%)   L. Over. (%)   pt (m)   pl (m)
15       0.36             10.8      14.5      66.0           66.0           3.7      4.9
20       0.54             16.3      21.7      66.0           66.0           5.5      7.4
30       0.90             27.1      36.1      66.0           66.0           9.2      12.3
40       1.26             37.9      50.6      66.0           66.0           12.9     17.2
50       1.63             48.8      65.0      66.0           66.0           16.6     22.1

(b) RGOC

hv (m)   ϑr (°)   GSD (cm/pixel)   ImW (m)   ImL (m)   W. Over. (%)   L. Over. (%)   pt (m)   pl (m)
15       0.0      0.36             10.8      14.5      66.0           66.0           3.7      4.9
15       4.0      0.36             10.9      14.5      66.0           66.1           3.7      4.9
15       25.5     0.40             12.0      16.0      66.0           69.3           4.1      4.9
20       0.0      0.54             16.3      21.7      66.0           66.0           5.6      7.5
20       9.9      0.55             16.5      22.0      66.0           66.0           5.6      7.5
20       31.2     0.63             19.0      25.3      66.0           70.5           6.5      7.5
30       4.0      0.93             28.0      37.3      66.0           66.0           9.5      12.3
30       25.6     1.00             30.0      40.1      66.0           68.3           10.2     12.3
40       6.1      1.32             39.6      52.8      66.0           66.0           13.5     17.2
40       35.6     1.43             42.8      57.1      66.0           68.5           14.6     17.2
50       7.2      1.71             51.2      68.3      66.0           66.0           17.4     22.1
50       28.7     1.85             55.6      74.1      66.0           68.7           18.9     22.1

(c) EG

hv (m)   ϑe (°)   GSD (cm/pixel)   ImW (m)   ImL (m)   Tan. Over. (%)   Trans. Over. (%)   ptan (m)
15       23.2     0.41             12.3      16.4      66.0             66.1               4.2
15       35.0     0.51             15.3      20.4      66.0             78.3               5.2
20       27.7     0.67             20.1      26.8      66.0             66.0               6.8
20       38.7     0.77             23.0      30.7      66.0             80.3               7.8
30       33.6     1.15             34.4      45.9      66.0             66.1               11.7
30       41.3     1.28             38.3      51.1      66.0             80.1               13.0
40       38.7     1.69             50.7      67.6      66.0             66.0               17.2
40       42.4     1.79             53.6      71.5      66.0             79.8               18.2
50       38.7     2.15             64.5      86.1      66.0             66.0               21.9
50       43.0     2.30             69.0      92.0      66.0             80.6               23.5

(d) CG

hv (m)   ϑc (°)   GSD (cm/pixel)   ImW (m)   ImL (m)   W. Over. (%)   L. Over. (%)   pt (m)   pl (m)
15       0.0      0.36             10.8      14.5      66.0           66.0           3.7      4.9
15       90.0     0.43             13.0      17.3      66.0           71.7           4.4      4.9
20       0.0      0.54             16.3      21.7      66.0           66.0           5.5      7.4
20       90.0     0.61             18.4      24.6      66.0           70.0           6.3      7.4
30       0.0      0.90             27.1      36.1      66.0           66.0           9.2      12.3
30       90.0     0.98             29.3      39.0      66.0           68.5           10.0     12.3
40       0.0      1.26             37.9      50.6      66.0           66.0           12.9     17.2
40       90.0     1.34             40.1      53.5      66.0           67.8           13.6     17.2
50       0.0      1.63             48.8      65.0      66.0           66.0           16.6     22.1
50       90.0     1.70             50.9      67.9      66.0           67.4           17.3     22.1

In RGOC, EG, and CG, the values of grid shapes and acquisition pitches p were established to ensure overlap values close to 66% along the two orthogonal directions. These values of overlap close to 66% in RGVC, RGOC, EG, and CG ensured that each keypoint was captured in at least three different shots (Figure 4), thus avoiding the risk of insufficient feature overlaps across images in post-flight image processing, as well as the subsequent risk of failure in 3D reconstruction. This result is also reported in other research projects [35] to ensure full coverage in aerial laser scanning.

Figure 3. Waypoint and overlap in RGVC at hv = 40 m.

Figure 4. Common keypoint in three sequentially captured images in EG at hv = 30 m.

In the RGOC, the overlap value equal to 66% was imposed along the transversal direction, determining the values of the transversal pitches (pt). Using longitudinal pitches (pl) equal to those of RGVC, it was checked that the value obtained along the longitudinal direction was never less than 66% (Table 3b). The gimbal rotation ϑr around the horizontal axis parallel to the longitudinal direction was synchronized with the transverse distance x (m) of the UAV platform in relation to the center line and with the flight height hv (m) according to Equation (5):

$$\vartheta_r = \tan^{-1}\left(\frac{x - w_o/2}{h_v - h_o}\right) \qquad (5)$$

where ho (m) stands for the object height and wo (m) stands for the object width (Figure 5).

In EG (Figure 6), the overlap value equal to 66% was imposed along the tangential direction (Tangential Overlap), thus obtaining the value of the tangential pitch (ptan), and it was checked that the overlap value obtained along the transversal direction (Transversal Overlap), orthogonal to the first, was never less than 66% (Table 3c). In EG, the gimbal rotation ϑe around the tangential axis to the elliptic orbit was synchronized with the minimum distance ρ from the acquired object according to the following Equation (6):

$$\vartheta_e = \tan^{-1}\left(\frac{\rho}{h_v - h_o}\right) \qquad (6)$$


Figure 5. Gimbal rotation ϑr around the horizontal axis parallel to the longitudinal direction in EG at hv = 15 m.

Figure 6. Waypoint and overlap in EG at hv = 15 m.

Equation (6) was also used to determine the values of the ellipse axes at the various flight heights. In particular, the axis lengths were chosen to obtain a value of the gimbal rotation ϑe around the tangential axis close to 30° at point A and close to 40° at point L. It was seen that these values allowed us to obtain both the overlap values (tangential and transversal) close to 66% at the same time.

In CG, the gimbal rotation ϑc around the horizontal axis parallel to the longitudinal axis was synchronized with the radial direction of the cylindrical trajectory according to the following Equation (7) (Figure 7):

$$\vartheta_c = \tan^{-1}\left(\frac{x - w_o/2}{h_v - h_o/2}\right) \qquad (7)$$
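A compact sketch of the three gimbal synchronization laws of Equations (5)–(7) is given below; the function and argument names are illustrative.

```python
# Gimbal-angle synchronization, Eqs. (5)-(7); angles returned in degrees.
from math import atan, degrees

def theta_r(x, wo, hv, ho):
    """RGOC, Eq. (5): x = transverse distance of the UAV from the center
    line (m), wo/ho = object width/height (m), hv = flight height (m)."""
    return degrees(atan((x - wo / 2) / (hv - ho)))

def theta_e(rho, hv, ho):
    """EG, Eq. (6): rho = minimum distance from the acquired object (m)."""
    return degrees(atan(rho / (hv - ho)))

def theta_c(x, wo, hv, ho):
    """CG, Eq. (7): as Eq. (5) but referenced to half the object height."""
    return degrees(atan((x - wo / 2) / (hv - ho / 2)))
```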

Figure 7. Gimbal rotation ϑc in CG at hv = 15 m.


By replacing the minimum distance from the bridge (dmin) in Equation (1) with the value hv, pitch values that ensured overlap values close to 66% were determined. In Figure 8, the waypoint grid at hv = 40 m is shown. Values of GSD, overlaps, and pitches are shown in Table 3d.

Figure 8. Waypoint in CG at hv = 40 m.

Table 4 shows the geometric parameters (grid pitches, width, and length) in RGVC, RGOC, EG, and CG. The number of photos (n. GRC) for each acquisition set needed to obtain the overlap values close to 66% is also reported.

Table 4. Geometric characteristics of RGVC, RGOC, EG, and CG.

RGVC—Rectangular Grid with Vertical Camera
hv (m)   pt (m)       pl (m)   Width (m)   Length (m)   n. GRC
15       3.7          4.9      14.8        73.5         80
20       5.5          7.4      22          81           60
30       9.2          12.3     27.6        86           32
40       12.9         17.2     38.7        103.2        28
50       16.6         22.1     49.8        110.5        24

RGOC—Rectangular Grid with Oscillating Camera
hv (m)   pt (m)       pl (m)   Width (m)   Length (m)   n. GRC
15       3.7; 4.1     4.9      15.5        73.5         80
20       5.6; 6.5     7.4      24.1        81           60
30       9.5; 10      12.3     41.7        86           40
40       13.5; 14.6   17.2     59.4        103.2        35
50       17.4; 18.9   22.1     77.1        110.5        30

EG—Elliptical Grid
hv (m)   ptan (m)     Width (m)   Length (m)   n. GRC
15       4.2; 5.2     26          80           36
20       6.8; 7.8     36          90           28
30       11.7; 13     56          110          22
40       17.2; 18.2   76          130          20
50       21.9; 23.5   …           …            …

CG—Cylindrical Grid
hv (m)   pt (m)       pl (m)   Width (m)   Length (m)   n. GRC
15       3.7; 4.4     4.9      30          73.5         208
20       5.5; 6.3     7.4      40          81           144
30       9.2; 10      12.3     60          86           88
40       12.9; 13.6   17.2     80          103.2        70
50       16.6; 17.3   22.1     …           …            …

Figure 9 shows the EG at 30 m and the RGVC at 50 m.

Figure 9. EG at hv = 30 m and RGVC at hv = 50 m.


3.2. Surface Reconstruction

Once the image collections for the reconstruction were acquired, the system almost automatically calibrated the cameras based on the exchangeable image file (EXIF) information, also reported in [7], and on the Fisheye lens camera model (Pix4D) found in the images, and the GCPs aligned them into the 3D space and produced a complete and single mesh using SfM algorithms. The reconstruction of the surface is based on the following steps (a minimal matching sketch is given after the list):

1. An algorithm searches for matching points by analyzing all the acquired images. Matching techniques with binary descriptors, together with FBM and ABM, are employed to identify features irrespective of their position, scale, and rotation. Studies on the performance of such feature descriptors are given in [36].
2. Matching points, as well as approximate values of the image position and orientation provided by Ardupilot APM 2.6, are used in BBA (Bundle Block Adjustment) [37,38] to reconstruct the exact position and orientation of the camera for every acquired image.
3. Based on this reconstruction, the matching points are verified and their 3D coordinates are calculated. The geo-reference system used is uBlox LEA6H with Groundstation PC Mission Planner software (version 1), based on GPS measurements from the UAV Ardupilot APM 2.6 during the flight.
4. Those 3D points are interpolated to form a triangulated irregular network in order to obtain a polygonal mesh of surfaces (Figure 10a,b) [39,40]. At this stage, a dense 3D model can increase the spatial resolution of the triangle structure [41].
5. The polygonal mesh of surfaces is used to project every image pixel, obtaining a mapped texture surface (Figure 10d), and to calculate the georeferenced orthomosaic [42].
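As an illustration of the binary-descriptor matching in step 1, the following is a minimal OpenCV sketch (not the paper's implementation; ORB is used here as one example of a binary descriptor, and the image paths are illustrative):

```python
# Minimal binary-descriptor keypoint matching between two acquired shots.
import cv2

img1 = cv2.imread("shot_001.jpg", cv2.IMREAD_GRAYSCALE)  # illustrative paths
img2 = cv2.imread("shot_002.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance is the natural metric for binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test keeps only distinctive correspondences.
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])
print(f"{len(good)} putative correspondences")
```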

Figure 10. Surface reconstruction: (a) polygonal mesh of surfaces; (b) polygonal mesh visualized with shade; (c) nurbs surface; (d) mapped texture surface.

In detail, from the 3D feature points calculated by the SfM, the mesh data were obtained by using Delaunay triangulation. Then, the mesh was used as an outline of the object, which was projected onto the plane of the images to get the estimated depth maps. These maps were optimized and corrected using the patch-based pixel matching algorithm. Finally, dense point cloud data were obtained by fusing these depth maps. From the detected accurate cloud points, a 3D polygonal mesh was obtained (point 4 of the previous reconstruction steps). The polygonal mesh obtained can be easily transformed, through open source algorithms, into a nurbs surface (Figure 10c) for different applications. Figure 10 shows some steps of the described procedure in the case study of the bridge surface reconstruction (CG at hv = 15 m).
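As an illustration of the triangulated irregular network of step 4, the sketch below triangulates the planimetric (x, y) coordinates of a geo-referenced cloud; scipy's Delaunay is used as a stand-in for the software's own triangulator, and the input file name is illustrative.

```python
# Minimal TIN construction from a geo-referenced point cloud (2.5D).
import numpy as np
from scipy.spatial import Delaunay

points = np.loadtxt("dense_cloud.xyz")   # illustrative file: N x 3 (x, y, z)
tri = Delaunay(points[:, :2])            # triangulate planimetric coordinates
faces = tri.simplices                    # M x 3 vertex indices of the mesh
print(f"TIN with {len(points)} vertices and {len(faces)} triangles")
```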


The flowchart relative to the reconstruction algorithm is summarized in Figure 11. Within the blocks highlighted in yellow, the equivalent open source algorithm is shown.

Figure 11. Flowchart of the main steps of the surface reconstruction algorithm.

In Reference [43], it is possible to find the information necessary to also implement the reconstruction algorithms in open source software, obtaining results similar to those of the commercial software. In the same paper, the matching features algorithm written in pseudocode can be found.

4. Reconstructions Accuracy Evaluation

The accuracy evaluation of the 20 reconstructions in the case study of the bridge was performed by measuring two reference shapes: the pounding upper surface (pus) and the south side of the north-east surveyed arch (arc) of the bridge. Using the terrestrial laser scanner and the total station-theodolites, the coordinates of 29 keypoints on the pounding upper surface (Figure 12a) and the coordinates of 11 keypoints on the south side of the north-east surveyed arch (Figure 12b) were acquired.

The choice of these two shapes, lying in two mutually orthogonal positions, allowed a proper accuracy evaluation of the reconstructions along all three dimensions. Such shapes had, indeed, a favorable spatial distribution, which allowed an accurate validation of the length, width, and height, as well as of the reconstructed profile of the arch. Six of these points (outlined with green markers in Figure 10a) were those used as GCPs.

Figure 12. (a) 29 keypoints on the pounding upper surface; (b) 11 keypoints on the south side of the north-east arch.

The partial scans were acquired by positioning the scanner's sensor plane parallel to the upper surface. The comparison was carried out using Meshlab [44] and CloudCompare [45–48] open source software. Meshlab was used to align the partial scans with the 3D model produced by Pix4Dmapper, while the CloudCompare software was employed to estimate the surface deviation between the mesh obtained with Pix4Dmapper and the point cloud obtained with 3D laser scanning.

The ICP algorithm, implemented in Meshlab, was used to align each partial view obtained with the 3D laser scanning with the Pix4Dmapper triangular mesh model. In order to compare the two data types, the cloud-to-mesh distance function offered by CloudCompare was selected, as it was considered more robust to local noise. The cloud-to-mesh distance function computed the distances between each vertex of the point cloud and the nearest triangle of the mesh surface. The distance between the two was calculated as follows: in cases where the orthogonal projection of the vertex laid inside the surface defined by a triangle, the distance between the vertex and its point of intersection on the surface was calculated. Accuracy (acc) was measured by evaluating these distances between the obtained points and the homologous points in the 3D reconstructions. For a precise accuracy evaluation, for both shapes the accuracy was evaluated by introducing the standard deviation (σ) of such differences for a typical length of the shape (Equations (8) and (9)). With reference to the pounding upper surface (pus), the typical distance considered was the mean value ABmean of the 14 distances AxBx (x = 1, 2, …, 14), and for the arch (arc), the observed typical distance was the mean value Rmean of the 11 rays Ry (y = A, B, …, M):

$$acc_{pus} = \frac{1}{\sigma_{AB} \times AB_{mean}}, \qquad AB_{mean} = \frac{1}{14}\sum_{x=1}^{14}\left(A_xB_x^{measured} - A_xB_x^{reconstruction}\right), \quad x = 1, 2, \ldots, 14 \qquad (8)$$

$$acc_{arc} = \frac{1}{\sigma_{R} \times R_{mean}}, \qquad R_{mean} = \frac{1}{11}\sum_{y=A}^{M}\left(R_y^{measured} - R_y^{reconstruction}\right), \quad y = A, B, \ldots, M \qquad (9)$$
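The cloud-to-mesh rule just described can be sketched as follows; this is a brute-force illustration of the geometry (CloudCompare's implementation is accelerated with spatial indexing), with all names illustrative.

```python
# Cloud-to-mesh distance: each cloud vertex to the nearest mesh triangle.
import numpy as np

def point_segment_distance(p, s0, s1):
    d = s1 - s0
    t = np.clip(np.dot(p - s0, d) / (d @ d), 0.0, 1.0)
    return np.linalg.norm(p - (s0 + t * d))

def point_triangle_distance(p, a, b, c):
    """Distance from 3D point p to triangle (a, b, c)."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    q = p - np.dot(p - a, n) * n                 # orthogonal projection of p
    # Barycentric test: does the projection lie inside the triangle?
    v0, v1, v2 = c - a, b - a, q - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    u = (d11 * d20 - d01 * d21) / denom
    v = (d00 * d21 - d01 * d20) / denom
    if u >= 0 and v >= 0 and u + v <= 1:
        # Projection inside the triangle: the case described in the text.
        return abs(np.dot(p - a, n))
    # Otherwise fall back to the nearest point on the triangle edges.
    return min(point_segment_distance(p, a, b),
               point_segment_distance(p, b, c),
               point_segment_distance(p, c, a))

def cloud_to_mesh_distances(cloud, vertices, faces):
    """Brute-force distance of every cloud point to the mesh."""
    return np.array([min(point_triangle_distance(p, *vertices[f]) for f in faces)
                     for p in cloud])
```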

Figure 13 shows the mean distance (cm) and the standard deviation distances (σAB; σR), calculated in cm, in the case of the surface reconstruction with CG at hv = 15 m.

Figure 13. Error in mean distances (cm) and its standard deviation distances (cm) distribution graphs in CG at 15 m: (a) pounding upper surface; (b) south side of the north-east surveyed arch.

In Table 5, the values of the mean error distance (ABmean; Rmean) and the standard deviation distances (σAB; σR), expressed in cm for the 20 reconstructions, are shown. These values enabled us to measure the accuracy of each of the 20 reconstructions taken into consideration in the present work.


Table 5. Error in mean distances and its standard deviations in cm in the 20 surface reconstructions.

Acq. Grid Typology   ABmean   σAB    acc_pus   Rmean   σR    acc_arc
RGVC 15 m            0.65     1.8    0.85      1.08    2.9   0.35
RGVC 20 m            1.07     1.9    0.49      1.69    2.9   0.20
RGVC 30 m            1.71     1.9    0.31      2.70    3.0   0.12
RGVC 40 m            2.73     2.0    0.18      6.17    3.0   0.05
RGVC 50 m            3.36     2.1    0.14      8.38    3.1   0.04
RGOC 15 m            0.54     1.2    1.54      0.6     1.9   0.88
RGOC 20 m            0.93     1.3    0.83      1.03    2.0   0.50
RGOC 30 m            1.48     1.4    0.48      1.58    2.1   0.30
RGOC 40 m            2.14     1.4    0.33      2.98    2.2   0.15
RGOC 50 m            2.60     1.5    0.26      3.96    2.2   0.11
EG 15 m              0.45     1.1    2.02      0.70    1.7   0.84
EG 20 m              0.78     1.2    1.07      1.26    1.7   0.47
EG 30 m              1.28     1.2    0.65      2.03    1.8   0.27
EG 40 m              1.87     1.3    0.41      4.76    1.8   0.12
EG 50 m              2.50     1.3    0.31      6.43    1.8   0.09
CG 15 m              0.31     0.40   4.67      0.5     1.0   2.30
CG 20 m              0.52     0.58   2.77      0.84    1.1   1.09
CG 30 m              0.82     0.74   1.53      1.30    1.1   0.70
CG 40 m              1.15     0.80   1.09      2.95    1.1   0.31
CG 50 m              1.54     0.80   0.81      3.97    1.2   0.21

In particular, the mean error of distances (ABmean; Rmean) indicated the mean value of the accuracy of every acquisition method, and the standard deviation error of distances (σAB; σR) indicated the extent of the error distribution. The most accurate reconstructions were those characterized by smaller values of mean error distance and smaller values of standard deviation error. Table 5 also shows the values of acc_pus and acc_arc evaluated with Equations (8) and (9), respectively.

To compare the 20 different types of surface reconstruction in relation to the number of photos used (n. GRC) and the mean value of GSD offered by each of them, the following parameters ξ (cm²) were calculated by multiplying the accuracy acc by GSD and the number of photos (n. GRC):

$$\xi_{pus} = acc_{pus} \times GSD \times n.GRC \qquad (10)$$

$$\xi_{arc} = acc_{arc} \times GSD \times n.GRC \qquad (11)$$
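As a worked check of Equations (8)–(11), under the reconstruction of Equations (8) and (9) given above, the following snippet reproduces the RGVC 15 m row of Tables 5 and 6:

```python
# Worked example of Eqs. (8)-(11) on the RGVC 15 m row of Tables 3-6.
def acc(sigma, mean_err):
    """Accuracy indicator of Eqs. (8)-(9): 1 / (sigma * mean error)."""
    return 1.0 / (sigma * mean_err)

def xi(acc_value, gsd, n_grc):
    """Quality/speed factor of Eqs. (10)-(11)."""
    return acc_value * gsd * n_grc

acc_pus = acc(sigma=1.8, mean_err=0.65)                 # Table 5: 0.85
print(round(acc_pus, 2), round(xi(acc_pus, 0.36, 80), 1))
# -> 0.85 and ~24.6 (Table 6 lists 24.7; the gap is input rounding)
```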

Lower values of these parameters allowed us to improve the accuracy and the speed of reconstruction at the same time. In Table 6, the values of the products ξ (cm²) for the 20 different types of surface reconstruction are shown.

Table 6. ξ factors for the 20 surface reconstructions.

Acq. Grid Typology   ξpus (cm²)   ξarc (cm²)
RGVC 15 m            24.7         4.2
RGVC 20 m            15.9         3.6
RGVC 30 m            8.9          2.6
RGVC 40 m            6.5          1.9
RGVC 50 m            5.5          1.5
RGOC 15 m            47.1         26.7
RGOC 20 m            29.3         17.7
RGOC 30 m            18.7         11.5
RGOC 40 m            16.0         7.3
RGOC 50 m            13.7         6.1
EG 15 m              33.5         13.9
EG 20 m              21.5         9.4
EG 30 m              17.4         7.3
EG 40 m              14.3         4.1
EG 50 m              12.3         3.5
CG 15 m              369.0        165.3
CG 20 m              230.2        90.5
CG 30 m              126.5        57.8
CG 40 m              98.8         28.1
CG 50 m              81.0         21.0

The parameters ξ enabled us to qualify every method analyzed in this study, regardless of the value of GSD, which did not include the acquisition orientation and the position in relation to the part being acquired.

5. Data Comparison and Discussions

The comparative analysis of the data obtained from the twenty reconstructions highlighted some interesting points of discussion and provided useful information for the photogrammetric surface reconstruction of large-scale objects. In all the reconstructions which were studied, the accuracy resulted in Gaussian-like distributions. The accuracy was always proportional to GSD, but the number of images which needed to be acquired to obtain the desired accuracy varied according to the grid shapes and the acquisition parameters utilized. Normalizing to one the sum of the two factors


(ξ pus + ξ arc = ξ), it appeared clear (Figure 14) that the acquisition with CG enabled us to obtain a quality of the The reconstruction highly superior at all the In inversely Figure 14 proportional the values ofto ξ factor as image overlaps were proportional to flight flight heights. height and grid pitch normalized forsynergic the grid effects shapes of at 15 m shapes, flight height were showed in green colors.instead, The values ξ studied. The grid grid pitch, and camera framing, couldofnot factor normalized for the grid shapes at 20 m flight height were showed in blu colors. The values of always be predicted and their right combination might provide an advanced accuracy ξin factor normalized for the grid shapes at 30 m flight height were showed in brown colors. The values photogrammetric surface reconstruction. of ξ factor normalized in the at 40 us m to flight heightGSD, wereflight showed in red colors and lastly The equations from (1) grid to (7)shapes permitted correlate altitudes, and overlap with the values of ξ factor normalized in the grid shapes at 50 m flight height were showed in grey colors. one another. In particular, it was possible to use such equations to locate the flight heights, which On average, CG grid shape improved the accuracy by more than a factor of six/seven. assured thethe desired values of GSD and overlap.

Figure 14. ξ factor normalized.

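The sketch below reproduces one plausible reading of this normalization: the combined factor ξ = ξ_pus + ξ_arc is normalized across the four grid shapes at each flight height, using the Table 6 values. Whether the paper normalizes per flight height or over all twenty reconstructions is an assumption on our part.

```python
# Normalized xi = xi_pus + xi_arc per flight height, from Table 6
# (only two heights shown; the remaining rows follow the same pattern).
table6 = {
    15: {"RGVC": (24.7, 4.2), "EG": (33.5, 13.9),
         "RGOC": (47.1, 26.7), "CG": (369.0, 165.3)},
    50: {"RGVC": (5.5, 1.5), "EG": (12.3, 3.5),
         "RGOC": (13.7, 6.1), "CG": (81.0, 21.0)},
}

for height, grids in table6.items():
    xi = {grid: pus + arc for grid, (pus, arc) in grids.items()}
    total = sum(xi.values())                      # normalize the sum to one
    normalized = {grid: round(v / total, 3) for grid, v in xi.items()}
    print(f"{height} m: {normalized}")
```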
The image overlaps were proportional to GSD and flight height, and inversely proportional to the grid pitch studied. The synergic effects of grid shapes, grid pitch, and camera framing, by contrast, could not always be predicted, and their right combination might provide an advanced accuracy in photogrammetric surface reconstruction.

Equations (1) to (7) permitted us to correlate GSD, flight altitudes, and overlap with one another. In particular, it was possible to use such equations to locate the flight heights which assured the desired values of GSD and overlap.

The acquisition with a single RGVC often did not suffice to reach the desired accuracy. In fact, although it produced acceptable values of GSD, it lacked greatly in the acquisition of surfaces orthogonal to the shot direction, such as the side wall of the bridge.

Moving from the acquisition with RGVC to the acquisition with RGOC, it was possible to keep the transversal overlap steady (66%) when increasing the transversal pitch. This occurred, according to Equation (1), at the expense of GSD. Moreover, the rotation of the camera by the angle ϑr, according to Equation (5), involved an increase of the longitudinal overlap.

Similar to image overlap, the surface deviation values and the distances between feature points appeared to be inversely proportional to flight height and highly dependent on the acquisition grid type. Overlap in the acquisitions with constant grid pitch proved extremely variable, especially in the elliptical grid. In the acquisition around the bridge with elliptical grids, some efficiency, in terms of the number of images against the surface area covered, was lost.

The conducted analysis allowed us to verify that all reconstructions contained errors proportional to flight height, and this was especially true for RGVC. This acquisition, being one of the most used types, provided the worst results both in terms of accuracy (acc) and of the parameter ξ.

The acquisition with CG allowed us to obtain the best results. This kind of grid, thanks to modern technologies and a good GPS system, was easily implementable with high accuracy in a UAV acquisition system.

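The dependence of GSD and overlap on flight height and grid pitch can be illustrated with the standard pinhole-camera footprint model. The sketch below uses that textbook form, not the paper's exact Equations (1)–(7), and the camera parameters are placeholders.

```python
# Standard photogrammetric relations (a sketch, not the paper's equations):
# GSD grows linearly with flight height; overlap falls as grid pitch grows.

def gsd_cm(height_m, focal_mm, sensor_w_mm, image_w_px):
    """Ground Sampling Distance in cm/pixel for a nadir camera."""
    return (sensor_w_mm * height_m * 100.0) / (focal_mm * image_w_px)

def overlap(height_m, pitch_m, focal_mm, sensor_w_mm):
    """Fraction of ground footprint shared by two shots spaced pitch_m apart."""
    footprint_m = sensor_w_mm * height_m / focal_mm  # ground width of one image
    return max(0.0, 1.0 - pitch_m / footprint_m)

# Placeholder camera: 4.5 mm focal length, 6.17 mm sensor width, 4000 px wide.
for h in (15, 20, 30, 40, 50):
    print(f"h={h} m: GSD={gsd_cm(h, 4.5, 6.17, 4000):.2f} cm/px, "
          f"overlap at 7 m pitch={overlap(h, 7.0, 4.5, 6.17):.0%}")
```

With these placeholder parameters, a 7 m pitch at 15 m flight height gives roughly the 66% transversal overlap discussed above, and increasing the height raises both GSD and overlap, consistent with the proportionalities described in the text.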

6. Conclusions

A replicable and generalizable methodology was illustrated in order to improve the quality of the 3D digital surface reconstruction of large-scale objects by the photogrammetric technique. Using commercial software (Pix4Dmapper) based on Structure-from-Motion algorithms, the relationships between the grid shapes, the acquisition grid parameters, the image overlap, and the accuracy of reconstruction were evaluated and discussed. The proposed relationships enabled the appropriate selection of flight heights, acquisition grid shape, and camera framing for a pre-established overlap value and required GSD.

The experiments conducted on the reconstruction of the Bridge of the Saracens in Adrano (Sicily) illustrated the effectiveness of the developed methodology, which enhanced the 3D reconstruction of a highly complex architectural structure with the desired accuracy. The errors of surface reconstruction were evaluated statistically against measurements obtained with a 3D laser scanner and total station-theodolites.

The experimental results indicated that, for large-scale objects characterized by an elongated longitudinal shape, geometric singularities, and multiple and variously inclined features, the proposed method could improve the accuracy while increasing the speed of reconstruction by more than a factor of six to seven. The authors believe that it might be interesting to apply the study to more complex acquisition grid shapes (e.g., a zig-zag grid).

Author Contributions: The authors contributed equally to this work.

Funding: This research received no external funding.

Acknowledgments: The authors wish to thank Eng. Riccardo Zammataro and Dr. Giuseppe D'Angelo for their technical contribution to the development of the work.

Conflicts of Interest: The authors declare no conflict of interest.

Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI, 799–804. [CrossRef] Rajendra, Y.D.; Mehrotra, S.C.; Kale, K.V.; Manza, R.R.; Dhumal, R.K.; Nagne, A.D.; Vibhute, A.D. Evaluation of Partially Overlapping 3D Point Cloud’s Registration by using ICP variant and CloudCompare. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL, 891–897. [CrossRef] © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).