sensors Article

Position, Orientation and Velocity Detection of Unmanned Underwater Vehicles (UUVs) Using an Optical Detector Array

Firat Eren 1,*, Shachak Pe'eri 2, May-Win Thein 3, Yuri Rzhanov 1, Barbaros Celikkol 3 and M. Robinson Swift 3

1 Center for Coastal and Ocean Mapping, University of New Hampshire, 24 Colovos Road, Durham, NH 03824, USA; [email protected]
2 National Oceanic and Atmospheric Administration (NOAA), 1315 East West Highway, Silver Spring, MD 20910, USA; [email protected]
3 Department of Mechanical Engineering, University of New Hampshire, 33 Academic Way, Durham, NH 03824, USA; [email protected] (M.-W.T.); [email protected] (B.C.); [email protected] (M.R.S.)
* Correspondence: [email protected]; Tel.: +1-603-862-0676

Received: 30 June 2017; Accepted: 27 July 2017; Published: 29 July 2017

Abstract: This paper presents a proof-of-concept optical detector array sensor system to be used in Unmanned Underwater Vehicle (UUV) navigation. The performance of the developed optical detector array was evaluated for its capability to estimate the position, orientation and forward velocity of UUVs with respect to a light source fixed underwater. The evaluations were conducted through Monte Carlo simulations and empirical tests under a variety of motion configurations. The Monte Carlo simulations also evaluated the system's total propagated uncertainty (TPU) by taking into account variations in water column turbidity, temperature and hardware noise that may degrade system performance. Empirical tests were conducted to estimate UUV position and velocity during navigation toward a light beacon. The Monte Carlo simulation and empirical results support the use of the detector array system for optics-based position feedback in UUV positioning applications.

Keywords: optical detector array; Unmanned Underwater Vehicles; optical communication; pose detection

1. Introduction

Unmanned Underwater Vehicles (UUVs) provide an operational platform for long periods of deployment (on the order of hours) and at depths that are too dangerous for divers [1–5]. UUVs are typically categorized into two groups: (1) Remotely Operated Vehicles (ROVs), which are tethered to a surface ship and manually operated; and (2) Autonomous Underwater Vehicles (AUVs), which are autonomous and provide more flexibility, but at the expense of shorter battery life. For both types of UUVs, accurate position and orientation (i.e., pose) detection during navigation is critical for mission operations (e.g., pipeline inspection, surveying and dynamic positioning) where the UUV is required to maintain its pose with respect to a given object of interest [6,7]. One such example is data muling, an application in which a hover-capable UUV maintains its pose with respect to an optical transmitter and conducts data transfer [8] (Figure 1a). Accurate positioning is also critical for AUV docking into a physical structure (typically a cone-type design) to recharge batteries and to transfer data [9] (Figure 1b). Accurate AUV position feedback via its own on-board sensor detection system would help enable successful docking. However, one important challenge for accurate underwater positioning is the communication between platforms, as the radio signals that provide high measurement precision above the water surface do not work efficiently underwater [10].

Sensors 2017, 17, 1741; doi:10.3390/s17081741

www.mdpi.com/journal/sensors


Figure 1. Unmanned Underwater Vehicle (UUV) positioning examples: (a) Remotely Operated Vehicle (ROV) dynamic positioning with respect to a light source; and (b) Autonomous Underwater Vehicle (AUV) docking station. The light field acts as a guiding beacon with the axis conventions and position and orientation detection parameters used in this study.

Currently, most UUVs navigate by using acoustic communication [11–15], as acoustic communication allows a long range of operation (several kilometers) and is commercially available. However, in areas with large traffic volume, such as harbor and recreational fishing areas, the marine environment is acoustically noisy, which reduces the performance of acoustic communication. Recent studies have demonstrated the potential use of both acoustic and optical communication for AUV docking [16–19]. In these systems, acoustic communication is used for navigation towards the docking station (at ranges longer than 10 m). At shorter ranges (within 10 m), video imagery is used to guide the vehicle into the docking station. In addition, several studies have investigated the use of video imagery with vision-based algorithms for UUV positioning applications [20–22]. A system consisting of an optical detector array for underwater free-space optical communication is demonstrated in [23]. However, that system's performance has not yet been evaluated for UUV positioning applications.

This paper presents an optical detector array system developed to detect UUV pose during navigation. The specific goal is to develop a cost-effective prototype sensor system that provides position feedback for applications that require precise UUV positioning (e.g., UUV docking and station keeping) [24]. The design used in this study includes a prototype hemispherical photodetector array (5 × 5 array) mounted on a UUV platform as the detector unit and a fixed, single light source with a Gaussian beam pattern as a beacon. The performance of the detector array is evaluated based on its capability to detect UUV pose through two approaches: simulation and empirical measurement. In the first approach, Monte Carlo simulations are conducted to evaluate UUV pose detection during surge, pitch and yaw motions with respect to the light source. (Roll motion detection is not considered due to the use of a single light source with a Gaussian beam pattern, which is incapable of roll detection.) The impact of varying environmental parameters (i.e., changing water column turbidity and temperature) and hardware noise on the pose detection performance is also evaluated in the Monte Carlo simulations through Total Propagated Uncertainty (TPU) calculations. In the second approach, empirical measurements are conducted to detect UUV translational pose (i.e., surge, sway, and heave) and surge rate as the UUV platform dynamically approaches a fixed light source along a straight-line trajectory. The error tolerance requirements for this study were based on UUV hovering and docking applications. Therefore, the error tolerances were kept within ±0.5 m horizontally along the x-axis (parallel to the seafloor), ±0.2 m along the y- and z-axis, and ±5° in pitch and yaw [7,9,25]. In addition, the typical entrance velocity of an AUV to a docking station is up to 0.5 m/s with an error tolerance of ±0.2 m/s [12,26]. The empirical tests were conducted at the University of New Hampshire's Chase Ocean Engineering facilities.

2. Materials and Methods

2.1. Hardware Design and the Optical Detector Array Output

The design of the detector array is developed from optical detector array simulator results that the authors obtained via the evaluation of different array geometries based on their capability to generate a unique pose feedback to the UUV [27,28]. The simulator results showed that a hemispherical frame design with a 5 × 5 photo-detector array is sufficient to generate the desired position and orientation feedback to the UUV with a detection accuracy of 0.2 m in translation (surge, sway, and heave) and 10° in orientation (pitch and yaw) based on a Spectral Angle Mapper algorithm (Figure 2). The hemispherical design offered a larger field-of-view than a planar array design and provided more accurate position detection capability. Analytical position and orientation detection algorithms were also developed through static calibrations based on a single light beacon and a detector array mounted on the bow of the UUV [29]. The light source used in the simulations was modeled as a point-source Gaussian beam with peak intensity at the origin (i.e., at x = 0, y = 0, z = 0).

Figure 2. Sample simulator outputs for different relative geometries between the light source and the light detector array, with pixel intensities normalized at the maximum intensity obtained at x = 4 m: (a) alignment with x = 5 m offset; and (b) translation (along the y-axis) and rotational offset (pitch and yaw).

The prototype optical detector array design developed in this study includes the following hardware components: twenty-five photodiodes (Thorlabs SM05PD1A) that form the 5 × 5 array, two Analog-to-Digital Converter (ADC) boards, an on-board computer (OBC) with a 1 GHz ARM Cortex processor, a power supply (5 V) and reverse-bias circuitry to increase the optical dynamic range. Each photodiode is placed in a waterproof acrylic fixture and the fixtures are concentric to the hemisphere center (Figure 3). The photodiodes sample intensity readings at 5 Hz with 10-bit resolution (0–1023 dynamic range). The spacing between the photodiodes is kept at 0.06 m to eliminate potential cross-talk between the optical detectors [30]. Empirical measurements verified that there was no cross-talk observed in the proposed system.

An Eye Clean-Ace 400 W metal halide light source with a T17 bulb is used as a guiding beacon in this study. The light source is operated with an M59 ballast, which provides constant power for stable operation. The color temperature of the light source is 6500 K with a Color Rendering Index (CRI) value of 90.



Figure 3. The optical detector array system prototype: (a) top view; and (b) side view.

The image moment invariant algorithm is used to convert the optical input (i.e., signature images) sampled on the detector array into pose feedback (the reader is referred to [31,32] for more details). The image moment invariant algorithm is given as:

M_pq = (1/S) Σ_{i,j} (y_i − y_o)^p (z_j − z_o)^q I_{i,j}    (1)

where M_pq is the moment value of order p and q. S is the summation of intensities (I_{i,j}) and is calculated such that S = Σ_{i,j} I_{i,j}. y_i and z_j are the row and column coordinates, respectively, of the array elements. The parameters y_o and z_o denote the coordinates of the pixel with the maximum intensity in the signature image. That is, P_max = (y_o, z_o) is determined with sub-pixel accuracy to provide more accurate calculations of the moment invariants. This results in higher positioning resolution compared to the Spectral Angle Mapper algorithm used in [28]. Image moment invariants are determined up to second order (p = 0, 1, 2 and q = 0, 1, 2) for efficient real-time calculation without sacrificing significant accuracy. The output of the image moment invariant algorithm is a 3 × 3 matrix, where each element in the matrix provides unique values corresponding to the relative pose between the light source and the light detector array.
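Equation (1) can be sketched directly. The snippet below is a minimal illustration, not the authors' implementation: it uses the integer-pixel peak for P_max, whereas the paper refines the peak location to sub-pixel accuracy.

```python
import numpy as np

def moment_invariants(I, order=2):
    """Eq. (1): 3 x 3 matrix M[p, q] of moments about the peak pixel,
    normalized by the total intensity S (integer-pixel peak only)."""
    I = np.asarray(I, dtype=float)
    S = I.sum()
    rows, cols = np.indices(I.shape)
    y0, z0 = np.unravel_index(I.argmax(), I.shape)   # P_max = (y_o, z_o)
    M = np.empty((order + 1, order + 1))
    for p in range(order + 1):
        for q in range(order + 1):
            M[p, q] = ((rows - y0)**p * (cols - z0)**q * I).sum() / S
    return M

# Symmetric 5 x 5 test image peaked at the center pixel:
I = np.zeros((5, 5))
I[2, 2] = 4.0
I[2, 1] = I[2, 3] = I[1, 2] = I[3, 2] = 1.0
M = moment_invariants(I)
print(M[0, 0])   # 1.0 by construction (S / S)
```

For a symmetric image the first-order moments M[1, 0] and M[0, 1] vanish; translating or tilting the light field breaks the symmetry and shifts those elements, which is what the pose models below exploit.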
The output of the image moment invariants algorithm is a 3 × 3 matrix, where each element Two types ofunique calibration procedures are conducted to accurately determine UUV poseand the in the matrix provides values corresponding to relative pose between the the light source relative to a light beacon. The first calibration procedure is the geometrical calibration conducted in light detector array.

controlled water clarity conditions. In this calibration scheme, the relative pose between the optical detector array and the light source is varied, and the pose detection algorithms are developed 2.2. Calibration Procedures and the Pose Detection Algorithm accordingly. The second calibration procedure identifies hardware and environmental parameters thattypes contributes to the TPU of the pose are estimations. These photodiode response Two of calibration procedures conducted to parameters accurately include: determine the UUV pose relative to temperature variations in ambient water conditions, hardware noise and system cross-talk. to a light beacon. The first calibration procedure is the geometrical calibration conducted in The controlled underwater light field includes turbidity caused by suspended matter (sediment and biological) water clarity conditions. In this calibration scheme, the relative pose between the optical detector carried by currents or turbulence. As a first-order approximation, the simulations are conducted array and the light source is varied, and the pose detection algorithms are developed accordingly. under uniform light field conditions.

The second calibration procedure identifies hardware and environmental parameters that contributes 2.2.1. of Geometrical andThese the Pose Detection Algorithm to the TPU the pose Calibration estimations. parameters include: photodiode response to temperature variations in ambient water conditions, hardware noise and system cross-talk. The geometrical calibration procedure include translational motion along the The x-, y-underwater and z-axis light field includes turbidity by yaw). suspended matter (sediment andcalibration biological) carried by and orientation (i.e.,caused pitch and The experimental translational is conducted in currents a well-controlled environment at University of New Hampshire (UNH) Jere A. Chase Ocean or turbulence. As a first-order approximation, the simulations are conducted under uniform light Engineering facilities. The detector array is immersed underwater in a wave/tow tank with field conditions. dimensions of 2.44 m in depth, 3.6 m in width, and 36 m in length. The tank is outfitted with a cable-driven tow carriage with extendsAlgorithm through the length of the tank and that can 2.2.1. Geometrical Calibration and actuation the Pose that Detection move up to 2.0 m/s. Water clarity conditions in the tank are assumed to be constant with measured −1 The geometrical calibration diffuse attenuation coefficient, procedure 𝐾𝐷 , of 0.09 minclude [27]. translational motion along the x-, y- and z-axis

and orientation (i.e., pitch and yaw). The experimental translational calibration is conducted in a well-controlled environment at University of New Hampshire (UNH) Jere A. Chase Ocean Engineering facilities. The detector array is immersed underwater in a wave/tow tank with dimensions of 2.44 m in depth, 3.6 m in width, and 36 m in length. The tank is outfitted with a cable-driven tow carriage with actuation that extends through the length of the tank and that can move up to 2.0 m/s.


Water clarity conditions in the tank are assumed to be constant, with a measured diffuse attenuation coefficient, K_D, of 0.09 m⁻¹ [27]. Translational offsets between the optical detector array and the light source are varied from 4.5 m to 8.5 m at 1 m increments along the x-axis, from −0.6 m to 0.6 m at 0.3 m increments along the y-axis, and from 0 m to 0.8 m at 0.2 m increments along the z-axis. The light source is located at the origin (i.e., x = 0 m, y = 0 m, and z = 0 m). A total of 125 different position configurations are used for calibration, with 200 samples collected at each configuration (Figure 4). The photodiode intensity variations for all calibration measurements are approximately 1% of the maximum photodiode intensity. The image intensities are not perfectly symmetrical about the y- and z-axis (i.e., with reference to y = 0 m and z = 0 m). A possible explanation is potential geometrical misalignments in the experimental setup and non-uniform water conditions that may have caused sub-pixel (i.e., between two detectors) shifts of the light field with respect to the detector array.
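For concreteness, the calibration grid described above can be enumerated directly; with 200 samples per configuration, this corresponds to 25,000 calibration images in total.

```python
import itertools
import numpy as np

# Enumerate the 5 x 5 x 5 = 125 calibration positions described in the text.
xs = np.arange(4.5, 8.5 + 1e-9, 1.0)    # 4.5 ... 8.5 m along x, 1 m steps
ys = np.arange(-0.6, 0.6 + 1e-9, 0.3)   # -0.6 ... 0.6 m along y, 0.3 m steps
zs = np.arange(0.0, 0.8 + 1e-9, 0.2)    # 0.0 ... 0.8 m along z, 0.2 m steps

configs = list(itertools.product(xs, ys, zs))
print(len(configs))   # 125
```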

Figure 4. Sample underwater experimental calibration images taken at 4.5 m away from the light source in the x-direction. The intensity images (represented by a 10-bit dynamic range (0–1023)) represent the intersection of the optical detector array system with the light field from the guiding beacon at different y- and z-axis offsets.

For CarloCarlo simulations, a separate calibrationcalibration procedure isprocedure conducted in y- and z-axis; For the theMonte Monte simulations, a separate is x-,conducted in pitch; and yaw directions/motions using the developed simulator [27]. The experimental translational x-, y- and z-axis; pitch; and yaw directions/motions using the developed simulator [27]. The geometry is repeated for the simulated One percent of the maximum intensity experimental translational geometry translation is repeatedcalibrations. for the simulated translation calibrations. One is addedofasthe mean noise level to theisimages empirical Pitch and yaw offset percent maximum intensity added to as simulate mean noise level to conditions. the images to simulate empirical quantification is conducted a similar manner to that of the translational quantification. conditions. Pitch and yawinoffset quantification is conducted in a similar manner to The thatrelative of the ◦ pitch and yaw between the light source and detector array isthe changed in 1 increments from translational quantification. The relative pitchthe and yaw between light source and the detector ◦ to 15◦ . − 15 array is changed in 1° increments from −15° to 15°. UUV , on the UUV x-axis position is calculated from the maximum photodiode intensity value, PD 𝑃𝐷max 𝑚𝑎𝑥 , on array. Because light lightenergy energydecays decaysexponentially exponentiallyinin the water column, as described by the Beer’s array. Because the water column, as described by the Beer’s law law [28], an exponential fit is made to the measurements such that: [28], an exponential fit is made to the measurements such that: 𝑒𝑠𝑡 b11x𝑥est 𝑃𝐷𝑚𝑎𝑥 = 𝑎 𝑒−−𝑏 PD max = a11e

𝑥𝑒𝑠𝑡 =

−log(𝑃𝐷𝑚𝑎𝑥 /𝑎1) 𝑏1

(2) (2) (3)

where 𝑃𝐷𝑚𝑎𝑥 denotes the maximum photodiode intensity, 𝑎1 and 𝑏1 are coefficients obtained from the calibration procedure and 𝑥𝑒𝑠𝑡 is the estimated x-axis offset between the light source and the detector. UUV pitch and yaw angles are calculated from the 3 × 3 image moment invariant matrix
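Equations (2) and (3) amount to a linear fit in log-intensity space. A minimal sketch under synthetic data — the a1 and b1 values below are illustrative assumptions, not the paper's calibrated coefficients:

```python
import numpy as np

# Fit PD_max = a1 * exp(-b1 * x) by linear regression on log(PD_max),
# then invert the fit (Eq. 3) to estimate range from a new reading.
x_cal = np.array([4.5, 5.5, 6.5, 7.5, 8.5])      # calibration ranges (m)
pd_cal = 900.0 * np.exp(-0.09 * x_cal)           # synthetic PD_max readings

slope, intercept = np.polyfit(x_cal, np.log(pd_cal), 1)
a1, b1 = np.exp(intercept), -slope               # slope of log-intensity is -b1

def x_est(pd_max):
    """Eq. (3): invert the exponential fit."""
    return -np.log(pd_max / a1) / b1

print(round(x_est(900.0 * np.exp(-0.09 * 6.0)), 3))   # 6.0
```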

UUV pitch and yaw angles are calculated from the 3 × 3 image moment invariant matrix using the ratio of matrix indices. Specifically, the M32/M33 ratio is found to correlate with pitch offset (not shown here), whereas the M23/M33 ratio correlates with the yaw offset and, as expected, degrades with offset distance and increasing noise (Figure 5).

Figure 5. M23/M33 ratio values for changing yaw angles at varying x-axis offsets (4 m, 6 m, and 8 m) and noise levels (0%, 0.5%, and 1.0%), with y- and z-axis offsets at zero: (a) Δx = 4 m, noise = 0%; (b) Δx = 4 m, noise = 0.5%; (c) Δx = 4 m, noise = 1%; (d) Δx = 6 m, noise = 0%; (e) Δx = 6 m, noise = 0.5%; (f) Δx = 6 m, noise = 1%; (g) Δx = 8 m, noise = 0%; (h) Δx = 8 m, noise = 0.5%; and (i) Δx = 8 m, noise = 1%.

The analysis shows that there is a linear relationship between the moment index ratios and the pitch and yaw angles. This ratio is also invariant to the changing x-axis offset (i.e., decreasing light energy)

as well as the y- and z-axis offsets, and thus enables robust angle estimation. Accordingly, pitch and yaw angle estimates are obtained by using the following models:

θ_est = a2 (M32 / M33) + b2    (4)

ψ_est = a3 (M23 / M33) + b3    (5)

where θ_est and ψ_est are the estimated pitch and yaw angles, and a2, a3, b2 and b3 are the coefficients obtained from the calibration procedure. Another calibration procedure is conducted to estimate the y- and z-axis offsets. The calibration results show that M21 and M12 correlate linearly with the y- and z-axis offsets. The y- and z-axis offsets are calculated as:

y_est = a4 M21 + b4    (6)

z_est = a5 M12 + b5    (7)


where y_est and z_est are the estimated y- and z-axis offsets, and a4, a5, b4, and b5 are the calibration coefficients from the linear fit. However, the developed models for the y- and z-axis are observed to change significantly with offsets in other degrees-of-freedom (DOF). For example, the y-axis model changes with yaw and the z-axis model changes with pitch. Therefore, the distance detection system is designed and calibrated to adapt to varying UUV pitch and yaw orientations with respect to that of the given light source. Here, the y- and z-axis estimates are updated based on the x-axis, pitch and yaw estimates through 3-D vector analysis such that:

dx = x_est(k + 1) − x_est(k)    (8)

R = dx / (cos(θ_est) cos(ψ_est))    (9)

dy = R cos(θ_est) sin(ψ_est)    (10)

dz = R sin(θ_est)    (11)

y_est(k + 1) = y_est(k) + dy    (12)

z_est(k + 1) = z_est(k) + dz    (13)

where dx is the distance travelled along the x-axis at the current time step, k. R is the magnitude of the 3-D vector between sequential UUV positions, and dy and dz are the calculated travel along the y- and z-axis, respectively. The pose detection algorithm is provided in Algorithm 1.

Algorithm 1. Pose Estimation
Obtain the image sampled on the detector array
Compute x_est, y_est, z_est, θ_est and ψ_est
Update y_est based on x_est, θ_est and ψ_est
Update z_est based on x_est, θ_est and ψ_est
Return x_est, y_est, z_est, θ_est and ψ_est
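Algorithm 1 together with Eqs. (4)–(13) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation; the calibration coefficients are placeholders standing in for the fitted values.

```python
import math

# Illustrative placeholder coefficients (the paper's fitted values are not reproduced here).
COEFF = dict(a2=50.0, b2=0.0,   # pitch model, Eq. (4) (degrees)
             a3=50.0, b3=0.0,   # yaw model, Eq. (5) (degrees)
             a4=1.2,  b4=0.0,   # y-offset model, Eq. (6) (m)
             a5=1.2,  b5=0.0)   # z-offset model, Eq. (7) (m)

def angles_from_moments(M, c=COEFF):
    """Eqs. (4)-(5). M is the 3 x 3 moment matrix indexed M[p][q] (0-based),
    so the paper's 1-based M32 is M[2][1] and M23 is M[1][2]."""
    theta = c["a2"] * M[2][1] / M[2][2] + c["b2"]   # pitch from M32/M33
    psi   = c["a3"] * M[1][2] / M[2][2] + c["b3"]   # yaw from M23/M33
    return theta, psi

def update_yz(x_prev, x_new, y_prev, z_prev, theta_deg, psi_deg):
    """Eqs. (8)-(13): propagate the y- and z-axis estimates between time
    steps from the x-range change and the pitch/yaw estimates."""
    th, ps = math.radians(theta_deg), math.radians(psi_deg)
    dx = x_new - x_prev                        # Eq. (8)
    R = dx / (math.cos(th) * math.cos(ps))     # Eq. (9)
    dy = R * math.cos(th) * math.sin(ps)       # Eq. (10)
    dz = R * math.sin(th)                      # Eq. (11)
    return y_prev + dy, z_prev + dz            # Eqs. (12)-(13)

# Closing on the beacon by 0.5 m along x with zero pitch/yaw leaves y and z unchanged:
print(update_yz(6.0, 5.5, 0.1, 0.2, 0.0, 0.0))   # (0.1, 0.2)
```

With nonzero pitch or yaw, the same range change produces nonzero dy and dz, which is how the adaptive y/z update compensates for vehicle orientation.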

2.2.2. Photodiode Response to Temperature Variations

To determine the temperature dependence, the photodiodes are immersed in a seven-liter water bath circulator at temperatures varying from 20 °C to 60 °C in 10 °C increments. A coherent light source (532 nm Z-Bolt SCUBA underwater dive laser) is used to provide a stable input to the photodiodes. Experimental results show that the output voltage from the photodiodes decreases as the temperature increases. The temperature sensitivity is found to be 2 mV/°C, as follows:

Vo = −2 (mV/°C) T (°C) + 576 mV

(14)

where T is the water temperature and Vo is the voltage output. Based on the environment temperature, the voltage reading can be adjusted and the effect of varying temperature can be accounted for in the position and orientation detection algorithms.

2.2.3. Hardware and System Cross-Talk

The major sources of hardware noise are: signal shot noise, σs; background shot noise, σb; dark-current shot noise, σdc; Johnson noise, σj; amplifier noise, σamp; and ADC-generated quantization


noise, σq, as described in [28]. The hardware noise sources are assumed to be mutually independent and to be additive, uniform and unipolar [32]. Accordingly, the net noise is:

σn = √(σs² + σb² + σdc² + σj² + σamp² + σq²)

(15)
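The root-sum-square combination in Equation (15) is straightforward to express in code. The numeric values below are placeholders for illustration only, not measured noise figures from the system.

```python
import math

def net_noise(sigmas):
    """Root-sum-square of mutually independent noise standard deviations,
    per Equation (15)."""
    return math.sqrt(sum(s * s for s in sigmas))

# Placeholder 1-sigma values (volts) for the signal, background,
# dark-current, Johnson, amplifier and quantization terms.
sigma_n = net_noise([3e-4, 2e-4, 1e-4, 1e-4, 2e-4, 1e-4])
```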

In addition, cross-talk may occur from signal transmission in cables or in their connection to the reverse-bias circuitry. The net noise as measured from all the noise sources was in the range of 1 mV using 3.3 m SMA cables from the photodiodes to the reverse-bias circuitry and the serial communication losses. Therefore, it is concluded that the cable and circuit connection cross-talk in the system is not significant.

3. Results

3.1. Monte Carlo Simulations

The pose estimations (x-, y- and z-axis; pitch; and yaw) and TPU are obtained in the Monte Carlo simulations. The key steps include: (1) characterization of model uncertainty parameters; (2) generation of the uncertainty distribution for each hardware and environmental parameter (i.e., the parameter ensemble); (3) simulation of the relative position and orientation estimations based on the parameter ensemble; and (4) evaluation of the position and orientation statistics for system TPU. Based on the calibration results, the uncertainty of the key parameters is characterized and shown in Table 1.

Table 1. Uncertainty parameters used in the Monte Carlo simulations.

Parameter                              Standard Deviation (1σ)
Water column temperature variation     3 °C
Net electronic noise (max)             50 mV
Kd                                     0.015 m−1

The hardware related uncertainties are determined as described in Section 2. Water column temperature variations of up to 3 °C and Kd changes of up to 0.015 m−1 are considered reasonable according to tank conditions. Uniform water conditions are assumed in the simulations (i.e., constant temperature and Kd values in the water column). Environmental parameter variations (i.e., temperature and Kd) are assumed to have a Gaussian distribution. The uncertainty parameters are used as input to the Monte Carlo simulation with a realization number of Ns = 2000. A 5 × 5 image is created after each realization based on the relative pose between the light source and the detector array. Using the developed estimation models, the relative pose of the UUV to the light source is estimated and the pose statistics are extracted to determine system TPU. The simulation conditions have the detector array mounted on the bow of the UUV. Two sets of reference trajectories are generated for the UUV to follow for pose estimation during the navigation (shown in Table 2):

1. The UUV undergoing diving motion (i.e., motion restricted to the xz-plane). The initial relative offsets between the light source and the optical detector array are Δx = 8.5 m, Δy = 0 m, Δz = 0 m, Δθ = 10°, and Δψ = 0°. The diving motion was conducted in clear (KD = 0.09 m−1) water conditions.
2. The UUV undergoing zigzag motion (i.e., diving and heading motion in 3-D space with initial offsets in the y- and z-axis). The initial relative offsets between the light source and the optical detector array are Δx = 8.5 m, Δy = 0.2 m, Δz = −0.1 m, Δθ = 0°, and Δψ = 10°. The zigzag motion simulation was conducted in clear (KD = 0.09 m−1) and turbid (KD = 0.2 m−1) water conditions.
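Steps (1) and (2) of the procedure, building the parameter ensemble from the 1σ values in Table 1, might be sketched as follows. The nominal temperature (20 °C) and the clear-water Kd (0.09 m−1) are assumed here for illustration, and the forward model that renders each realization into a 5 × 5 image is omitted.

```python
import random

NS = 2000  # number of Monte Carlo realizations

def draw_parameter_ensemble(seed=0):
    """Perturb nominal environmental/hardware parameters with the
    1-sigma values of Table 1, one parameter set per realization."""
    rng = random.Random(seed)
    ensemble = []
    for _ in range(NS):
        ensemble.append({
            "temperature_C": rng.gauss(20.0, 3.0),  # 1σ = 3 °C (nominal 20 °C assumed)
            "Kd_per_m": rng.gauss(0.09, 0.015),     # 1σ = 0.015 1/m about clear-water Kd
            "noise_V": rng.gauss(0.0, 0.05),        # net electronic noise, 1σ = 50 mV
        })
    return ensemble

params = draw_parameter_ensemble()
```

Each dictionary in `params` would then drive one simulated 5 × 5 image and one pose estimate, from which the ensemble statistics (and hence the TPU) are computed.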


In both trajectory simulations, the forward velocity, uo, is assumed to be a constant 0.5 m/s. The UUV offset from the light beacon in the x-direction ranges from 8.5 m to 4.5 m, which is designed to be consistent with the calibration experiments.

Table 2. UUV reference trajectories used in the Monte Carlo simulations.

Case I: UUV Diving Motion
Time (s)    t = 0    t = 2.6    t = 5.2    t = 8
x (m)       8.5      7.22       5.93       4.53
y (m)       0        0          0          0
z (m)       0        0.22       0.38       0.45
pitch (°)   10       8.7        5.2        0
yaw (°)     0        0          0          0
uo (m/s)    0.5      0.5        0.5        0.5

Case II: UUV Zigzag Motion
Time (s)    t = 0    t = 2.6    t = 5.2    t = 8
x (m)       8.5      7.22       5.94       4.56
y (m)       0.2      0.31       0.12       0.2
z (m)       −0.1     0.06       −0.06      0
pitch (°)   0        4.5        −8         9.2
yaw (°)     10       −4.5       −5.9       10
uo (m/s)    0.5      0.5        0.5        0.5

3.1.1. Diving Motion

The simulated navigation results of a UUV in diving motion are shown in Figure 6. The final nominal translational estimation errors (i.e., ensemble average of all the estimations for Ns = 2000) for the x-, y- and z-axis are 0.02 m, 0.02 m, and 0.18 m, respectively. The final nominal orientation estimation errors for pitch and yaw are 5° and 1°, respectively. The pose estimation Confidence Interval (CI) bounds (95%) along the x-axis and for yaw and pitch angles suggest a decreasing trend as the UUV approaches the light source. The CI bounds for the x-axis start at ±0.83 m at t = 0 s and reduce to ±0.43 m at t = 7.8 s. Similarly, the CI bounds for the pitch angle start within ±2° at t = 0 s and then decrease to less than ±1° at t = 8 s. Although there is no yaw motion during the diving motion, hardware noise (described in Section 2) degrades the input imagery and results in uncertainty in the yaw estimations. Yaw uncertainty is approximately ±2° at the beginning of the motion and decreases to less than ±1° at the end of the motion. The decrease in pose estimation uncertainties is attributed to the increasing signal-to-noise ratio (SNR) as the UUV approaches the light source, as demonstrated in Figure 5.
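The 95% CI bounds reported here can be extracted from the realization ensemble by taking the 2.5th and 97.5th percentiles of the estimates at each time step. The paper does not state the exact percentile convention used, so the following is an illustrative sketch only.

```python
def ci95(values):
    """95% confidence interval bounds from an ensemble of estimates
    (one value per Monte Carlo realization) at a given time step."""
    s = sorted(values)
    lo = s[round(0.025 * (len(s) - 1))]  # 2.5th percentile
    hi = s[round(0.975 * (len(s) - 1))]  # 97.5th percentile
    return lo, hi

# Toy ensemble of 2001 symmetric errors spanning -1.0 .. 1.0
toy = [i / 1000.0 - 1.0 for i in range(2001)]
lo, hi = ci95(toy)
```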

Figure 6. Monte Carlo simulation results when the UUV undergoes diving motion. Red dashed curves denote the UUV motion, blue curves denote the nominal position estimations and the gray regions denote the 95% CI. (a) The UUV trajectory in diving motion; (b) x-axis estimations; (c) y-axis estimations; (d) z-axis estimations; (e) yaw estimations; and (f) pitch estimations.

The CI bounds for both the y- and z-axis slightly increase as the UUV approaches the light source: from ±0.18 m at the beginning of the motion to approximately ±0.2 m at the end of the motion for the y-axis, and from ±0.22 m to ±0.28 m for the z-axis. This is likely caused by the propagation of uncertainty in the x-axis, pitch and yaw estimations, which are used to update the y- and z-axis estimations.


3.1.2. Three-Dimensional Zigzag Motion in Clear Waters (KD = 0.09 m−1)

The simulated navigation results of a UUV in 3-D zigzag motion are shown in Figure 7. The final nominal translational estimation errors (i.e., ensemble average of all the estimations for Ns = 2000) for the x-, y- and z-axis are 0.13 m, 0.13 m, and 0.05 m, respectively. The final nominal orientation estimation errors for pitch and yaw are 3° and 4°, respectively. The uncertainty values throughout the motion demonstrate a similar pattern to the diving motion. The estimation uncertainties associated with the x-axis, pitch and yaw motions decrease as the UUV approaches the light source due to increasing SNR. The x-axis uncertainty starts at ±0.83 m at t = 0 s and decreases to ±0.46 m at t = 8 s. The pitch estimation uncertainty drops from ±2° to less than ±1°, whereas the yaw estimation uncertainty drops from ±3° to less than ±1°. The y- and z-axis estimation uncertainties increase from ±0.18 m at t = 0 s to ±0.3 m at t = 8 s for the y-axis, and from ±0.2 m to ±0.28 m for the z-axis. Again, this increase is attributed to the propagation of uncertainty in the x-axis, pitch and yaw estimations, which are used to update the y- and z-axis estimations.

Figure 7. Monte Carlo simulation results for three-dimensional zigzag motion. Red dashed curves denote the UUV motion, blue curves denote the nominal position estimations and the gray regions denote the 95% confidence interval. (a) The UUV trajectory in zigzag motion (3D space); (b) x-axis estimations; (c) y-axis estimations; (d) z-axis estimations; (e) yaw estimations; and (f) pitch estimations.

3.1.3. Three-Dimensional Zigzag Motion in Turbid Waters (KD = 0.2 m−1)

The performance of the pose detection system is also tested in waters with higher turbidity. Monte Carlo simulations are conducted to represent conditions in Portsmouth Harbor, NH, as a potential deployment site (KD = 0.2 m−1) [33,34]. The zigzag motion demonstrated in Section 3.1.2 is used to compare the performance of the system under the same configuration. The simulation results in


turbid water conditions demonstrate that the final nominal errors do not change significantly when compared to simulations conducted in clear waters (KD = 0.09 m−1) (Figure 8). The final nominal estimation errors for the x-, y- and z-axis are 0.26 m, 0.17 m and 0.01 m, respectively. These results indicate a 0.13 m increase in the x-axis, a 0.04 m increase in the y-axis, and a 0.04 m decrease in the z-axis. The final nominal orientation estimation errors for pitch and yaw are 3° and 4°, respectively, and are similar to the simulation results conducted in clear waters. However, higher turbidity results in higher pose detection uncertainties, especially for the y-axis, z-axis, pitch and yaw estimations. The simulation results show that the final pose detection uncertainties are ±0.36 m, ±0.8 m and ±0.86 m for the x-, y- and z-axis, respectively, indicating a 0.1 m decrease in x-axis, a 0.5 m increase in y-axis and a 0.6 m increase in z-axis uncertainty compared to the results obtained in less turbid water conditions. Similarly, the final pitch and yaw detection uncertainties are ±8° and ±9°, respectively, which indicate a 7° increase in pitch and a 6° increase in yaw uncertainty compared to the simulation results obtained in less turbid water conditions. The increase in pose detection uncertainty in turbid waters is attributed to the light attenuation and diffusion of the light beam due to multi-path scattering.

Figure 8. Monte Carlo simulation results for three-dimensional zigzag motion. Red dashed curves denote the UUV motion, blue curves denote the nominal position estimations and the gray regions denote the 95% confidence interval. (a) The UUV trajectory in zigzag motion (3D space); (b) x-axis estimations; (c) y-axis estimations; (d) z-axis estimations; (e) yaw estimations; and (f) pitch estimations.

3.2. Empirical Measurements

The detection performance of the optical detector array system is also evaluated through empirical measurements at the UNH Ocean Engineering facilities. The metal halide light source was used as a guiding beacon and placed on the wall of the wave/tow tank (Figure 9). The detector array is mounted on a tow carriage with the ability to vary translational offsets (i.e., varying x-, y- and z-axis positions to simulate UUV trajectories).


Figure 9. (a) Schematic diagram of the experimental setup in the wave and tow tank; (b) image of the detector array submerged into the water on the moving carriage platform; and (c) image of the light source mounted on the wall of the tank.

Two sets of dynamic wave/tow tank tests are conducted to evaluate the detector array's pose detection capability. In the first test, there is no yz-offset between the light source and the detector array (i.e., the light source and the detector array are positioned at a center-to-center distance from each other) with an 8.5 m x-axis separation (i.e., x = 8.5 m, y = 0 m and z = 0 m). In the second test, the detector array offset relative to the light source is maintained within a maximum offset of x = 8.5 m, y = 0.6 m and z = 0.8 m. In both tests, the detector array initial position is x = 8.5 m away from the light source; it then approaches the light source with a velocity of 0.5 m/s. After the detector array reaches its final position (x = 4.5 m away from the light source), the detector array is kept stationary to observe the detection performance during UUV station keeping.

3.2.1. Test 1 Results
For Test 1, the initial state is such that x = 8.5 m, y = 0 m, and z = 0 m, and the final state is at x = 4.5 m, y = 0 m, and z = 0 m. This first set of empirical results shows that the detector array estimation performance is highly accurate throughout the detector array motion (see Figure 10). At the initial state (x = 8.5 m away from the light source), position estimation errors for all axes are within 0.02 m.


Here, because the estimation results for the y-axis, z-axis and x-axis velocity estimates are noisy, they are smoothed with a moving average window (bin size: 10 samples). The steady state estimation errors are 0.1 m, 0.0 m and 0.08 m for the x-, y- and z-axis, respectively. The x-axis velocity estimations are also within 0.2 m/s throughout the course of the motion, with a final error of 0.07 m/s. Measurement noise observed in the y-axis, z-axis and forward velocity estimations can be attributed to the combination of hardware noise, mechanical vibrations along the tow carriage platform path, and environmental factors (e.g., non-uniform water conditions). Noise is most prevalent along the z-axis after the detector array has reached the final stationary state. The observed noise can be attributed to the pitching of the detector platform even after the forward motion of the tow carriage is stopped.
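The moving-average smoothing applied to the noisy y-, z- and velocity estimates can be sketched as a trailing window of 10 samples. The paper specifies only the bin size, so the trailing-window choice (rather than a centered window) is an assumption here.

```python
def moving_average(samples, bin_size=10):
    """Trailing moving-average smoother: each output is the mean of the
    last `bin_size` samples seen so far (fewer at the start)."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - bin_size + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

# Example: a step from 1.0 to 2.0 is smeared over the 10-sample window.
smoothed = moving_average([1.0] * 5 + [2.0] * 15, bin_size=10)
```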

Figure 10. Empirical results for position estimations for Test 1: (a) reference and x-axis position estimate; (b) velocity reference and raw and smoothed velocity estimates; (c) raw and smoothed y-axis position estimates; and (d) raw and smoothed z-axis position estimates.

3.2.2. Test 2 Results

For Test 2, the initial state is at x = 8.5 m, y = 0.6 m, and z = 0.8 m, and the final state is at x = 4.5 m, y = 0.6 m, and z = 0.8 m. At the beginning of Test 2, when the platform is stationary at its initial starting position, position estimation errors of 0.5 m, 0.15 m and 0.15 m are observed in the x-, y- and z-directions, respectively (Figure 11). However, as the detector array approaches the light source, these errors reduce and finally reach 0.1 m, 0.01 m and 0.01 m at the final stationary state. This phenomenon is also attributed to an increasing SNR as the UUV approaches the light source. Maximum x- and z-axis estimation errors of 0.8 m and 0.2 m, respectively, are observed when the system transitions from being stationary to becoming dynamic at t = 3 s. The maximum initial y-axis position estimation error is 0.2 m. The detection system is able to track the carriage velocity with a maximum velocity estimation error of 0.14 m/s during the platform motion.


Figure 11. Empirical results for position estimations for Test 2: (a) reference and x-axis position estimate; (b) velocity reference and raw and smoothed velocity estimates; (c) raw and smoothed y-axis position estimates; and (d) raw and smoothed z-axis position estimates.

4. Discussion

The evaluation of the pose detection results showed that the developed system can provide satisfactory pose detection performance. The errors obtained for all directional motions (x-, y- and z-axis; pitch; and yaw) were within the required positioning criteria. Monte Carlo simulation results showed that the pose estimation parameters demonstrated in this study were good indicators of pose. Specifically, the ratios of the image moment matrix indices used for pitch and yaw detection (i.e., M23/M33 for pitch and M32/M33 for yaw) were invariant to changes in offsets in all axial directions (Figure 5). As a result, single models for pitch and yaw estimations were developed that were valid throughout the UUV navigation. Similarly, the logarithmic model based on the photodiode intensity was also a good indicator of x-axis estimates. The pose estimation uncertainties in the x-axis, pitch and yaw directions significantly decreased as the distance between the detector array and the light source decreased. At closer ranges, the pose detection models performed better as a result of the increasing SNR of the image sampled on the detector array. This finding was also verified by the dynamic empirical experiments. The initial estimation errors in the x-, y- and z-axis observed at the beginning of the motion (Δxi = 0.5 m, Δyi = 0.15 m and Δzi = 0.15 m) decreased significantly at the end of the motion (Δxf = 0.1 m, Δyf = 0.01 m and Δzf = 0.01 m). In this study, the positioning requirements were based on hovering and docking applications [7,9,25]. However, it should be mentioned that positioning requirements vary with respect to the docking station size and design configuration. For example, in the cone type docking station used in [9], error tolerances in the y- and z-axis were set as ±1 m and the yaw error tolerance was set to be within ±45°. Another example is the pole type docking station, in which the UUV latches onto a pole within 1 m of the pole [35]. This type of design makes the x-axis positioning requirements more flexible but requires tighter tolerances in orientation.

The main challenge in the pose detection algorithms is to estimate five-DOF motion from a single image. This challenge manifests itself in coplanar motion. For example, even with zero y- or z-axis offsets, yaw angle motion offsets cause erroneous y-axis detection, whereas pitch angle motion offsets


cause erroneous z-axis estimations. Thus, decoupling specific motions is not trivial (i.e., differentiating y-axis displacement from yaw motion and z-axis displacement from pitch motion). To y-axis displacement from yaw andx-axis, z-axis pitch displacement from pitch motion). To in overcome this overcome this challenge, the motion estimated and yaw motions were used 3-D vector challenge, to theupdate estimated and yaw motions used in vector to update the geometry the x-axis, y-axis pitch and z-axis positions aswere described in3-D Section 2. geometry As observed from the y-axis and z-axis positions the as described in Section 2. As observed frompitch the Monte Carlo simulations, the Monte Carlo simulations, propagation of uncertainties in x-axis, and yaw angle estimations propagation uncertainties inand x-axis, pitch and yaw angle the uncertainty in yincreased theof uncertainty in yz-axis estimations. As theestimations increasing increased pose estimation uncertainties and estimations. As the increasing pose positioning estimation uncertainties z-axis may to adversely in y- z-axis and z-axis may adversely affect the UUV performance,inityis and recommended further affect thethe UUV positioning performance, it is recommended to further improve the pose algorithms by improve pose algorithms by fusing the data from other sensors that are located on the UUV (e.g., fusing the data from other sensors that are located onwith the UUV (e.g., Inertial Unit and Inertial Measurement Unit and Doppler Velocity Log) the output from theMeasurement optical detector array. Doppler Log) with the output theCarlo optical detector array. The Velocity empirical measurements and from Monte simulations identified a variety of factors that The empirical measurements and Monte Carlo simulationsinidentified a variety of factors that contribute to the accuracy of the pose detection demonstrated this paper. 
The empirical measurements and Monte Carlo simulations identified a variety of factors that contribute to the accuracy of the pose detection demonstrated in this paper. First, the developed system is a proof-of-concept detector array constructed with cost-efficient commercial off-the-shelf optical components and ADCs. The pose detection performance could be improved by using ADCs with higher resolution (e.g., a 16-bit ADC), which would increase the radiometric resolution sampled on the detector array. In addition, using a green light source at 532 nm and a photodetector with peak sensitivity at approximately 532 nm could increase both the effective range and the pose detection performance. Second, water clarity plays a critical role in the pose detection performance. The study results showed the detector array performance in the UNH wave/tow tank conditions with an average K_D = 0.09 m−1. By using the developed simulator, it is possible to predict the performance of the detector array for different diffuse attenuation coefficients using Beer’s law [27,36]. The pose detection performance of the optical detector array system was also evaluated by simulating turbid water conditions in Portsmouth Harbor, NH with an average K_D = 0.2 m−1. The results showed that the pose estimation uncertainties significantly increased in turbid waters. In addition to the light energy attenuation, turbid water conditions cause multi-path scattering that diffuses the light beam and results in a uniform distribution over the detector array. The comparison of images obtained in clear and turbid water conditions is demonstrated in Figure 12.
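As a minimal sketch of the Beer's law relationship used by the simulator, the fraction of the source intensity reaching the detector array can be estimated as I(d) = I0 · exp(−K_D · d). The two K_D values below are those reported in this study; the unit source intensity and the 10 m path length are assumptions for illustration only.

```python
import math

def received_intensity(i0, kd, distance_m):
    """Beer's law: I(d) = I0 * exp(-K_D * d)."""
    return i0 * math.exp(-kd * distance_m)

# K_D values from the study; unit source intensity and a 10 m path are assumed
for kd, label in [(0.09, "clear (UNH wave/tow tank)"), (0.2, "turbid (Portsmouth Harbor)")]:
    frac = received_intensity(1.0, kd, 10.0)
    print(f"{label}: {frac:.2f} of the source intensity remains after 10 m")
```

The exponential form makes the effect of turbidity explicit: roughly doubling K_D from 0.09 to 0.2 m−1 reduces the received intensity at 10 m from about 41% to about 14% of the source.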


Figure 12. The sampled light field in the simulator in two different water clarity conditions at the same distance to the light source: (a) image sampled in relatively clear waters with K_D = 0.09 m−1; and (b) image sampled in turbid waters with K_D = 0.2 m−1.

Sensors 2017, 17, 1741

In Figure 12a, in clear waters, the Gaussian light field can be observed on the detector array. As the turbidity increases, the light beam is attenuated and diffused. Thus, the sampled light field on the detector array in turbid waters deviates from Gaussian beam characteristics (Figure 12b). As a result, the position detection algorithms developed under the Gaussian beam assumption result in significantly higher pose estimation uncertainty in turbid waters than in clear waters (Section 3.1.3). In addition, in field experiments, the Beer’s law assumption may not be sufficient to model the underwater light field conditions [37]. These results demonstrate the importance of conducting pose calibration of the proposed system in field experiments for enhanced performance.

Other potential factors that affected the performance of the pose detection system include additional environmental factors, such as inhomogeneous water clarity, currents and biofouling. Sediment plumes occurring in the water can create water clarity inhomogeneity and therefore have the potential to significantly reduce the visibility conditions in the water. Water waves and currents also play a detrimental role during UUV positioning, e.g., docking. These forces cause both platforms (i.e., the light source and the UUV) to drift out of position. To minimize these effects, the light beacon should be mechanically designed to withstand these forces. On the UUV, the currents can be measured onboard with a velocity sensor [9]. These measurements can then be used to compensate for the current-induced disturbance forces in the UUV control system (e.g., via automatic feedforward control). Copper components should be included in the design of the detector array and light source to prevent biofouling from causing issues during long underwater deployments [12].

5. Conclusions

The goal of this study was to evaluate the performance of a proposed optical detector array unit for relative positioning of a UUV with respect to a light source. Monte Carlo simulations and empirical measurement results with a 5 × 5 photodetector array showed that the maximum pose estimation errors were within 0.13 m, 0.13 m, and 0.18 m for the x-, y- and z-directions, respectively; within 5° for pitch and yaw; and within 0.2 m/s for x-axis velocity estimations. The TPU of the pose detection system was also evaluated by taking into account the variation in hardware and environmental parameters that could degrade the system performance.
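A simplified sketch of such a Monte Carlo noise evaluation is given below: a Gaussian light field is sampled on a 5 × 5 array, photodiode noise is injected, and the spread of the resulting centroid-based position estimates is reported as a 2σ value. The Gaussian beam footprint, the 0.1 m detector pitch, the beam width, and the trial count are assumptions for illustration only; this is not the study's actual TPU simulator, and only the 1% noise level comes from the study.

```python
import math
import random

# 5 x 5 detector positions on an assumed 0.1 m pitch, centered on the array
GRID = [(i - 2) * 0.1 for i in range(5)]

def sample_array(cy, cz, sigma=0.15, noise=0.01):
    """Sample a Gaussian light field centered at (cy, cz) on the detector
    array, adding Gaussian noise scaled to the peak (unit) intensity."""
    return [[math.exp(-((y - cy) ** 2 + (z - cz) ** 2) / (2 * sigma ** 2))
             + random.gauss(0.0, noise) for z in GRID] for y in GRID]

def centroid(img):
    """Intensity-weighted centroid: the y/z position estimate."""
    total = sum(sum(row) for row in img)
    cy = sum(GRID[i] * sum(img[i]) for i in range(5)) / total
    cz = sum(GRID[j] * sum(img[i][j] for i in range(5)) for j in range(5)) / total
    return cy, cz

random.seed(0)
trials = [centroid(sample_array(0.0, 0.0)) for _ in range(2000)]
sigma_y = (sum(cy ** 2 for cy, _ in trials) / len(trials)) ** 0.5
print(f"2-sigma y-position uncertainty at 1% noise: +/-{2 * sigma_y:.4f} m")
```

Because the geometry here is in local array coordinates rather than the real source-to-vehicle range, the numerical spread is not comparable to the uncertainties reported by the study; the sketch only shows how a TPU-style estimate is assembled from repeated noisy samples.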
The Monte Carlo simulation results showed that the pose estimation uncertainty (2σ) of the detector array at noise levels of 1% of the maximum photodiode intensity was less than ±0.46 m in the x-direction, ±0.3 m in the y- and z-directions, and less than ±1° for pitch and yaw motion. These results support the potential use of a detector array for UUV positioning applications in acoustically noisy environments, such as Portsmouth Harbor, NH.

Acknowledgments: This study was funded by UNH/NOAA Joint Hydrographic Center grant NA15NOS4000200, the LINK Foundation Ocean Engineering and Instrumentation fellowship, the Naval Engineering Education Center (NEEC), Naval Sea Systems Command (NAVSEA) and a UNH Dissertation Year fellowship. The authors would also like to thank Martin Renken for technical discussions, Paul Lavoie for his contributions to the design of the optical detector array, and Matt Birkebak and Timothy Brown for their assistance in the experimental work.

Author Contributions: Firat Eren designed and performed the experiments, analyzed the data, and wrote the manuscript; Shachak Pe’eri contributed to designing the experiments, materials and analysis tools and edited the manuscript; May-Win Thein and Yuri Rzhanov contributed to the materials and analysis tools, and edited the manuscript; and Barbaros Celikkol and M. Robinson Swift contributed to editing the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Willcox, J.S.; Bellingham, J.G.; Zhang, Y.; Baggeroer, A.B. Performance metrics for oceanographic surveys with autonomous underwater vehicles. IEEE J. Ocean. Eng. 2001, 26, 711–725. [CrossRef]
2. Eustice, R.; Camilli, R.; Singh, H. Towards Bathymetry-Optimized Doppler Re-Navigation for AUVs. In Proceedings of the Oceans 2005 MTS/IEEE, Washington, DC, USA, 17–23 September 2005; pp. 1–7.
3. Cruz, N.A.; Matos, A.C. The MARES AUV, A Modular Autonomous Robot for Environment Sampling. In Proceedings of the Oceans 2008, Quebec City, QC, Canada, 15–18 September 2008; pp. 1–6. [CrossRef]
4. Sakai, H.; Tanaka, T.; Mori, T.; Ohata, S.; Ishii, K.; Ura, T. Underwater Video Mosaicing Using AUV and Its Application to Vehicle Navigation. In Proceedings of the 2004 International Symposium on Underwater Technology, Taipei, Taiwan, 20–23 April 2004; pp. 405–410. [CrossRef]
5. Kong, J.; Cui, J.H.; Wu, D.; Gerla, M. Building Underwater Ad-Hoc Networks and Sensor Networks for Large Scale Real-Time Aquatic Applications. In Proceedings of the 2005 MILCOM IEEE Military Communications Conference, Atlantic City, NJ, USA, 17–20 October 2005. [CrossRef]
6. Vasilescu, I.; Detweiler, C.; Doniec, M.; Gurdan, D.; Sosnowski, S.; Stumpf, J.; Rus, D. AMOUR V: A Hovering Energy Efficient Underwater Robot Capable of Dynamic Payloads. Int. J. Robot. Res. 2010, 29, 547–570. [CrossRef]
7. Dunbabin, M.; Corke, P.; Vasilescu, I.; Rus, D. Data Muling over Underwater Wireless Sensor Networks Using an Autonomous Underwater Vehicle. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation ICRA, Orlando, FL, USA, 15–19 May 2006; pp. 2091–2098. [CrossRef]
8. Doniec, M.; Topor, I.; Chitre, M.; Rus, D. Autonomous, Localization-Free Underwater Data Muling Using Acoustic and Optical Communication. Exp. Robot. 2013, 88, 841–857.
9. Teo, K.; An, E.; Beaujean, P.P.J. A robust fuzzy autonomous underwater vehicle (AUV) docking approach for unknown current disturbances. IEEE J. Ocean. Eng. 2012, 37, 143–155. [CrossRef]
10. Macias, E.; Suarez, A.; Chiti, F.; Sacco, A.; Fantacci, R. A hierarchical communication architecture for oceanic surveillance applications. Sensors 2011, 11, 11343–11356. [CrossRef] [PubMed]
11. Allen, B.; Austin, T.; Forrester, N.; Goldsborough, R.; Kukulya, A.; Packard, G.; Purcell, M.; Stokey, R. Autonomous Docking Demonstrations with Enhanced REMUS Technology. In Proceedings of the 2006 Oceans MTS/IEEE Conference, Boston, MA, USA, 18–21 September 2006; pp. 1–6. [CrossRef]
12. Hobson, B.W.; McEwen, R.S.; Erickson, J.; Hoover, T.; McBride, L.; Shane, F.; Bellingham, J.G. The Development and Ocean Testing of an AUV Docking Station for a 21" AUV. In Proceedings of the 2007 Oceans, Vancouver, BC, Canada, 29 September–4 October 2007; Volume 1–5, pp. 1357–1362.
13. Evans, C.; Keller, M.; Smith, J.S.; Marty, P.; Rigaud, V. Docking Techniques and Evaluation Trials of the SWIMMER AUV: An Autonomous Deployment AUV for Workclass ROVs. In Proceedings of the 2001 MTS/IEEE Conference and Exhibition OCEANS, Honolulu, HI, USA, 5–8 November 2001; pp. 520–528. [CrossRef]
14. Sendra, S.; Lloret, J.; Jimenez, J.M.; Rodrigues, J.J.P.C. Underwater communications for video surveillance systems at 2.4 GHz. Sensors 2016, 16, 1769. [CrossRef] [PubMed]
15. Fukasawa, T.; Noguchi, T.; Kawasaki, T.; Baino, M. Marine Bird, a New Experimental AUV with Underwater Docking and Recharging System. In Proceedings of the 2003 Oceans, San Diego, CA, USA, 22–26 September 2003; Volume 4, pp. 2195–2200.
16. Evans, J.; Redmond, P.; Plakas, C.; Hamilton, K.; Lane, D. Autonomous docking for Intervention-AUVs using sonar and video-based real-time 3D pose estimation. In Proceedings of the 2003 IEEE Oceans, San Diego, CA, USA, 22–26 September 2003; Volume 4, pp. 2201–2210. [CrossRef]
17. Kondo, H.; Okayama, K.; Choi, J.-K.; Hotta, T.; Kondo, M.; Okazaki, T.; Singh, H.; Chao, Z.; Nitadori, K.; Igarashi, M.; et al. Passive acoustic and optical guidance for underwater vehicles. In Proceedings of the IEEE Oceans 2012—Yeosu, Yeosu, Korea, 21–24 May 2012; pp. 1–6. [CrossRef]
18. Krupinski, S.; Maurelli, F.; Grenon, G.; Petillot, Y. Investigation of autonomous docking strategies for robotic operation on intervention panels. In Proceedings of the IEEE Oceans 2008, Quebec City, QC, Canada, 15–18 September 2008; Volume 1–4, pp. 1301–1310. [CrossRef]
19. Li, D.; Chen, Y.; Shi, J.; Yang, C. Autonomous underwater vehicle docking system for cabled ocean observatory network. Ocean Eng. 2015, 109, 127–134. [CrossRef]
20. Bonin-Font, F.; Oliver, G.; Wirth, S.; Massot, M.; Lluis Negre, P.; Beltran, J.-P. Visual sensing for autonomous underwater exploration and intervention tasks. Ocean Eng. 2015, 93, 25–44. [CrossRef]
21. Park, J.-Y.; Jun, B.; Lee, P.; Oh, J. Experiments on vision guided docking of an autonomous underwater vehicle using one camera. Ocean Eng. 2009, 36, 48–61. [CrossRef]
22. Lee, D.; Kim, G.; Kim, D.; Myung, H.; Choi, H.-T. Vision-based object detection and tracking for autonomous navigation of underwater robots. Ocean Eng. 2012, 48, 59–68. [CrossRef]
23. Simpson, J.A.; Hughes, B.L.; Muth, J.F. Smart Transmitters and Receivers for Underwater Free-Space Optical Communication. IEEE J. Sel. Areas Commun. 2012, 30, 964–974. [CrossRef]
24. Celikkol, B.; Eren, F.; Pe’eri, S.; Rzhanov, Y.; Swift, M.R.; Thein, M.W. Optical Based Pose Detection for Multiple Unmanned Underwater Vehicles. U.S. Patent 14/680,447, 4 July 2015.
25. Teo, K.; Goh, B.; Chai, O.K. Fuzzy Docking Guidance Using Augmented Navigation System on an AUV. IEEE J. Ocean. Eng. 2015, 40, 349–361. [CrossRef]
26. Conrado de Souza, E.; Maruyama, N. Intelligent UUVs: Some issues on ROV dynamic positioning. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 214–226. [CrossRef]
27. Eren, F.; Pe’eri, S.; Thein, M.-W. Characterization of optical communication in a leader-follower unmanned underwater vehicle formation. In Proceedings of the SPIE Defense, Security, and Sensing, Baltimore, MD, USA, 29 April–3 May 2013; p. 872406. [CrossRef]
28. Eren, F.; Pe’eri, S.; Rzhanov, Y.; Thein, M.; Celikkol, B. Optical detector array design for navigational feedback between unmanned underwater vehicles (UUVs). IEEE J. Ocean. Eng. 2016, 41, 18–26. [CrossRef]
29. Eren, F.; Thein, M.; Pe’eri, S.; Rzhanov, Y.; Celikkol, B.; Swift, R. Pose detection and control of multiple unmanned underwater vehicles (UUVs) using optical feedback. In Proceedings of the OCEANS 2014—TAIPEI, Taipei, Taiwan, 7–10 April 2014; pp. 1–6. [CrossRef]
30. Birkebak, M. Airborne Lidar Bathymetry Beam Diagnostics Using an Underwater Optical Detector Array. Master’s Thesis, University of New Hampshire, Durham, NH, USA, May 2017.
31. Gonzalez, R.; Woods, R. Digital Image Processing; Prentice Hall: Upper Saddle River, NJ, USA, 2002; Volume 190.
32. Rzhanov, Y.; Eren, F.; Thein, M.-W.; Pe’eri, S. An image processing approach for determining the relative pose of unmanned underwater vehicles. In Proceedings of the OCEANS 2014—TAIPEI, Taipei, Taiwan, 7–10 April 2014; pp. 1–4. [CrossRef]
33. Pe’eri, S.; Parrish, C.; Azuike, C.; Alexander, L.; Armstrong, A. Satellite Remote Sensing as a Reconnaissance Tool for Assessing Nautical Chart Adequacy and Completeness. Mar. Geod. 2014, 37, 293–314. [CrossRef]
34. NOAA Office of Coastal Survey. Portsmouth Harbor 1983. Available online: http://www.charts.noaa.gov/PDFs/13283.pdf (accessed on 2 December 2015).
35. Singh, H.; Bellingham, J.G.; Hover, F.; Lerner, S.; Moran, B.A.; Von der Heydt, K.; Yoerger, D. Docking for an autonomous ocean sampling network. IEEE J. Ocean. Eng. 2001, 26, 498–514. [CrossRef]
36. Pe’eri, S.; Shwaery, G. Light field and water clarity simulation of natural environments in laboratory conditions. In Proceedings of the SPIE Defense, Security, and Sensing, Baltimore, MD, USA, 23–27 April 2012; p. 83721A. [CrossRef]
37. Cox, W.; Muth, J.F. Simulating channel losses in an underwater optical communication system. J. Opt. Soc. Am. 2014, 31, 920–934. [CrossRef] [PubMed]

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
