Article

Three Landmark Optimization Strategies for Mobile Robot Visual Homing

Xun Ji *, Qidan Zhu, Junda Ma, Peng Lu and Tianhao Yan

College of Automation, Harbin Engineering University, Harbin 150001, China; [email protected] (Q.Z.); [email protected] (J.M.); [email protected] (P.L.); [email protected] (T.Y.)
* Correspondence: [email protected]; Tel.: +86-451-8251-9406

Received: 18 August 2018; Accepted: 17 September 2018; Published: 20 September 2018
Abstract: Visual homing is an attractive autonomous mobile robot navigation technique, which uses only vision sensors to guide the robot to a specified target location. Landmarks, usually represented by scale-invariant features, are the only input of visual homing approaches. However, the landmark distribution has a great impact on the homing performance of the robot, as irregularly distributed landmarks significantly reduce navigation precision. In this paper, we propose three strategies to solve this problem. We use scale-invariant feature transform (SIFT) features as natural landmarks, and the proposed strategies can optimize the landmark distribution without over-eliminating landmarks or increasing the amount of calculation. Experiments on both panoramic image databases and a real mobile robot have verified the effectiveness and feasibility of the proposed strategies.

Keywords: mobile robot navigation; visual homing; panoramic vision; scale-invariant feature transform
1. Introduction

In the robotics community, the topic of autonomous mobile robot navigation has been widely discussed for decades, and various navigation schemes based on different types of sensors have been proven to be effective and employed in practical applications [1–5]. Among the numerous vision-based autonomous navigation solutions, visual homing has drawn extensive attention for its simple model and high navigation accuracy. Using visual homing approaches, the robot can be guided to the specified destination only by comparing its currently captured panoramic image with the pre-stored target image [6–9].

Visual homing is inspired by biological navigation and can be considered a kind of qualitative autonomous navigation technique [10,11]. In contrast to the most popular visual SLAM technology [12–15], visual homing does not require any localization or mapping. Instead, it only needs to continuously calculate the direction vector, known as the homing vector, pointing from the current location to the destination. In summary, the robot can achieve the homing task by solely repeating the following three steps: (1) relating the current panoramic image to the target panoramic image; (2) computing the homing vector; and (3) moving based on the calculated homing vector [16–18].

Visual homing requires only one type of input, called landmarks. In the early stage, researchers often adopted distinctive objects (such as black or brightly colored cylinders) in the environment as artificial landmarks [19,20]. However, since identifying such landmarks requires manual intervention, this would inevitably limit the application scope, and natural landmarks thus gradually became a better choice. Some homing approaches adopted pixels at fixed locations in the panoramic images as natural landmarks [21,22].
Sensors 2018, 18, 3180; doi:10.3390/s18103180; www.mdpi.com/journal/sensors

This solution proved to be feasible, but an unsolved problem remained: the homing performance would drop sharply when the illumination changed
during the movement of the mobile robot. With the development of computer vision technology, image feature extraction algorithms were proposed, such as SIFT [23] and speeded-up robust features (SURF) [24]. The SIFT and SURF features have two outstanding advantages. First, these features are highly distinctive and can be correctly matched with high probability. Second, these features exhibit excellent invariance to affine distortion, addition of noise and change in illumination [25]. Currently, numerous visual homing approaches use SIFT or SURF features as natural landmarks to complete navigation tasks in practical applications [26–28].

Aiming at guiding the robot to the specified destination accurately, different types of visual homing algorithms have been proposed. Among them, warping and the average landmark vector (ALV) are the two most representative homing approaches. Both warping and ALV have been verified to exhibit good homing performance and robustness. These two homing approaches are widely used in practical applications, and many advanced algorithms based on them have been presented.

The warping methods distort either of the panoramic images (usually the target image) based on parameters describing the distance, direction and rotation of the mobile robot's movement. Multiple parameter combinations produce different forms of the distorted images, and the warping methods compare all the distorted images with the current image. The optimal parameters are determined when the distorted image best fits the current image, and the homing vector can then be generated [21]. Based on the original warping method, many advanced warping algorithms have been proposed to further optimize the homing performance. Möller et al. proposed the 2D-warping and min-warping methods, which estimate the alignment angle from the environment itself instead of relying on the external compass required by the original 1D-warping [29].
In addition, a SIMD implementation based on min-warping was adopted so that the tolerance and efficiency could be further enhanced [30,31]. Zhu et al. used SIFT features as natural landmarks instead of fixed pixels in the image so that the robustness to illumination could be improved [27]. Recently, several actual experiments were performed to compare these advanced warping methods with other homing approaches in many aspects, such as illumination tolerance and computation speed [32–34].

ALV is considered one of the conceptually simplest homing approaches, and it is often used as a benchmark in the field of visual homing. The ALV is defined as the average bearing of all the extracted landmarks with respect to a certain location, and the homing vector can be calculated as the difference between the ALV at the current location and the ALV at the target location [20,26]. Thus far, ALV has also been enriched to further optimize the homing performance. Zhu et al. presented an optimal ALV model based on sparse landmarks [35]. Yu et al. analyzed the effect of landmark vectors and proposed an improved ALV algorithm, called Distance-Estimated Landmark Vector (DELV) [36]. Lee et al. adopted omnidirectional depth information to further enrich DELV [37]. All these advanced homing algorithms have been verified to be effective and feasible, and, under the guidance of these algorithms, the mobile robot can move towards the target location with a better trajectory.

As a local navigation strategy, although visual homing exhibits high navigation accuracy in a small-scale environment, it is constrained by the landmark distribution. Most visual homing approaches should satisfy, as far as possible, the equal distance assumption, which describes an ideal landmark distribution state.
Due to the lack of depth sensors in traditional visual homing approaches, the distance information between key locations and landmarks is difficult to obtain, so the equal distance assumption imposes the constraint that the landmarks should be uniformly located at the same distance from the target location [28,29]. Although this assumption has been verified to be reasonable [36], it is almost always violated in practical applications, and irregularly distributed landmarks have a negative impact on the performance of visual homing.

The landmark distribution is thus one of the key factors affecting the homing performance. However, research on the optimization of the landmark distribution is very limited. Zhu et al. presented two solutions: the first is to establish a task-dependent optimization procedure to distinguish the most relevant landmarks, including the spatial distribution, feature selection and updating [38,39].
The second solution is to optimize the landmark distribution by removing some of the landmarks and preserving only the landmarks that are uniformly distributed [35]. In addition, since the DELV algorithm adopts the depth sensor, it is considered to be able to overcome the constraints of the equal distance assumption so that the possible effects can be reduced [37]. The above three methods can effectively improve the homing accuracy of the robot, but these methods still have some limitations. The first two methods achieve distribution optimization at the expense of discarding landmarks; if the current location is far from the target location, the number of landmarks that can be used as input is too small, thus the probability of potential errors may be increased. Besides, although DELV can exhibit good homing performance, the use of depth sensors will inevitably increase computational complexity and experimental costs.

To avoid the above problems, this paper presents three landmark optimization strategies for mobile robot visual homing. We adopt SIFT features as natural landmarks, and the proposed strategies have two purposes.
The first purpose is to eliminate mismatching landmarks (Strategy 1). The second purpose is to optimize the landmark distribution by the idea of weight assignment (Strategies 2 and 3). The proposed strategies evaluate the value of each landmark, and, for landmarks that are judged to be of low quality, the strategies will assign them lower weights, weakening their contribution to the homing approaches. These strategies do not require any excessive landmark elimination or external sensors, and the effect of the optimal landmark distribution can be achieved by setting a reasonable weight for each landmark. In this paper, we combine our proposed strategies with the ALV model to test their effectiveness.

The paper is organized as follows: In Section 2, we introduce the ALV algorithm in detail. We then present the three landmark optimization strategies in Section 3. In Section 4, we perform a series of experiments on both image databases and a real mobile robot along with the related
analysis. In Section 5, we discuss the homing performance of the algorithms. In Section 6, we draw conclusions and point out the future work.

2. Average Landmark Vector

The ALV model is described in Figure 1. C is the current location of the robot. T is the target location. L_i is the ith landmark, where i = 1, 2, . . . , n. h is the desired homing vector pointing from C towards T.
Figure 1. The ALV model. The blue square represents the current location. The red square represents the target location. The black circles represent the extracted landmarks. The blue and red arrows stand for the corresponding landmark vectors. The desired homing vector is denoted as the black arrow.
For the current image captured at C, according to the imaging principle of panoramic vision, it can be indicated that the projection point of C will be in the center of the panoramic image. We define C as the origin and create the image coordinate system. Assuming that the image coordinate of L_i is P_i^C = (x_i, y_i), the corresponding landmark vector of L_i can thus be calculated by:
→CL_i = (P_i^C − P_0^C) / ‖P_i^C − P_0^C‖    (1)
where P_0^C = (0, 0) is the image coordinate of C. For all the landmarks observed at C, the average landmark vector can be computed as follows:
→ALV_C = (1/n) Σ_{i=1}^{n} →CL_i    (2)
Similarly, for the target image captured at T, the landmark vector of L_i can be calculated by:

→TL_i = (P_i^T − P_0^T) / ‖P_i^T − P_0^T‖    (3)
where P_i^T is the coordinate of L_i in the target image, and P_0^T = (0, 0) is the coordinate of T. For all the landmarks observed at T, the average landmark vector can be computed as follows:
→ALV_T = (1/n) Σ_{i=1}^{n} →TL_i    (4)
Finally, the unit homing vector can be obtained by subtracting the average landmark vectors at C and T:

h = (→ALV_C − →ALV_T) / ‖→ALV_C − →ALV_T‖    (5)

ALV is a conceptually simple visual homing approach with an easy-to-understand model, and the desired homing vector can be generated only by vector calculation. However, ALV is greatly affected by the landmark distribution. Since all the landmark vectors are set to unit vectors in the original ALV algorithm, the calculated homing vector will be close to the ideal homing vector only when the distance from the target location to each landmark is approximately equal.

To solve the landmark distribution problem of ALV, an improved algorithm based on sparse landmarks was proposed, abbreviated as SL-ALV in this paper [35]. We introduce SL-ALV in detail because we use it as a comparative method against our proposed strategies. SL-ALV optimizes the landmark distribution inspired by the imaging principle of the panoramic vision system, which is shown in Figure 2. The panoramic vision system contains a hyperbolic mirror, which reflects all the incident lights onto the imaging plane. F1 and F2 are the two foci of the system. According to the imaging principle, all extensions of the incident lights will converge on F1, and all reflected lights will pass through F2. A crucial phenomenon can be stated: for landmarks at the same height as F1 in the panoramic vision system, since the angles between all the corresponding incident lights and the hyperbolic mirror are constant, the angles between the reflected lights and the imaging plane will also remain the same. Hence, the projection pixels of these landmarks must lie on a fixed circle of the image, called the horizon circle. Wherever the system moves, these projection pixels will not leave the horizon circle.
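As an illustration, the ALV scheme of Equations (1)–(5) can be sketched in a few lines of Python. This is a minimal sketch of our own, not the authors' implementation; it assumes the landmark image coordinates are already matched between the two views and expressed relative to the image centers:

```python
import numpy as np

def unit(v):
    """Normalize a vector, or each row of a matrix, to unit length."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def alv(coords):
    """Average landmark vector at one location (Equations (1)-(4)).

    coords: (n, 2) array of landmark image coordinates P_i, relative to
    the projection of the capture location at the image center (0, 0).
    """
    return unit(np.asarray(coords, dtype=float)).mean(axis=0)

def homing_vector(coords_current, coords_target):
    """Unit homing vector h of Equation (5)."""
    return unit(alv(coords_current) - alv(coords_target))
```

Because every landmark vector is normalized before averaging, only the bearings of the landmarks matter; this is exactly why the result degrades when the equal distance assumption is violated.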
Figure 2. Simplified model of panoramic vision system and imaging plane: (a) simplified model of panoramic vision system including hyperbolic mirror and image plane; and (b) imaging plane. The red circle is the horizon circle. In the SL-ALV algorithm, the landmarks located in the blue shaded area are selected as the input, while the landmarks located in the grey area are removed. The white parts denote the invalid areas in the panoramic image.
Inspired by the above phenomenon, SL-ALV considers the projection pixels (i.e., landmarks) located near the horizon circle to be relatively more valuable. On the one hand, the horizon circle can provide a reference for the optimal landmark distribution, as the projection pixels near the horizon circle lie at substantially the same image distance from the center of the image. On the other hand, when the panoramic vision system moves, the projection pixels near the horizon circle will not have a large shift in image position, so that the stability of the landmark distribution can be better ensured. In the SL-ALV algorithm, the horizon circle is properly expanded to form a ring area (the width of the ring area is usually set to half the width of the valid area, shown as Figure 2b), and all the landmarks located outside the ring area will be discarded. Therefore, the remaining landmarks can better satisfy the equal distance assumption, and SL-ALV can thus improve the homing accuracy compared to the original ALV algorithm. However, SL-ALV inevitably reduces the total number of landmarks, which may lead to a potential problem: small sample sizes will cause unpredictable errors. This problem is particularly evident when the current location is far from the target location, because the similarity of the two panoramic images is low in this case, and this will result in a very limited number of extracted landmarks.

3. Landmark Optimization Strategies

In this section, we introduce the three landmark optimization strategies in detail, including mismatching landmark elimination, landmark contribution evaluation and multi-level matching. We use SIFT features as natural landmarks, and the optimized landmarks have higher accuracy and a more reasonable distribution.

3.1. Strategy 1: Mismatching Landmark Elimination

Strategy 1 is presented to improve the overall accuracy of the extracted landmarks. Based on the SIFT scale principle and the panoramic imaging principle, Strategy 1 can effectively detect and eliminate potential mismatching landmarks to further optimize the performance of the visual homing approaches.
3.1.1. SIFT Scale Principle
As mentioned in [28], a noteworthy conclusion is drawn that the SIFT scale value is negatively related to the spatial distance from the landmark to the viewer. That is to say, if the viewer is closer to a certain landmark, the SIFT feature which refers to this landmark will have a larger scale value. This is because the scale indicates the degree of Gaussian blurring required to reveal the distinctive characteristic of the feature. When the landmark is approached, its corresponding SIFT feature needs to be detected with a higher degree of Gaussian blurring. Therefore, for the same SIFT feature matching pair, we can qualitatively estimate the distance from the landmark to the two capture positions by comparing the scale values of the two SIFT features.

We use Lowe's SIFT implementation (available from: http://www.cs.ubc.ca/~lowe/keypoints/), and the ith SIFT feature can be described as follows:

f_i = (f_i^x, f_i^y, f_i^o, f_i^σ, f_i^d)    (6)

where f_i^x and f_i^y are the image coordinates of f_i; f_i^o is the orientation; f_i^σ is the scale; and f_i^d is the descriptor.

Figure 3 shows the key locations in the simulated scene. Assuming that the viewer captures the panoramic images I_A and I_B at positions A and B, respectively, when the two images are matched, the kth SIFT feature matching pair can be described as follows:

p_k = (f_i, f_j)    (7)

where f_i is the ith SIFT feature in I_A, and f_j is the jth SIFT feature in I_B.
Figure 3. Key locations in the simulated scene. A and B represent the two capture positions of the viewer, L_k is the kth landmark, and d_A and d_B separately denote the distance from L_k to A and B.
We define ∆σ as the difference between the scale values of f_i and f_j:

∆σ = f_i^σ − f_j^σ    (8)

Therefore, d_A and d_B can be qualitatively compared based on ∆σ:

d_A > d_B, if ∆σ < 0
d_A < d_B, if ∆σ > 0    (9)
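In code, Equations (6), (8) and (9) amount to a small record type plus a sign test on the scale difference. The sketch below uses names of our own (`SiftFeature`, `scale_relation`); only the scale field matters here:

```python
from typing import NamedTuple, Tuple

class SiftFeature(NamedTuple):
    """One SIFT feature f = (x, y, orientation, scale, descriptor), Equation (6)."""
    x: float
    y: float
    orientation: float
    scale: float
    descriptor: Tuple[float, ...]  # 128-dimensional in Lowe's implementation

def scale_relation(f_i: SiftFeature, f_j: SiftFeature) -> str:
    """Compare d_A and d_B from the scale difference (Equations (8)-(9)).

    f_i is observed at position A, f_j at position B; a larger scale
    means the viewer was closer to the landmark.
    """
    delta_sigma = f_i.scale - f_j.scale  # Equation (8)
    if delta_sigma < 0:
        return "dA > dB"   # B is closer to the landmark
    if delta_sigma > 0:
        return "dA < dB"   # A is closer to the landmark
    return "undecided"
```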
3.1.2. Panoramic Imaging Principle

Generally, almost all visual homing approaches select panoramic images as input because a panoramic vision system can capture the entire 360-degree information in the azimuthal direction and close to 180-degree information in the unilateral vertical direction [40]. Compared to other types of images, panoramic images always contain more environmental information in space. Figure 4 shows the schematic diagram of the panoramic imaging principle, which can be summarized as follows:
(1) If the landmarks with the same vertical height are located above F1, their projection points will always lie outside the horizon circle, shown as (A, A’) and (B, B’). The distance between the projection point and the image center will decrease as the panoramic vision system moves away from the landmark.
(2) If the landmarks with the same vertical height are located below F1, their projection points will always lie inside the horizon circle, shown as (C, C’) and (D, D’). The distance between the projection point and the image center will decrease as the panoramic vision system moves towards the landmark.
(3) If the landmarks are located at the same height as F1, their projection points will always be on the horizon circle, shown as (E, E’) and (F, F’).
Figure 4. Schematic diagram of panoramic imaging principle. (a) Panoramic vision system: F1 and F2 are the foci of the panoramic vision system. A–F are six landmarks in the real scene. The vertical heights of A and B, C and D, and E and F are, respectively, the same, where A and B are above F1, C and D are below F1, and E and F are at the same height as F1. A’–F’ are the corresponding projection points on the imaging plane. (b) Top view of the imaging plane: the dotted circle is denoted as the horizon circle.
In conclusion, based on the panoramic imaging principle, we can take advantage of the pixel positions in the panoramic images to qualitatively determine the distance relation from the system to the landmarks. Assuming that the actual distances from the system to the six landmarks mentioned in Figure 4 are d_A, d_B, d_C, d_D, d_E and d_F, and the corresponding pixel distances from the image center to the projection points are d_A′, d_B′, d_C′, d_D′, d_E′ and d_F′, the panoramic imaging principle can thus be formulated as follows:

d_A < d_B, if d_A′ > d_B′ > r
d_C < d_D, if d_C′ < d_D′ < r
d_E = d_F, if d_E′ = d_F′ = r    (10)

where r is the radius of the horizon circle.

3.1.3. Mismatching Landmark Elimination

Inspired by the above analysis, two principles can be used to obtain the relative distance relation between a selected landmark and the capture positions: the SIFT scale principle and the panoramic imaging principle. Therefore, for each SIFT feature matching pair, if the conclusions drawn by the two principles are conflicting, it can be inferred that at least one conclusion is mistaken. In this case, Strategy 1 considers the currently verified SIFT feature matching pair unreliable, and it needs to be discarded. The pseudo code of Strategy 1 is shown in Algorithm 1.
Algorithm 1. Pseudo code of Strategy 1.
1: Capture the panoramic images at two different positions
2: Execute the SIFT detection and matching algorithm
3: for (each SIFT feature matching pair) do
4:   Determine the conclusion from the SIFT scale principle based on Equation (9)
5:   Determine the conclusion from the panoramic imaging principle based on Equation (10)
6:   Verify whether the above two conclusions are conflicting
7:   if TRUE then
8:     Discard the currently verified SIFT feature matching pair
9:   else
10:    Reserve the currently verified SIFT feature matching pair
11:  end if
12: end for
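Algorithm 1 can be implemented directly from Equations (9) and (10). The sketch below is our own illustration with hypothetical names: each match is assumed to carry the feature's image coordinates relative to the image center and its SIFT scale, `r` is the horizon-circle radius, and matches whose projections straddle the horizon circle are kept, since Equation (10) then gives no conclusion:

```python
import math

def strategy1_filter(pairs, r, sigma_t=0.5):
    """Discard SIFT matches whose two distance conclusions conflict.

    pairs: list of ((x_a, y_a, scale_a), (x_b, y_b, scale_b)) matches
    between the images captured at A and B, with coordinates relative to
    the image centers; r: horizon-circle radius; sigma_t: scale-difference
    threshold (matches with a small |delta_sigma| are not verified).
    """
    kept = []
    for pair in pairs:
        (xa, ya, sa), (xb, yb, sb) = pair
        delta_sigma = sa - sb
        if abs(delta_sigma) <= sigma_t:
            kept.append(pair)          # scale difference too small to judge
            continue
        # SIFT scale principle, Equation (9): larger scale = closer viewer.
        scale_a_closer = delta_sigma > 0
        # Panoramic imaging principle, Equation (10).
        da = math.hypot(xa, ya)
        db = math.hypot(xb, yb)
        if da > r and db > r:          # projections outside the horizon circle
            image_a_closer = da > db
        elif da < r and db < r:        # projections inside the horizon circle
            image_a_closer = da < db
        else:
            kept.append(pair)          # straddles the circle: no conclusion
            continue
        if scale_a_closer == image_a_closer:
            kept.append(pair)          # conclusions agree: reserve
        # otherwise the conclusions conflict: discard as a likely mismatch
    return kept
```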
Strategy 1 is theoretically correct and feasible, but the two images are inevitably affected by some uncertainties in practical applications, such as image resolution, ground flatness, human error, etc. If the scale difference of the two SIFT features in the same matching pair is too small, it will be unreliable to use this strategy to determine whether such a match is a mismatch. Hence, a scale difference threshold σ_T can be set, and Strategy 1 will only verify the SIFT matches with |∆σ| > σ_T. Based on our experimental results, the optimal performance can be obtained when we set σ_T = 0.5. However, this value is not constant; σ_T can be modified appropriately based on the actual situation to exhibit the optimal performance of Strategy 1.

3.2. Strategy 2: Landmark Contribution Evaluation

Strategy 2 is applied to optimize the landmark distribution for visual homing. Compared to traditional optimization algorithms, Strategy 2 does not directly discard landmarks, but evaluates their contribution to the final result. For the landmarks with low contribution, Strategy 2 accordingly reduces their weights, thereby optimizing the actual landmark distribution while preserving the existing landmarks.

Strategy 2 adopts a similar idea to SL-ALV. Based on the imaging principle of the panoramic images introduced in Section 2, it can be concluded that the landmarks detected near the horizon circle are more accurate and valuable than those far from the horizon circle. Therefore, Strategy 2 is designed to assign a specific weight to each landmark based on its projected position in the panoramic image. We employ a Gaussian function to evaluate the contribution of the landmarks, and the weight of the ith landmark, w_i, can be calculated as follows:
1 √
σG 2π
exp(−
( d i − r )2 ) 2σG 2
(11)
where σG is the standard deviation; di is the image distance between the ith landmark and the image center; and r is the radius of the horizon circle. In this paper, we set σG = 2.0 according to our own experimental conditions. Similar to the setting of σT in Strategy 1, the value of σG is also optional, and researchers can modify this value based on the actual situation to obtain the optimal performance of Strategy 2.

Figure 5 shows the model of Strategy 2. Landmarks in the darker color area are assigned higher weights, and this type of landmark will play a more important role in the subsequent homing calculations. Strategy 2 effectively evaluates the reliability and stability of the landmarks without discarding any of them, thus avoiding the potential problems that may arise from excessive landmark elimination. Based on Strategy 2, high-quality landmarks will play a key role in the homing approaches, while the effect of low-quality landmarks will be reduced.
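Equation (11) is straightforward to evaluate; a minimal sketch, assuming the image distance di and horizon radius r are given in pixels and σG = 2.0 as above:

```python
import math

def landmark_weight(d_i, r, sigma_G=2.0):
    """Gaussian contribution weight w_i of Equation (11): maximal when the
    landmark lies exactly on the horizon circle (d_i == r) and decaying
    rapidly as its projection moves away from it."""
    coeff = 1.0 / (sigma_G * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((d_i - r) ** 2) / (2.0 * sigma_G ** 2))
```

With σG = 2.0 the weight peaks at 1/(2√(2π)) ≈ 0.199 for d_i = r, and landmarks only a few pixels off the horizon circle are already down-weighted sharply; because only the ratio of weights matters in a weighted average, normalizing them afterwards is optional.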
Sensors 2018, 18, 3180
Figure 5. The model of Strategy 2 and weight assignment: (a) the model of Strategy 2, where the horizon circle is marked as the red part in the figure; and (b) weight assignment.
3.3. Strategy 3: Multi-Level Matching

Strategy 3 is presented according to the key point matching criterion in the SIFT algorithm. By setting multi-level distance ratio thresholds, SIFT features can be extracted in stages, and the features are assigned corresponding weights in different intervals. Finally, the contribution of the features determined to be less accurate is weakened.

3.3.1. Nearest-Neighbor Distance Ratio

Nearest-neighbor distance ratio (NNDR) is a type of key point matching metric for the SIFT algorithm. Since the SIFT matches determined by NNDR have extremely high matching accuracy, NNDR has become the most commonly used key point matching criterion in the fields of object recognition and image registration.

When a SIFT feature is detected, the corresponding descriptors are obtained based on the local image gradients measured at the selected scale in the region around the feature. Each descriptor contains a 128-dimensional feature vector, which describes significant levels of local shape distortion and change in illumination [23]. NNDR is applied to find the best candidate match between two images: if the nearest neighbor is defined as the feature with the minimum Euclidean distance for the descriptor vector, a possible match can be declared when the distance ratio of the nearest neighbor to the second-nearest neighbor is less than a certain threshold. According to Lowe et al. [23], when the distance ratio threshold is set to 0.8, most of the false matches can be eliminated while only a small number of correct matches are discarded. As the threshold decreases, the total number of matches that can be obtained decreases, but the proportion of correct matches increases.

In this section, we propose an extended NNDR matching criterion, called multi-level matching, which is specifically designed for the field of visual homing. By setting multi-level distance ratio thresholds, the matches declared in different intervals are given corresponding weights, thus further optimizing the accuracy of the landmarks.

3.3.2. Multi-Level Matching

Strategy 3 selects five distance ratio thresholds to declare matches in stages. Since the number of matches that can be obtained is limited when the threshold is less than 0.4, the candidate thresholds we select are 0.4, 0.5, 0.6, 0.7 and 0.8, in order. The reason we set the thresholds in this way is mainly to provide
an easy-to-operate scheme. To avoid excessive computation complexity while ensuring the optimization effects, we consider that dividing the intervals evenly is a straightforward and effective solution.

The core idea of multi-level matching can be described as follows: When the first threshold 0.4 is set, the obtained matches are considered to be almost error-free, so the landmarks represented by these matches are assigned a weight of 1. When the second threshold 0.5 is set, new matches can be declared. Since these new matches are obtained as the threshold increases, the landmarks represented by these new matches are assigned a weight slightly less than 1. These steps are repeated until the maximum threshold 0.8 is set, and then the weighted SIFT matches can be generated. The parameter settings we select for multi-level matching are shown in Table 1, where τ represents the weight for multi-level matching.
Table 1. Parameter settings for multi-level matching.

Interval   [0, 0.4)   [0.4, 0.5)   [0.5, 0.6)   [0.6, 0.7)   [0.7, 0.8]
τ          1.00       0.95         0.90         0.85         0.80
Similar to Strategy 2, multi-level matching is also a strategy for weight evaluation, which is appropriate and effective for visual homing. Under the premise that the total number of landmarks is limited, multi-level matching further highlights the value of the high-quality landmarks. Based on Strategies 2 and 3, the optimized landmarks will carry two types of weight information, namely w and τ. Therefore, for the ith landmark vi, the landmark vector it forms is no longer the traditional unit vector; the modulus of the vector becomes the product of wi and τi:

|vi| = wi · τi    (12)

After the moduli of all landmark vectors are calculated, the optimal ALV computed in the proposed method becomes in fact a weighted average of landmark vectors, and the final homing vector can be similarly generated based on Equation (5).

3.4. Overview of the Proposed Strategies

Figure 6 shows the complete procedure of the mobile robot's visual homing process based on the proposed strategies. The details are characterized as follows:

• Landmark Detection: SIFT features are detected and matched from the currently captured image
and the pre-stored target image; all SIFT matches are considered as the original landmarks.
• Landmark Optimization: Based on the proposed three strategies, the original landmarks are optimized in terms of precision and distribution.
• Homing Vector Calculation: A certain visual homing approach is adopted to calculate the homing vector based on the optimized landmarks. It should be noted that, although we only use ALV in this paper, the proposed strategies can also apply to other landmark-based visual homing approaches.
• Robot Navigation: The mobile robot is guided by the calculated homing vector to move a preset step, and then re-performs the process from Landmark Detection until the target location is reached.
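The landmark-weighting part of this pipeline (the Gaussian contribution weights of Equation (11), the multi-level matching weights of Table 1, and the weighted landmark vectors of Equation (12)) can be sketched as follows. This is a minimal illustration under our own assumptions: each landmark is reduced to a bearing angle, Equation (5) is reduced to a plain weighted vector sum, and the horizon radius r = 300 pixels is illustrative rather than the paper's calibration.

```python
import math

def tau_weight(nndr):
    """Multi-level matching weight tau (Table 1), indexed by the
    nearest-neighbour distance ratio of the SIFT match."""
    if nndr < 0.4:
        return 1.00
    if nndr < 0.5:
        return 0.95
    if nndr < 0.6:
        return 0.90
    if nndr < 0.7:
        return 0.85
    if nndr <= 0.8:
        return 0.80
    return None  # ratio above 0.8: the pair is not declared a match

def contribution_weight(d_i, r, sigma_G=2.0):
    """Gaussian contribution weight w_i of Equation (11)."""
    return math.exp(-((d_i - r) ** 2) / (2.0 * sigma_G ** 2)) / (
        sigma_G * math.sqrt(2.0 * math.pi))

def weighted_homing_vector(landmarks, r=300.0):
    """Sum of landmark vectors with modulus |v_i| = w_i * tau_i
    (Equation (12)).  Each landmark is a dict with a bearing angle in
    radians, an image distance d, and the NNDR of its match."""
    hx = hy = 0.0
    for lm in landmarks:
        tau = tau_weight(lm["nndr"])
        if tau is None:
            continue  # rejected at the matching stage
        modulus = contribution_weight(lm["d"], r) * tau
        hx += modulus * math.cos(lm["bearing"])
        hy += modulus * math.sin(lm["bearing"])
    return hx, hy
```

Two high-quality landmarks at opposite bearings cancel, while a single landmark pulls the vector towards its bearing, which is exactly the weighted-average behavior described above.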
[Flow chart: Starting → Landmark Detection → Landmark Optimization (Mismatching Landmark Elimination → Landmark Contribution Evaluation → Multi-level Matching) → Homing Vector Calculation → Robot Navigation]

Figure 6. Procedure of the mobile robot's visual homing process based on the proposed strategies.
4. Experiments

4.1. Experimental Environment and Mobile Robot Platform

To verify the homing performance of the proposed strategies, related homing experiments were carried out. The experimental environment is shown in Figure 7a. The experimental area of the robot is a 4.5 m × 3 m rectangular space, surrounded by different types of objects (such as tables, filing cabinets and other experimental equipment). The mobile robot platform is shown in Figure 7b. A panoramic vision system is installed on the omni-directional mobile robot. The robot is also equipped with four mecanum wheels, which can help the robot move towards any direction without changing orientation.
mecanum Experimental Experimental Platform Platform
OfficeArea Area Office
Workspace Workspace
3m 3m
Filing FilingCabinet Cabinet
4.5m 4.5m
ExperimentalArea Area Experimental
Manipulator Manipulator
(a) (a)
(b) (b)
Figure 7. Experimental environment and mobile robot platform: (a) experimental environment, where the grey part denotes the experimental area; and (b) mobile robot platform.
A set of image databases was collected based on the default experimental environment. We selected 9 × 6 = 54 representative locations in the experimental area, which is shown in Figure 8a. The distance between two adjacent locations was set to 50 cm, and all the panoramic images were collected. During the image collection, the environmental conditions should always be unmodified, and the robot's orientation was artificially controlled to remain constant. We used surveying tools to mark the 54 representative locations in advance, and then placed the robot on each marked location to capture the corresponding image. To keep the robot's orientation as constant as possible, the stitching lines between floor tiles can be regarded as a reference, and a reliable external compass can also be applied. Figure 8b shows a panoramic image sample; the resolution of all the images is 768 × 768.
Figure 8. Representative locations and panoramic image sample: (a) representative locations, which are denoted as the black solid circles; and (b) panoramic image sample, which was captured at Location (8,4).
4.2. Performance Indicators

Four widely used performance indicators are adopted in this paper: AAE (average angular error), RR (return ratio), TDE (travel distance error) and ATS (average trajectory smoothness).

AAE is adopted to measure the average difference between the calculated homing direction α and the desired homing direction β (i.e., the direction pointing from C to T). Taking C as the origin of the world coordinate system, α and β can thus be defined as follows:

α = atan2(hy, hx)    (13)

β = atan2(yT − yC, xT − xC)    (14)

where (hx, hy) is the coordinate of the calculated homing vector, and (xC, yC) and (xT, yT) respectively denote the coordinates of C and T. Therefore, the angular error AE(T, C) can be calculated as follows:

AE(T, C) = |α − β|    (15)
where we stipulate AE(T, T) = 0. To obtain a more general result, researchers often select multiple possible current locations to test the overall homing performance of a certain visual homing approach. Assuming that the total number of the selected current locations is q, the average angular error AAE(T, C) can be calculated by:

AAE(T, C) = (1/q) ∑i=1..q AE(T, Ci)    (16)
where Ci denotes the ith current location. Obviously, a low AAE value indicates that the robot can move towards the target location with a trajectory closer to a straight line.

RR is a performance indicator measuring the homing success rate of the mobile robot. For a certain starting location, a dummy robot is created to perform a simulated movement based on the calculated homing vector; if the robot can continually move a pre-set step until the target location is reached, we declare the homing process successful. However, if the robot moves more than half the perimeter of the experimental area or outside the boundary, we declare the homing process failed. For all the t possible current locations, the RR value can be calculated by:

RR = t0/t    (17)
where t0 represents the total number of times that the target location can be successfully reached. Obviously, a high RR value indicates that the robot can return to the destination from more locations.

TDE is adopted to measure the difference between the actual travel distance of the mobile robot and the ideal situation. If a certain homing process is declared successful, the total travel distance of the mobile robot dact is measured and the ideal minimum distance dide is calculated; the TDE value can then be easily generated by:

TDE = |dact − dide|    (18)

TDE is applicable to the actual homing experiments, and the deviation of the robot's motion can be evaluated by measuring the actual trajectory. A low TDE value indicates that the robot can reach the target location with high efficiency.

ATS is adopted to evaluate the smoothness of the generated trajectory. Assuming that a certain actual trajectory of the mobile robot is shown in Figure 9, the offset angle can be defined as the complementary value of the angle between two adjacent segments.
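The four indicators can be sketched concretely in Python. This is a minimal illustration under our own assumptions: the step size, success tolerance and boundary values in the RR simulation are illustrative (only the 4.5 m × 3 m area comes from the paper), and angular differences are additionally wrapped to the shortest angular distance, which the plain |α − β| of Equation (15) leaves implicit.

```python
import math

def _wrap(angle_diff):
    """Shortest angular distance in [0, pi] for an angle difference."""
    d = abs(angle_diff) % (2.0 * math.pi)
    return min(d, 2.0 * math.pi - d)

def angular_error(h, current, target):
    """AE(T, C) of Equation (15), with alpha and beta from Equations (13)-(14)."""
    alpha = math.atan2(h[1], h[0])
    beta = math.atan2(target[1] - current[1], target[0] - current[0])
    return _wrap(alpha - beta)

def average_angular_error(homing_vectors, currents, target):
    """AAE(T, C) of Equation (16): mean AE over the q current locations."""
    errors = [angular_error(h, c, target) for h, c in zip(homing_vectors, currents)]
    return sum(errors) / len(errors)

def return_ratio(homing_fn, starts, target, step=0.1,
                 bounds=(0.0, 4.5, 0.0, 3.0), tol=0.15):
    """RR of Equation (17): a dummy robot steps along the homing vector; a
    run fails once it leaves the area or travels more than half the perimeter."""
    xmin, xmax, ymin, ymax = bounds
    max_travel = (xmax - xmin) + (ymax - ymin)  # half the rectangle perimeter
    t_success = 0
    for x, y in starts:
        travelled = 0.0
        while travelled <= max_travel:
            if math.hypot(target[0] - x, target[1] - y) < tol:
                t_success += 1
                break
            hx, hy = homing_fn((x, y))
            norm = math.hypot(hx, hy) or 1.0
            x += step * hx / norm
            y += step * hy / norm
            travelled += step
            if not (xmin <= x <= xmax and ymin <= y <= ymax):
                break  # outside the boundary: failure
    return t_success / len(starts)

def travel_distance_error(trajectory, target):
    """TDE of Equation (18): |d_act - d_ide| for a successful homing run."""
    d_act = sum(math.dist(a, b) for a, b in zip(trajectory, trajectory[1:]))
    d_ide = math.dist(trajectory[0], target)
    return abs(d_act - d_ide)

def average_trajectory_smoothness(trajectory):
    """ATS of Equation (19): mean offset angle, i.e. the heading change at
    each point where the homing vector is recalculated (N - 1 angles)."""
    headings = [math.atan2(b[1] - a[1], b[0] - a[0])
                for a, b in zip(trajectory, trajectory[1:])]
    offsets = [_wrap(h1 - h0) for h0, h1 in zip(headings, headings[1:])]
    return sum(offsets) / len(offsets)
```

With an ideal homing function the dummy robot walks a straight line, so AE, TDE and ATS are all zero and RR is 1.0, which is a useful sanity check before plugging in a real homing approach.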
Figure 9. A certain actual trajectory of the mobile robot. The black solid line represents the actual trajectory. The black dots represent the locations where the robot recalculates the homing vector. α1, α2, α3 and α4 denote the offset angles.
Assuming that the total number of the robot's movements is N, it can thus be indicated that N − 1 offset angles can be obtained, denoted as α1, α2, …, αN−1. Therefore, the ATS value can be computed by:

ATS = (1/(N − 1)) ∑i=1..N−1 αi    (19)

Same as TDE, ATS is also a performance indicator applicable to the actual homing experiments, and the smoothness of the trajectory can be calculated based on Equation (19). A low ATS value indicates that the robot can reach the target location more robustly.

4.3. Homing Experiments on Image Databases
Homing experiments were performed based on the image databases introduced in Section 4.1. To evaluate the overall homing performance, we arbitrarily selected a representative location as the target location, while the remaining 53 locations were considered as all possible starting locations. By employing the visual homing approaches, the homing vectors were generated from each possible starting location to the target location; we call this kind of visualization result the homing vector field. Figure 10 shows the three homing vector fields of ALV, SL-ALV and the proposed strategies based on the image databases. Location (8,4) was selected as the target location. It can be seen intuitively from the figure that both SL-ALV and our proposed strategies can improve the homing performance of the original ALV algorithm, but obviously the homing vector field of our proposed strategies is neater, as the number of arrows that can accurately point to the target location is relatively larger; thus it can be inferred that the calculated homing vector is closer to the ideal situation. Therefore, our proposed strategies outperform both ALV and SL-ALV.
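A homing vector field of the kind visualized in Figure 10 can be reproduced programmatically. The sketch below evaluates an arbitrary homing function (a stand-in for ALV, SL-ALV or the proposed strategies; its `(current_xy, target_xy)` signature is our own assumption) over the 9 × 6 grid of Section 4.1.

```python
def homing_vector_field(homing_fn, target, rows=9, cols=6, spacing=0.5):
    """Evaluate the homing vector at every representative grid location
    (50 cm spacing, as in Section 4.1), skipping the target cell itself.
    homing_fn maps (current_xy, target_xy) to a homing vector (hx, hy);
    target is given as a grid index (i, j)."""
    target_xy = (target[0] * spacing, target[1] * spacing)
    field = {}
    for i in range(1, rows + 1):
        for j in range(1, cols + 1):
            if (i, j) == target:
                continue
            field[(i, j)] = homing_fn((i * spacing, j * spacing), target_xy)
    return field
```

With an ideal homing function every arrow points straight at the target; plotting the returned vectors at their grid positions reproduces a Figure 10 style field for any homing approach under test.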
Figure 10. Homing vector fields (50 cm spacing): (a) homing vector field of ALV; (b) homing vector field of SL-ALV; and (c) homing vector field of the proposed strategies. Red squares denote the pre-set target location. Black arrows represent the calculated homing vectors pointing from the possible current location to the target location.
We also performed experiments when the spacing between two adjacent current locations is random. We arbitrarily selected 25 locations as the possible current locations, and the principle of the image collection remained unchanged. Figure 11 shows the three homing vector fields of ALV, SL-ALV and the proposed strategies when the spacing is random. The conclusions that can be drawn are the same as those in Figure 10: our proposed strategies can still exhibit better homing performance than the ALV and SL-ALV algorithms.
Figure 11. Homing vector fields (random spacing): (a) homing vector field of ALV; (b) homing vector field of SL-ALV; and (c) homing vector field of the proposed strategies.
To evaluate the AAE results, we selected Location (2, 2), (4, 6), (5, 3) and (7, 1) as another four different target locations. We selected the above locations as different target locations because these five locations were distributed almost evenly in the experimental area, making the conclusions more convincing. The AAE results of the above five locations are shown in Figure 12. Compared to the original ALV algorithm, our proposed strategies can reduce the AAE value by 26.57% in total, while SL-ALV can only reduce the AAE value by 10.45%. Therefore, it can be concluded that the proposed strategies have better optimization effects than SL-ALV.
ALV ALV
(2,1) (2,1)
(4,6) (4,6)
SL-ALV SL-ALV
(5,3) (5,3) Target Location Target Location
Proposed strategies Proposed strategies
(8,4) (8,4)
(9,2) (9,2)
Figure 12. AAE AAE results results for for ALV, ALV, SL-ALV SL-ALV andproposed proposed strategies. The The blue histograms histograms denote the the Figure Figure 12. 12. AAE results for ALV, SL-ALV and and proposed strategies. strategies. The blue blue histograms denote denote the original ALV algorithm. algorithm. The original histograms denote SL-ALV. The red red histograms denote denote the original The original original histograms histograms denote denote SL-ALV. SL-ALV. The The original ALV ALV algorithm. The red histograms histograms denote the the proposed strategies. strategies. proposed proposed strategies.
AAE(°) AAE(°)
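The AAE comparison above can be reproduced in outline once homing vectors are available. The sketch below is a hypothetical illustration, not code from the paper: it assumes AAE is the mean absolute angle (in degrees) between the computed homing vector and the ideal homing direction, averaged over all capture positions for one target location.

```python
import math

def angular_error(computed, ideal):
    """Absolute angle in degrees between two 2D homing vectors."""
    a = math.atan2(computed[1], computed[0])
    b = math.atan2(ideal[1], ideal[0])
    diff = abs(a - b) % (2 * math.pi)
    if diff > math.pi:             # take the shorter way around the circle
        diff = 2 * math.pi - diff
    return math.degrees(diff)

def average_angular_error(pairs):
    """AAE over (computed, ideal) homing-vector pairs for one target location."""
    return sum(angular_error(c, i) for c, i in pairs) / len(pairs)
```

A lower AAE means the computed homing vectors point more consistently toward the target, which is exactly what the 26.57% reduction reported above measures.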
In addition, we also evaluated the AAE results when the three proposed strategies were applied separately, as shown in Figure 13. It can be seen that all three strategies can optimize the homing performance. Among them, the optimization effect of Strategy 2 is the most obvious, and Strategies 1 and 3 improve the homing performance slightly.

Figure 13. AAE results when the three proposed strategies were applied separately. The green histograms denote Strategy 1, the purple histograms denote Strategy 2, and the grey histograms denote Strategy 3.
To evaluate the RR results, we considered each location as a possible target location, and repeatedly calculated 54 sets of RR values. The statistics of RR values for all the 54 possible target locations are shown in Table 2, and the conclusions that can be drawn are similar to those in Figures 10–13. Compared to SL-ALV, our proposed strategies can further improve the RR values, so the robot can move from more possible locations to the target location.

Table 2. Statistics of RR values.

Algorithm             Min     Max     Mean    Q1      Q2      Q3
ALV                   0.389   1.000   0.812   0.736   0.868   0.906
SL-ALV                0.415   1.000   0.845   0.792   0.868   0.962
Proposed strategies   0.433   1.000   0.877   0.792   0.962   1.000
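The statistics in Table 2 are straightforward to compute once the 54 RR values are collected. A minimal sketch is given below; the sample RR values are made up for illustration, and since the paper does not state which quartile convention was used, we assume Python's inclusive method.

```python
from statistics import mean, quantiles

def rr_statistics(rr_values):
    """Summarize return-ratio (RR) values as in Table 2:
    min, max, mean, and the three quartiles Q1-Q3."""
    q1, q2, q3 = quantiles(rr_values, n=4, method="inclusive")
    return {
        "Min": min(rr_values),
        "Max": max(rr_values),
        "Mean": round(mean(rr_values), 3),
        "Q1": q1, "Q2": q2, "Q3": q3,
    }

# Example with five hypothetical RR values (one per possible target location):
print(rr_statistics([0.4, 0.6, 0.8, 1.0, 1.0]))
```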
We also tested the average time required for the above three algorithms to calculate a homing vector (Intel Core i7-6800K 3.4 GHz, MATLAB R2012a), as shown in Table 3. It can be seen that these three algorithms have almost the same calculation speed; the proposed strategies do not significantly increase the calculation amount.

Table 3. Average time required to calculate a homing vector.

Algorithm             Time (s)
ALV                   1.124
SL-ALV                1.083
Proposed strategies   1.275
4.4. Homing Experiments on Mobile Robot

To further compare the homing performance of SL-ALV and our proposed strategies, we performed a series of homing experiments on a real mobile robot. Since the robot's orientation might be slightly offset in actual experiments, it is meaningful to evaluate the homing performance in practical applications. We arbitrarily selected three discrete locations as the target locations, and for each target location, three different starting locations were also selected. The robot was placed at the target location in advance to capture the target panoramic image. During the experiment, the robot's orientation remained constant, and the environment remained stationary. After the preparation was completed, the robot was placed at the starting location to be tested. The detailed experimental steps are as follows:

Step 1: The robot captures and stores the currently viewed panoramic image.
Step 2: The two stored panoramic images are matched by the SIFT algorithm, and the homing direction is obtained by SL-ALV or our proposed strategies.
Step 3: The robot travels 30 cm along a straight line, and then pauses.
Step 4: If either of the following cases happens, jump to Step 6:
        Case 1: The robot arrives within a range of 30 cm centered on the target location.
        Case 2: The robot moves more than 750 cm or out of the experimental area.
Step 5: Jump to Step 1.
Step 6: If Case 1 happens, we declare the homing process successful. If Case 2 happens, we declare the homing process failed.
Step 7: The robot is manually stopped.
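The seven steps above form a simple closed loop. The sketch below is our hypothetical rendering of that loop, not code from the paper: capture_panorama, compute_homing_direction, move, distance_to_target, and out_of_area are placeholder robot methods. It mirrors the 30 cm step size and the two stopping cases.

```python
STEP_CM = 30            # Step 3: travel distance per movement step
ARRIVAL_RADIUS_CM = 30  # Case 1: within 30 cm of the target location
MAX_TRAVEL_CM = 750     # Case 2: give up after more than 750 cm of travel

def homing_loop(robot, target_image):
    """Run the Step 1-7 homing procedure; return 'success' or 'failure'."""
    traveled = 0
    while True:
        current_image = robot.capture_panorama()                    # Step 1
        heading = robot.compute_homing_direction(current_image,
                                                 target_image)      # Step 2
        robot.move(heading, STEP_CM)                                # Step 3
        traveled += STEP_CM
        if robot.distance_to_target() <= ARRIVAL_RADIUS_CM:         # Case 1
            return "success"                                        # Step 6
        if traveled > MAX_TRAVEL_CM or robot.out_of_area():         # Case 2
            return "failure"                                        # Steps 6-7
        # otherwise: Step 5, jump back to Step 1
```

In the real experiments, Step 2 is where SL-ALV or the proposed strategies plug in; everything else in the loop is identical for both algorithms, which is what makes the trajectory comparison fair.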
Figures 14–16 show the robot's trajectories by SL-ALV and our proposed strategies, and Tables 4–6 show the corresponding results recording the AAE, ATS and TDE values and the total number of the robot's movement steps N. As can be seen intuitively from the figures and tables, the trajectories generated by our proposed strategies are smoother than those generated by SL-ALV, and the robot can reach the target location with a trajectory closer to a straight line. In total, the homing processes by our proposed strategies are all successful, while SL-ALV has one homing failure. For the eight sets of homing experiments where both SL-ALV and our proposed strategies successfully controlled the robot to the target area, the experimental results can be summarized as follows: First, the average AAE of SL-ALV is 17.02°, while this value of our proposed strategies is only 9.31°; the angular error can be reduced by about 45.30%. Second, the average ATS of SL-ALV is 25.64°, while this value of our proposed strategies is only 17.52°; the offset angle can be reduced by about 31.67%. Third, the total number of homing steps for SL-ALV is 72, while, for our proposed strategies, this number is only 62. Fourth, the total TDE value of SL-ALV is 2.29 m, while, for our proposed strategies, this value is only 0.35 m. Overall, the homing experiments on a real mobile robot indicate that the homing effects by our proposed strategies are more accurate, and the robot's movement processes are smoother.

Figure 14. Homing Experiment 1. The target area is denoted as a circle centered on a square block. The three starting locations C1, C2 and C3 are marked as triangular blocks. Blue dashed lines denote the homing trajectories generated by SL-ALV. Red solid lines denote the homing trajectories generated by our proposed strategies.

Figure 15. Homing Experiment 2.

Figure 16. Homing Experiment 3.
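The quoted reduction percentages follow directly from the reported averages. As a quick arithmetic check, using only the numbers stated in the text:

```python
def reduction_percent(before, after):
    """Relative reduction from 'before' to 'after', in percent."""
    return (before - after) / before * 100

# Average AAE: 17.02 deg (SL-ALV) vs. 9.31 deg (proposed strategies)
aae_reduction = reduction_percent(17.02, 9.31)   # about 45.30%
# Average ATS: 25.64 deg (SL-ALV) vs. 17.52 deg (proposed strategies)
ats_reduction = reduction_percent(25.64, 17.52)  # about 31.67%
```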
Table 4. Associated results of homing experiment 1.

Location   Algorithm   Success   AAE (°)   ATS (°)   N    TDE (m)
C1         SL-ALV      Yes       11.59     18.26     11   0.17
C1         Proposed    Yes       5.94      11.92     10   0.03
C2         SL-ALV      Yes       20.03     20.79     10   0.35
C2         Proposed    Yes       13.23     17.86     9    0.08
C3         SL-ALV      Yes       11.52     10.69     5    0.01
C3         Proposed    Yes       11.39     5.10      5    0.01
Table 5. Associated results of homing experiment 2.

Location   Algorithm   Success   AAE (°)   ATS (°)   N    TDE (m)
C1         SL-ALV      Yes       27.67     47.31     8    0.45
C1         Proposed    Yes       9.37      15.90     6    0.04
C2         SL-ALV      Yes       17.86     26.14     11   0.38
C2         Proposed    Yes       9.54      18.49     9    0.06
C3         SL-ALV      No        ——        ——        ——   ——
C3         Proposed    Yes       8.43      14.55     11   0.07
Table 6. Associated results of homing experiment 3.

Location   Algorithm   Success   AAE (°)   ATS (°)   N    TDE (m)
C1         SL-ALV      Yes       14.28     26.56     12   0.07
C1         Proposed    Yes       7.97      13.02     11   0.01
C2         SL-ALV      Yes       10.78     14.93     6    0.34
C2         Proposed    Yes       8.29      6.23      5    0.08
C3         SL-ALV      Yes       22.40     40.45     9    0.52
C3         Proposed    Yes       8.73      16.60     7    0.04
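The per-run values in Tables 4–6 can be aggregated to confirm the totals quoted in the text (72 vs. 62 movement steps, 2.29 m vs. 0.35 m total TDE). The snippet below encodes only the eight runs in which both algorithms succeeded; the C3 run of Experiment 2 is excluded because SL-ALV failed there.

```python
# (N, TDE) per successful run, taken from Tables 4-6.
sl_alv   = [(11, 0.17), (10, 0.35), (5, 0.01),   # Experiment 1: C1, C2, C3
            (8, 0.45),  (11, 0.38),              # Experiment 2: C1, C2
            (12, 0.07), (6, 0.34),  (9, 0.52)]   # Experiment 3: C1, C2, C3
proposed = [(10, 0.03), (9, 0.08),  (5, 0.01),
            (6, 0.04),  (9, 0.06),
            (11, 0.01), (5, 0.08),  (7, 0.04)]

def total_steps(runs):
    """Total number of 30 cm movement steps across the listed runs."""
    return sum(n for n, _ in runs)

def total_tde(runs):
    """Total terminal distance error in meters, rounded to 2 decimals."""
    return round(sum(t for _, t in runs), 2)

# total_steps(sl_alv) == 72, total_steps(proposed) == 62
# total_tde(sl_alv) == 2.29, total_tde(proposed) == 0.35
```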
5. Discussion

In Section 4, we performed the related experiments to evaluate the homing performance of ALV, SL-ALV and our proposed strategies. The experiments consist of two aspects: one based on the image databases and the other based on the real mobile robot. For the experiments on image databases, the homing performance of the algorithms was evaluated under ideal conditions: the experimental environment was always static and the robot's orientation was almost guaranteed to be constant. For the experiments on the mobile robot, the homing performance was evaluated under actual conditions, in which the robot's orientation might be slightly offset during the movement. According to the experimental results, our proposed strategies were verified to exhibit good performance both in theory and in practice. The simulation results of our proposed strategies had low AAE values and high RR values, while the robot's trajectories generated by our proposed strategies had low TDE values and low ATS values.

However, in the process of the experiments, we also found two problems that cannot be solved at present. On the one hand, the environment should always be static. If dynamic situations occurred (for example, during the movement of the robot, the researchers were randomly moving and the positions of some objects were arbitrarily changed), the optimization effect would be reduced. On the other hand, the mobile robot should always be subject to holonomic constraints. The performance of all the mentioned homing algorithms would drop significantly under nonholonomic constraints, such as randomly changing the robot's orientation or vertical height.

We believe the emergence of the above problems is related to the basic concept of visual homing. Since traditional visual homing relies on only a panoramic vision sensor to complete the navigation task, the application scope of this technique will be relatively limited. Although our proposed strategies can effectively improve the homing performance, it is an algorithm-level optimization after all. As far as we know, most existing visual homing approaches require controlled conditions, such as static environments, constant illumination, and holonomic constraints. Some papers have proposed improvements to solve these problems, and visual homing can now cope well with illumination changes by adopting scale-invariant features. However, in terms of dealing with dynamic environments and nonholonomic constraints, although relevant solutions have made progress, some defects still exist, such as poor performance and excessive calculation [17,28,41]. Therefore, it will be valuable to further improve the performance of visual homing in these aspects.

6. Conclusions

In this paper, we propose three landmark optimization strategies for mobile robot visual homing. Two outstanding advantages of our strategies can be summarized. On the one hand, to avoid potential problems that may arise from excessive landmark elimination, our strategies adopt the method of weight assignment, so the landmark distribution can be optimized by adjusting the contribution of each landmark. On the other hand, our strategies have almost no influence on the calculation speed, and the robot can still reach the target location with high efficiency. Homing experiments were carried out on both image databases and a real mobile robot. The results reveal that our proposed strategies can better improve the homing performance of ALV. By using our proposed strategies, the AAE values can be further reduced and the RR values can be further increased. In practical applications, the robot can reach the target location with a more ideal trajectory, and the TDE and ATS values can also be further reduced.
In the future, we will conduct research from the following three aspects: First, we will focus on improving the effect of visual homing methods in more complex (such as the presence of obstacles or dynamic objects) or larger environments. Second, instead of artificially setting too many key parameters (as in Strategy 3), we will explore more intelligent solutions that adaptively determine parameters by sensing the environment. Third, we will consider the homing performance of more general mobile robots, and solve the problem that the effect decreases under nonholonomic constraints.

Author Contributions: X.J. conceived the main idea; X.J. and Q.Z. designed the algorithm and system; J.M. designed the simulation; X.J. and P.L. designed and performed the experiments; X.J. and T.Y. analyzed the data; X.J. wrote the paper; and Q.Z. revised the paper and supervised the research work.

Funding: This research was funded by the National Natural Science Foundation of China (61673129, 51674109, and U1530119).

Conflicts of Interest: The authors declare no conflict of interest.
References

1. Nam, T.H.; Shim, J.H.; Cho, Y.I. A 2.5D map-based mobile robot localization via cooperation of aerial and ground robots. Sensors 2017, 17, 2730.
2. Kretzschmar, H.; Spies, M.; Sprunk, C.; Burgard, W. Socially compliant mobile robot navigation via inverse reinforcement learning. Int. J. Robot. Res. 2016, 35, 1289–1307.
3. Penizzotto, F.; Slawinski, E.; Mut, V. Laser radar based autonomous mobile robot guidance system for olive groves navigation. IEEE Latin Am. Trans. 2015, 13, 1303–1312.
4. Hoy, M.; Matveev, A.S.; Savkin, A.V. Algorithms for collision-free navigation of mobile robots in complex cluttered environments: A survey. Robotica 2015, 33, 463–497.
5. Ruiz, D.; García, E.; Ureña, J.; de Diego, D.; Gualda, D.; García, J.C. Extensive ultrasonic local positioning system for navigating with mobile robots. In Proceedings of the 10th Workshop on Positioning, Navigation and Communication (WPNC), Dresden, Germany, 20–21 March 2013; pp. 1–6.
6. Fu, Y.; Hsiang, T.R.; Chung, S.L. Multi-waypoint visual homing in piecewise linear trajectory. Robotica 2013, 31, 479–491.
7. Sabnis, A.; Vachhani, L.; Bankey, N. Lyapunov based steering control for visual homing of a mobile robot. In Proceedings of the 22nd Mediterranean Conference on Control and Automation (MED), Palermo, Italy, 16–19 June 2014; pp. 1152–1157.
8. Liu, M.; Pradalier, C.; Siegwart, R. Visual homing from scale with an uncalibrated omnidirectional camera. IEEE Trans. Robot. 2013, 29, 1353–1365.
9. Gupta, M.; Arunkumar, G.K.; Vachhani, L. Bearing only visual homing: Observer based approach. In Proceedings of the 25th Mediterranean Conference on Control and Automation (MED), Valletta, Malta, 3–6 July 2017; pp. 358–363.
10. Denuelle, A.; Srinivasan, M.V. Bio-inspired visual guidance: From insect homing to UAS navigation. In Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Zhuhai, China, 6–9 December 2015; pp. 326–332.
11. Zeil, J.; Narendra, A.; Stürzl, W. Looking and homing: How displaced ants decide where to go. Philos. Trans. R. Soc. B Biol. Sci. 2014, 369, 20130034.
12. Gamallo, C.; Mucientes, M.; Regueiro, C.V. Omnidirectional visual SLAM under severe occlusions. Robot. Auton. Syst. 2015, 65, 76–87.
13. Esparza-Jiménez, J.O.; Devy, M.; Gordillo, J.L. Visual EKF-SLAM from heterogeneous landmarks. Sensors 2016, 16, 489.
14. Garcia-Fidalgo, E.; Ortiz, A. Vision-based topological mapping and localization methods: A survey. Robot. Auton. Syst. 2015, 64, 1–20.
15. Shi, Y.; Ji, S.; Shi, Z.; Duan, Y.; Shibasaki, R. GPS-supported visual SLAM with a rigorous sensor model for a panoramic camera in outdoor environments. Sensors 2012, 13, 119–136.
16. Paramesh, N.; Lyons, D.M. Homing with stereovision. Robotica 2016, 34, 2741–2758.
17. Sabnis, A.; Arunkumar, G.K.; Dwaracherla, V.; Vachhani, L. Probabilistic approach for visual homing of a mobile robot in the presence of dynamic obstacles. IEEE Trans. Ind. Electron. 2016, 63, 5523–5533.
18. Arunkumar, G.K.; Sabnis, A.; Vachhani, L. Robust steering control for autonomous homing and its application in visual homing under practical conditions. J. Intell. Robot. Syst. 2018, 89, 403–419.
19. Cartwright, B.A.; Collett, T.S. Landmark learning in bees. J. Comp. Physiol. 1983, 151, 521–543.
20. Lambrinos, D.; Möller, R.; Labhart, T.; Pfeifer, R.; Wehner, R. A mobile robot employing insect strategies for navigation. Robot. Auton. Syst. 2000, 30, 39–64.
21. Franz, M.O.; Schölkopf, B.; Mallot, H.A.; Bülthoff, H.H. Where did I take that snapshot? Scene-based homing by image matching. Biol. Cybern. 1998, 79, 191–202.
22. Vardy, A.; Möller, R. Biologically plausible visual homing methods based on optical flow techniques. Connect. Sci. 2005, 17, 47–89.
23. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
24. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
25. Kashif, M.; Deserno, T.M.; Haak, D.; Jonas, S. Feature description with SIFT, SURF, BRIEF, BRISK, or FREAK? A general question answered for bone age assessment. Comput. Biol. Med. 2016, 68, 67–75.
26. Ramisa, A.; Goldhoorn, A.; Aldavert, D.; Toledo, R.; de Mantaras, R.L. Combining invariant features and the ALV homing method for autonomous robot navigation based on panoramas. J. Intell. Robot. Syst. 2011, 64, 625–649.
27. Zhu, Q.; Liu, C.; Cai, C. A novel robot visual homing method based on SIFT features. Sensors 2015, 15, 26063–26084.
28. Churchill, D.; Vardy, A. An orientation invariant visual homing algorithm. J. Intell. Robot. Syst. 2013, 71, 3–29.
29. Möller, R.; Krzykawski, M.; Gerstmayr, L. Three 2D-warping schemes for visual robot navigation. Auton. Robot. 2010, 29, 253–291.
30. Möller, R. A SIMD Implementation of the MinWarping Method for Local Visual Homing; Computer Engineering Group, Bielefeld University: Bielefeld, Germany, 2016.
31. Möller, R. Design of a Low-Level C++ Template SIMD Library; Computer Engineering Group, Bielefeld University: Bielefeld, Germany, 2016.
32. Fleer, D.; Möller, R. Comparing holistic and feature-based visual methods for estimating the relative pose of mobile robots. Robot. Auton. Syst. 2017, 89, 51–74.
33. Möller, R.; Horst, M.; Fleer, D. Illumination tolerance for visual navigation with the holistic min-warping method. Robotics 2014, 3, 22–67.
34. Möller, R. Column Distance Measures and Their Effect on Illumination Tolerance in MinWarping; Computer Engineering Group, Bielefeld University: Bielefeld, Germany, 2016.
35. Zhu, Q.; Liu, C.; Cai, C. A robot navigation algorithm based on sparse landmarks. In Proceedings of the 6th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Hangzhou, China, 26–27 August 2014; pp. 188–193.
36. Yu, S.E.; Lee, C.; Kim, D.E. Analyzing the effect of landmark vectors in homing navigation. Adapt. Behav. 2012, 20, 337–359.
37. Lee, C.; Yu, S.E.; Kim, D.E. Landmark-based homing navigation using omnidirectional depth information. Sensors 2017, 17, 1928.
38. Zhu, Q.; Liu, X.; Cai, C. Feature optimization for long-range visual homing in changing environments. Sensors 2014, 14, 3342–3361.
39. Zhu, Q.; Liu, X.; Cai, C. Improved feature distribution for robot homing. In Proceedings of the 19th International Federation of Automatic Control (IFAC) World Congress, Cape Town, South Africa, 24–29 August 2014; pp. 5721–5725.
40. Yan, J.; Kong, L.; Diao, Z.; Liu, X.; Zhu, L.; Jia, P. Panoramic stereo imaging system for efficient mosaicking: Parallax analyses and system design. Appl. Opt. 2018, 57, 396–403.
41. Szenher, M.D. Visual Homing in Dynamic Indoor Environments. Ph.D. Thesis, University of Edinburgh, Edinburgh, UK, 2008.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).