Sensor-Based Path Planning for a Non-Point Automaton with Sensing Uncertainties Considered

Bing-Ran Zuo and I-Ming Chen
Robotics Research Center, School of Mechanical & Production Engineering,
Nanyang Technological University, Singapore 639798
E-mail: {mbrzuo, michen}@ntu.edu.sg
1. Introduction

Sensor-based path planning has received considerable attention in the mobile robot community. The objective is to generate a collision-free path that guides a mobile robot from a start location to a desired goal. The strategy for path generation is driven by the sensor feedback available about the locally visible environment. For our purpose, the robot is equipped with a laser range finder [1] that provides obstacle distances within a certain range. The overall collision-free path between the start and the goal is decomposed into a series of line segments, whose endpoints are the so-called subgoals. As a result, the problem is reduced to generating a series of subgoals based on the locally sensed distance data, such that global convergence [2]-[7] is guaranteed.

Previous works on sensor-based planning can be divided into approaches that focus on obstacle avoidance [8]-[10] and approaches that focus on global convergence [3]-[7]. The former avoid collisions by mapping sensor readings to various actions, but ignore global convergence. The latter guarantee that the robot reaches the goal along a collision-free path. Since the latter achieve local obstacle avoidance as well as global convergence, they have received wide attention in recent years. The Bug algorithms in [3], [5] use two reactive modes of motion: moving toward the goal and following an obstacle boundary. The transition between these two modes is governed by a globally convergent criterion, which ensures that the distance to the goal decreases monotonically and thus guarantees convergence to the goal. Under the assumption of convex polygonal obstacles, the subgoal selection algorithm in [6] generates a sequence of subgoals near the polygonal vertices, until the obstacle no longer blocks the line of sight to the final goal. The subgoals incrementally guide the robot to the final goal. The online planner in [7] guarantees global convergence by avoiding obstacles via exit points, regardless of the order in which the obstacles are avoided.

The convergent algorithms [3]-[7] rely on two common assumptions, which limit their implementation in real robotic systems:

• The robot is treated as a point without physical size.
• The sensor readings are assumed to be perfect, without uncertainties, or the local environment is treated as fully known.
It is clear that any physical robot has a certain size. For global planners, such as those for the piano mover's problem [12], there is no difference between a point robot and a physical robot: since complete information about the global environment is known, the C-space approach [11] can be used as a unified representation for both. But for local planners [3]-[10], a point robot is an oversimplified abstraction, because transforming a physical robot into a point robot requires analytical expressions of the obstacle boundaries. For most current mobile robots with range sensors (e.g., ultrasonic sensors [13] or laser range finders), only obstacle distance information is available. Extracting boundary expressions from distance data is time-consuming and greatly degrades the real-time performance of a planner. For unstructured environments, obtaining analytical expressions of the obstacle boundaries is even impossible, and unnecessary. In this case, algorithms designed for point robots may not work properly.
Another problem concerns sensing uncertainties. Since any sensor measurement has inherent uncertainties, a proper uncertainty model for the sensors is necessary for safe navigation. Although some planners [3]-[5] operate directly on distance sensing data, they do not consider sensor uncertainties.

To overcome these problems, this paper presents a sensor-based path planner for a non-point mobile robot with sensing uncertainties taken into account. Like the Bug algorithms [3]-[5], it uses two basic modes of robot motion: (1) moving toward the goal, and (2) following an obstacle boundary. The robot initially moves toward the goal. If an obstacle blocks the way, the robot switches to following the boundary of this obstacle until it reaches a position closer to the goal. In this way, global convergence is guaranteed. Unlike the Bug algorithms, the proposed approach directly incorporates the robot size and sensor uncertainties into the planning functions. The data structure of an obstacle boundary is represented as a numbered set of distance sensor data rather than an analytical expression. In this way, the proposed planner becomes more feasible for autonomous navigation in unstructured environments.
2. Generation of Obstacle Boundaries

As mentioned before, a laser range finder is used to measure the distances of obstacles. An obstacle point is represented as an observed range sensor return

Oi = (ri, θi),  (1)

where ri is the obstacle distance, θi is the angle relative to a predefined direction (e.g., the x-axis), and i is a numbered index. In this paper, we assume that all indices are enumerated in a counterclockwise direction (see Oi-1, Oi, Oi+1 in Fig. 1).

Since any measurement has inherent uncertainties, a proper uncertainty model is necessary for safe navigation. For the laser range finder, the distance uncertainty ui of an obstacle point Oi = (ri, θi) can be estimated as

ui = min(ri·δ, ri − r),  (2)

where r is the effective radius of the robot (i.e., body radius plus a safety clearance), δ = 2π/Nm is the angular resolution of the range sensor, and Nm is the maximum scan number per cycle. For the SICK laser range finder, the maximum scan number per half circle is 360, so the angular resolution is δ = π/360 ≈ 0.00873 rad. The first term in Eq. (2), ri·δ, is readily understandable given the intuitive physical meaning of the angular resolution of laser sensors. The second term, ri − r, results from the worst-case safety condition ri − ui ≥ r, which guarantees obstacle avoidance.

A numbered set of all the obstacle points observed at a certain position makes up an Obstacle Point Field (OPF):

Ω = {Oi | i = 1, ..., nO}.  (3)
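As an illustration only, here is a minimal Python sketch of the uncertainty model in Eq. (2); the function name and the default Nm = 720 (the SICK scanner's 360 scans per half circle) are our own assumptions, not part of the paper.

```python
import math

def distance_uncertainty(r_i, r_robot, n_max=720):
    """Estimate the distance uncertainty u_i of an obstacle point, Eq. (2).

    r_i     -- measured obstacle distance
    r_robot -- effective robot radius r (body radius plus safety clearance)
    n_max   -- maximum scan number per cycle, so delta = 2*pi / n_max
    """
    delta = 2.0 * math.pi / n_max
    # First term: lateral spread of one angular step at range r_i.
    # Second term: cap implied by the worst-case safety condition
    # r_i - u_i >= r_robot.
    return min(r_i * delta, r_i - r_robot)
```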
An obstacle boundary refers to a numbered set of obstacle points in which the distance between two adjacent points is less than a partition threshold (i.e., a constant ρ plus the uncertainties of the two adjacent obstacle points). If a boundary contains m obstacle points and the index of its first obstacle point is s, then the boundary can be expressed as

b = {Oi | ‖Oi Oi+1‖ < ρ + ui + ui+1, i = s, s+1, ..., s+m−1},  (4)

where ui and ui+1 are the distance uncertainties of obstacle points Oi and Oi+1, respectively. The constant ρ and the obstacle uncertainties determine the partition of the obstacle points into distinct sensed obstacle boundaries. Since the effective radius of the robot is r, the constant is chosen as

ρ = 2r.  (5)
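The partition in Eq. (4) amounts to a single sweep over the counterclockwise-ordered scan. The following Python sketch is our own illustration; the ObstaclePoint container and the function name are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class ObstaclePoint:
    r: float       # measured obstacle distance r_i
    theta: float   # bearing angle theta_i (rad)
    u: float       # distance uncertainty u_i from Eq. (2)

    def xy(self):
        """Cartesian coordinates of the obstacle point."""
        return (self.r * math.cos(self.theta), self.r * math.sin(self.theta))

def partition_boundaries(opf, rho):
    """Split an OPF (points ordered counterclockwise) into boundaries.

    Adjacent points O_i, O_{i+1} stay in the same boundary when
    ||O_i O_{i+1}|| < rho + u_i + u_{i+1}, per Eq. (4).
    """
    boundaries, current = [], []
    for p in opf:
        if current:
            q = current[-1]
            if math.dist(p.xy(), q.xy()) >= rho + p.u + q.u:
                boundaries.append(current)   # gap exceeds threshold: close boundary
                current = []
        current.append(p)
    if current:
        boundaries.append(current)
    return boundaries
```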
Note that the overall partition threshold is variable because the distance uncertainties are taken into account. Unlike the local tangent graph (LTG) in [5], the partition technique here is based on the distance between two adjacent obstacle points, instead of the continual change of the obstacle distance data. In this way, the partition of obstacle boundaries becomes more feasible and robust against sensing uncertainties. Since our approach makes no assumption about the geometric properties of the environment, an obstacle boundary can be arbitrary and need not be piecewise smooth. This distinct advantage makes our approach applicable to unstructured environments.

Similar to the OPF, a numbered set of all the obstacle boundaries observed at a certain position makes up an Obstacle Boundary Field (OBF):

Λ = {bi | i = 1, ..., nB}.  (6)

Given the obstacle point field Ω = {Oi | i = 1, ..., nO} and the threshold constant ρ for boundary partition, the OBF Λ = {bi | i = 1, ..., nB} can be generated directly from the definition.

The above concepts are shown in Fig. 1. All the indices are numbered in a counterclockwise direction (see Oi-1, Oi, Oi+1). The angular resolution δ is the minimum angle formed by two adjacent obstacle points. The sensing uncertainty of an obstacle point Oi is represented by a circle centered at Oi with radius ui, where ui is determined by the uncertainty model in Eq. (2).
Fig. 1: Obstacle point field (OPF) and generated boundaries. With the constant ρ = 2r, where r is the effective radius of the robot, three obstacle boundaries, b1, b2, and b3, are generated in the OBF. Note that the boundary b3 corresponds to three real obstacles, because the adjacent distances are less than the partition threshold (i.e., ‖Oj Oj+1‖ ≤ ρ + uj + uj+1 and ‖Ok Ok+1‖ ≤ ρ + uk + uk+1, where Oj, Oj+1, Ok, and Ok+1 are points on distinct real obstacles). Thus, the proposed boundary concept is different from that of a physical obstacle boundary.
3. Path Safety

To facilitate the discussion, we use [AB] to denote a line segment from point A to point B, and [AB a ray from point A through point B, containing point A. A lane Γ([AB], r) refers to a line segment [AB] with a thickness radius r, as shown in Fig. 2 (a). An open lane Γ([AB, r) refers to a ray [AB with a thickness radius r, as shown in Fig. 2 (b). For more information about the notations and functions, please refer to the Appendix.
Fig. 2: (a) A lane Γ([AB], r). (b) An open lane Γ([AB, r).

Suppose the current position of the robot is C, and the candidate for its next position is S. The path [CS] is said to be safe for an obstacle point Oi if and only if the point Oi is not in the lane Γ([CS], r + ui), i.e., Oi ∉ Γ([CS], r + ui). Here, r is the effective radius of the robot, and ui is the distance uncertainty of the obstacle point Oi. Note that the sensing uncertainty is taken into the safety concept by increasing the lane radius from r to r + ui, as shown in Fig. 3.
Fig. 3: Path safety for obstacle points with uncertainty considered.

Fig. 3 depicts three obstacle points with uncertainties indicated by shaded circles. The path [CS] is safe for obstacle point Oi but unsafe for Oj and Ok, because Oi ∉ Γ([CS], r + ui), while Oj ∈ Γ([CS], r + uj) and Ok ∈ Γ([CS], r + uk). Note that Oj ∉ Γ([CS], r), which means the path [CS] would be safe for Oj if the uncertainty were ignored (uj = 0). However, our safety concept takes the sensing uncertainty into account, and under it the path [CS] is not safe for obstacle point Oj.

The path [CS] is said to be safe for an obstacle boundary if and only if it is safe for every obstacle point belonging to this boundary. Similarly, the path [CS] is said to be safe for an obstacle boundary set if and only if it is safe for every obstacle boundary belonging to this set. These concepts extend to a ray path [CS without any difficulty.
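Since the lane Γ([CS], r + ui) is simply the set of points within distance r + ui of the segment [CS], the safety test reduces to a point-to-segment distance computation. A possible Python sketch (our own illustration, reusing the ObstaclePoint container from the earlier sketch):

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment [a, b] (2-D tuples)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    len2 = dx * dx + dy * dy
    if len2 == 0.0:                        # degenerate segment: a == b
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter to [0, 1] to stay on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def safe_for_point(c, s, p, r):
    """[CS] is safe for obstacle point p iff p is outside Gamma([CS], r + u)."""
    return point_segment_distance(p.xy(), c, s) > r + p.u

def safe_for_boundary(c, s, boundary, r):
    """[CS] is safe for a boundary iff it is safe for each of its points."""
    return all(safe_for_point(c, s, p, r) for p in boundary)
```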
4. Blocking Boundary

A blocking boundary is the boundary in the obstacle boundary field that the robot would hit first when it moves toward a desired goal. Given an obstacle boundary field Λ sensed at the current position C, the blocking boundary b* can be determined in two cases: (a) the desired goal G is given, i.e., the desired path [CG] is known; (b) one obstacle point Ô matching the blocking boundary is given.

For case (a), the blocking boundary b* is the boundary b ∈ Λ that includes the first hit obstacle point O* ∈ Ω when the robot moves from C to G:

b* = {b | b ∈ Λ, O* ∈ b, O* = HitObst([CG], Ω)},  (7)

where Ω is the obstacle point field, and HitObst([CG], Ω) returns the first hit obstacle point O* when the robot moves from C to G. Here, by "first hit" we mean that the path [CG] is unsafe for O* and the projection of O* onto the straight line CG is nearest to C. Namely,

∀Oi ∈ Γ([CG], r + ui):  Dist(C, Pro(O*, CG)) ≤ Dist(C, Pro(Oi, CG)),  (8)

where Oi is an arbitrary obstacle point in Ω, Γ([CG], r + ui) is the lane determined by the path segment [CG], the robot effective radius r, and the obstacle uncertainty ui, and Dist(C, Pro(Oi, CG)) returns the distance from point C to the projection of Oi onto the straight line CG. Obviously, if the path [CG] is safe for the obstacle boundary field Λ, then there is no blocking boundary.
Fig. 4: Blocking boundary b* and the first hit obstacle point O*.

The concept of a blocking boundary is shown in Fig. 4. When the robot moves from the current position C to the desired goal G, the first hit obstacle point is O*. Therefore, the boundary b* that includes O* is the blocking boundary for the current position C.

For case (b), the term "match" means that the blocking boundary includes an obstacle point whose distance to the given obstacle point is less than the partition threshold. Namely, the blocking boundary can be written as

b* = {b | b ∈ Λ, ∃Ok ∈ b such that ‖Ok Ô‖ ≤ ρ + uk + û},  (9)

where û is the uncertainty of the given obstacle point Ô. As we will see later, the point Ô is normally given as the obstacle point associated with a subgoal in the previous obstacle point field. In this way, the connectivity of the followed obstacle boundaries is guaranteed during the obstacle-following mode of motion.
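Both determinations of b* reduce to simple scans over the sensed data. Below is an illustrative Python sketch (our own, reusing point_segment_distance and the ObstaclePoint container from the earlier sketches) of HitObst([CG], Ω) per Eqs. (7)-(8) and of the boundary matching per Eq. (9):

```python
import math

def hit_obstacle(c, g, opf, r):
    """First hit obstacle point O* on [CG], Eqs. (7)-(8); None if [CG] is
    safe for the whole obstacle point field. Assumes C != G."""
    (cx, cy), (gx, gy) = c, g
    dx, dy = gx - cx, gy - cy
    norm = math.hypot(dx, dy)
    best, best_proj = None, math.inf
    for p in opf:
        px, py = p.xy()
        if point_segment_distance((px, py), c, g) <= r + p.u:  # unsafe for p
            # Signed distance from C to the projection of p onto line CG,
            # i.e., Dist(C, Pro(p, CG)).
            proj = ((px - cx) * dx + (py - cy) * dy) / norm
            if proj < best_proj:
                best, best_proj = p, proj
    return best

def match_boundary(obf, o_hat, rho):
    """Boundary in obf matching obstacle point o_hat, Eq. (9): it contains
    a point within the partition threshold of o_hat; None if no match."""
    for b in obf:
        for p in b:
            if math.dist(p.xy(), o_hat.xy()) <= rho + p.u + o_hat.u:
                return b
    return None
```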
5. Left Exit and Right Exit

For any obstacle point Oi, there are two exits: the left exit EL and the right exit ER. They keep a small clearance ε from the two tangential points TL and TR determined by the current position C and the expanded obstacle circle centered at Oi with radius r + ui. Here, ui is the distance uncertainty of obstacle point Oi, and ε > 0 is a small constant given beforehand. Refer to Fig. 5.
Fig. 5: Left and right exits for an obstacle point.

The purpose of setting the exits a small distance ε beyond the tangential points is to guide the robot around a vertex to a vantage point, from which it is guaranteed that further subgoals will be generated. The bigger the clearance, the smaller the number of generated subgoals. However, a very large clearance may result in the closure of blocking boundaries, with no safe subgoal generated. The choice of ε depends on prior knowledge about the environment. If the minimum distance κ between obstacles in the environment is available, the clearance constant can be chosen as 0 < ε ≤ κ/2. Otherwise, the clearance ε can be set to a very small positive value.

Similarly, for an obstacle boundary b, there are two exits available: the left exit EL and the right exit ER, which are the leftmost and rightmost exits among all the obstacle points belonging to this boundary, respectively. Here, by "leftmost" we mean that the path [CEL] is safe for the obstacle boundary, and by "rightmost" we mean that the path [CER] is safe for the obstacle boundary. Refer to Fig. 6.
Fig. 6: Left and right exits for an obstacle boundary. The left obstacle point OL is associated with the left exit EL, and the right obstacle point OR is associated with the right exit ER.

Note that the different uncertainties uL and uR may result in different radii of the expanded obstacle circles centered at OL and OR, respectively. For each boundary in the obstacle point field, the left and right obstacle points are shown in Fig. 1. Since the concept of "leftmost" is based on the safety consideration, the left or right obstacle point is not necessarily an end point of the boundary; refer to OL1 for b1 in Fig. 1. The determination of left and right exits forms the basic principle for obstacle avoidance.
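Geometrically, the tangent points from C to the expanded circle about Oi lie at distance sqrt(d² − (r + ui)²) from C, at angles ±asin((r + ui)/d) about the direction from C to Oi. One plausible way to realize the exits, sketched below in Python, is to push each tangent point a further ε along its tangent ray; the paper does not prescribe this exact offset construction, so treat it as our assumption.

```python
import math

def exits_for_point(c, o, r_exp, eps):
    """Left and right exits (E_L, E_R) of an obstacle point, per Fig. 5.

    c     -- current robot position (x, y)
    o     -- obstacle point position (x, y)
    r_exp -- expanded circle radius r + u_i
    eps   -- exit clearance beyond the tangent point
    Returns (E_L, E_R), or None if C lies inside the expanded circle.
    """
    (cx, cy), (ox, oy) = c, o
    d = math.hypot(ox - cx, oy - cy)
    if d <= r_exp:
        return None
    alpha = math.asin(r_exp / d)          # angle between CO and a tangent line
    base = math.atan2(oy - cy, ox - cx)   # direction from C to O
    t = math.sqrt(d * d - r_exp * r_exp)  # distance from C to a tangent point
    def exit_at(sign):
        ang = base + sign * alpha
        # Tangent point, pushed eps further along the same ray from C.
        return (cx + (t + eps) * math.cos(ang), cy + (t + eps) * math.sin(ang))
    return exit_at(+1), exit_at(-1)       # (left, right) for the CCW convention
```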
6. Subgoal-Obstacle Pair

A subgoal-obstacle pair consists of a subgoal and its associated obstacle point. Once the robot switches to following obstacles, the associated obstacle point can be used to locate the followed obstacle boundary. Given an obstacle boundary field Λ sensed at the current position C and the blocking boundary b*, a left subgoal-obstacle pair can be generated using the following algorithm. The intrinsic robot data, such as the effective radius r and the subgoal clearance ε, are available beforehand. (A Python sketch of this procedure is given after Algorithm 2.)

Algorithm 1: Generate a LEFT subgoal-obstacle pair
Step 1. Initialize the blocking boundary set B = {b*}, the non-blocking boundary set B̄ = Λ − B, and the obstacle viewing flag ViewAhead = FALSE.
Step 2. Set the RIGHT subgoal candidate as the RIGHT exit of the blocking boundary b*, i.e., SR = RightExit(b*).
Step 3. Set the LEFT subgoal and its associated obstacle point as the LEFT exit and LEFT obstacle point of the boundary b*, i.e., SL = LeftExit(b*), OL = LeftObst(b*).
Step 4. Check if the path [CSL] is safe for the blocking boundary set B. If so, go to Step 7. Otherwise, go to Step 5.
Step 5. Check if the path [CSR] is safe for the blocking boundary set B. If so, go to Step 6. Otherwise, the overall blocking boundary is closed; exit the loop and stop the navigation.
Step 6. Reset the LEFT subgoal as the RIGHT subgoal candidate, i.e., SL = SR, and set the obstacle viewing flag ViewAhead = TRUE.
Step 7. Check if the path [CSL] is safe for the non-blocking boundary set B̄. If so, go to Step 9. Otherwise, mark the blocking boundary in B̄ as b̄, and go to Step 8.
Step 8. Reset the LEFT subgoal as the RIGHT exit of the boundary b̄, i.e., SL = RightExit(b̄), and set the obstacle viewing flag ViewAhead = TRUE. Go to Step 7.
Step 9. Check if the path [CSL] is safe for the blocking boundary set B. If so, exit the loop and go to Step 12. Otherwise, go to Step 10.
Step 10. Add the boundary b̄ into the blocking boundary set B, and remove it from the non-blocking boundary set B̄.
Step 11. Reset the blocking boundary b* = b̄, and go to Step 3.
Step 12. Determine the obstacle point OL associated with the LEFT subgoal SL. Namely,

OL = HitObst([CSL, B),  if ViewAhead = TRUE;
OL unchanged (as set in Step 3),  if ViewAhead = FALSE,

where HitObst([CSL, B) returns the first hit obstacle point belonging to a boundary in the blocking boundary set B when the robot moves from C through SL.

Some comments are in order. Step 1 initializes the blocking boundary set and its complement. The flag ViewAhead determines whether the Obstacle Point Associated with the Subgoal (OPAS) must be relocated. If ViewAhead = TRUE, the OPAS is relocated as the first hit obstacle point when the robot moves from the current position through the subgoal; otherwise, the OPAS is the left obstacle point of a certain blocking boundary. Steps 2-11 return a safe subgoal; in case no such subgoal exists, the path planner exits the loop and stops the navigation. The left subgoal is initialized in Steps 2-6, among which Steps 4-5 check whether all the blocking boundaries form a closed blocking boundary. Steps 7-8 check whether the left subgoal is safe for the non-blocking boundary set, and Step 9 checks whether it is safe for the blocking boundary set. Steps 10-11 reset the blocking boundary, the blocking boundary set, and the non-blocking boundary set. Finally, Step 12 determines the OPAS according to the obstacle viewing flag.

Similarly, a right subgoal-obstacle pair can be generated as follows.

Algorithm 2: Generate a RIGHT subgoal-obstacle pair
Step 1. Initialize the blocking boundary set B = {b*}, the non-blocking boundary set B̄ = Λ − B, and the obstacle viewing flag ViewAhead = FALSE.
Step 2. Set the LEFT subgoal candidate as the LEFT exit of the blocking boundary b*, i.e., SL = LeftExit(b*).
Step 3. Set the RIGHT subgoal and its associated obstacle point as the RIGHT exit and RIGHT obstacle point of the boundary b*, i.e., SR = RightExit(b*), OR = RightObst(b*).
Step 4. Check if the path [CSR] is safe for the blocking boundary set B. If so, go to Step 7. Otherwise, go to Step 5.
Step 5. Check if the path [CSL] is safe for the blocking boundary set B. If so, go to Step 6. Otherwise, the overall blocking boundary is closed; exit the loop and stop the navigation.
Step 6. Reset the RIGHT subgoal as the LEFT subgoal candidate, i.e., SR = SL, and set the obstacle viewing flag ViewAhead = TRUE.
Step 7. Check if the path [CSR] is safe for the non-blocking boundary set B̄. If so, go to Step 9. Otherwise, mark the blocking boundary in B̄ as b̄, and go to Step 8.
Step 8. Reset the RIGHT subgoal as the LEFT exit of the boundary b̄, i.e., SR = LeftExit(b̄), and set the obstacle viewing flag ViewAhead = TRUE. Go to Step 7.
Step 9. Check if the path [CSR] is safe for the blocking boundary set B. If so, exit the loop and go to Step 12. Otherwise, go to Step 10.
Step 10. Add the boundary b̄ into the blocking boundary set B, and remove it from the non-blocking boundary set B̄.
Step 11. Reset the blocking boundary b* = b̄, and go to Step 3.
Step 12. Determine the obstacle point OR associated with the RIGHT subgoal SR. Namely,

OR = HitObst([CSR, B),  if ViewAhead = TRUE;
OR unchanged (as set in Step 3),  if ViewAhead = FALSE,

where HitObst([CSR, B) returns the first hit obstacle point belonging to a boundary in the blocking boundary set B when the robot moves from C through SR.

So far, we have given the details for generating the left or right subgoal-obstacle pair. These results can now be integrated into a sensor-based planner.
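As promised above, here is a compact Python sketch of Algorithm 1 (Algorithm 2 is symmetric). It assumes hypothetical helpers left_exit(C, b), right_exit(C, b), and left_obstacle(C, b) built from the per-point exits of Section 5, plus safe_for_boundary and hit_obstacle from the earlier sketches; it is a reading aid, not the authors' implementation.

```python
def left_subgoal_obstacle_pair(C, Lambda, b_star, r):
    """Sketch of Algorithm 1: returns (S_L, O_L), or None if the overall
    blocking boundary is closed (no safe subgoal exists)."""
    B = [b_star]                                      # Step 1: blocking set
    B_bar = [b for b in Lambda if b is not b_star]    # non-blocking set
    view_ahead = False
    S_R = right_exit(C, b_star)                       # Step 2
    while True:
        S_L = left_exit(C, b_star)                    # Step 3
        O_L = left_obstacle(C, b_star)
        if not all(safe_for_boundary(C, S_L, b, r) for b in B):      # Step 4
            if not all(safe_for_boundary(C, S_R, b, r) for b in B):  # Step 5
                return None                           # closed blocking boundary
            S_L, view_ahead = S_R, True               # Step 6
        b_marked = None
        while True:                                   # Steps 7-8: deflect around
            blocker = next((b for b in B_bar          # non-blocking boundaries
                            if not safe_for_boundary(C, S_L, b, r)), None)
            if blocker is None:
                break
            b_marked = blocker
            S_L, view_ahead = right_exit(C, blocker), True
        if all(safe_for_boundary(C, S_L, b, r) for b in B):          # Step 9
            break
        # Step 9 can only fail after a deflection, so b_marked is set here.
        B.append(b_marked)                            # Step 10
        B_bar.remove(b_marked)
        b_star = b_marked                             # Step 11
    if view_ahead:                                    # Step 12: relocate OPAS
        # Segment test used as a stand-in for the open ray [C S_L.
        O_L = hit_obstacle(C, S_L, [p for b in B for p in b], r)
    return S_L, O_L
```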
7. Sensor-Based Navigation

For sensor-based navigation, there are basically two modes of motion: (1) moving toward the goal, and (2) following an obstacle boundary. The robot initially moves toward the goal. If a blocking boundary is encountered, the robot switches to following this boundary until it reaches a position closer to the goal. The overall sensor-based algorithm can be summarized as follows.

Algorithm 3: Sensor-based navigation
Step 1. Move toward the goal G until one of the following events occurs.
(a) The goal G is reached: stop the navigation.
(b) A blocking boundary b* is detected: go to Step 2.
Step 2. Choose a boundary-following direction (LEFT or RIGHT), and record the minimum distance dmin from the current position C to the goal G.
Step 3. Generate a subgoal S and its associated obstacle point O based on the blocking boundary b* and the boundary-following direction (refer to Algorithm 1 or 2). If no safe subgoal is available, stop the navigation. Otherwise, go to Step 4.
Step 4. Move toward the subgoal S until one of the following events occurs.
(a) The goal G is not reachable: stop the navigation.
(b) The distance d from the current position C to the goal G becomes smaller than dmin: go to Step 1.
(c) The subgoal S is reached: go to Step 5.
Step 5. Reset the blocking boundary b* as the boundary in the new obstacle boundary field that matches the obstacle point O in the old obstacle point field. Go to Step 3.

It is seen that a blocking boundary is determined in two ways.

(i) In Step 1, the blocking boundary is the boundary in the current OBF that is first encountered when the robot moves toward the goal. Referring to Eq. (7), the blocking boundary can be expressed as

b* = {b | b ∈ ΛC, O* ∈ b, O* = HitObst([CG], ΩC)},

where ΛC denotes the current obstacle boundary field, ΩC denotes the current obstacle point field, and O* is the first hit obstacle point, satisfying

∀Oi ∈ Γ([CG], r + ui):  Dist(C, Pro(O*, CG)) ≤ Dist(C, Pro(Oi, CG)).

(ii) In Step 5, the blocking boundary is reset as the boundary in the new OBF that matches the obstacle point in the old obstacle point field. As mentioned in Section 4, the term "match" means that the blocking boundary includes an obstacle point whose distance to the given obstacle point O is less than the partition threshold. Namely,

b* = {b | b ∈ ΛN, ∃Ok ∈ b such that ‖Ok O‖ ≤ ρ + uk + u},

where ΛN denotes the new obstacle boundary field, and u is the uncertainty of the obstacle point O. In this way, the reset blocking boundary and the old one belong to the same followed obstacle, and the connectivity of the followed obstacle boundary is guaranteed.

In Step 2, the boundary-following direction can be chosen arbitrarily without violating the convergence properties. To perform a local optimization, the user can select the direction for which the generated subgoal is closer to the final goal. Note that the distance to the goal always decreases monotonically during the motion toward the goal; it is this property that guarantees the convergence of the algorithm.

After determining the blocking boundary and the obstacle-following direction, Step 3 returns a subgoal-obstacle pair generated by Algorithm 1 or 2. The subgoal determines the next path segment for navigation, while the associated obstacle point is used to detect the new blocking boundary where applicable. The overall navigation path consists of a series of line segments. In Step 4 (a), the robot moves toward the subgoal along a line segment; an algorithm for the dynamic reachability test takes effect here (for more details, please refer to [2]). During the obstacle-following motion (Steps 2-5), if the distance to the goal becomes smaller than the minimum distance dmin recorded so far, the robot switches back to moving toward the goal (Step 4 (b)). Once the subgoal is reached (Step 4 (c)), the robot detects a new blocking boundary based on the sensed obstacle boundary field and starts a new cycle of navigation. The overall diagram for sensor-based navigation is shown in Fig. 7.
Fig. 7: Diagram for sensor-based navigation (sensing, planning, control, and dynamics modules).
Here, our focus is on the sensing and planning modules. The former generates the obstacle boundary field based on the sensed distance data, while the latter generates the desired path (or motion). The low-level control and dynamics modules output the real motion, which makes the robot reach the goal as planned. Some implementation results of the high-level sensor-based path planner are presented in the following section.
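To tie the modules of Fig. 7 together, a high-level control-flow sketch of Algorithm 3 might look as follows in Python. The `robot` interface (position, scan, move_toward) is entirely hypothetical, the thresholds r and rho are illustrative defaults, and the dynamic reachability test of Step 4 (a) (see [2]) is omitted for brevity; only the control flow mirrors the algorithm.

```python
import math

def navigate(robot, G, follow_left=True, r=0.3, rho=0.6):
    """Sketch of Algorithm 3. Returns True iff the goal G is reached.
    Assumes robot.move_toward stops early when the remaining straight
    path becomes unsafe for the current scan (hypothetical behavior)."""
    pair_fn = (left_subgoal_obstacle_pair if follow_left
               else right_subgoal_obstacle_pair)   # Algorithm 2, symmetric
    while True:
        robot.move_toward(G)                       # Step 1: head for the goal
        C = robot.position()
        if math.dist(C, G) < 1e-6:
            return True                            # (a) goal reached
        Lambda = partition_boundaries(robot.scan(), rho)
        O_star = hit_obstacle(C, G, [p for b in Lambda for p in b], r)
        b_star = next(b for b in Lambda if O_star in b)  # blocking boundary
        d_min = math.dist(C, G)                    # Step 2
        while True:                                # boundary-following mode
            pair = pair_fn(C, Lambda, b_star, r)   # Step 3
            if pair is None:
                return False                       # no safe subgoal: stop
            S, O = pair
            robot.move_toward(S)                   # Step 4 (checked per segment)
            C = robot.position()
            if math.dist(C, G) < d_min:
                break                              # (b) closer than d_min: Step 1
            Lambda = partition_boundaries(robot.scan(), rho)
            b_star = match_boundary(Lambda, O, rho)  # Step 5, Eq. (9)
```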
8. Examples

The proposed path planner has been integrated into the Simulator for Sensor-Guided Autonomous Systems (SGAS). Fig. 8 depicts the navigation environment, the start location S, and the goal location G. The effective radius of the robot is r = 13 pixels, as indicated by the large circle. The installed laser range finder is shown as a small bold circle; note that the center of the sensor does not coincide with the center of the robot. The maximum sensing range of the laser sensor is MAX_ECHO = 1000 pixels, and the maximum scan number per cycle is Nm = 400.
Fig. 8: Navigation environment.
Fig. 9: Obstacle Point Field (OPF).
Fig. 9 shows the obstacle point field scanned at the start location S. From Eq. (5), the partition constant is ρ = 2r = 26 pixels. With this threshold, 14 boundaries are generated at this location, as shown in Fig. 10.
Fig. 10: Obstacle Boundary Field (OBF).
Fig. 11: Blocking boundary and two exits.
Note that a single real obstacle can generate several sensed boundaries (see b2 and b3; b4 and b6; b8, b9, and b10; b11 and b12). Likewise, several real obstacles may belong to the same sensed boundary (see b13). The boundary b7 is the initial blocking boundary at the location S. With Algorithms 1 and 2, the overall generated blocking boundary set {b7, b5, b1, b14, b13}, the left subgoal SL, and the right subgoal SR for the start location S are shown in Fig. 11.

Now let us analyze the navigation paths resulting from different decision strategies.

(a) Select a left subgoal. If the robot always selects a left subgoal, the navigation path is S→a→b→c→d→e→f→g→h→G, as shown in Fig. 12 (a). At the start location S, the robot selects the left subgoal a. The robot keeps moving toward the goal during the segment S→a→b→c. At location c, the left subgoal d becomes farther from the goal G, so the robot switches to moving around obstacles until reaching location h, which is closer to the goal than location c. Thereafter, the robot resumes the motion toward the goal until reaching the final goal.

(b) Select a right subgoal. If the robot always selects a right subgoal, the navigation path is S→a→b→c→d→e→f→g→G, as shown in Fig. 12 (b). At the start location S, the robot selects the right subgoal a. The robot keeps moving toward the goal during the segment S→a→b. At location b, the right subgoal c becomes farther from the goal G, so the robot switches to moving around obstacles until reaching location g, which is closer to the goal than location b. Thereafter, the robot resumes the motion toward the goal until reaching the final goal.

(c) Select a closer subgoal. Between the left and right subgoals, if the robot always selects the one closer to the final goal, then the navigation path is S→a→b→c→d→e→f→g→G; refer to Fig. 12 (c). At the start location S, the right subgoal a is selected because it is closer to the final goal G than the left one. The robot keeps moving toward the goal during the segment S→a→b. At location b, the right subgoal d becomes farther from the goal G, so the robot switches to moving around obstacles until reaching location g, which is closer to the goal than location b. Thereafter, the robot resumes the motion toward the goal until reaching the final goal.

Note that the turn direction during obstacle following is selected once and for all. This is necessary to guarantee the global convergence.
Fig. 12: Planned navigation paths (MAX_ECHO = 1000). (a) Select a left subgoal. (b) Select a right subgoal. (c) Select a closer subgoal.

If the maximum sensing range is reduced to 10% (i.e., MAX_ECHO = 100), the navigation paths are as depicted in Fig. 13 (a)-(c), respectively. In general, reducing the sensor range generates more subgoals, which may further slow down the robot. Comparing Fig. 12 (a) and Fig. 13 (a), one may conclude that a larger sensing range may reduce the length of the navigation path. However, this is not generally true: as pointed out in [4], as long as the robot sensors provide only partial information, the quality of the generated paths cannot be guaranteed. For more discussion about the relationship between sensing range and path length, please refer to [4]-[5].
Fig. 13: Planned navigation paths (MAX_ECHO = 100). (a) Select a left subgoal. (b) Select a right subgoal. (c) Select a closer subgoal.
9. Conclusion

Sensor-based path planning is one of the most important topics in mobile robotics. Most current approaches that achieve obstacle avoidance as well as global convergence assume a point robot and ignore sensor uncertainties. This paper presents a sensor-based path planner for a non-point mobile robot with sensing uncertainties considered. A geometrical uncertainty model for the laser range finder is proposed, and basic concepts such as path safety and the blocking boundary are discussed. Like the Bug algorithms [3]-[5], there are two basic modes of robot motion: (1) moving toward the goal, and (2) following an obstacle boundary. The robot initially moves toward the goal. If an obstacle blocks the way, the robot switches to following the boundary of this obstacle until it reaches a position closer to the goal. In this way, the global convergence is guaranteed. Unlike the Bug algorithms, the proposed approach directly incorporates the robot size and sensor uncertainties into the planning functions. The data structure of an obstacle boundary is represented as a numbered set of distance sensor data rather than an analytical expression, and no assumption is made about the geometric properties of obstacles in the environment. The proposed planner is therefore applicable to sensor-based autonomous navigation in unstructured environments.
Acknowledgment

This work is supported by the National Science and Technology Board and the Ministry of Education, Singapore, under Academic Research Project JT/ARC 7/97.
References
[1] SICK AG. Proximity Laser Scanner (PLS), Technical Description. Germany, 2000.
[2] B. R. Zuo and I. M. Chen. Dynamic reachability test for sensor-based navigation with unknown obstacle boundaries. The 2001 IEEE International Conference on Robotics and Automation, Seoul, Korea (ICRA 2001, submitted).
[3] V. J. Lumelsky and A. A. Stepanov. Dynamic path planning for a mobile automaton with limited information on the environment. IEEE Trans. Automatic Control, 31(11): 1058-1063, 1986.
[4] V. J. Lumelsky and T. Skewis. Incorporating range sensing in the robot navigation function. IEEE Trans. Sys., Man Cybernet., 20(5): 1058-1069, 1990.
[5] I. Kamon, E. Rimon and E. Rivlin. TangentBug: A range-sensor-based navigation algorithm. Int. J. Robotics Res., 17(9): 934-953, 1998.
[6] B. H. Krogh and D. Feng. Dynamic generation of subgoals for autonomous mobile robots using local feedback information. IEEE Trans. Automatic Control, 34(5): 483-493, 1989.
[7] Z. Shiller. Online suboptimal obstacle avoidance. Int. J. Robotics Res., 19(5): 480-497, 2000.
[8] O. Khatib. Real-time obstacle avoidance for manipulators and mobile robots. Int. J. Robotics Res., 5(1): 90-98, 1986.
[9] J. Borenstein and Y. Koren. The vector field histogram - fast obstacle avoidance for mobile robots. IEEE Trans. Robotics Automation, 7(3): 278-288, 1991.
[10] D. Fox, W. Burgard, and S. Thrun. The dynamic window approach to collision avoidance. IEEE Robotics & Automation Magazine, 4(1): 23-33, 1997.
[11] T. Lozano-Perez. Spatial planning: A configuration space approach. IEEE Trans. Computers, 32(2): 108-120, 1983.
[12] C. Yap. Algorithmic motion planning. In Advances in Robotics, Vol. 1: Algorithmic and Geometric Aspects, J. Schwartz and C. Yap, Eds. Hillsdale, NJ: Erlbaum, 1987, pp. 95-143.
[13] Polaroid Corporation (Ultrasonic Components Group). Polaroid Manual. Cambridge, MA, 1997.
Appendix: Notations and Functions

In this paper, we use the following notations and functions.

Notations:
[AB]            Line segment from point A to point B
[AB             Ray from point A through point B, containing point A
AB              Straight line (unbounded) through points A and B
‖AB‖            Length of the line segment [AB]
Γ([AB], r)      Line segment [AB] with a thickness radius r (a lane)
Γ([AB, r)       Ray [AB with a thickness radius r (an open lane)
O               Obstacle point
Ω               Obstacle point field (OPF)
b               Obstacle boundary
Λ               Obstacle boundary field (OBF)

Functions:
Dist(A, B)          Returns the distance from point A to point B, i.e., Dist(A, B) = ‖AB‖
Pro(C, AB)          Returns the projection (point) of point C onto the straight line AB
HitObst([AB], b)    Returns the first hit obstacle point on boundary b when the robot moves from point A to point B
HitObst([AB], Ω)    Returns the first hit obstacle point in the obstacle point field Ω when the robot moves from point A to point B
HitObst([AB], S)    Returns the first hit obstacle point on a boundary belonging to the boundary set S when the robot moves from point A to point B
HitObst([AB, S)     Returns the first hit obstacle point on a boundary belonging to the boundary set S when the robot moves from point A through point B
LeftExit(A)         Returns the left exit of obstacle point A
RightExit(A)        Returns the right exit of obstacle point A
LeftExit(b)         Returns the left exit of boundary b
RightExit(b)        Returns the right exit of boundary b
LeftObst(b)         Returns the left obstacle point of boundary b
RightObst(b)        Returns the right obstacle point of boundary b