
A Mobile Robot System With Semi-Autonomous Navigation Using Simple And Robust Person Following Behavior

Yutaka Hiroi¹, Shohei Matsunaka², Akinori Ito³
¹ Osaka Institute of Technology, [email protected]
² Osaka Institute of Technology, [email protected]
³ Tohoku University, [email protected]

Abstract
We are developing a mobile robot to support daily life. One important issue for such a robot is navigation, where the user leads the robot to a destination. In this paper, we propose a simple person-following method that enables the robot to track the target person robustly using a laser range finder and a simple algorithm. When moving around a building, one problem is how to get into a small space such as an elevator car, where the normal person-following strategy does not work well because the room size prevents the robot from keeping its usual distance from the user. We therefore developed a mobile robot system that can ride an elevator in cooperation with the user.

Keywords: Mobile Robot, Human Tracking, Laser Range Finder, Person Following

1. Introduction
Robots that support the daily life of humans have been actively developed [1-3]. These robots are expected to work in unstructured real environments, in contrast to industrial robots that work in completely structured environments such as factories. Robots working in real environments cannot be operated by a simple program; rather, interaction with the users, observation of the environment, and complicated decisions are needed to achieve tasks robustly.

Consider the task of leading a robot from a starting point to a destination. The most straightforward method is to use a remote controller [4]. The controller-based method is easy to realize, but the user must take full control of the robot during the movement. The other scenario is to let the robot follow the user [5]. In this scenario, the user first instructs the robot to follow him/her, and then the robot follows the user until they arrive at the destination. Here, the robot is expected to avoid obstacles and walls autonomously, track the user in a crowd, and always keep an appropriate distance from the user. This scenario enables the user to lead the robot in a more natural way. This kind of robot navigation is called semi-autonomous navigation [6]: the direction or moving path is provided by the human, while control such as avoiding obstacles or tracking the human is performed by the robot.

There have been many works on human tracking and person following. In this paper, we define the term "human tracking" as detecting humans near the sensor and tracking the target humans, which does not imply motion of the robot. We define "person following" as making a robot follow the target person using human tracking technology. Human tracking technologies have been applied to many fields such as monitoring [7] and robotics [8].
Tracking is mainly based on computer vision [9] or on sensors such as a laser range finder (LRF) [10]. Vision-based methods can use color information and can obtain the distance to the object with a stereo camera; however, their drawback is sensitivity to lighting conditions [11]. Computational complexity is another problem. An LRF can precisely measure distances between the sensor and objects in a two-dimensional plane and is robust against lighting conditions. Considering its robustness to lighting, precision of measurement, and low computational cost, we decided to use an LRF-based method for human tracking.

When following a person using LRFs, three issues should be considered: first, the number and positions of the LRFs; second, the algorithm for tracking the person; and third, how to control the robot once the position of the person is observed.

Journal of Man, Machine and Technology (JMMT), Volume 1, Number 1, December 2012. doi: 10.4156/jmmt.vol1.issue1.4


Many systems employ only one LRF and measure the legs of humans [12-16]. The problem with this approach is that it is difficult to separate multiple persons in a crowd. Therefore, several systems employ multiple LRFs and measure the legs, shoulders, and heads of persons to increase the robustness of detection. However, considering the cost of LRFs, the number of LRFs should be small.

As for the detection algorithm, many systems use the Kalman filter [15-16], the particle filter [17], or their variants such as the Interacting Multiple Model [18-19] for tracking. These algorithms provide robust tracking under noisy observation by considering the temporal continuity of the objects and the distribution of observation noise. One drawback of these algorithms (especially the particle filter) is their computational cost. Considering implementation on robots, simpler tracking logic is desired.

As for the third issue, many human tracking methods do not consider how to control the mobile robot so that it keeps an appropriate distance from the target person. In addition, the environment changes according to the positions of the robot and the target person. When passing through a narrow doorway or going into a small room such as an elevator car, the distance to be kept from walls or from the target should change so that the robot can actually enter the room.

Considering the above issues, we have developed a system for person following and mobile robot control that:
 robustly tracks the target person using one LRF in an environment where multiple persons are moving, and
 lets the robot enter an elevator car with the person and exit from it safely.

This paper is organized as follows. In section 2, we propose a method to follow a target person using an LRF. In section 3, we describe a robot system that can ride an elevator car with the target person using person following and semi-autonomous behavior.
In section 4, we describe the results of experiments that examine the robustness of the proposed method, such as person following in an environment where multiple people are walking or where the target person is blocked by another person, as well as riding an elevator with the target person. We conclude this paper in section 5.

2. Person following using a laser range finder
2.1 Requirements
We propose an algorithm to track the target person using one LRF in an environment where multiple persons are moving, such as a real room or corridor. To achieve this, we should solve the following four issues.
(1) The robot should detect the target person.
(2) The robot should be able to track the target person even when other persons exist in the space.
(3) The robot should cope with occlusion.
(4) The robot should avoid obstacles.
Besides solving these issues, the solution should not be computationally expensive, as described in the Introduction. Most previous works either did not consider obstacle avoidance or unified the avoidance behavior with processes such as (1) to (3). Our system separates the sensor for person following from that for avoiding obstacles, which makes the measurement and calculation for the target person and for obstacles easier. We can use the algorithms for human tracking and obstacle avoidance separately and merge their results to design the actual behavior of the robot. We only describe the person-following algorithm in this paper; obstacle avoidance will be described in a future publication.

2.2 Hardware
Fig. 1 shows the mobile robot and the configuration of the sensors. The moving base is a Pioneer P3-DX developed by Adept MobileRobots LLC. It employs a differential drive and has versatility and quick response. The LRF for human tracking is installed at a height of 1000 mm so that it observes persons' waists [20]. Observing the waist is more robust than observing the legs, because leg observation is affected by the person's clothing (such as a long skirt). The other LRF, for obstacle avoidance, is installed at the lowest front of the robot.

Figure 1. The mobile robot (ASAHI) and the measurement position and installation height of the LRFs

The LRF used in this robot is the UTM-30LX manufactured by Hokuyo Automatic Co., Ltd. Fig. 2 shows the scanning range of the sensor. The range is from -45 to 225 degrees, corresponding to steps 0 to 1080, with a resolution of 0.25 degrees/step. One scan takes 25 ms.

Figure 2. Measuring range of the UTM-30LX (270 degrees, from -45 deg at step 0 to 225 deg at step 1080)

2.3 Human detection algorithm
Next, we describe the algorithm to detect the target person from the LRF measurements. The overview of the detection algorithm is as follows.
(1) Split objects and walls by comparing distances (segmentation).
(2) Discriminate objects from walls using the width of each region (discrimination).
(3) Calculate the position of each object (position calculation).
(4) Among the detected objects, search for the nearest one, which becomes the target to follow.
First, let us explain the segmentation process. As Fig. 3 shows, the LRF measures the distance to obstacles counterclockwise and obtains an angle-distance map. Let D(θ) be the distance to the obstacle at angle θ, where θ is an integer representing the step number, as explained in the previous section. Now let ΔD(θ) = |D(θ) − D(θ+1)|. If the boundary of an object lies between angles θ and θ+1 (the red points in Fig. 3), ΔD(θ) is expected to be large; otherwise (the blue points) it is small. Therefore, we can use a threshold D_th to detect the boundary.


Figure 3. Segmentation of the measurement data

After segmenting the measurement data, we discriminate the segments corresponding to human-like objects from background segments (walls). The discrimination is based on the width of the segment. After segmentation, the absolute width of each segment is calculated (Fig. 4) and compared with the typical width of a human body. According to the AIST anthropometric database [20], the width of a human body (maximum body breadth) is around 418 mm (female) or 470 mm (male). Similarly, the depth of a body (abdominal extension depth) is around 280 mm. Considering the variance of body size and the effects of clothes, posture, arm swinging, and measurement error, we decide that an object is a candidate human when its width is between 100 and 800 mm. The maximum width of 800 mm is almost twice the width of a single person, but this does not matter unless two persons walk tightly side by side.

Figure 4. Measurement of object widths
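The segmentation and width-based discrimination steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold D_th = 200 mm and the 100-800 mm width window come from the paper, while the function names and the simplified angle origin (step 0 taken as 0 degrees) are our own assumptions.

```python
import math

D_TH = 200.0                  # segmentation threshold [mm] (paper's D_th)
W_MIN, W_MAX = 100.0, 800.0   # accepted segment width [mm]
ANGLE_RES = math.radians(0.25)  # UTM-30LX resolution: 0.25 deg/step

def segment_scan(distances):
    """Split the scan into segments wherever the range jumps by more than D_TH."""
    segments, current = [], [0]
    for i in range(1, len(distances)):
        if abs(distances[i] - distances[i - 1]) > D_TH:
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return segments

def segment_width(distances, seg):
    """Chord length between the first and last point of a segment."""
    a, b = seg[0], seg[-1]
    ax = distances[a] * math.cos(a * ANGLE_RES)
    ay = distances[a] * math.sin(a * ANGLE_RES)
    bx = distances[b] * math.cos(b * ANGLE_RES)
    by = distances[b] * math.sin(b * ANGLE_RES)
    return math.hypot(bx - ax, by - ay)

def detect_human_candidates(distances):
    """Return (x, y) centers of segments whose width looks human."""
    out = []
    for seg in segment_scan(distances):
        if W_MIN <= segment_width(distances, seg) <= W_MAX:
            mid = seg[len(seg) // 2]
            r = distances[mid]
            out.append((r * math.cos(mid * ANGLE_RES),
                        r * math.sin(mid * ANGLE_RES)))
    return out
```

On a synthetic scan with a distant wall and one person-sized object at 1000 mm, the wall segments are rejected as too wide and the single human-width segment survives.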

After deciding the human-like objects, the center point of each object is calculated, and the object nearest to the LRF is determined as the first target to follow. Through this process, we solve the first issue, i.e., detecting the target person.

Next, we explain a method to track the target person in an environment where multiple persons exist and sometimes block the target person. Fig. 5 shows an example of such a situation. When a non-target person gets nearer than the target person, a naive person-following method mistakes the non-target person for the one to follow. To prevent this, we need to consider the continuity of the person's position. This kind of human tracking is usually based on a statistical tracking method such as the Kalman filter or particle filter. However, we confirmed that the method based on simple heuristics explained below works very robustly, without a large amount of computation, in our environment.

Figure 5. The problem of following the nearest human

Figure 6. Movement of the target person

Figure 7. Thresholds for target detection

Fig. 6 shows an example of tracking. Suppose that the LRF is not moving, and the target person is at position A, then moves to position B at the next step, and then to position C. When the target person is at position C, the other person is nearer to the LRF than the target. The tracking method should cope with such a situation. Usually this kind of problem is solved by a statistics-based tracking method such as the Kalman filter or the particle filter. In this paper, we propose a heuristic-based method, which is much simpler than the statistics-based methods and yet robust enough for single-person tracking.

Let (x(i,t), y(i,t)) be the coordinates of the i-th person at time t in the measurement area, and let p(t) be the index of the target person at time t. Here, the origin is the position of the LRF. We calculate the difference between the position of the i-th person at time t and the target person's previous position as follows:

Δx(i,t) = x(i,t) − x(p(t−1), t−1)   (1)
Δy(i,t) = y(i,t) − y(p(t−1), t−1)   (2)

Then the target person at time t is determined as

p(t) = argmin_i √(Δx(i,t)² + Δy(i,t)²), subject to x_min ≤ Δx(i,t) ≤ x_max and y_min ≤ Δy(i,t) ≤ y_max   (3)


The region outlined by the red broken line in Fig. 7 is the region where x_min ≤ Δx(i,t) ≤ x_max and y_min ≤ Δy(i,t) ≤ y_max when the target person at time t−1 is at position B. When the coordinates of a person fall into this region, that person is a candidate target at the next time step; among the candidates, the one minimizing Eq. (3) is chosen as the target at time t. Based on preliminary experiments, we used (x_min, x_max) = (−300, 300) and (y_min, y_max) = (−300, 600), in mm. The value of y_max is larger because the target person is assumed to be walking away from the robot. When a non-target person blocks the target person (occlusion) at time t, the system cannot find the target; in that case, the system tries to find the target person at time t+1 by referring to the target's position at time t−1. In preliminary experiments, the system robustly tracked the target person even when the person's walking speed changed suddenly or the person turned rapidly.
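The gating rule of Eq. (3) can be sketched as a few lines of code. This is an illustrative reading of the heuristic, assuming the paper's gate of Δx in [−300, 300] mm and Δy in [−300, 600] mm; the function name and the None return for occlusion are our own conventions.

```python
# Gate around the previous target position (Eq. (3)); the larger upper
# bound on dy allows for the target walking away from the robot.
X_MIN, X_MAX = -300.0, 300.0
Y_MIN, Y_MAX = -300.0, 600.0

def select_target(candidates, prev_target):
    """Among candidates inside the gate, pick the one closest to the
    previous target position; return None on occlusion (empty gate)."""
    px, py = prev_target
    best, best_d2 = None, float("inf")
    for x, y in candidates:
        dx, dy = x - px, y - py
        if X_MIN <= dx <= X_MAX and Y_MIN <= dy <= Y_MAX:
            d2 = dx * dx + dy * dy
            if d2 < best_d2:
                best, best_d2 = (x, y), d2
    return best
```

A non-target person passing closer to the LRF falls outside the gate and is simply ignored, which is why no statistical filter is needed for this single-target case.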

2.4 Designing the robot's movement
Next, we explain how to drive the robot so that it follows the target person. We assume that the coordinates of the target person (x, y) are obtained by the human tracking method explained in the previous section, with the origin at the position of the LRF (and thus the robot). The distance L and angle θ_robot between the robot and the target person are

L = √(x² + y²)   (4)
θ_robot = tan⁻¹(x / y)   (5)

where the relationship between the coordinates and the angle is shown in Fig. 8. We design the velocity V of the robot to be proportional to the distance L, which makes the robot move faster when the distance grows:

V = K_V · L  if L > L_sp,  V = 0  otherwise   (6)

Here, K_V > 0 is a constant that controls the velocity, and L_sp > 0 is the minimum distance to be kept between the robot and the target person. In addition, when the target person steps to the left or right, the robot should change its direction so that the target person stays in front of the robot. To do this, we change the velocities of the left and right wheels independently. The velocity difference is designed as

ΔV = K_t · θ_robot + K_tD · Δθ_robot   (7)

where K_t > 0 and K_tD > 0 are constants. The velocities of the right and left wheels are then

V_R = V + ΔV   (8)
V_L = V − ΔV   (9)

Here, V_L and V_R are the velocities of the left and right wheels, respectively. Finally, the angular velocities of the left and right wheels are

ω_L = V_L / (2πR)   (10)
ω_R = V_R / (2πR)   (11)

where R is the radius of the wheel.
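Equations (4) to (9) amount to a proportional controller on distance plus a PD-style term on bearing; a minimal sketch follows. K_V = 0.2 s⁻¹ and L_sp = 1000 mm follow Table 1; the turn gains and the use of degrees for the bearing are assumptions, and the sign convention for the two wheels is ours.

```python
import math

K_V = 0.2        # velocity gain [1/s] (Table 1)
L_SP = 1000.0    # minimum distance to keep [mm] (Table 1)
K_T = 2.0        # turn gain [mm/(s*deg)] -- assumed pairing of Table 1 values
K_TD = 0.01      # turn-rate gain [mm/deg] -- assumed

def wheel_velocities(x, y, prev_theta):
    """Eqs. (4)-(9): return (V_L, V_R, theta) with wheel speeds in mm/s
    and the target bearing theta in degrees."""
    L = math.hypot(x, y)                             # eq. (4)
    theta = math.degrees(math.atan2(x, y))           # eq. (5)
    V = K_V * L if L > L_SP else 0.0                 # eq. (6)
    dV = K_T * theta + K_TD * (theta - prev_theta)   # eq. (7)
    return V - dV, V + dV, theta                     # eqs. (8)-(9)
```

With the target straight ahead and far away, both wheels get the same speed; when the target steps sideways, the differential term turns the robot toward it.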

Figure 8. Parameters for person following

3. Riding an elevator car
3.1 The need for a special behavior for riding an elevator
As mentioned above, a robot for daily life support should be able to move through a small doorway or get into a small space such as an elevator car to move around a building. In normal person-following behavior, the robot keeps a distance from the target person and other obstacles (1000 mm in this work). However, when entering a small space, the robot must come closer than this distance to get into the car. Moreover, when riding an elevator, controlling only the distance to the target person is not enough. Fig. 9 shows a situation in which the robot keeps the minimum distance to the target person: the two robots in the figure are at the same distance from the target person, but one of them is outside the elevator car. Therefore, to enter the car with the target person, we need not only the normal person-following control but also a special control for entering the elevator car safely.

Figure 9. The problem of riding an elevator using only the person-following method

The basic strategy of our robot for using an elevator is that the target person instructs the robot to transition to the elevator mode using spoken dialog, and then the robot autonomously searches for the wall of the elevator and enters the car by moving along the wall. In this work, we assume that the elevator car is at least 1350 mm deep, which complies with the Japanese building code.

3.2 Elevator riding scenario
In this section, we explain the scenario for a robot to ride an elevator. First, the target person arrives in front of the elevator (Fig. 10(a)) and tells the robot that they have arrived at the elevator (Fig. 10(b)).


After that, the target person pushes the elevator button to call the car (Fig. 10(c)). When the car arrives, the target person gets into it (Fig. 10(d)).

Figure 10. Riding an elevator (1)

After this spoken instruction, the robot follows the target person (Fig. 11(a)) while sensing for the wall beside the elevator door (Fig. 11(b)). On sensing the door, the robot transitions to the autonomous mode and announces the state transition (Fig. 11(c)). The robot then searches for and follows the elevator wall using the two LRFs, looking for the inner wall of the elevator (Fig. 11(d)). When the robot finds the inner wall, it measures the angle between itself and the wall and rotates to become parallel to it (Fig. 11(e)). Then the robot moves straight, measuring the distances to the side wall and the front wall, and stops when the distance to the front wall becomes smaller than a threshold (Fig. 11(f)).
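The riding scenario above is essentially a small state machine; a sketch follows. The state and event names are our own labels for the stages in Figs. 10 and 11, not identifiers from the paper's implementation.

```python
# The elevator-riding scenario of Figs. 10-11 as transitions:
# follow -> sense door wall -> trace wall -> align -> enter -> stop.
TRANSITIONS = {
    ("FOLLOWING", "arrived_speech"):    "SEEK_DOOR",   # Fig. 10(b): spoken command
    ("SEEK_DOOR", "door_wall_sensed"):  "WALL_TRACE",  # Fig. 11(b)-(c): autonomous mode
    ("WALL_TRACE", "inner_wall_found"): "ALIGN",       # Fig. 11(d)
    ("ALIGN", "parallel_to_wall"):      "ENTER",       # Fig. 11(e)
    ("ENTER", "front_wall_close"):      "STOPPED",     # Fig. 11(f)
}

def step(state, event):
    """Advance one transition; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

Keeping the mode logic this explicit is what lets the spoken command from the user, rather than a complex classifier, trigger the switch out of ordinary person following.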

Figure 11. Riding an elevator (2)


3.3 Moving parallel to the wall
In the final part of elevator riding, the robot moves parallel to the elevator wall. In this section, we explain how. The basic principle is to detect the wall and follow an imaginary line that lies L_iw mm inside the elevator and parallel to the wall. Fig. 12 shows an overview of the method. The control strategy is similar to that used for person following. Let L be the distance from the robot to the wall, and θ the angle between the robot and the wall. The velocities of the two wheels are calculated as follows:

V_R = V + ΔV   (12)
V_L = V − ΔV   (13)
ΔV = K_L · L + K_LD · ΔL + K_t · θ + K_tD · Δθ   (14)

Here, V_R and V_L are the velocities of the right and left wheels, respectively, and K_L, K_LD, K_t, and K_tD are constants that control the velocity.

Figure 12. Following a wall
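Equation (14) can be sketched as a PD-style law on wall distance and heading. L_iw = 500 mm follows Table 1; we read the distance term as acting on the error (L − L_iw), so that ΔV vanishes once the robot runs along the imaginary line, and the gain values and their pairing are assumptions, since the table is only partially legible.

```python
K_L, K_LD = 0.01, 0.01   # wall-distance gains (pairing assumed)
K_T, K_TD = 0.01, 0.01   # heading gains (assumed)
L_IW = 500.0             # imaginary-line offset from the wall [mm] (Table 1)

def wall_follow_dv(L, prev_L, theta, prev_theta):
    """Differential wheel speed from wall-distance error and heading,
    in the spirit of eq. (14)."""
    return (K_L * (L - L_IW) + K_LD * (L - prev_L)
            + K_T * theta + K_TD * (theta - prev_theta))
```

On the imaginary line with zero heading error the correction is zero; drifting away from the wall produces a restoring differential.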

3.4 Getting off the elevator
To get off the elevator, the robot simply detects the door opening and goes out. The point to be considered is how to detect the target person again, because the robot loses the target person when entering the elevator. To do this, the robot rotates to face the door (Fig. 13(a)) and halts. The target person then stands in front of the robot (Fig. 13(b)) and gives a spoken command to detect the target again. After arriving at the destination floor, the target person gets off the elevator, and the robot follows the person.

Figure 13. Re-detection of the target person

4. Experiment
4.1 Experimental conditions
The experiments were conducted in a room with no obstacles. All windows were shut and covered by curtains. The parameter settings are shown in Table 1.

Table 1. Parameter settings

  Explanation                                        Symbol      Value
  LRF observation data segmentation threshold        D_th        200 mm
  Minimum human-robot distance                       L_sp        1000 mm
  Person-following parameters                        K_V         0.2 s⁻¹
                                                     K_t         2.0 mm/(s·deg)
                                                     K_tD        0.01 mm/deg
  Distance between the wall and the imaginary line   L_iw        500 mm
  Wall-following parameters                          K_L         0.01 s⁻¹
                                                     K_t, K_LD   0.01

4.2 Person-following experiment
We conducted an experiment to confirm that the proposed person-following method works in an environment where many people are moving around. Fig. 14 shows the configuration of the experiment. One target person and four other people were in the room, and they moved in synchrony with a metronome. The timing of the movement is shown in Fig. 15, where the numbers by the points show the time in seconds at which each person passed the point. As shown, the velocity of all walkers (except the one who did not move) was 1500 mm/s. The walkers were undergraduate students of Osaka Institute of Technology.

Figure 14. Experimental setting

Figure 15. Times at which the walkers passed each point (one walker did not move)

Fig. 16 shows pictures of the experiment. As shown, we could confirm that the robot using the proposed method followed the target person precisely.

Figure 16. Snapshots of the person-following experiment (0.0 s to 3.5 s at 0.5 s intervals)

4.3 Robustness against crossing
Next, we conducted an experiment to confirm that the proposed method could track the target even when another person blocked the target. The configuration is shown in Fig. 17. There were two walkers (the target and the other), who were asked to walk at a velocity of 1500 mm/s in synchrony with the metronome sound. The target started at time 0.0 s and the other person at time 0.5 s; thus, the other person reached point A at time 1.5 s and blocked the path of the target person. We conducted 25 trials: five trials for each of five combinations of walkers, who were undergraduate students of Osaka Institute of Technology. The robot tracked the target without failure in all trials.

Figure 17. Experimental setting for the crossing experiment

Fig. 18 shows an example of a tracking result. In this figure, the blue crosses show the measured positions of the target person, and the red arrow is the other person's trajectory. As shown, the LRF lost the target from 2250 to 2750 mm, but found it again, so the trajectory of the target was robustly estimated.

Figure 18. Result of the target measurement

4.4 Elevator riding
Finally, we conducted an experiment in which the target person got into the elevator with the robot and got out of it, using a real elevator at Osaka Institute of Technology. The robot successfully rode the elevator. Figs. 19 and 20 show pictures of the behavior of the target person and the robot when riding the elevator; we can confirm that the robot behaved as designed.

Figure 19. Riding an elevator (1) (snapshots from 4 s to 24 s)

Figure 20. Riding an elevator (2) (snapshots from 26 s to 35 s)

Fig. 21 shows the behavior of the target person and the robot when getting off the elevator. As in the figures above, the robot followed the person and got off the elevator smoothly.

Figure 21. Getting off the elevator (snapshots from 2 s to 30 s)

As described above, we confirmed that the robot smoothly rode and got off the elevator using the proposed person-following method and interaction with the target person. However, it must be noted that several assumptions were needed for the method to work, which limits its applicability. For example, the current method assumes that the elevator is empty when the robot enters. In a real situation, there may be several passengers in the car, and the method would fail. However, since the robot moves with the help of the target person, we believe a method can be developed in which the target person helps the robot ride a crowded elevator appropriately. In our opinion, for the real use of robots in society, it is important for a robot to achieve its task reliably, even with help from a human.

5. Conclusions
We have developed a method for a robot to follow a person robustly using one LRF, and using this method, we built a robot system that can ride an elevator with the target person robustly and safely. In the experiments, the person-following method was proven able to track the target person robustly even in an environment with multiple walkers. The tracking algorithm, which considers human walking behavior, made it possible to track the target even when another walker blocked the target.


Conventional methods of target person measurement using an LRF observe the legs of the target; in contrast, the proposed method measures the waist of the target, which is robust against clothing such as a long skirt. The proposed algorithm is a combination of simple methods, yet it has proven to work very robustly. Simplicity and robustness are two very important factors for a robot in real use.

In addition to person following, we proposed a robot system that can ride an elevator car robustly by interacting with the target person. It is essentially difficult to enter a small space like an elevator car using only the usual person-following method. A completely autonomous system for this would become very complicated and could not be expected to be robust; our method solves this problem by incorporating the human user into the system's behavior. As the experiments showed, a little human help makes the robot behave very robustly.

A person-following robot can be used as a carrier of heavy baggage, and since it can use an elevator, it is expected to carry baggage between different floors. One remaining problem is the situation where the robot completely loses the target. In the current implementation, the robot recognizes the nearest object as the target; therefore, when the robot loses the target, it calls the target person using speech synthesis, and the person should come to stand in front of the robot. Automating this procedure is an issue to be solved next.

6. References
[1] A. Mertens, U. Reiser, B. Brenken, M. Lüdtke, M. Hägele, A. Verl, C. Brandl and C. Schlick, “Assistive Robots in Eldercare and Daily Living: Automation of Individual Services for Senior Citizens”, In: Intelligent Robotics and Applications, Part I: 4th International Conference (ICIRA 2011), pp. 542-552, 2011.
[2] C.-H. King, T. L. Chen, Z. Fan, J. D. Glass and C. C. Kemp, “Dusty: an assistive mobile manipulator that retrieves dropped objects for people with motor impairments”, Disability and Rehabilitation: Assistive Technology, vol. 7, no. 2, pp. 168-179, 2012.
[3] K. Yamazaki, R. Ueda, S. Nozawa, M. Kojima, K. Okada, K. Matsumoto, M. Ishikawa, I. Shimoyama and M. Inaba, “Home-Assistant Robot for an Aging Society”, Proceedings of the IEEE, Special Issue on Quality of Life Technology, vol. 100, no. 8, pp. 2429-2441, 2012.
[4] T. L. Chen and C. C. Kemp, “A Direct Physical Interface for Navigation and Positioning of a Robotic Nursing Assistant”, Advanced Robotics, vol. 25, issue 5, pp. 605-627, 2011.
[5] R. Gockley, J. Forlizzi and R. Simmons, “Natural Person-Following Behavior for Social Robots”, In: Proc. Int. Conf. on Human-Robot Interaction (HRI 2007), pp. 17-24, 2007.
[6] A. Argyros, P. Georgiadis, P. Trahanias and D. Tsakiris, “Semi-autonomous Navigation of a Robotic Wheelchair”, Journal of Intelligent and Robotic Systems, vol. 34, pp. 315-329, 2002.
[7] H. Zhao and R. Shibasaki, “A Novel System for Tracking Pedestrians Using Multiple Single-Row Laser-Range Scanners”, IEEE Trans. Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 35, no. 2, pp. 283-291, 2005.
[8] N. Bellotto and H. Hu, “Multisensor-Based Human Detection and Tracking for Mobile Service Robots”, IEEE Trans. Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 39, no. 1, pp. 167-181, 2009.
[9] D. M. Gavrila, “The Visual Analysis of Human Movement: A Survey”, Computer Vision and Image Understanding, vol. 73, no. 1, pp. 82-98, 1999.
[10] A. Fod and A. Howard, “Laser-Based People Tracking”, In: IEEE International Conference on Robotics and Automation (ICRA), pp. 3024-3029, 2002.


[11] P. K. Atrey and M. A. Hossain, “Multimodal fusion for multimedia analysis: a survey”, Multimedia Systems, vol. 16, pp. 345-379, 2010.
[12] D. Schulz, W. Burgard, D. Fox and A. B. Cremers, “People Tracking with a Mobile Robot Using Sample-based Joint Probabilistic Data Association Filters”, International Journal of Robotics Research, vol. 22, no. 2, pp. 99-116, 2003.
[13] J. Xavier, M. Pacheco, D. Castro, A. Ruano and U. Nunes, “Fast line, arc/circle and leg detection from laser scan data in a player driver”, In: IEEE International Conference on Robotics and Automation (ICRA), pp. 3941-3946, 2005.
[14] J. Cui, H. Zha, H. Zhao and R. Shibasaki, “Robust Tracking of Multiple People in Crowds Using Laser Range Scanners”, In: 18th International Conference on Pattern Recognition (ICPR), pp. 857-860, 2006.
[15] J. H. Lee, T. Tsubouchi, K. Yamamoto and S. Egawa, “People tracking using a robot in motion with laser range finder”, In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2936-2942, 2006.
[16] M. Luber, J. A. Stork, G. D. Tipaldi and K. O. Arras, “People Tracking with Human Motion Predictions from Social Forces”, In: IEEE International Conference on Robotics and Automation (ICRA), pp. 464-469, 2010.
[17] M. Montemerlo, S. Thrun and W. Whittaker, “Conditional Particle Filters for Simultaneous Mobile Robot Localization and People-Tracking”, In: IEEE International Conference on Robotics and Automation (ICRA), pp. 695-701, 2002.
[18] H. Zhao, Y. Chen, X. Shao, K. Katabira and R. Shibasaki, “Monitoring a populated environment using single-row laser range scanners from a mobile platform”, In: IEEE International Conference on Robotics and Automation (ICRA), pp. 4739-4745, 2007.
[19] O. M. Mozos, R. Kurazume and T. Hasegawa, “Multi-Layer People Detection Using 2D Range Data”, International Journal of Social Robotics, vol. 2, issue 1, pp. 31-40, 2010.
[20] AIST Anthropometric Database 1991-1992. Retrieved November 30, 2012, from http://riodb.ibase.aist.go.jp/dhbodydb/91-92/main.html (in Japanese).
