Long Distance Prediction and Short Distance Control in Human-Robot Systems

Rainer Palm, Senior Member IEEE, and Achim J. Lilienthal, Member IEEE

Abstract—The study of the interaction between autonomous robots and human agents in common working areas is an emerging field of research. Its main concerns are human safety, system stability, performance, and the optimality of the whole interaction process. Two approaches to human-robot interaction can be distinguished: long distance prediction, which requires the recognition of the intentions of other agents, and short distance control, which deals with actions and reactions between agents and the mutual reactive control of their motions and behaviors. In this context obstacle avoidance plays a prominent role. In this paper, long distance prediction is represented by the identification of human intentions to use specific lanes, obtained by fuzzy time clustering of pedestrian tracks. A further issue is the extrapolation of parts of both human and robot trajectories in the presence of scattered/uncertain measurements to guarantee a collision-free robot motion. Short distance control is represented by obstacle avoidance between agents using the method of velocity obstacles together with both analytical and fuzzy control methods.

I. INTRODUCTION

Interaction and cooperation between autonomous robots and human agents in common working areas is a challenging field of research. Human safety, system stability, performance, and the optimality of the whole interaction process are the main issues in this connection. With regard to human-robot interaction two approaches should be mentioned: long distance prediction, which requires the intention recognition of other agents, and short distance control, which deals with the mutual control of their motions and behaviors. A great deal of research has been done in the field of planning and learning of mobile robot tasks, navigation, and obstacle avoidance [1], [2]. One solution to the problem of intention recognition is the modeling of human behavior by clustering methods and Bayesian networks [3], [4]. Other publications deal with intention recognition in human-robot collaborative systems and with human-robot motions in accordance with human intention [5]. Bruce et al. [6] describe the planned rendezvous of a human and a robot at an intersection zone of their paths. Further research on human intentions together with trajectory recognition and planning is presented in [7]. The modeling of pedestrian trajectories using Gaussian processes is shown in [8]. In [9], fuzzy methods, probabilistic methods, and HMM approaches for the modeling of human trajectories are compared. Further research on fuzzy logic, fuzzy cognitive maps, and reinforcement learning in connection with human-robot cooperation is presented in [3], [10]. In addition to the recent research mentioned above, and as an extension of our work described in [11], this paper concentrates on the recognition of human intentions to move along certain trajectories. Our proposed method is based on observations of early trajectory parts, plus a subsequent extrapolation. Section II deals with long distance prediction, exemplified by the identification of pedestrian lanes using fuzzy time clustering and the prediction of possible collisions in the presence of scattered/uncertain measurements. Section III deals with short distance control, in particular obstacle avoidance using velocity obstacles and fuzzy control methods. Section IV presents simulations, and Section V concludes the paper.

Fig. 1. Long/short distance principle shown by Edinburgh pedestrian data

II. LONG DISTANCE PREDICTION

A. Identification of pedestrian tracks and lanes

1) Fuzzy modeling of lanes: Learning from experience is an excellent way to identify human intentions to reach a specific goal. In the framework of "programming by demonstration", the identification of human grasp patterns by fuzzy time clustering and modeling is described in [12]. In the same way, for the recognition of human intentions "where to move", models are built on the basis of recorded pedestrian trajectories. We used the "Edinburgh Informatics Forum Pedestrian Database", a large set of walking trajectories recorded over a period of months [13] (see Fig. 1). From these data, lanes are identified from people who normally use a specific area in a work space during a certain time. In our case the models are built by fuzzy time clustering as follows. Assume $m$ locations in a work space describing start and goal positions of pedestrian tracks.

1. Split a set of $m_{tot}$ trajectories into $m_k$ groups (bundles), each of them containing $m_{kl}$ trajectories with the same start/goal area.

2. Run a fuzzy time clustering for each trajectory separately with $c$ time clusters $C_{i,k,l} \in R^2$, where $i = 1...c$ is the number of the time cluster, $k = 1...m_k$ the number of the group, and $l = 1...m_{kl}$ the number of the trajectory in group $k$.

3. Compute the mean values of the $i$-th time clusters in each group $k$:

$C_{i,k} = \frac{1}{m_{kl}} \sum_{l=1}^{m_{kl}} C_{i,k,l}$

where $m_{kl}$ is the number of trajectories in group $k$ and $C_{i,k,l} = (c_x, c_y)^T_{i,k,l}$ are the $x, y$ coordinates of the $i$-th cluster center of the $l$-th trajectory in the $k$-th group. The time clusters $C_{i,k}$ $(i = 1...c,\; k = 1...m_k)$ finally represent the lanes. Fig. 2 shows an example for $k = 3$ and $c = 10$.
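The three modeling steps above can be sketched as follows. This is a minimal sketch: splitting each trajectory into consecutive time segments is used here as a simplified stand-in for the fuzzy time clustering of [12], and all function names are illustrative:

```python
import numpy as np

def time_cluster_centers(traj, c):
    """Approximate time clustering: split a trajectory (array of
    (x, y) points ordered in time) into c consecutive segments and
    return the mean of each segment as cluster centers C_{i,k,l}.
    A simplified stand-in for fuzzy time clustering."""
    segments = np.array_split(traj, c)
    return np.array([seg.mean(axis=0) for seg in segments])

def lane_model(group, c=10):
    """Step 3: average the i-th cluster centers over all m_kl
    trajectories of one group: C_{i,k} = 1/m_kl * sum_l C_{i,k,l}."""
    centers = np.array([time_cluster_centers(t, c) for t in group])
    return centers.mean(axis=0)  # shape (c, 2): one lane
```

Applied to a bundle of similar trajectories, `lane_model` yields the $c$ averaged cluster centers that represent one lane.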

Fig. 3. Kinematic relations

C. Extrapolation of linear trajectories

1) Kinematic relations: The kinematic relation between two coordinate systems $C_A$ and $C_B$ is in general defined by

$T_{AB} = \begin{pmatrix} \cos(\phi_{AB}) & -\sin(\phi_{AB}) & x_A \\ \sin(\phi_{AB}) & \cos(\phi_{AB}) & y_A \\ 0 & 0 & 1 \end{pmatrix}$  (3)

Fig. 2. Lanes 1...3, average trajectories
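The homogeneous transform of equation (3) can be written directly as code; the function name is an illustrative choice:

```python
import numpy as np

def homogeneous_transform(phi, x, y):
    """2D homogeneous transform T_AB of equation (3): rotation by
    the orientation angle phi_AB plus translation (x_A, y_A)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# A point given in frame C_A maps into frame C_B via
# p_B = T_AB @ (x_A, y_A, 1)^T
```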

B. Recognition of intentions to follow certain lanes

To predict whether the motion of a human agent aims at a specific goal, we compute the membership of a part of its trajectory to one of the identified lanes. The membership of a position $\mathbf{x} = (x, y)^T$ to a cluster center $C_{i,k}$ is computed by

$w_{i,k} = \frac{1}{\sum_{j=1}^{c} \left( d_{i,k}/d_{j,k} \right)^{2/(m_{proj}-1)}}$  (1)

where $d_{i,k} = (\mathbf{x} - C_{i,k})^T (\mathbf{x} - C_{i,k})$, $C_{i,k}$ is the $i$-th cluster center of the $k$-th lane, and $m_{proj} > 1$ is a fuzziness parameter [14]. The algorithm to identify the lane is as follows:

1. Compute the closest distances $d_{i_{min},k} = \min_j(|\mathbf{x} - C_{j,k}|)$ to the cluster centers $C_{j,k}$ $(j = 1...c,\; k = 1...m_{kl})$.

2. Compare the membership functions $w_{i_{min},k}$ and select the lane with the highest membership: $(w_{i_{min},k})_{max} = \max_k(w_{i_{min},k})$, $k = 1...m_{kl}$, or $k_{max} = \arg\max_k(w_{i_{min},k})$.

Because of measurement noise, several data points are needed for the identification. Therefore moving averages $(\bar{w}_{i_{min},k})_{max}$ over a well-defined number of time steps $n$ are computed at time $t_j$, with $j \geq n-1$:

$(\bar{w}_{i_{min},k})_{max}(t_j) = \frac{1}{n}\sum_{i=0}^{n-1} (w_{i_{min},k})_{max}(t_{j-i})$  (2)

With this, the lane to which a human trajectory belongs can be reliably identified.
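The membership computation (1) and the moving average (2) can be sketched as follows. A minimal sketch with illustrative names; it assumes the observed position never coincides exactly with a cluster center (all distances strictly positive):

```python
import numpy as np

def lane_membership(x, lanes, m_proj=2.0):
    """Membership (1) of position x to each lane: for lane k, take
    the closest cluster center i and the FCM-style weight
    w = 1 / sum_j (d_i,k / d_j,k)^(2/(m_proj-1)).
    Assumes all squared distances d are > 0."""
    w = []
    for centers in lanes:                       # centers: (c, 2)
        d = np.sum((centers - x) ** 2, axis=1)  # squared distances
        i = np.argmin(d)
        w.append(1.0 / np.sum((d[i] / d) ** (2.0 / (m_proj - 1.0))))
    return np.array(w)

def smoothed_lane(track, lanes, n=10):
    """Moving average (2) of the memberships over the last n steps,
    then the lane index with the highest smoothed membership."""
    w = np.array([lane_membership(x, lanes) for x in track])
    w_bar = w[-n:].mean(axis=0)
    return int(np.argmax(w_bar))
```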

where $\mathbf{x} = (x_A, y_A)^T$ is a position vector in $C_A$ and $\phi_{AB}$ is the orientation angle between $C_A$ and $C_B$ in (3). Figure 3 shows the relations between the coordinate systems $C_R$ (robot), $C_H$ (human), and $C_B$ (base): $T_{HB}$ between human and base, $T_{RB}$ between robot and base, $T_{HR}$ between human and robot. The $y$ axis of each frame points in the direction of the respective velocity vector. A further coordinate system $C_{\tilde{H}}$ is defined whose $y$-axis points from the center of $C_H$ to the center of $C_R$; it defines the heading angle $\phi_{\tilde{H}H}$ of the human relative to the robot. The distance between $C_H$ (and $C_{\tilde{H}}$) and $C_R$ is denoted by $d_{HR}$.

2) Observation of trajectories at an early point of time: The goal is to predict a possible collision/rendezvous together with the position of the "area of intersection" and the traveling times of the contributing agents to reach this area. If two agents reach this area at different points in time, a collision does not occur. The velocities $\mathbf{v}_H$ and $\mathbf{v}_R$ of human and robot are estimated by means of Kalman-filtered sequences of robot and human positions $\mathbf{x}_{RB}(t_i)$ and $\mathbf{x}_{HR}(t_i)$, from which the distance $d_{HR}$ between the human frame $C_H$ and the robot frame $C_R$ and the relative orientation angle $\phi_{HR}$ between the two frames are computed.

3) Computation of possible collisions: The combination "1 human - 1 robot" can easily be handled because it is sufficient to change the velocity of the robot to avoid a crash at the intersection. For more than two agents the problem is more challenging due to possible conflicts in reaching local goals. This is illustrated by a "1 human - 2 robots" example (see Fig. 4). Let the human agent intend to move along a line with an estimated velocity $v_H$, and let robots 1 and 2 intend to move on lines with the velocities $v_{R1}$ and $v_{R2}$. These three lines cross at three intersections $I_{H,R1}$, $I_{H,R2}$, and $I_{R1,R2}$ that can be computed in advance. The question is which velocities $v_{R1}$ and $v_{R2}$ lead to a collision with the human agent, provided

that the velocity $v_H$ of the human remains constant. Let the traveling times $t_{ij}$ on the particular sections $D_{ij}$ be

$t_{ij} = D_{ij}/v_i; \quad t_{ji} = D_{ji}/v_j$  (4)

Minimizing the differences $\Delta t_{ij} = t_{ij} - t_{ji}$ by changing the velocities $v_{R1}$ and $v_{R2}$ makes it possible to compute the time durations and respective velocities that lead to collisions, and from this to conclude how to avoid such collisions.

Fig. 4. Intersection points, one human, two robots ($v_H = 3$ m/s, $v_{R1} = 12.4$ m/s, $v_{R2} = 6.9$ m/s)

Let $\Delta t_1 = \Delta t_{H,R1}$, $\Delta t_2 = \Delta t_{H,R2}$, $\Delta t_3 = \Delta t_{R1,R2}$, $v_1 = v_H$, $v_2 = v_{R1}$, $v_3 = v_{R2}$ ($v_i \geq 0$), $\Delta\mathbf{t} = (\Delta t_1, \Delta t_2, \Delta t_3)^T$, and $\mathbf{v} = (v_1, v_2, v_3)^T$. Define further the Lyapunov function

$V(\tau) = \frac{1}{2}\Delta\mathbf{t}(\tau)^T Q \Delta\mathbf{t}(\tau) \rightarrow \min$  (5)

where $Q > 0$ is a diagonal weighting matrix. Differentiation of (5) with respect to $\tau$ yields

$\dot{V}(\tau) = \dot{\Delta\mathbf{t}}(\tau)^T Q \Delta\mathbf{t}(\tau) < 0$  (6)

where $\dot{V} = \delta V/\delta\tau$ and

$\dot{\Delta\mathbf{t}} = \frac{\delta \Delta\mathbf{t}}{\delta \mathbf{v}} \cdot \dot{\mathbf{v}} = J \cdot \dot{\mathbf{v}}$  (7)

$J = \begin{pmatrix} \delta\Delta t_1/\delta v_2 & \delta\Delta t_1/\delta v_3 \\ \delta\Delta t_2/\delta v_2 & \delta\Delta t_2/\delta v_3 \\ \delta\Delta t_3/\delta v_2 & \delta\Delta t_3/\delta v_3 \end{pmatrix}$  (8)

which is not square because $v_1 = v_H = const$. Choosing $\dot{\Delta\mathbf{t}}^T Q = -\Delta\mathbf{t}^T$ leads to

$\dot{V} = -\Delta\mathbf{t}^T \Delta\mathbf{t} < 0$  (9)

From (7) we get

$Q J \cdot \dot{\mathbf{v}} = -\Delta\mathbf{t}$  (10)

and finally

$\delta\mathbf{v} = -K_{opt} (J^T J)^{-1} J^T Q^{-1} \Delta\mathbf{t}$  (11)

$K_{opt}$ determines the length of the optimization step. Running (11) in an optimization loop

$\mathbf{v}_{k+1} = \mathbf{v}_k + \delta\mathbf{v}_{k+1}$  (12)

the velocity vector converges to a velocity vector $\mathbf{v}_{opt}$ and a corresponding minimum of $\Delta\mathbf{t}$. In order to avoid a possible collision, equation (11) is changed in that a correction vector $C_{\Delta t}$ is introduced so that

$V = \frac{1}{2}(\Delta\mathbf{t} + C_{\Delta t})^T Q (\Delta\mathbf{t} + C_{\Delta t}) \rightarrow \min$  (13)

From this it follows that (11) changes to

$\delta\mathbf{v} = -K_{opt} (J^T J)^{-1} J^T Q^{-1} (\Delta\mathbf{t} + C_{\Delta t})$  (14)

This option leads to $|\Delta\mathbf{t}|_{|C_{\Delta t}|>0} > |\Delta\mathbf{t}|_{|C_{\Delta t}|=0}$, which prevents human and robot from a possible collision at some intersections. As already mentioned, measurement noise and other uncertainties lead to fuzziness in the intersections. In general, measurement errors and uncertainty in kinematics and geometry have an impact on the generation of appropriate velocities to avoid collisions. Given certain error rates in the measurements of angles and distances, the resulting velocities lead to intersection areas (red scatter plots in Fig. 4) instead of crossing points. The same holds for the results gained from the optimization (blue areas). To cope with this problem numerically in the sense of fuzzy sets, we run an example (1 human, 2 robots) several times (e.g. 20 times) and apply a fuzzy clustering to the crossing points (red dots) and also to the optimized points (blue dots). The cluster centers are computed as the centers of gravity of each cluster. The clusters are modeled by Gaussian functions

$w_{G_i} = \frac{1}{\sqrt{2\pi}\sigma} e^{-(d/\sigma)^2}$  (15)

where $w_{G_i}$ is the membership of a point $x$ to a cluster $Cl_i$, $d$ the distance of a point $x$ in the cluster from the cluster center $x_{c_i}$, and $\sigma$ the standard deviation. For obstacles with a diameter $D$ it can easily be shown that a collision can be avoided if a single measurement satisfies

$\tilde{D}_{ij} \geq 4\sigma + 2D$  (16)

for the distance $\tilde{D}_{ij}$ between a "red" and a "blue" position.
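The optimization loop (11)-(14) can be sketched as follows. This is a minimal sketch, not the paper's implementation: the Jacobian of equation (8) is taken numerically, $Q$ is set to the identity, and the section-length matrix `D` and all gains are illustrative:

```python
import numpy as np

def delta_t(v, D):
    """Arrival-time differences Δt = (Δt1, Δt2, Δt3) for the
    1-human/2-robot geometry; D[i][j] is the section length of
    agent i (0: H, 1: R1, 2: R2) up to its intersection with j."""
    return np.array([D[0][1]/v[0] - D[1][0]/v[1],
                     D[0][2]/v[0] - D[2][0]/v[2],
                     D[1][2]/v[1] - D[2][1]/v[2]])

def optimize_velocities(v0, D, C=None, K_opt=0.5, steps=200, eps=1e-6):
    """Iterate v_{k+1} = v_k + δv (eq. (12)) with
    δv = -K_opt (JᵀJ)⁻¹ Jᵀ Q⁻¹ (Δt + C_Δt) (eqs. (11), (14)).
    Only the robot speeds v[1], v[2] are free; v[0] = v_H is fixed,
    so J = δΔt/δ(v2, v3) is 3x2 as in (8)."""
    v = np.array(v0, dtype=float)
    C = np.zeros(3) if C is None else np.asarray(C, float)
    Q_inv = np.eye(3)                        # Q = I for simplicity
    for _ in range(steps):
        dt = delta_t(v, D)
        J = np.zeros((3, 2))
        for j in (1, 2):                     # finite differences
            vp = v.copy(); vp[j] += eps
            J[:, j - 1] = (delta_t(vp, D) - dt) / eps
        dv = -K_opt * np.linalg.pinv(J.T @ J) @ J.T @ Q_inv @ (dt + C)
        v[1:] += dv
    return v
```

With $C_{\Delta t} = 0$ the loop converges to robot speeds that produce simultaneous arrivals (collision candidates); a nonzero $C_{\Delta t}$ biases the solution away from them, as in (14).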

III. SHORT DISTANCE CONTROL

From robot/human positions measured at an early point in time, Kalman-filtered sequences of robot and human positions $\mathbf{x}_{RB}(t_i)$ and $\mathbf{x}_{HR}(t_i)$ are gained, from which the velocities $\mathbf{v}_H$ and $\mathbf{v}_R$ are estimated. Then the distance $d_{HR}$ between $C_H$ and $C_R$ and the relative angle $\alpha_{\tilde{H}R}$ are measured, and the angle $\phi_{HR}$ is computed. Despite existing traffic rules, a collision between human agent and robot may occur. Therefore the robot controller switches from 'normal mode' to 'bypassing control mode' at close distance. After having passed the human agent, the robot controller switches back to 'normal mode', keeping its previous velocity value while heading toward the original goal. In the following, this is done by two methods: the velocity obstacle method and the fuzzy control approach.

A. Velocity obstacles

Measurement errors or uncertainties of the geometrical model lead to crossing areas instead of crossing points, which can be taken into account in advance. In addition, the size of an agent plays a specific role. A good way to take the size of an agent into account is the velocity obstacle method [15], [16]. A method of this kind was already introduced for ship navigation and mooring in harbors [17]. Let two circular agents $H$ and $R$ with radii $r_H$ and $r_R$ move in a common workspace on straight trajectories with the velocities $v_H$ and $v_{R1}$ (see Fig. 5). The relative velocity between $H$ and $R$ is $v_{RH1} = v_{R1} - v_H$. We construct a larger agent $H'$ at the location of $H$ with radius $r_{RH} = r_H + r_R$ and draw two tangents from $R$ to this new agent $H'$. It then becomes obvious that $v_{RH1}$ lies inside the velocity obstacle, so a collision will occur. There are three options to bring the relative velocity outside the velocity obstacle:
a) change $|v_R|$ and keep the direction of $v_R$ constant,
b) change the direction of $v_R$ and keep $|v_R|$ constant,
c) change both $|v_R|$ and the direction of $v_R$.
In our case we choose option c), which constitutes a compromise between the two other options.
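The geometric test "is the relative velocity inside the velocity obstacle cone?" can be sketched as follows; a minimal sketch with an illustrative function name, assuming a nonzero relative velocity:

```python
import numpy as np

def in_velocity_obstacle(p_R, p_H, v_R, v_H, r_R, r_H):
    """True if the relative velocity v_R - v_H lies inside the
    velocity obstacle: the cone with apex at the robot position p_R
    tangent to the enlarged disc of radius r_R + r_H around p_H."""
    rel_v = np.asarray(v_R, float) - np.asarray(v_H, float)
    rel_p = np.asarray(p_H, float) - np.asarray(p_R, float)
    dist = np.linalg.norm(rel_p)
    r = r_R + r_H
    if dist <= r:                      # agents already overlap
        return True
    # half-opening angle of the cone, and angle between rel_v
    # and the line of sight to the human
    half_angle = np.arcsin(r / dist)
    cos_between = rel_v @ rel_p / (np.linalg.norm(rel_v) * dist)
    angle = np.arccos(np.clip(cos_between, -1.0, 1.0))
    return bool(angle <= half_angle)   # inside -> collision course
```

Options a)-c) above then amount to changing the magnitude and/or direction of $v_R$ until this predicate becomes false.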

Bringing the difference velocity $v_{RH}$ out of the velocity obstacle means changing the angle $\epsilon$ appropriately (see Fig. 5). $\epsilon$ is a function $f(\phi_R, |v_R|, \phi_H, |v_H|)$, of which only $\phi_R$ and $|v_R|$ are control variables. A change of $\epsilon$ is defined as

$d\epsilon = \frac{\partial f}{\partial \phi_R} d\phi_R + \frac{\partial f}{\partial |v_R|} d|v_R|$  (17)

where $d\phi_R$ and $d|v_R|$ are the changes in the steering angle $\phi_R$ and the velocity $|v_R|$, respectively. The partial derivatives in (17) are computed on the basis of given geometrical parameters and velocities. Rewriting (17) yields

$d\epsilon = J \cdot d\omega, \quad J = \left(\frac{\partial f}{\partial \phi_R}, \frac{\partial f}{\partial |v_R|}\right), \quad d\omega = (d\phi_R, d|v_R|)^T$  (18)

To guarantee the above-mentioned compromise between the steering angle and the velocity of the robot, we formulate a control law that leads to a desired angle $\epsilon_d$ outside the velocity obstacle. With the Lyapunov function

$V_\epsilon = \frac{1}{2}(\epsilon - \epsilon_d)^2$  (19)

we obtain a stable control law that leads the difference velocity $v_{RH}$ away from the human velocity obstacle:

$d\omega = -K \cdot J^\dagger \cdot (\epsilon - \epsilon_d)$  (20)

where $J^\dagger$ is the pseudo-inverse of $J$ and $K$ is a gain matrix (see (18)).

Fig. 5. Principle of velocity obstacles

B. Fuzzy control approach

To identify the trajectory of the human agent relative to the robot, it is sufficient to know the heading angle $\phi_{\tilde{H}H}$ and the relative velocity $\Delta\mathbf{v} = |\dot{\mathbf{x}}_{HB} - \dot{\mathbf{x}}_{RB}| = |\dot{\mathbf{x}}_{HR}|$. $\Delta\mathbf{v}$ is an invariant and can be computed in any coordinate system. Once the heading angle $\phi_{\tilde{H}H}$ is computed, the relative motion between human and robot can be determined. Fuzzy labels are attached to 9 motion directions of the human agent relative to the robot, represented by the heading angle $\phi_{\tilde{H}H}$: '$APP$ = approach', '$AVR$ = avoid right', '$MOR$ = move right', '$RER$ = recede right', '$REC_L$ = recede L', '$REC_R$ = recede R', '$REL$ = recede left', '$MOL$ = move left', '$AVL$ = avoid left'. For the robot actions we have the following labels: '$TUR$ = turn right', '$TUL$ = turn left', '$MS$ = move straight', '$NON$ = no change', '$SD$ = slow down', '$SU$ = speed up', '$AVR$ = avoid right', '$AVL$ = avoid left'. In general, the variables to be processed are $\alpha_{\tilde{H}R}$, $\phi_{\tilde{H}H}$, the distance $|\Delta\mathbf{x}| = |\mathbf{x}_{HR}|$, and the relative velocity $|\Delta\mathbf{v}|$. For $|\mathbf{x}_{HR}|$ and $|\Delta\mathbf{v}|$, fuzzy sets 'Small', 'Medium', and 'Large' of Gaussian type are defined, and appropriate fuzzy rules are formulated:

IF $\alpha_{\tilde{H}R} = A_i$ AND $\phi_{\tilde{H}H} = P_i$ AND $|\Delta\mathbf{x}| = DX_i$ AND $|\Delta\mathbf{v}| = DV_i$ THEN $ACT_{rob}$  (21)

where $ACT_{rob}$: $\phi_{HR_{rob}} = PH_i$ AND $|\mathbf{v}|_{rob} = VR_i$, and

$i$ - rule number
$\alpha_{\tilde{H}R}$ - relative angle between $C_{\tilde{H}}$ and $C_R$
$A_i$ - fuzzy set for the relative angle
$\phi_{\tilde{H}H}$ - heading angle between $C_{\tilde{H}}$ and $C_H$
$P_i$ - fuzzy set for the heading angle
$|\Delta\mathbf{x}|$ - distance between $C_H$ and $C_R$
$DX_i$ - fuzzy set for the distance
$|\Delta\mathbf{v}|$ - relative velocity between $C_H$ and $C_R$
$DV_i$ - fuzzy set for the relative velocity
$\phi_{HR_{rob}}$ - steering angle of the robot
$PH_i$ - fuzzy set for the steering angle
$|\mathbf{v}|_{rob}$ - desired velocity of the vehicle
$VR_i$ - fuzzy set for the desired velocity

However, not every combination of variables makes sense. Therefore "pruning" of the set of rules and intelligent hierarchization solve this problem. A simple set of Mamdani

fuzzy rules for $\alpha_{\tilde{H}R} > 0$ to avoid a collision between human and robot contains only the heading angle $\phi_{\tilde{H}H}$, the steering angle $\phi_{HR_{rob}}$, and the robot velocity $|\mathbf{v}|_{rob}$, e.g.:

IF $\phi_{\tilde{H}H} = APP$ THEN $\phi_{HR_{rob}} = TUL$ AND $|\mathbf{v}|_{rob} = SD$  (22)

Such a set of rules can be applied exclusively, but it works even better if it is combined with the crisp decision on being inside or outside the velocity obstacle, as we do in our case. The whole set of rules is shown in (23):

IF $\phi_{\tilde{H}H} = APP$ THEN $\phi_{HR_{rob}} = TUL$ AND $|\mathbf{v}|_{rob} = SD$
IF $\phi_{\tilde{H}H} = AVR$ THEN $\phi_{HR_{rob}} = AVL$ AND $|\mathbf{v}|_{rob} = SD$
IF $\phi_{\tilde{H}H} = MOR$ THEN $\phi_{HR_{rob}} = MS$ AND $|\mathbf{v}|_{rob} = NON$
IF $\phi_{\tilde{H}H} = RER$ THEN $\phi_{HR_{rob}} = MS$ AND $|\mathbf{v}|_{rob} = NON$
IF $\phi_{\tilde{H}H} = REC_R$ THEN $\phi_{HR_{rob}} = MS$ AND $|\mathbf{v}|_{rob} = SD$
IF $\phi_{\tilde{H}H} = AVL$ THEN $\phi_{HR_{rob}} = AVR$ AND $|\mathbf{v}|_{rob} = SD$
IF $\phi_{\tilde{H}H} = MOL$ THEN $\phi_{HR_{rob}} = AVL$ AND $|\mathbf{v}|_{rob} = SD$
IF $\phi_{\tilde{H}H} = REL$ THEN $\phi_{HR_{rob}} = MS$ AND $|\mathbf{v}|_{rob} = NON$
IF $\phi_{\tilde{H}H} = REC_L$ THEN $\phi_{HR_{rob}} = MS$ AND $|\mathbf{v}|_{rob} = SD$
(23)

Fig. 6. Lanes 1-3 versus test tracks
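A minimal sketch of how a rule table like (23) can be evaluated. The Gaussian set centers, the width `SIGMA`, and the winner-take-all composition are illustrative assumptions, not the paper's tuned controller:

```python
import numpy as np

# Gaussian membership sets for the 9 heading-angle labels; the
# centers (in rad, evenly spaced) and width are assumed values.
LABELS = ['APP', 'AVR', 'MOR', 'RER', 'RECR', 'RECL', 'REL', 'MOL', 'AVL']
CENTERS = np.linspace(-np.pi, np.pi, len(LABELS))
SIGMA = 0.4

# Rule table (23): heading label -> (steering action, velocity action)
RULES = {'APP': ('TUL', 'SD'), 'AVR': ('AVL', 'SD'), 'MOR': ('MS', 'NON'),
         'RER': ('MS', 'NON'), 'RECR': ('MS', 'SD'), 'RECL': ('MS', 'SD'),
         'REL': ('MS', 'NON'), 'MOL': ('AVL', 'SD'), 'AVL': ('AVR', 'SD')}

def fire_rules(phi):
    """Degree of fulfilment of each rule for heading angle phi, and
    the (steering, velocity) action of the strongest rule."""
    w = np.exp(-((phi - CENTERS) / SIGMA) ** 2)
    best = LABELS[int(np.argmax(w))]
    return dict(zip(LABELS, w)), RULES[best]
```

A full Mamdani controller would additionally aggregate the fired output sets and defuzzify; the winner-take-all step here only illustrates the rule lookup.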

Features of the fuzzy controller: form of membership function = Gaussian, fuzzifier = singleton, composition method = min-max; number of fuzzy sets: $\phi_{\tilde{H}H}$: 9, $\phi_{HR_{rob}}$: 5, $|\mathbf{v}|_{rob}$: 3. The defuzzified and scaled outputs of the fuzzy controller represent the control inputs for the human operator that lead the difference velocity $v_{HR}$ away from the robot velocity obstacle.

IV. SIMULATION RESULTS

A. Recognition of lanes while tracking

From 11 trajectories of the Edinburgh data, 3 different lanes were identified that are used in both directions with different velocities. Lanes 1, 2, and 3 are modeled offline on the basis of 4, 5, and 2 trajectories, respectively. The modeling results are representative trajectories/paths of bundles of similar trajectories. In our example, 3 out of 20 test trajectories (tracks) were selected from the Edinburgh data which had not been used for modeling but whose entry/exit areas coincide with those of the modeled lanes (see Fig. 6). During the motion, the degrees of membership (see Fig. 7) for each track are computed according to (1) and (2) with a moving average over 10 time steps. From the time evolution of the membership functions, an early recognition of the lane used by the human agent is possible (see Fig. 8 for track 1).

B. Computation of possible collisions

In this simulation a combination of 1 human agent and 3 robot agents is considered (see Fig. 9). The simulation shows sufficient distances between red and blue areas, which indicates that collisions between the human agent and the robots and among the robots can be avoided. In contrast to [11], the correction vector $C_{\Delta t}$ is velocity dependent, which leads to better results than $C_{\Delta t} = const$. In addition, overlaps of "blue areas" are avoided, which results in a more consistent distribution of the "blue areas". The number of optimization steps and the resulting velocities (in m/s) are $N = 50$, $|v_H| = 5$, $|v_{R1}| = 20.11$, $|v_{R2}| = 20.07$, $|v_{R3}| = 22.63$.

Fig. 7. Membership functions (lanes 1-3, w(i) over time step i)

Fig. 8. Memberships, track 1

C. Velocity obstacles and fuzzy control

The simulation example includes both the velocity obstacle and the fuzzy solution. We simulated a robot and a human agent with the goal positions $x_{rob} = (0, 50)$ and $x_{hum} = (40, 10)$ (see Figs. 10 and 11). The time distance between the squares/circles on the trajectories is 3 s. The robot is controlled by the analytical controller described in Section III-A, whereas the human agent's behavior follows the fuzzy controller described in Section III-B. The rule set has already been shown in (23). The time plots show a smooth avoidance procedure even in the case of the "fuzzy - non-fuzzy" combination.

V. CONCLUSIONS

Long distance prediction of human-robot interaction requires a recognition of the intentions of other agents on a longer time scale, whereas short distance control deals with fast actions and reactions between agents during a short period of time. In this paper we presented the identification of the intentions of pedestrians to walk on specific lanes by using fuzzy time clustering of pedestrian tracks. It was shown that, after a short time, human tracks can be successfully associated with pedestrian lanes learned before. This information can be used for an extrapolation of parts of human trajectories to guarantee a collision-free robot motion in the presence

Fig. 9. Example with 1 human and 3 robots, with correction

Fig. 11. Velocity obstacle, t=18s

Fig. 10. Velocity obstacle, t=0s

of scattered/uncertain measurements. On the other hand, in the context of short distance control, we used the methods of velocity obstacles, fuzzy control, and a switching control for obstacle avoidance between agents. Computer simulations with simulated and experimental data show the applicability of the presented methods.

ACKNOWLEDGMENT

This research work has been supported by the AIR project, "Action and Intention Recognition in Human Interaction with Autonomous Systems".

REFERENCES

[1] O. Khatib. Real-time obstacle avoidance for manipulators and mobile robots. IEEE Int. Conf. on Robotics and Automation, St. Louis, Missouri, pages 500-505, 1985.
[2] R. Palm and A. Bouguerra. Particle swarm optimization of potential fields for obstacle avoidance. In Proceedings of RARM 2013, Istanbul, Turkey, Sept. 2013.
[3] K. A. Tahboub. Intelligent human-machine interaction based on dynamic Bayesian networks probabilistic intention recognition. Journal of Intelligent and Robotic Systems, 45(1):31-52, 2006.
[4] T. A. Han and L. M. Pereira. State of the art of intention recognition. AI Communications, 26:237-246, 2013.
[5] T. Fraichard, R. Paulin, and P. Reignier. Human-robot motion: Taking attention into account. Research Report RR-8487, 2014.
[6] J. Bruce, J. Wawer, and R. Vaughan. Human-robot rendezvous by cooperative trajectory signals. pages 1-2, 2015.

Fig. 12. Velocities and steering angles

[7] R. T. Chadalavada, H. Andreasson, R. Krug, and A. J. Lilienthal. That's on my mind! Robot to human intention communication through on-board projection on shared floor space. European Conference on Mobile Robots (ECMR), 2015.
[8] T. Ellis, E. Sommerlade, and I. Reid. Modelling pedestrian trajectory patterns with Gaussian processes. In 2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), pages 1229-1234. IEEE, 2009.
[9] D. Makris and T. Ellis. Spatial and probabilistic modelling of pedestrian behaviour. In Proc. ICRA, pages 3960-3965. IEEE, 2010.
[10] A. Ciaramella, M. G. C. A. Cimino, F. Marcelloni, and U. Straccia. Combining fuzzy logic and semantic web to enable situation-awareness in service recommendation. Database and Expert Systems Applications, Lecture Notes in Computer Science, 6261:31-45, 2010.
[11] R. Palm, R. T. Chadalavada, and A. J. Lilienthal. Recognition of human-robot motion intentions by trajectory observation. In 9th Intern. Conf. on Human System Interaction, HSI 2016. IEEE, 2016.
[12] A. Skoglund, B. Iliev, B. Kadmiry, and R. Palm. Programming by demonstration of pick-and-place tasks for industrial manipulators using task primitives. In Proc. CIRA 2007, Jacksonville, USA. IEEE, 2007.
[13] Edinburgh Informatics Forum Pedestrian Database. http://homepages.inf.ed.ac.uk/rbf/FORUMTRACKING/, 2010.
[14] R. Palm and B. Iliev. Learning of grasp behaviors for an artificial hand by time clustering and Takagi-Sugeno modeling. In Proc. FUZZ-IEEE 2006 - IEEE Intern. Conf. on Fuzzy Systems, Vancouver, BC, Canada, 2006.
[15] P. Fiorini and Z. Shiller. Motion planning in dynamic environments using velocity obstacles. Int. J. Robot. Res., 17(7), 1998.
[16] J. van den Berg, M. Lin, and D. Manocha. Reciprocal velocity obstacles for real-time multi-agent navigation. In Proc. IEEE Int. Conf. Robot. Autom., 2008.
[17] F. S. Miller and A. F. Everett. Instructions for the Use of Martin's Mooring Board and Battenberg's Course Indicator. Authority of the Lords Commissioners of the Admiralty, 1903.