Proactive MDP-based Collision Avoidance Algorithm for Autonomous Cars

Denis Osipychev, Duy Tran, Weihua Sheng
School of Electrical and Computer Engineering
Oklahoma State University
Stillwater, Oklahoma 74078
Email: [email protected]

Girish Chowdhary
School of Mechanical and Aerospace Engineering
Oklahoma State University
Stillwater, Oklahoma 74078
Email: [email protected]

Ruili Zeng
Department of Automobile Engineering
Military Transportation University
Tianjin, China
Email: [email protected]

Abstract—This paper considers the decision making problem of an autonomous car driving through an intersection in the presence of human-driven cars. A proactive collision avoidance system based on a learning-based MDP model is proposed, in contrast to a reactive system. This approach allows the task to be posed as an optimization problem. The proposed learning algorithm explicitly describes the interaction with the environment through a probabilistic transition model. The effectiveness of this concept is supported by a variety of simulations that include driving behaviors with Gaussian-distributed velocity, random actions, and real human driving.

I. INTRODUCTION

The high risk of collisions and the severity of their possible consequences remain defining properties of land transportation. Driving in the presence of other road users is a complex task that only human drivers have mastered, yet even they make wrong decisions, leading to lamentable statistics. Safe and reliable decision making is therefore a major challenge for the adoption and popularization of autonomous robotic vehicles. To fit into existing traffic, modern autonomous cars are expected to have both a fast reactive safety system and a proactive predictive control algorithm [1], [2]. Reactive safety features warn the driver about difficulties on the road or even take urgent actions to avoid accidents. They were developed to surpass the human driver in reaction time and sensing capability. Owing to modern detectors and fast computer logic, such systems have had many successful implementations and prevented up to 80% of simulated collisions [3], [4]. For example, the completely reactive robotic system ALVINN uses camera images and neural networks for reactive decision making [5]. Reactive safety systems have improved road safety by helping avoid collisions and accidents in the short term. However, further safety improvements require increasing the sensitivity of the reactive systems, which leads to an increase in the number of false alarms. Moreover, most of those systems were non-optimal and annoying to the passengers. Proactive safety achieves a higher sensitivity to potentially dangerous situations while taking softer actions.

This project is supported by the National Science Foundation (NSF) Grants CISE/IIS 1231671 and CISE/IIS 1427345, National Natural Science Foundation of China (NSFC) Grants 61328302 and 61222310, and the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China (No. ICT1408).

Fig. 1. Collision avoidance with the use of an intersection's infrastructure is possible in the time domain by changing speed in advance.

Despite the use of both proactive and reactive methods in mobile robotics research, their adoption in transportation vehicles remains a challenge. There are existing works that recognize a driver's activities and act according to the likelihood of those activities. Most of these works consider the world as partially observed or completely hidden, where the motivation and dynamics of the processes are not available and only the effects of certain actions can be observed [6], [7]. These works give an approximate solution or use heuristic approaches such as "if-else" rules preprogrammed by the developer, which does not allow the solution to be optimized. This paper proposes the use of a classical Markov Decision Process (MDP) to solve the problem. In this way, it allows us to find the best actions given full knowledge of the speed, direction and position of all involved vehicles. This condition can be satisfied by establishing RF connections between all cars and transferring the data to each other using V2V or V2I communication, as explained in [8]. Because up to 50% of accidents occur at intersections, this paper introduces and verifies the use of the MDP framework for planning the actions of an autonomous vehicle (the agent) and checks the sufficiency of proactive actions for avoiding collisions. Fig. 1 illustrates how a small early change in speed is sufficient to avoid a collision in the time domain.

II. METHODOLOGY

A. Learning-based MDP model optimization over expected reward

In this section, we formulate the proactive decision making problem as an optimization problem. For this purpose, the autonomous collision avoidance task is posed as an MDP tuple (S, A, T, R) that captures the Markovian transition of the car in the real world [9], [10]. Here, S is the set of discrete states of the car, A is the set of desired actions, T(s, a, s') is the transition model from any state s ∈ S to any other state s' ∈ S when the action a ∈ A is taken, and denotes the conditional probability of transition p(s' | s, a). R is the model of the reward obtained by the transition (s, a, s'). The value of each state is given by the reward of the transition plus the value of the next state discounted by the discount factor γ, as described by the Bellman equation:

V(s) = \max_{a \in A} \sum_{s' \in S} T(s, a, s') \big( R(s, a, s') + \gamma V(s') \big)    (1)

The optimal policy π* is the set of actions, one for each state, that maximizes the expected discounted reward:

\pi^* = \arg\max_{\pi} \; E\Big[ \sum_{s \in S} R(s, a, s') \;\Big|\; \pi \Big]    (2)

There are many approaches to solving MDPs, some of which were surveyed in the recent papers [10], [7]. We chose the value-iteration algorithm due to its convergence guarantees. The proposed method was decomposed into the following steps: creating a dynamic model of a car, learning transition rules for the list of actions from dynamic simulations, solving the MDP in order to find the optimal policy for passing the intersection, and building a dynamic simulation of an intersection to validate the method.
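For illustration only (the paper's own implementation was written in Matlab and is not reproduced here), the discrete MDP can be sketched in Python as dense arrays; the sizes follow the discretization of Section II-C, while the discount factor is an assumed value:

```python
import numpy as np

# Discretization sizes from Section II-C: 10 time steps x 10 longitudinal cells
# x 3 lateral cells x 10 velocity bins = 3000 discrete states; 10 actions (Table I).
N_STATES = 10 * 10 * 3 * 10
N_ACTIONS = 10
GAMMA = 0.9  # discount factor (assumed value; not stated in the paper)

# Dense arrays: T[s, a, s2] = p(s2 | s, a), R[s, a, s2] = reward of the transition.
T = np.zeros((N_STATES, N_ACTIONS, N_STATES), dtype=np.float32)
R = np.zeros((N_STATES, N_ACTIONS, N_STATES), dtype=np.float32)
```

For the 3000-state problem described later, such dense arrays occupy hundreds of megabytes, so a sparse or dictionary-based representation would be used in practice; the dense form is kept here only to keep the later sketches short.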

B. Dynamic model of a vehicle

In order to simulate the dynamics of a car, a simplified dynamic model of the Dubins car has been described by the equations of motion based on the dynamic vehicle model [11]. It uses six parameters to describe the real vehicle and environment:

m:  Mass of vehicle [kg]
a:  Distance from front axle to Center of Gravity [m]
b:  Distance from rear axle to Center of Gravity [m]
Cx: Longitudinal tire stiffness [N]
Cy: Lateral tire stiffness [N/rad]
CA: Air resistance coefficient [1/m]

In this simulation, we chose coefficients according to the Volvo V70 model as follows: m = 1700, a = 1.5, b = 1.5, Cx = 150000, Cy = 4000, CA = 0.5. Three states of the model were taken into consideration:

x_1(t) = v_x(t) = Longitudinal velocity [m/s]    (3)
x_2(t) = v_y(t) = Lateral velocity [m/s]    (4)
x_3(t) = r(t) = Yaw rate [rad/s]    (5)

where v_x(t) and v_y(t) represent the longitudinal and lateral velocity and r(t) is the yaw rate at time t. The state-space structure of the model is given by the following differential equations:

\frac{dx_1(t)}{dt} = x_2(t)\,x_3(t) + m^{-1} \Big[ C_x \big(u_1(t) + u_2(t)\big) \cos u_5(t) - 2 C_y \Big( u_5(t) - \frac{x_2(t) + a\,x_3(t)}{x_1(t)} \Big) \sin u_5(t) + C_x \big(u_3(t) + u_4(t)\big) - C_A\,x_1(t)^2 \Big]    (6)

\frac{dx_2(t)}{dt} = -x_1(t)\,x_3(t) + m^{-1} \Big[ C_x \big(u_1(t) + u_2(t)\big) \sin u_5(t) + 2 C_y \Big( u_5(t) - \frac{x_2(t) + a\,x_3(t)}{x_1(t)} \Big) \cos u_5(t) + 2 C_y \frac{b\,x_3(t) - x_2(t)}{x_1(t)} \Big]    (7)

\frac{dx_3(t)}{dt} = \frac{1}{\big(0.5(a+b)\big)^2 m} \Big\{ a \Big[ C_x \big(u_1(t) + u_2(t)\big) \sin u_5(t) + 2 C_y \Big( u_5(t) - \frac{x_2(t) + a\,x_3(t)}{x_1(t)} \Big) \cos u_5(t) \Big] - 2\,b\,C_y \frac{b\,x_3(t) - x_2(t)}{x_1(t)} \Big\}    (8)

Solving these ordinary differential equations (ODEs) (Eqs. 6–8) explicitly was difficult. However, the Runge-Kutta method [12] provided a numerical solution for the state of the vehicle (velocity, acceleration and yaw rate) in every iteration.
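As a numerical illustration, Eqs. (6)–(8) can be integrated with an explicit Runge-Kutta scheme; the sketch below uses SciPy's RK45 integrator and assumes, following the MathWorks model cited in [11], that u1–u4 are the longitudinal slips of the four tires and u5 is the steering angle (these input definitions are not stated explicitly above):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Volvo V70 parameters from Section II-B.
m, a, b = 1700.0, 1.5, 1.5           # mass [kg], axle-to-CoG distances [m]
Cx, Cy, CA = 150000.0, 4000.0, 0.5   # tire stiffnesses [N], [N/rad]; air resistance [1/m]

def vehicle_ode(t, x, u):
    """Eqs. (6)-(8): x = [vx, vy, yaw rate]; u = [u1..u4 tire slips, u5 steering angle]
    (the meaning of the inputs is assumed from the MathWorks model cited in [11])."""
    x1, x2, x3 = x
    u1, u2, u3, u4, u5 = u
    slip = u5 - (x2 + a * x3) / x1   # front slip-angle term shared by all three equations
    dx1 = x2 * x3 + (Cx * (u1 + u2) * np.cos(u5)
                     - 2 * Cy * slip * np.sin(u5)
                     + Cx * (u3 + u4) - CA * x1 ** 2) / m
    dx2 = -x1 * x3 + (Cx * (u1 + u2) * np.sin(u5)
                      + 2 * Cy * slip * np.cos(u5)
                      + 2 * Cy * (b * x3 - x2) / x1) / m
    dx3 = (a * (Cx * (u1 + u2) * np.sin(u5) + 2 * Cy * slip * np.cos(u5))
           - 2 * b * Cy * (b * x3 - x2) / x1) / ((0.5 * (a + b)) ** 2 * m)
    return [dx1, dx2, dx3]

# One 0.1 s incremental step integrated with an explicit Runge-Kutta scheme (RK45),
# starting from 14 m/s straight driving with a small throttle and steering input.
u = [0.1, 0.1, 0.0, 0.0, 0.02]
sol = solve_ivp(vehicle_ode, (0.0, 0.1), [14.0, 0.0, 0.0], args=(u,), method="RK45")
print(sol.y[:, -1])   # [vx, vy, yaw rate] after the step
```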

Fig. 2. An example of the MDP formulation showing that some actions lead to the collision state. These actions should be marked with a highly negative reward (penalty).

To utilize the discrete-state MDP framework described in Section II-A, the continuous-time dynamic model of the car has to be translated into a discrete-state transition model. An example of this translation is shown in Fig. 2. The collision state is defined as a state in the grid world which is occupied

by two cars at the same time. As can be inferred from the example, the only way to avoid a collision at the junction of two paths is to reach that point at a different time from the other vehicle. This approach allows time to be used as one of the states of the car and separates the dynamic states into static states indexed by time step. To maintain the connection between states, a transition model is required. The uncertainty in transitions s → s', shown in Fig. 3, has to be described in terms of transition probabilities p(s' | s, a). The distribution of these probabilities has to be estimated for each discrete action performed by the agent.

TABLE I. ACTIONS' DESCRIPTIONS AND PENALTIES

No.  Description of action   Penalty
1    Keep going              0
2    Soft speed up           0
3    Soft slow down          0
4    Soft turn left          0
5    Soft turn right         0
6    Emergency stop          -100
7    Speed up                -20
8    Slow down               -20
9    Turn left               -30
10   Turn right              -30

The set of actions can be decomposed into two main subsets: so-called soft actions and firm actions. The soft actions are numbered 1 to 5 in Table I. Because of their smoothness and passenger-friendliness, they were grouped as preferred actions and defined as zero-cost actions. The firm actions, numbered 6 to 10 in Table I, are rough actions which were used when the soft actions were not sufficient to prevent a collision, with costs defined according to their preference. The durations of all actions were identical and defined by the time step of the CAS algorithm, equal to 1 second. The action set and its costs can be encoded as a simple lookup table, as sketched below.
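A minimal sketch of that lookup table in Python, with the numbering and penalties copied from Table I:

```python
# Action costs from Table I: soft actions (1-5) are free, firm actions (6-10) are penalized.
ACTION_COST = {
    1: 0,     # keep going
    2: 0,     # soft speed up
    3: 0,     # soft slow down
    4: 0,     # soft turn left
    5: 0,     # soft turn right
    6: -100,  # emergency stop
    7: -20,   # speed up
    8: -20,   # slow down
    9: -30,   # turn left
    10: -30,  # turn right
}
```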

C. Learning of a discrete transition and reward model

In this paper, to represent a dynamic state of the agent as a static state we choose 4 parameters: time, longitudinal and lateral location on the road, and velocity of the vehicle. These parameters form a four-dimensional set of non-overlapping states, while other parameters such as acceleration and orientation of the vehicle are neglected to reduce the number of states. These ignored parameters are assumed to be relatively small and to settle to their initial values in a very short duration. Any state of the autonomous car can be classified by this discrete model of the world and represented as a tuple:

s = [time, loc_X, loc_Y, velocity]

The resulting state-action transition matrix T(s, a, s') is very large and increases in size with the number of states. For the case considered in this paper, the set of all states forms a 10 x 10 x 3 x 10 grid, with 3000 initial states and the same number of possible resulting states for each of the 10 actions. This leads to a very high-dimensional MDP with 90 million elements (3000 x 3000 x 10). It should be noted that the dimensionality of the discretized state space can be reduced by increasing the range over which the states are discretized, but this leads to other complexities such as high uncertainty in the transitions.

To learn the transition model, this paper proposes Algorithm 1, in which one time step of the CAS is divided into 10 incremental time steps of 0.1 second. The Dynamic Simulation function, described in Section II-B, simulates the path with these steps and returns the [x, y] data of all 10 steps. These coordinates are applied to all possible initial points [loc_x, loc_y ∈ Road] equally distributed inside one discrete location state and give the expected paths from these points. The obtained paths are then classified into discrete states. The numbers of visits to these discrete states under one action give the conditional probability distribution of the vehicle within one time step of the CAS. This process requires a lot of computation, but the T matrix has to be obtained only once and remains valid as long as the dynamic model and the parameters of the grid world are unchanged.

Data: Car dynamic model D
Result: Transition model T
for every action a ∈ A do
    for every velocity v ∈ R do
        x = 0, y = 0, t = 0
        while t_inc ≤ t_CAS do
            [x_n, y_n, v_n, t_n] = D(x, y, t, t_inc, v)
            t_inc = t_inc + t_CAS / 10
        end
        for loc_x, loc_y, time ∈ R do
            s = [loc_x, loc_y, v, time]
            s'_n = [x_n + loc_x, y_n + loc_y, v_n, t_n + t]
            T(s, a, s') = Σ_n (s → s'_n ∈ S) / n
        end
    end
end
Algorithm 1: Learning the Transition Model
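A possible Python sketch of this counting procedure is shown below; the cell size, the number of sampled initial points, and the helper function simulate_step (which wraps the dynamic model of Section II-B) are assumptions made for illustration:

```python
import numpy as np
from collections import defaultdict

CELL = 5.0       # grid cell size [m] (assumed; the paper does not state it)
T_CAS = 1.0      # CAS time step [s]
N_SAMPLES = 10   # initial points sampled along each axis of one discrete cell

def discretize(x, y, v, t):
    """Map a continuous vehicle state to the discrete tuple [time, locX, locY, velocity]."""
    return (int(round(t / T_CAS)), int(x // CELL), int(y // CELL), int(round(v)))

def learn_transition(simulate_step, actions, velocities):
    """Estimate T(s, a, s') by counting which discrete cells are reached from initial
    points spread uniformly inside one cell (a sketch of Algorithm 1)."""
    counts = defaultdict(lambda: defaultdict(int))
    for a in actions:
        for v in velocities:
            for x0 in np.linspace(0.0, CELL, N_SAMPLES, endpoint=False):
                for y0 in np.linspace(0.0, CELL, N_SAMPLES, endpoint=False):
                    # simulate_step wraps the dynamic model of Section II-B and
                    # integrates one full CAS step (hypothetical helper).
                    xn, yn, vn, tn = simulate_step(x0, y0, v, a, T_CAS)
                    s = discretize(x0, y0, v, 0.0)
                    s_next = discretize(xn, yn, vn, tn)
                    counts[(s, a)][s_next] += 1
    # Normalize the visit counts into conditional probabilities p(s' | s, a).
    return {key: {s2: n / sum(dist.values()) for s2, n in dist.items()}
            for key, dist in counts.items()}
```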

Fig. 3. Uncertainties in the transitions from one state under one action may result in different states due to stochasticity inside the initial state.

The reward function is designed to show the agent which states should be pursued. We give a large negative reward to collisions, or more precisely to the states in which a collision happens. To motivate the agent to move through the intersection, the states on the other side of the intersection receive a positive reward. This positive reward decreases with the time at which it is obtained, to keep the agent from following an overly slow and safe policy. All other states receive a reward according to the cost of the actions shown in Table I. This formulation provides a great degree of flexibility in defining the priorities of actions and states.

R(s, s'_collision, a) = -10000    (9)
R(s, s'_final, a) = 50 / s'(time)    (10)
R(s, s', a) = Cost(a)    (11)
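Expressed over the discrete state tuples of Section II-C (time is the first element of the tuple), Eqs. (9)–(11) might be sketched as follows; the guard against division by zero is an added assumption:

```python
def reward(s, s_next, a, collision_states, final_states):
    """Reward model of Eqs. (9)-(11); s and s_next are tuples [time, locX, locY, velocity]."""
    if s_next in collision_states:
        return -10000                       # Eq. (9): collision penalty
    if s_next in final_states:
        return 50.0 / max(s_next[0], 1)     # Eq. (10): goal reward reduced by arrival time
    return ACTION_COST[a]                   # Eq. (11): cost of the chosen action (Table I)
```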

D. CAS algorithm description

The decision making Algorithm 2 for the CAS is based on the Bellman equation shown in Eq. 1. We calculate the vector V(s) of the maximum values of each state s using the T(s, a, s') and R(s, a, s') matrices, with respect to the probability of the transition from this state to any resulting state and the cost of this transition. The output policy π(s) gives the best action for each state. When the allocation of the penalty states in the matrix R is known, we have a map of actions for any state of the agent, regardless of where it actually is. This policy is relevant only for the specific location of the penalties, i.e., the distribution of the reward over the space. We could say that, regardless of other factors, a policy calculated once should fit any similar distribution of the rewards. Therefore, there is no need to constantly calculate the policies on-line; they could be precomputed in advance and stored as ready-made solutions in a database, which saves computation time.

Data: Transition model T, Reward model R
Result: Optimal policy π*
while Δ > η do
    for s ∈ S do
        v = V(s)
        V(s) = max_{a ∈ A} ( Σ_{s'} T(s, a, s') (R(s, a, s') + γ V(s')) )
        π(s) = arg max_{a ∈ A} ( Σ_{s'} T(s, a, s') (R(s, a, s') + γ V(s')) )
        Δ = max(Δ, |v - V(s)|)
    end
end
Algorithm 2: Value-iteration algorithm
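A vectorized sketch of Algorithm 2 over the dense arrays introduced in Section II-A; the convergence threshold and discount factor are assumed values, and for the paper's 3000-state problem a sparse representation of T would be used in practice:

```python
import numpy as np

def value_iteration(T, R, gamma=0.9, eta=1e-3):
    """Algorithm 2 over dense arrays T[s, a, s'] and R[s, a, s'].
    Returns the value vector V and the greedy policy (one action index per state)."""
    n_states = T.shape[0]
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = sum_{s'} T(s, a, s') * (R(s, a, s') + gamma * V(s'))   (Eq. 1)
        Q = np.einsum('sap,sap->sa', T, R + gamma * V[None, None, :])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < eta:    # stop when the largest update is below eta
            return V_new, Q.argmax(axis=1)
        V = V_new
```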

E. Simulation description

To prove the viability of the concept, a computer simulation was built of an intersection through which an autonomous vehicle moves from south to north. The simulation environment was designed in the Matlab computing environment as an intersection where both autonomous and human-driven vehicles were involved. Fig. 4 illustrates the simulation of the vehicles passing the intersection, where the green, blue and yellow rectangles represent the human-driven vehicles, while the red one is the autonomous vehicle. Algorithm 3 utilizes the dynamic equations of

Fig. 4. Simulation of the autonomous car (at the bottom) coming through the intersection with other human-driven cars (on the right).

all vehicles and updates their positions with a time interval of 10 ms. The short update interval is used to eliminate the possibility of skipping discrete states and to avoid one vehicle jumping over another. The frequency of the CAS decision making algorithm was set to 1 Hz (once every second). Therefore, after each decision the agent continues by inertia for 1 second, until the next action is computed based on the evaluation of the environment. A delay in the implementation of the action is not taken into consideration, since the dynamics of the car can be treated as a black box. Two generalized cases of the problem were elaborated: traffic moving in the same direction as the agent and traffic moving in the transverse direction. The collision states are determined by classifying the visited states of the human-driven cars under the assumption that they continue to move with fixed velocity. This makes it possible to obtain the probability distribution of the intermediate states of all vehicles and to assign penalty values to these states corresponding to their probabilities.

Three role models were created to simulate a human-driven car. The first one reproduces a human driver holding "a constant speed": the car is given an initial velocity, while its subsequent speed is drawn from a Gaussian probability distribution around the velocity of the previous step. The second model emulates a random selection of an action every second from the list of soft actions, unified with the list of the agent's actions; it reproduces the intentional actions of a driver. The third model uses real human driving. For this purpose, data were obtained by driving through the simulated intersection using a Logitech G27 steering wheel and pedals to control the model of the car. Due to the large computational delay in calculating the transition matrix and policy, the human driving cannot be executed in real time. The raw data produced by the human driver were saved to a data file and replayed step by step during the CAS simulation. Thus, when the value-iteration algorithm is calculating the policy, the manually driven vehicle pauses until the calculation is finished. This

Data: Transition model T, Dynamic function D
Result: Result of collision
car_n = [x_n, y_n, v_n], t = 0
while y ≤ y_final ∈ R do
    [x_n, y_n, v_n, t_n] = D_n(x_n, y_n, v_n, t_n), n = [0..3]
    if t mod t_CAS = 0 then
        S_collision(n) = S(Agent ⊥ car_n)
        S_collision → R
        if R ≠ R_prev then
            π = CAS(x, y, v, t, T, R)
        end
        a_{n=0} = π(s)
    end
    switch Human behavior model do
        case 1: v_{n=1..3} = Gaussian(v_n)
        case 2: a_{n=1..3} = random(a ∈ A)
        case 3: v_{n=1..3} = load 'human.model'
    endsw
    a_{n=0..3} → D_n
    t = t + 0.01
end
Algorithm 3: Simulation algorithm
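The outer loop of Algorithm 3 might look as follows in Python; the agent and car objects, their step methods, the collision test, and the Gaussian standard deviation are all assumptions introduced for illustration:

```python
import numpy as np

DT = 0.01            # dynamics update interval [s] (10 ms, Section II-E)
STEPS_PER_CAS = 100  # one CAS decision per second (1 Hz)

def run_episode(agent, cars, policy_fn, y_final=100.0, rng=None):
    """Sketch of Algorithm 3: advance all vehicles every 10 ms, re-plan the agent's
    action once per second, and drive the other cars with the Gaussian speed model."""
    rng = rng or np.random.default_rng()
    action, k = 1, 0                                  # action 1 = 'keep going' (Table I)
    while agent.y <= y_final:
        if k % STEPS_PER_CAS == 0:                    # once per CAS time step
            action = policy_fn(agent, cars, k * DT)   # policy lookup / recomputation
        agent.step(action, DT)                        # agent follows the planned action
        for car in cars:                              # behavior model 1: Gaussian speed
            car.v = rng.normal(car.v, 0.5)            # std. dev. assumed, not given in the paper
            car.step_constant(DT)
        if any(agent.collides_with(car) for car in cars):
            return False                              # collision
        k += 1
    return True                                       # intersection passed safely
```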

Fig. 5. Transition model for actions: 1 - keep going, 6 - emergency brake, 7 - speed up, 9 - turn left, 10 - turn right, and speeds 1, 30, 60 mph. The probability of transition from the state marked by (*) is shown in gradations of red.

allows us to simulate the interaction with real drivers as closely as possible. It should be noted that none of these models performs actions in an aggressive manner aimed at causing an intentional crash.

III. EVALUATION

The transition matrix was obtained by simulating the dynamic function of the agent. Ten small incremental steps within each time state were checked and classified into discrete states, defining the conditional probability of being in any of these states. Thereby, 10 interim states were tested for each of the 10 actions in each of the 3000 states, resulting in the classification of 300000 values of the dynamic function. This process was the most computation-intensive despite the use of a simplified dynamic model. The resulting states for each action are shown in Fig. 5 in tonal gradations with respect to their probability. As can be seen, this probability depends not only on the selected action, but also on the vehicle's speed and location on the roadway.

The quantitative simulations provided sufficient data to compare the reactive and proactive systems over 100 trials of 8 simulations each, covering 2 different initial velocities (30 and 60 miles per hour) and the presence of one or two human-driven cars with a Gaussian distribution of speed. This quantitative comparison did not consider the random-action and real-human models due to difficulties in the comparison. In all cases, no collisions occurred, and the travel time through the intersection improved significantly compared with the reactive system. Fig. 6 shows the velocities of the agent (denoted as Car1) and the human-driven car (denoted as Car2) moving in transverse directions.

Fig. 6. Agent ('Car1') and human ('Car2') velocities in a random example; the simulation stops when the agent passes the intersection.

Both the human-driven and autonomous cars had initial velocities of 30 mph (14 m/s, shown in the top plot) and 60 mph (28 m/s, shown in the bottom plot). As can be inferred from the figure, the time required to pass the intersection for the proactive algorithm is less (6.1 and 4.5 seconds) than for the reactive algorithm (7.1 and 9.8 seconds). The actions performed by the proactive system were smoother and required less change in speed, which caused less discomfort to the passengers. The cases with two human-driven cars are shown in Fig. 7. In all simulations the travel time was 25–30% shorter for the proactive system, and the agent avoided a complete stop in most cases when the use of soft actions was enough.

Fig. 7. Agent ('Car1') and human ('Car2', 'Car3') velocities in a random example; the simulation stops when the agent passes the intersection.

The statistical data over all 100 trials, shown in Fig. 8, demonstrate a significantly lower maximum acceleration used to avoid collisions and an improvement in travel time. The wider range of travel times and accelerations results from the individuality of each solution found by the MDP for each particular allocation of the cars.

Fig. 8. Comparison of the maximum acceleration used and the travel time for the MDP and reactive methods. The higher variance of the MDP results is due to the variety of solutions.

IV. CONCLUSIONS

Simulations of this approach demonstrated the possibility of long-term planning of actions that avoid collisions with other cars. The CAS algorithm proposed in this paper avoided collisions in all considered cases. Significant advantages in travel time were achieved over reactive methods using a full-stop algorithm programmed with "if-else" rules. Simulations showed that the delay was reduced by 25–50% for the cross-traffic case. The car performed a full stop only when there was not enough distance to maintain a lower speed while the other cars were passing through. However, the on-line calculation of the optimal policy significantly delayed the CAS algorithm and cannot be implemented as an on-line process on a real car. The only way to reduce the computation time is to avoid changes in the allocation of the penalties. This can be done by predicting the intention of other drivers: human behavior can be learned and classified into several models which can be used for the allocation of the penalty states. Another way is to calculate off-line all possible allocations of the penalties and combine them into groups with a unified solution that satisfies the whole group. This list of solutions can then be used as a ready-made policy and be considered an on-line solution.

REFERENCES

[1] G. Leen and D. Heffernan, "Expanding automotive electronic systems," Computer, vol. 35, no. 1, pp. 88–93, 2002.
[2] J. Levinson et al., "Towards fully autonomous driving: Systems and algorithms," in Intelligent Vehicles Symposium (IV), 2011 IEEE. IEEE, 2011, pp. 163–168.
[3] T. Li, S.-J. Chang, and Y.-X. Chen, "Implementation of human-like driving skills by autonomous fuzzy behavior control on an FPGA-based car-like mobile robot," Industrial Electronics, IEEE Transactions on, vol. 50, no. 5, pp. 867–880, 2003.
[4] R. Sukthankar, "Raccoon: A real-time autonomous car chaser operating optimally at night," DTIC Document, Tech. Rep., 1992.
[5] D. A. Pomerleau, "ALVINN: An autonomous land vehicle in a neural network," DTIC Document, Tech. Rep., 1989.
[6] T. Bandyopadhyay et al., "Intention-aware pedestrian avoidance," in Experimental Robotics, pp. 963–977.
[7] S. Brechtel, T. Gindele, and R. Dillmann, "Probabilistic MDP-behavior planning for cars," in 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2011, pp. 1537–1542.
[8] J. Santa, A. F. Gomez-Skarmeta, and M. Sanchez-Artigas, "Architecture and evaluation of a unified V2V and V2I communication system based on cellular networks," Computer Communications, vol. 31, no. 12, pp. 2850–2861, 2008.
[9] R. Bellman, "A Markovian decision process," DTIC Document, Tech. Rep., 1957.
[10] A. Geramifard et al. (2013) A tutorial on linear function approximators for dynamic programming and reinforcement learning. [Online]. Available: http://dx.doi.org/10.1561/2200000042
[11] MathWorks. Modeling a vehicle dynamics system. [Online]. Available: http://www.mathworks.com/help/ident/examples/modeling-a-vehicle-dynamics-system.html?refresh=true
[12] E. Hairer, C. Lubich, and M. Roche, The numerical solution of differential-algebraic systems by Runge-Kutta methods. Springer, 1989.
