Control and Navigation Systems for Mobile Robots Using Fuzzy-Neural Networks

Luis Enciso Salas
Engineering Department
Pontifical Catholic University of Peru (PUCP)
San Miguel, Lima 32, Peru
[email protected]

Antonio Moran Cardenas
Graduate School
Pontifical Catholic University of Peru (PUCP)
San Miguel, Lima 32, Peru
[email protected]

Abstract— This paper presents a new approach for designing navigation and control systems for mobile robots working in indoor and unknown environments, using fuzzy-neural networks. A quantized grid matrix is continuously updated to represent the robot working area. An arbiter determines the action the robot should carry out at each instant in order to avoid obstacles and reach the goal position. The effectiveness of the navigation and control systems has been verified with the robot moving in complex and previously unknown environments. In all cases, the mobile robot was able to reach the goal position without colliding with the obstacles found along its path.

I. INTRODUCTION

Nowadays there is significant interest in autonomous mobile robots for solving transportation problems in industry under diverse working conditions and environments. The two main problems in mobile robotics are navigation and control. In the navigation problem, the robot determines its present position and configures its working environment. In the control problem, the robot takes the actions needed to describe the desired trajectories. Most proposed navigation systems are based on either reactive or deterministic approaches. In the first, the robot uses only online sensor information, whereas in the second, the robot uses previously known information about the environment such as maps or landmarks. In the reactive approach, the robot can take one of a set of actions or reactive behaviors depending on the environment characteristics at each time. Such behaviors could be: avoid an obstacle, follow a wall, detect the goal position, discover a new area, communicate data, etc. These actions represent a natural solution to the robot motion problem since they are similar to the actions humans take when moving in a complex environment.

One of the first works based on a set of actions was presented by Li [1], who proposed navigation and control systems using fuzzy logic and neural networks. Robot actions implemented by fuzzy-neural networks are also presented by Godjevac [2], where only basic actions are considered. Rusu [3] proposed a structure with seven actions; however, their coordination can become complex since many actions may be redundant in some environments and motion conditions. In more recent works, Song [4] presented a structure based on three actions and a Fuzzy-Kohonen clustering network for their fusion and coordination; however, the structure is strongly dependent on the sensor arrangement, and the generated paths are far from optimal. Joshi [5] shows an interesting structure with actions implemented by fuzzy-neural networks and action coordination implemented by specially trained neural networks. Mohanty [6] employs a MANFIS network to deal with the robot navigation problem; however, not all navigation conditions are analyzed.

In the present work, a fuzzy-neural network system is proposed for solving the navigation and control problems. To improve the navigation and control performance, an integrated map-based and action-based approach is proposed, allowing the use of diverse sensor distributions and the implementation of different control schemes.

The paper is organized as follows: Section II presents the robot problem and the general structure of the navigation and control systems. Section III presents the main tasks of the navigation system. Section IV presents the controller design process, including the implementation of the actions and their coordination. Section V presents the algorithm for escaping from trappings. Section VI presents simulation results using a car-like mobile robot moving in complex environments. Finally, conclusions are presented in Section VII.

II. STRUCTURE OF ROBOTIC SYSTEM

The main components of the autonomous robotic system are the mobile robot and the navigation and control systems described in the following.

A. DESCRIPTION OF THE STRUCTURE
Figure 1 shows a mobile robot inside a working area with fixed unknown obstacles. The objective is to design navigation and control systems so that the robot reaches a goal position while avoiding the obstacles it finds along the way. The robot is equipped with an inertial measurement unit (IMU), encoders, and sonars for determining its actual position as well as the presence of walls or obstacles. A wireless sensor network (WSN) located on the corners of the working area is used to determine the robot's global coordinates.

B. CAR-LIKE ROBOT MODEL
The car-like robot model is shown in Figure 3. This kind of mobile robot is characterized by great stability when moving but low maneuverability when turning.

Figure 1. Mobile robot in working area. S1: sonar sensors, S2: Inertial measurement unit (IMU) and encoders, S3: Wireless Sensor Networks (WSN).

The main structure of the proposed navigation and control systems is shown in Figure 2. First, the navigation system is realized through area grid updating, robot localization, and wall and obstacle detection. Using this information, an arbiter coordinates the actions to be carried out by the mobile robot according to the actual environment characteristics. Afterwards, the control system executes the action selected by the arbiter. The system also includes a global planner, used occasionally when the robot gets trapped in complex environment configurations. A fuzzy-neural system is used for the implementation of each selected action. Only one action is executed at each time, allowing a reduction of computing time.

Figure 2. General structure of navigation and control systems.


Figure 3. Car-like robot model.

The kinematical equations of the mobile robot are as follows:

θ(k) = θ(k−1) + (r/L) tan δ(k)    (2.1)
x(k) = x(k−1) + r cos θ(k)    (2.2)
y(k) = y(k−1) + r sin θ(k)    (2.3)

where x and y are the coordinates of point Q, θ is the orientation of the robot with respect to the X axis, δ is the steering angle (mechanically limited in a real robot), r is the distance the robot moves forward at each iteration step, and L is the length of the robot wheel base.

III. NAVIGATION SYSTEM

The navigation system is composed of two subsystems: grid updating and robot localization. The first subsystem processes the data collected from the sonars and allocates it in a quantized ordered matrix representing the robot working area, including the obstacle distribution. The second subsystem implements sensor fusion (encoders, IMU, WSN) to obtain the robot coordinates at each instant.

A. GRID UPDATING
This process is based on a simplified version of the certainty grid presented in [8]. The grid is a two-dimensional representation of the working area and can be easily implemented as a matrix in a computer. The process is shown in Figure 4: data acquired by the sonars is processed and transferred to the main grid, where obstacle and wall positions are configured.
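Equations 2.1-2.3 can be exercised directly. The sketch below is a minimal discrete-time simulation of the car-like model; the step distance r, wheel base L and steering limit are illustrative values, not parameters taken from the paper.

```python
import math

def step(x, y, theta, delta, r=0.1, L=0.5):
    """One discrete update of the car-like model (Equations 2.1-2.3).

    r is the distance advanced per iteration and L the wheel base;
    both values here are illustrative assumptions.
    """
    # The steering angle is mechanically limited in a real robot.
    delta = max(-math.radians(35.0), min(math.radians(35.0), delta))
    theta = theta + (r / L) * math.tan(delta)   # Equation 2.1
    x = x + r * math.cos(theta)                 # Equation 2.2
    y = y + r * math.sin(theta)                 # Equation 2.3
    return x, y, theta

# With zero steering the robot advances straight along its heading.
x, y, theta = 0.0, 0.0, 0.0
for _ in range(10):
    x, y, theta = step(x, y, theta, delta=0.0)
```

Iterating this update with a steering command computed by the controller reproduces the trajectories shown later in the simulation section.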


Figure 4. Grid updating and quantization process.
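The window quantization step can be sketched as follows. This is a hedged reading of Equation 3.1 (given later in this section): the window is an n-by-n occupancy matrix centered on the robot, and the exact sector boundaries used here (the 45-degree diagonals ahead of the robot, with cells behind it ignored) are assumptions read off Figure 5, not stated numerically in the paper.

```python
def quadrant_forces(window, kc=7.5):
    """Quantize an occupancy window centered on the robot into the
    three forces of Equation 3.1.

    `window` is an n-by-n list of lists of cell values C(i,j), robot at
    the center cell; distances d(i,j) are measured in cell units. The
    sector geometry is an assumption (see lead-in).
    """
    n = len(window)
    c = n // 2                          # robot cell index
    f_west = f_north = f_east = 0.0
    for i in range(n):
        for j in range(n):
            cell = window[i][j]
            if cell == 0 or (i, j) == (c, c):
                continue
            di, dj = i - c, j - c       # di < 0 means ahead of the robot
            if di >= 0:
                continue                # skip cells beside/behind the robot
            force = kc * cell / (di * di + dj * dj)
            if abs(dj) <= -di:          # forward 90-degree cone -> Sn
                f_north += force
            elif dj < 0:                # left of the cone -> Sw
                f_west += force
            else:                       # right of the cone -> Se
                f_east += force
    return f_west, f_north, f_east
```

For a 33-by-33 window (the size used in the simulations), the call is simply `quadrant_forces(window)` with the paper's force constant Kc = 7.5 as the default.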

The grid configuration at each step is used to determine the robot reactive actions. To do that, a small window centered at the robot coordinates is defined and subdivided into four main regions (Figure 5), which are then quantized to obtain three variables (Fwest, Fnorth and Feast in Equation 3.1) representing the relative concentration of obstacles in each defined sector. This simplified process guarantees a minimum number of representative values.

Since this method is similar to Virtual Force methods [8][9], the force concept is kept for the variables derived in Equation 3.1, where Fwest, Fnorth and Feast are magnitudes representing the relative concentration of obstacles in sectors Sw, Sn and Se, respectively. The robot prioritizes its motion toward the sector with the lowest obstacle concentration.

Fwest  = Σ(i,j ∈ Sw) Kc·C(i,j) / d(i,j)²
Fnorth = Σ(i,j ∈ Sn) Kc·C(i,j) / d(i,j)²    (3.1)
Feast  = Σ(i,j ∈ Se) Kc·C(i,j) / d(i,j)²

Additionally, the following variables are used in Equation 3.1: Kc is a force constant determined by trial according to the robot geometry and sensor characteristics, C(i,j) is the value stored at cell (i,j), and d(i,j) is the distance from the cell with coordinates (i,j) to the robot's present position.

Figure 5. Quantization of window information by means of quadrant forces.

B. ROBOT LOCALIZATION
The objective of localization is to determine the coordinates (x,y) and orientation θ of the robot at every instant of its navigation in the working area. For this purpose, a sensor fusion system was designed using Kalman filters to integrate sensor data from different sources: encoders, inertial sensors (accelerometer, gyroscope and magnetometer) and wireless sensors. Figure 6 shows the components and structure of the localization system.

Figure 6. Structure of localization system.

IV. CONTROLLER DESIGN

The controller is composed of a set of actions coordinated by an arbiter, all of them implemented by means of fuzzy-neural networks based on the ANFIS structure [10]. In this section, the ANFIS structure is presented in A, the action design process is presented in B, C and D, and the arbiter design is presented in E.

A. ANFIS NETWORKS
A fuzzy-neural network scheme is employed to implement the set of defined actions. The structure of the fuzzy-neural network, as proposed by Jang [10], is shown in Figure 7 and is composed as follows:

Layer 1: Neurons implementing membership functions:
O1,i^A = Ai(xA);  O1,i^B = Bi(xB)    (4.1)

Layer 2: Product nodes, calculating the firing strength of each rule:
O2,i = O1,n^A · O1,m^B    (4.2)

Layer 3: Normalization layer:
O3,i = O2,i / Σ(i=1..N) O2,i    (4.3)

Layer 4: Adaptive nodes generating the consequent part of each rule:
O4,i = O3,i · fn    (4.4)

Layer 5: Summation of the defuzzified signals to obtain the network output:
O5 = Σ(i=1..M) O4,i    (4.5)

Figure 7. Structure of an ANFIS. Adapted from [11].

B. GOAL SEEKING ACTION
This action allows the mobile robot to move towards the goal position. The input is the difference between the goal direction and the robot orientation, defined in Equation 4.6, and the selected output is the change in steering angle (Δδ), defined in Equation 4.7.

Δdθ = θd − θ    (4.6)
Δδ = δ(k) − δ(k−1)    (4.7)

The linguistic values selected for the input were: NE (Negative difference), ZE (Zero difference) and PO (Positive difference). The following three rules were employed, considering the direct relationship between the goal orientation and the steering angle: IF Δdθ is NE THEN Δδ is negative; IF Δdθ is ZE THEN Δδ is zero; IF Δdθ is PO THEN Δδ is positive.

C. OBSTACLE AVOIDANCE ACTION
This action was designed to avoid collisions based on the input Fnorth, the magnitude representing the relative concentration of obstacles in the North direction (the robot orientation). Three linguistic values were defined for this input: FSM (Small force), FME (Medium force) and FGR (Great force). The output variables are the steering angle change Sc, defined in Equation 4.8, and the robot velocity v.

Sc = Δδ   if Fwest ≤ Feast
Sc = −Δδ  if Fwest > Feast    (4.8)

The linguistic values of the output Sc are: ZES (Zero steering), MES (Medium steering) and GRS (Great steering); the linguistic values for the velocity v are: NS (Normal speed), RS (Low speed) and VRS (Very low speed). The following rules were employed: IF Fnorth is FSM THEN Sc is ZES AND v is NS; IF Fnorth is FME THEN Sc is MES AND v is RS; IF Fnorth is FGR THEN Sc is GRS AND v is VRS.

D. WALL FOLLOWING ACTION
This action allows the robot to follow walls. It is designed with two input variables since it is more complex than the previous actions. The selected input variables are the ahead force (Fnorth) and the side force (Feast or Fwest, according to which wall is being followed), and the selected output variable is the steering angle change (Δδ). Five linguistic values were defined for each input, with the membership functions shown in Figure 8. The linguistic values selected for the side force were: TSMSF (Too small side force), SMSF (Small side force), MESF (Medium side force), GRSF (Great side force) and TGRSF (Too great side force). The linguistic values defined for the ahead force were: TSMAF (Too small ahead force), SMAF (Small ahead force), MEAF (Medium ahead force), GRAF (Great ahead force) and TGRAF (Too great ahead force).

Figure 8. Membership functions of the Left Wall Following action (Right Wall Following can be obtained by a sign change).

Training by means of a genetic algorithm is required to obtain proper output values.

E. ARBITER OF ACTIONS
The function of the arbiter is to coordinate the activation of the reactive actions during navigation. It was implemented with a fuzzy-neural network since this allows the inclusion of human experience in the design process. Three input variables were considered to represent each particular situation in a trajectory: Feast, Fwest and Fnorth, and three linguistic values were defined for each of them: SM (Small force), ME (Medium force) and GR (Great force). Figure 9 presents the membership functions for each input.
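As a concrete illustration of the ANFIS computation in Equations 4.1-4.5, the following sketch implements a single forward pass of a two-input network. The Gaussian membership shape and the first-order (Sugeno-style) consequents are assumptions; the paper does not specify either.

```python
import math

def gaussmf(x, c, s):
    """Gaussian membership function (a common ANFIS choice; assumed here)."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def anfis_forward(xa, xb, mfs_a, mfs_b, consequents):
    """Forward pass of a two-input ANFIS (Equations 4.1-4.5).

    mfs_a / mfs_b: lists of (center, sigma) pairs for inputs xa and xb.
    consequents:   one (p, q, r) per rule, giving f = p*xa + q*xb + r.
    Rules are the full grid of membership pairs, as in Figure 7.
    """
    # Layer 1: membership degrees (4.1)
    mu_a = [gaussmf(xa, c, s) for c, s in mfs_a]
    mu_b = [gaussmf(xb, c, s) for c, s in mfs_b]
    # Layer 2: firing strengths by product (4.2)
    w = [ma * mb for ma in mu_a for mb in mu_b]
    total = sum(w)
    # Layer 3: normalization (4.3)
    wn = [wi / total for wi in w]
    # Layers 4 and 5: weighted consequents and summation (4.4, 4.5)
    return sum(wni * (p * xa + q * xb + r)
               for wni, (p, q, r) in zip(wn, consequents))
```

With identical constant consequents the normalized weights sum to one, so the output equals that constant regardless of the inputs; this is a quick sanity check on Layers 3-5.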

The rule base has a total of 27 rules.

Figure 9. Membership functions of the arbiter inputs (Fnorth, Fwest and Feast).

V. ESCAPING FROM TRAPPINGS

Navigation in unknown environments can be complex, and the robot can get trapped under some special conditions. An algorithm for escaping from trapping situations is therefore needed to diversify the responses of the robot. The identifiers chosen in Equation 4.9 can be used to weigh the commitment of the robot to reach the goal position over certain periods of time. This concept is used in the algorithm shown in Figure 10 to determine whether the robot is in a trapping condition, so that the control system can take the proper actions to get the robot out of it. In the algorithm, D, Dx and Dy are the distance from the robot to the goal position and the differences in the x and y coordinates, respectively; Dmin, Dxmin and Dymin are the minimum values of D, Dx and Dy up to the present position; Δθ is the differential of the robot orientation angle; GS is a variable that takes the value 1 when the goal seeking action is active and 0 otherwise; and trapCondition is a generated value that can be used to select a new set of rules in the arbiter in case the robot gets trapped.

Figure 10. Algorithm for detecting trapping conditions.
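The trap-detection idea described above can be sketched in code. Since the exact algorithm of Figure 10 is not fully recoverable from the text, this is one plausible reading: the goal seeking flag GS gates a counter that grows while none of the minimum distances Dmin, Dxmin, Dymin improves, and a trap is declared once the counter reaches the limit LIM.

```python
def trap_monitor(lim=38):
    """Stateful trap detector, a hedged reading of Figure 10.

    `lim` corresponds to the trap limit LIM; the reset-on-improvement
    logic is an assumption about the paper's algorithm.
    """
    state = {"dmin": float("inf"), "dxmin": float("inf"),
             "dymin": float("inf"), "count": 0}

    def update(d, dx, dy, gs):
        improved = False
        for key, val in (("dmin", d), ("dxmin", abs(dx)), ("dymin", abs(dy))):
            if val < state[key]:
                state[key] = val        # a new minimum: progress was made
                improved = True
        # Only count steps where the robot is committed to the goal (GS = 1).
        state["count"] = 0 if (improved or not gs) else state["count"] + 1
        return state["count"] >= lim    # trapCondition
    return update
```

For example, with `trap_monitor(lim=3)`, feeding the same non-improving distances with GS = 1 flags a trap on the third consecutive stalled step.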

VI. SIMULATION

Representative tests were carried out to verify the performance of the navigation and control systems when the robot moves in different situations and working environments. For the tests, the following parameters were used: window size n = 33, force constant Kc = 7.5, robot velocity v = 1 m/s, and trap limit LIM = 38. Figure 11 shows the trajectory of the robot moving through narrow passages, corners and turning points. Figure 12 shows the trajectory of the robot avoiding a U-shaped obstacle; the trap escaping condition is activated and the robot overcomes the test. In Figure 13, a labyrinth is employed to test the controller when direction changes are required. In Figure 14, an office-like environment with complex obstacles is used to test the navigation and control systems. In all simulation tests, the mobile robot was able to identify the working environment, determine its location, and apply the proper control actions to reach the desired final position. These results verify the effectiveness of the proposed navigation and control systems.
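To give a feel for the closed-loop behavior exercised in these tests, the sketch below couples the kinematic model of Section II with a proportional stand-in for the NE/ZE/PO goal seeking rules of Section IV.B. The gain k, step distance r, wheel base L, steering limit and capture radius are illustrative assumptions, not the simulation parameters of the paper.

```python
import math

def seek_goal(goal, start=(0.0, 0.0, 0.0), r=0.1, L=0.5, k=0.5, steps=2000):
    """Drive the car-like model toward a goal point.

    A proportional steering law replaces the paper's fuzzy rules;
    all numeric parameters are illustrative assumptions.
    """
    x, y, theta = start
    for _ in range(steps):
        if math.hypot(goal[0] - x, goal[1] - y) < 0.2:
            return True, (x, y)                      # goal reached
        theta_d = math.atan2(goal[1] - y, goal[0] - x)
        # Wrap the heading error to [-pi, pi] before steering on it.
        d_theta = math.atan2(math.sin(theta_d - theta),
                             math.cos(theta_d - theta))
        delta = max(-0.6, min(0.6, k * d_theta))     # limited steering
        theta += (r / L) * math.tan(delta)           # Equation 2.1
        x += r * math.cos(theta)                     # Equation 2.2
        y += r * math.sin(theta)                     # Equation 2.3
    return False, (x, y)
```

For example, `seek_goal((5.0, 5.0))` starting from the origin with heading zero curves onto the goal bearing and stops inside the capture radius; replacing the proportional law with the trained ANFIS controllers recovers the structure of Figure 2.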

VII. CONCLUSIONS

New navigation and control systems for the autonomous motion of mobile robots working in complex environments have been proposed. The navigation system integrates sensor information to determine the robot position and the presence of obstacles. An arbiter determines the action to be taken by the robot according to the present condition of the working area. The control system computes and applies the robot steering angle to implement the action determined by the arbiter. Fuzzy-neural networks and Kalman filters have been used to implement the navigation and control systems. The effectiveness of the navigation and control systems has been verified by simulating the behavior of the mobile robot moving in complex and previously unknown environments. In all cases, the mobile robot was able to avoid obstacles and reach the goal position.

Figure 11. Robot moving through narrow passages.

Figure 12. Robot avoiding a U-shaped obstacle.

Figure 13. Robot moving in a labyrinth environment.

Figure 14. Robot moving in an environment with complex obstacles.

ACKNOWLEDGEMENTS

The authors acknowledge the comments and suggestions of the Control and Automation Group (GECA) of the Engineering Department at PUCP, which were useful to complete the present work.

REFERENCES

[1] Li, W., "A hybrid neuro-fuzzy system for sensor based robot navigation in unknown environments", Proceedings of the 1995 American Control Conference, vol. 4, pp. 2749-2753, 1995.
[2] Godjevac, J.; Steele, N., "Neuro-fuzzy control of a mobile robot", Neurocomputing, vol. 28, pp. 127-143, 1999.
[3] Rusu, P.; Petriu, E. M.; Whalen, T. E.; Cornell, A.; Spoelder, H. J. W., "Action-Based Neuro-Fuzzy Control for Mobile Robot Navigation", IEEE Instrumentation and Measurement Technology Conference, pp. 1617-1622, 2002.
[4] Song, K.-T.; Lin, J.-Y., "Action Fusion of Robot Navigation Using a Fuzzy Neural Network", 2006 IEEE International Conference on Systems, Man, and Cybernetics, pp. 4910-4915, 2006.
[5] Joshi, M. M.; Zaveri, M. A., "Neuro-fuzzy based autonomous mobile robot navigation system", 2010 11th International Conference on Control, Automation, Robotics & Vision (ICARCV), pp. 384-389, Dec. 2010.
[6] Mohanty, P. K.; Parhi, D. R., "A New Intelligent Motion Planning for Mobile Robot Navigation using Multiple Adaptive Neuro-Fuzzy Inference System", Applied Mathematics & Information Sciences, vol. 8, pp. 2527-2535, 2014.
[7] Corke, P., Robotics, Vision and Control: Fundamental Algorithms in MATLAB, Springer, 2011.
[8] Borenstein, J.; Koren, Y., "Real-time obstacle avoidance for fast mobile robots", IEEE Transactions on Systems, Man and Cybernetics, vol. 19, no. 5, pp. 1179-1187, Sept.-Oct. 1989.
[9] Im, K.-Y.; Oh, S.-Y., "An extended virtual force field based behavioral fusion with neural networks and evolutionary programming for mobile robot navigation", Proceedings of the 2000 Congress on Evolutionary Computation, vol. 2, pp. 1238-1244, 2000.
[10] Jang, J.-S. R., "ANFIS: adaptive-network-based fuzzy inference system", IEEE Transactions on Systems, Man and Cybernetics, vol. 23, pp. 665-685, 1993.
[11] Cardenas, A. M.; Razuri, J. G.; Sundgren, D.; Rahmani, R., "Autonomous Motion of Mobile Robot Using Fuzzy-Neural Networks", 12th Mexican International Conference on Artificial Intelligence, pp. 80-84, 2013.