Multimed Tools Appl https://doi.org/10.1007/s11042-018-5816-9

Architecture design and implementation of image based autonomous car: THUNDER-1

Chengmin Zhou 1 · Fei Li 1 · Wen Cao 2

Received: 19 January 2018 / Revised: 1 February 2018 / Accepted: 19 February 2018
© Springer Science+Business Media, LLC, part of Springer Nature 2018

Abstract Autonomous driving at high velocity is a research hotspot that challenges scientists and engineers all over the world. This paper proposes a scheme for an indoor autonomous car based on ROS which combines a Deep Learning method using a Convolutional Neural Network (CNN) with a statistical approach using liDAR images, and achieves a robust obstacle avoidance rate in cruise mode. In addition, the design and implementation of the autonomous car are presented in detail, covering the design of the Software Framework, Hector Simultaneous Localization and Mapping (Hector SLAM) by Teleoperation, Autonomous Exploration, Path Planning, Pose Estimation, Command Processing, and Data Recording (Co-collection). What's more, schemes for an outdoor autonomous car, communication, and security are also discussed. Finally, all functional modules are integrated on an nVidia Jetson TX1.

Keywords Autonomous car architecture · Intelligent obstacle avoidance · Deep learning · Computer vision · ROS

1 Introduction

The autonomous vehicle is a trend of the future which will play an essential role in our daily life in various scenes. For instance, autonomous car technology will act as a crucial part in assisting the handicapped to get around. In addition, eliminating drunk driving will also be a contribution of autonomous technology. What's more, reducing traffic jams needs the fusion of autonomous techniques as well. According to the classification from the Society of Automotive Engineers (SAE), autonomous vehicle technology is divided into six grades, which respectively are Driver Only

* Chengmin Zhou
  [email protected]

1 School of Cybersecurity, Chengdu University of Information Technology, Chengdu, China
2 School of Information Engineering, Southwest University of Science and Technology, Mianyang, China


(Level 0), Assisted (Level 1), Partial Automation (Level 2), Conditional Automation (Level 3), High Automation (Level 4), and Full Automation (Level 5) [11]. The focus of research has shifted to the realization of highly and even fully automated driving [23]. Experience shows that the most challenging phases are High Automation and Full Automation, which require remarkable enhancements not only to the existing perception, prediction and planning algorithms but also to the system architectures [19].

Up to now, numerous corporations, universities, institutions and individuals all over the world have concentrated on the research of autonomous technology. As to corporations, US and German companies, like Tesla and Google in the USA as well as Audi and BMW in Germany, work as leaders among the various manufacturers. In China, the Baidu corporation has a standout performance among vehicle manufacturers and put forward the "Apollo" project in the field of autonomous driving, which greatly promotes the commercial implementation of autonomous techniques in China. As for universities, institutions and individuals, there exist numerous outstanding teams all over the world, for instance, CMU (USA), the AutoNOMOS group from FU Berlin (Germany), TU Braunschweig (Germany), KIT (Germany), the Autonomous System Lab (Switzerland), as well as the Mobile Robotics Group of Oxford University in England. The researchers are numerous, which means various methods of implementation exist. However, despite the differences among researchers, their architectures and core modules are the same to a large extent: they are composed of modules for perception, planning and control [23], and from the perspective of technique they can also be divided into the components of obstacle avoidance, localization, mapping, navigation and others.

In the field of obstacle avoidance, U Rosolia developed a two-stage nonlinear nonconvex control approach for autonomous vehicle driving during highway cruise conditions [20]. S Tijmons put forward the "Droplet" strategy, an avoidance strategy based on stereo vision inputs that outperforms reactive avoidance strategies by allowing constant speed maneuvers while being computationally extremely efficient, and which does not need to store previous images or maps [24]. MC Kang proposed an optimized version of the method that effectively locates obstacles at risk of collision using the shape variation of a grid [13]. In the module of localization, RW Wolcott presented a generic probabilistic method for localizing an autonomous vehicle equipped with a three-dimensional (3D) LIDAR scanner, which models the world as a mixture of several Gaussians characterizing the z-height and reflectivity distribution of the environment [26]. M Aldibaja developed a robust intensity-based localization method for autonomous driving on snow-wet road surfaces [1]. SH Lee proposed an on-road vehicle localization scheme to keep track of an ego vehicle with respect to the ego/target lane center using the results of camera-based lane recognition [16]. In the component of mapping, on one hand, there exist various mature mapping schemes like ORB-SLAM based on a monocular camera [17], RGB-D SLAM with an RGB-D camera [7], Hector SLAM, which combines a robust scan matching approach using a LIDAR system with a 3D attitude estimation system based on inertial sensing [15], and gmapping using Rao-Blackwellized Particle Filters (RBPF) [4] as well as the optimization of RBPF [10].
On the other hand, some researchers have also put forward novel methods which differ from the prevalent mapping methods. For instance, M Oliveira proposed an algorithm which is capable of mapping texture collected from vision based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors [18]. On the part of navigation (path planning), typical algorithms include A*, which was put into practice in the DARPA Urban Challenge [6], Dijkstra and so on. There also exist numerous novel optimized approaches, like a model predictive controller (MPC) with time-varying safety constraints for highway path planning [12], and an approach using a graph-based search algorithm and a discrete kinematic vehicle model to find a primitive path [5]. Apart from the research and implementation of traditional algorithms, there has also been much research involving the Deep Learning (DL) methods that have become popular in recent years: for instance, the project which introduced a novel approach for advanced localization performance by applying deep learning in the field of visual odometry, where the proposed method has the ability to assist or replace a purely Global Positioning System based localization approach with a vision based approach [3], and the framework which incorporated Recurrent Neural Networks (RNN) for information integration, enabling the car to handle partially observable scenarios [21].

After summarizing the relevant research of recent years, we conclude that the implementation of Deep Learning in robotics confronts various challenges such as uncertainty. In addition, research on the robustness of autonomous cars when fusing Deep Learning techniques with traditional techniques at high velocity or in cruise mode is limited. In view of this, this paper proposes a method which fuses a Deep Learning method with traditional methods in navigation and obstacle avoidance to obtain a robust and safe autonomous system in cruise mode. The autonomous cars are exhibited in Fig. 1 and the paper is organized as follows: In Section 2, the core submodules of the autonomous car are demonstrated as well as the software framework. In Section 3, the detailed design of the functional modules is presented, which involves the implementation of Hector SLAM based on Teleoperation, autonomous navigation (path planning), the intelligent obstacle avoidance system, and the design of the Command Processing and Data Recording systems. In Section 4, the hardware system is detailed. Then, experiments are conducted and conclusions are drawn which cover the autonomous SLAM results, the obstacle avoidance results as well as the navigation results. Last but not least, the discussion and acknowledgements are presented.

2 Framework design

2.1 Core modules

The main intention of this intelligent car project is to check the performance of the techniques, algorithms and models for application to commercial vehicles in the future. So, attention is focused on how to design and deploy the system on a car which has general similarity with a real commercial vehicle, and this paper does not intend to introduce more complex mathematical formulas. The autonomous vehicle system (Fig. 2) in this paper is composed of seven core functional modules, which involve the implementation of Hector SLAM based on Teleoperation, Autonomous Hector SLAM, the realization of the Autonomous Navigation System, the Intelligent Obstacle Avoidance System, the Command Processing Center, the Data Recording System as well as the Hardware System.

2.2 Software framework

The design of the software framework (Fig. 3) is an essential component for the purpose of obtaining stability. So the framework design of this paper absorbs some of the good ideas from [8, 25] and highlights its features in the Data Recording system, the Recognition Model Retraining system and the Intelligent Obstacle Avoidance System.


Fig. 1 (b) is the autonomous car (THUNDER-1), realized this year, and (a) is the obstacle avoidance car finished last year. The two cars are used for data co-collection and communication

With regard to environment perception, various sensors are adopted, which include a laser liDAR for localization and mapping as well as obstacle avoidance, a monocular camera for image collection, a depth camera for image collection and 3D mapping, an Inertial Measurement Unit (IMU) for pose estimation, as well as GPS for global positioning.

Fig. 2 The core modules of the autonomous car: the Obstacle Avoidance System with Deep Learning, the Hector SLAM System based on Teleoperation, the Autonomous Hector SLAM System, the Path Planning System, the Command Processing System, the Data Recording System, and the Hardware System


Fig. 3 The software framework of the indoor autonomous car, which includes Raw Data Collection, Scene Detection and Understanding, Path Planning, Car Control, Data Recording, Recognition Model Retraining, Communication, and Security

Because the focus of this paper is to check the performance of fusing Deep Learning with traditional techniques in an indoor scene, the GPS for outdoor positioning will be detailed in a follow-up paper, as will the intelligent lane detection and tracking system. In the framework, the Local Model is constructed by the intelligent obstacle avoidance system. At the same time, the localization and mapping system as well as the pose estimation system form the Global Model, which provides the essential messages to the navigation system. The navigation system then draws the desired path and generates the decision commands of velocity and steering angle. Finally, these commands come to the Car Control, which parses them into messages that can be recognized by the car. In addition, the Data Recording System is designed to collect the raw data from the sensors and the Car Control System for Recognition Model Retraining. What's more, the communication and security of the autonomous car are discussed, and will also be detailed in a follow-up paper.

3 Functional module design

3.1 Implementation of Hector SLAM based on teleoperation

3.1.1 Overall design of SLAM system

Gmapping is one of the popular SLAM schemes; it uses Rao-Blackwellized Particle Filters, works best in planar environments, relies on available, sufficiently accurate odometry, and does not leverage the high update rate provided by modern LIDAR systems [15].


In view of this shortcoming, the scheme of Hector SLAM [14] is adopted, which is more stable and does not need accurate odometry messages generated from the hardware. The 2D SLAM system (Fig. 4) receives the raw data and joint values from the liDAR, which then go through Preprocessing, which uses an interpolation scheme allowing sub-grid-cell accuracy through bilinear filtering in order to achieve the direct computation of interpolated values and derivatives. Next, the processed data comes to the Scan Matching module, which is based on a Gauss-Newton approach and is used for localization. When all the requisite messages are provided, the map is drawn using a multi-resolution map representation similar to the image pyramid approaches used in computer vision. Meanwhile, the navigation system sends the initial pose generated by the IMU, and the SLAM system returns the 2D pose estimate to the navigation system for path planning.
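To make the bilinear-filtering step concrete, the following is a minimal Python sketch of interpolating an occupancy-grid value at a continuous map coordinate. This is our own illustration, not code from THUNDER-1 or the Hector SLAM package; the function name and grid convention are assumptions.

```python
import numpy as np

def interp_map_value(grid, x, y):
    """Bilinearly interpolate an occupancy-grid value at continuous
    coordinates (x, y), giving the sub-grid-cell accuracy Hector SLAM
    relies on. `grid` is a 2D array of cell occupancy probabilities;
    the caller must keep (x, y) inside the grid interior."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    tx, ty = x - x0, y - y0                     # fractional offsets in the cell
    p00 = grid[y0, x0]                          # four neighbouring cells
    p10 = grid[y0, x0 + 1]
    p01 = grid[y0 + 1, x0]
    p11 = grid[y0 + 1, x0 + 1]
    # blend the four neighbours by their distance to (x, y)
    return ((1 - tx) * (1 - ty) * p00 + tx * (1 - ty) * p10 +
            (1 - tx) * ty * p01 + tx * ty * p11)
```

The same weights, differentiated with respect to x and y, give the map gradient needed by the Gauss-Newton scan matcher.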

3.1.2 Hector SLAM based on teleoperation

In this paper, an Xbox360 controller (Fig. 5) is used on one hand to operate the car remotely, which involves the operation of throttle, trimming left and right, brake and reverse, starting cruise mode, and steering. On the other hand, it is also used to initiate the Autonomous SLAM, Autonomous Navigation, Data Recorder, and Convolutional Neural Network (CNN) modes. With regard to the ROS node design of Hector SLAM based on Teleoperation (Fig. 6), firstly, the raw Laser Scan data is generated by the laser liDAR and transmitted to the Mapping node after being preprocessed. Then, the car is driven via the Xbox360 in the Teleoperation node, and the map is redrawn and sent to the Map Server node. At the same time, the initial pose from the IMU and the 2D pose from the Mapping node are transmitted to the Pose Estimation node to generate an accurate estimate of the car pose, which is sent to the Trajectory Server node. Finally, the map drawn before is fused with the trajectory to form an integrated map in the Map Saving node.
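As an illustration of what a teleoperation node of this kind looks like, here is a hedged rospy sketch. The topic names ('joy', 'cmd_vel') and the axis/button mapping are assumptions for illustration, not the actual THUNDER-1 configuration.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Joy
from geometry_msgs.msg import Twist

class Teleop(object):
    """Translate Xbox360 joystick messages into velocity commands."""
    def __init__(self):
        self.pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('joy', Joy, self.on_joy)

    def on_joy(self, msg):
        cmd = Twist()
        cmd.linear.x = msg.axes[1]    # left stick: throttle / reverse (assumed)
        cmd.angular.z = msg.axes[3]   # right stick: steering (assumed)
        if msg.buttons[0]:            # e.g. button A requests cruise mode
            rospy.loginfo('cruise mode requested')
        self.pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('xbox_teleop')
    Teleop()
    rospy.spin()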

3.2 Autonomous Hector SLAM

As for the ROS node design of autonomous Hector SLAM (Fig. 7), the difference between autonomous Hector SLAM and SLAM based on Teleoperation lies in the nodes of Cost Mapping, Exploration Planner, and Exploration Controller (Fig. 7). The Cost Mapping node receives the 2D map from the Mapping node and computes the cost value of each grid cell of interest in the 2D map.

Fig. 4 The overall design of the SLAM system and its relationships with other modules


Fig. 5 The functional modules controlled by the Xbox360: steering, cruise control, autonomous navigation, autonomous SLAM, reverse, brake, data recorder, throttle, trim left/right, and CNN mode

Then, the cost map is constructed and sent to the Exploration Planner node. The basic principle of the Exploration Planner is to find a frontier between known and unknown space in the cost map and generate a desired trajectory to that destination, as sketched below. When no frontiers to unknown space remain in the cost map, the car retraces the path travelled so far and goes to places which are at a large distance from the original place. At the same time, the cost map data is updated and the internal data is resized if the size of the cost map has changed. In this situation, the planning command is called once and generates a new trajectory to the unknown destination. Finally, the path as well as the pose of the car are sent to the Exploration Controller node to generate the twist command.
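The frontier-finding principle can be written down in a few lines. The occupancy values and the function below are illustrative assumptions, not the hector exploration package's implementation.

```python
import numpy as np

FREE, UNKNOWN, OCCUPIED = 0, -1, 100   # common occupancy-grid conventions (assumed)

def find_frontiers(grid):
    """Return (row, col) cells on the frontier between known-free and
    unknown space; the exploration planner picks one as the next goal."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if grid[r, c] != FREE:
                continue
            # a free cell with at least one unknown neighbour is a frontier cell
            neigh = grid[r - 1:r + 2, c - 1:c + 2]
            if (neigh == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers
```

When this list comes back empty, the whole reachable map is known and exploration terminates, matching the retrace behaviour described above.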

3.3 Autonomous navigation (path planning)

Contrasted with a navigation mode that utilizes AMCL and an odometry sensor, the scheme proposed in this paper (Fig. 8), which is based on Hector SLAM, is more compatible and stable, does not need the AMCL algorithm or odometry, and also achieves a decent performance in the experiments. On one hand, the map saved on the Map Server is sent to the Global Cost Mapping node to compute the value of each grid cell. Then, the output value from the Global Cost Mapping node, as well as the goal, which is composed of the destination value and the anticipated pose of the car, are transmitted to the Path Plan node for computing and drawing the trajectory of the car. On the other hand, the Pose Estimation node receives the raw data from the IMU and generates the current pose of the car. Simultaneously, the navigation value from the Path Plan node and the pose value from the Pose Estimation node are transmitted to the Controller node to generate the twist value. The basic function of navigation is thus achieved, making use only of the global cost map and the pose value of the car. However, this preliminary navigation system can only be applied in scenarios with no moving obstacles in the expected path.
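The paper does not spell out the search algorithm inside the Path Plan node, so the following is only a generic sketch of cost-aware planning over a global cost map using Dijkstra's algorithm (A* and Dijkstra are the typical choices named in Section 1); all names are illustrative.

```python
import heapq

def plan_path(cost_map, start, goal):
    """Dijkstra search over a 4-connected 2D cost map.
    cost_map[r][c] >= 1 is the traversal cost of entering a cell;
    start/goal are (row, col) tuples; goal must be reachable."""
    rows, cols = len(cost_map), len(cost_map[0])
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float('inf')):
            continue                      # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost_map[nr][nc]
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal                 # walk back from goal to start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

Because cell costs rise near obstacles in the cost map, the cheapest path naturally keeps a safety margin from walls, which is the point of planning on the cost map rather than on the raw occupancy grid.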

Fig. 6 The design of Hector SLAM based on teleoperation

Fig. 7 The design of Autonomous Hector SLAM

In view of this defect, when the car comes across situations with various dynamic interference factors, measures must be taken to prevent the car from crashing. So, a novel technique which fuses a Deep Learning method and a statistical approach is proposed; it is detailed in Section 3.4.

3.4 Intelligent obstacle avoidance system

3.4.1 Obstacle avoidance strategy based on CNN

The functional modules discussed above achieve the aim of navigation using the global model. In this section, the local model is constructed by integrating a Deep Learning method and a statistical approach to realize obstacle avoidance with high accuracy and robustness at high velocity (cruise mode). With regard to the Deep Learning method, image recognition is adopted as the scheme. A CNN works as a kind of feedforward neural network; its artificial neurons respond to a portion of the surrounding units, which leads to high performance in matrix computation, especially in image processing. The process of image recognition can in general be divided into several components, for instance, data acquisition, data preprocessing, training model design, test model design, model training and model testing. As for data acquisition, the depth camera is the core sensor. There are two main destinations for the acquired images. The first is the Data Recorder node, which is detailed in Section 3.5. The second is the CNN function node, for generating the decision messages that steer the front wheels. The raw image set we collected is not qualified for recognition model training or retraining, and it must go through a preprocessing phase which uses the OpenCV library.

Fig. 8 The design of the Autonomous Navigation System


Firstly, the cv2.imread() method is adopted to read the image set into the program. Secondly, the size of each image must be trimmed to 200 × 150, which the cv2.resize() method can achieve. Thirdly, a grayscale threshold method is used to process the image. The fourth step is to process the label of the image and extract the values of steering and throttle. The last step is to use y.append((steering, throttle)) and x.append(img) to generate the labels and the image set for training. Meanwhile, 80% of the images are used as CNN model inputs and the rest of the images are used to check the recognition rate of the model. In addition, every 100 images are encapsulated into a batch and sent to the model. The framework of the model for training is the same as the model for testing (Fig. 9).

Considering that an embedded system is not suitable for a large-scale CNN model, a 10-layer neural network framework (Fig. 9) is constructed, in which 5 CNN layers are used for extracting the high-dimensional features, one flatten layer is used to flatten the outputs of the fifth CNN layer to acquire a simple vector output, and 4 Fully-Connected (FC) layers work as a classifier to classify the features extracted by the CNN layers. In the CNN layers, the convolution kernel of the first three layers is 5 × 5 and that of the remaining two layers is 3 × 3. After the processing of the flatten layer, the output number is 512, and the outputs are then sent to the FC layers for classification. ReLU is chosen as the activation function in the FC layers for its better convergence, computing efficiency, and overfitting avoidance compared with tanh, softplus and sigmoid. The output sizes of the FC layers are 512, 100, 50, and 10. Then, the tf.matmul() method is used to apply the weights and bias to get the final output, and the tf.reduce_mean() method is used for computing the loss value.
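Putting the two preceding paragraphs together, here is a hedged TensorFlow 1.x-style sketch of the preprocessing pipeline and the 10-layer network. The convolution channel counts, strides, and threshold value are not given in the paper and are our assumptions, as are all function names; only the image size (200 × 150), kernel sizes, FC widths, 80/20 split and the tf.matmul()/tf.reduce_mean() steps come from the text.

```python
import cv2
import numpy as np
import tensorflow as tf  # TF 1.x graph-style API, matching tf.matmul()/tf.reduce_mean()

IMG_W, IMG_H = 200, 150  # target size stated in the paper

def preprocess(samples):
    """samples: iterable of (path, steering, throttle).
    Load, resize to 200x150, grayscale-threshold, and split 80/20."""
    x, y = [], []
    for path, steering, throttle in samples:
        img = cv2.imread(path)                              # step 1: read
        img = cv2.resize(img, (IMG_W, IMG_H))               # step 2: trim size
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # step 3: grayscale
        _, img = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)  # threshold (value assumed)
        x.append(img)                                       # step 5: image set
        y.append((steering, throttle))                      # step 5: label
    x = np.asarray(x, np.float32)[..., None] / 255.0
    y = np.asarray(y, np.float32)
    n = int(0.8 * len(x))                                   # 80% train, 20% test
    return (x[:n], y[:n]), (x[n:], y[n:])

def build_model(images):
    """5 conv layers (5x5, 5x5, 5x5, 3x3, 3x3), flatten, FC 512/100/50/10,
    then a final tf.matmul() affine layer. Channels/strides are assumed."""
    h = images
    for k, ch in zip([5, 5, 5, 3, 3], [24, 36, 48, 64, 64]):
        h = tf.layers.conv2d(h, ch, k, strides=2, activation=tf.nn.relu)
    h = tf.layers.flatten(h)
    for units in (512, 100, 50, 10):                        # FC outputs from the paper
        h = tf.layers.dense(h, units, activation=tf.nn.relu)
    w = tf.Variable(tf.truncated_normal([10, 2], stddev=0.1))
    b = tf.Variable(tf.zeros([2]))
    return tf.matmul(h, w) + b                              # (steering, throttle)

def loss_fn(pred, labels):
    return tf.reduce_mean(tf.square(pred - labels))         # tf.reduce_mean() loss
```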

3.4.2 Obstacle avoidance approach using liDAR images

As for the statistical approach, by scanning the surroundings through 360 degrees, the situation within 5 m is acquired and described as a point cloud with different colors. The rectangular area 0.5 m away from the car used for counting points is set to 2 × 1 (meters) in this paper. By checking and counting the number of points in front of the car as well as in the left and right directions, the raw data relevant to the steering decision is acquired.

Fig. 9 The model of Convolutional Neural Network


In terms of ROS node design, the raw point cloud is not qualified for computing the steering angle directly. Some preprocessing is needed, such as cv_bridge, which connects ROS image messages with OpenCV and is used here for creating a grayscale bird's-eye-view image. First, iterate over the points in the raw point cloud; then, map them to pixels; next, convert the pixel image to a ROS message. After the preprocessing procedure, the number of points in front of the car as well as in the left and right directions of the bird's-eye view is counted, and the steering message is generated. Meanwhile, the start and termination signals from the Xbox360 are necessary. Otherwise, the format of the steering message is not qualified because it is missing the velocity component. So, the velocity message from the Command Processing Centre node and the steering message from the liDAR node are merged into a complete message, which is finally published to the Command Processing Centre node. The liDAR image transformed from the point cloud and the trajectory of obstacle avoidance at slow velocity are demonstrated in Fig. 10. Up to now, the local model has been constructed using the fused Deep Learning and statistical methods.
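A minimal sketch of the counting step might look as follows. The region layout (a 1 m wide strip ahead flanked by left and right strips, starting 0.5 m from the car) and the returned steering values are our illustrative reading of the paper's 2 × 1 m counting rectangle, not its exact code.

```python
import numpy as np

def steering_from_points(points):
    """Decide a steering command from liDAR points. `points` is an (N, 2)
    array of (x, y) coordinates in the car frame, x forward, y left (meters)."""
    # keep points in the 2 m deep band starting 0.5 m ahead of the car
    ahead = points[(points[:, 0] > 0.5) & (points[:, 0] < 2.5)]
    left   = np.sum(ahead[:, 1] >  0.5)           # points in the left strip
    center = np.sum(np.abs(ahead[:, 1]) <= 0.5)   # points in the 1 m strip ahead
    right  = np.sum(ahead[:, 1] < -0.5)           # points in the right strip
    if center == 0:
        return 0.0                                # path ahead is clear: go straight
    return 0.5 if left < right else -0.5          # steer toward the emptier side
```

The node then stamps this steering value into a ROS message and waits for the velocity component to be attached, as described above.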

3.5 Design of command processing & data recording

3.5.1 Command processing

The Command Processing Centre node is responsible for processing all messages from the functional modules, which involve the control messages from the Xbox360, the twist messages from the Path Plan node, and the twist messages from the liDAR as well as the CNN. As for the messages from the liDAR evasion mode, the velocity messages generated by the Command Processing Centre node are attached to the steering message from the liDAR to form the qualified decision message that finally manipulates the car. As for the messages from the CNN and the Path Plan node, the procedure is the same as for the liDAR decision messages: the steering messages from the CNN or Path Plan node are mixed with the messages from the Command Processing Centre node to form the integral message, which is published to the hardware (Teensy3.6). With regard to the messages from the Xbox360, the main workload includes the acquisition of button events and the setting of relevant parameters and functions, which involve the setting of cruise mode, the control of velocity, the initialization of the throttle, and the definition of steering, velocity and reverse, etc. Finally, the qualified messages from the Command Processing Centre node are sent to the Teensy3.6 with the rospy.Publisher() method.
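The message-mixing described here can be sketched as a small rospy class; the topic name and the Twist fields used for steering and velocity are assumptions for illustration.

```python
import rospy
from geometry_msgs.msg import Twist

class CommandProcessor(object):
    """Merge steering-only decisions (from the liDAR or CNN nodes) with the
    velocity kept by the command centre into complete Twist commands."""
    def __init__(self):
        self.cruise_speed = 0.0        # updated from Xbox360 button events
        self.pub = rospy.Publisher('car_cmd', Twist, queue_size=1)

    def on_steering(self, steering_angle):
        cmd = Twist()
        cmd.angular.z = steering_angle  # steering from the liDAR/CNN node
        cmd.linear.x = self.cruise_speed  # velocity component attached here
        self.pub.publish(cmd)           # forwarded toward the Teensy3.6 side
```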

3.5.2 Data recording (co-collection)

Because the acquired image set is limited, the recognition rate of the CNN model is limited; it is therefore necessary to acquire more images for retraining the CNN model when the obstacle avoidance car is brought into an unfamiliar environment, so as to enhance the success rate of obstacle avoidance. Meanwhile, considering the efficiency of data collection and the inconvenience of storing data on each car, a data recording scheme is constructed. The fundamental principle of the Data Recorder node is collecting the images from each car (Fig. 1) and storing them on a PC which is equipped

Fig. 10 (a) is the liDAR image transformed from the point cloud and (b) is the obstacle avoidance result at slow velocity

with a high-performance GPU via the transmission functions in ROS. In this way, the time-consuming training phase is shortened to a great extent. The primary step is to initialize the relevant parameters and functions like CvBridge() and XBox360(), which is the same as in the node designs before. Then, subscribe to the raw messages, which involve the RGB image, the bird's-eye-view image from the liDAR, and the depth image from the RealSense camera, with the rospy.Subscriber() method. Meanwhile, the messages from the Command Processing Centre node and the Xbox360 are processed at the same time for generating the image labels and initializing the Data Recorder mode. The details are as follows. As for the decision messages from the Command Processing Centre node, the steering and velocity messages are extracted with the msg.twist() method. With regard to the image messages from the liDAR and camera, the format of the images is transformed to OpenCV messages with the bridge.imgmsg_to_cv2() method. Then, a qualified label format with the sequence of image header, time stamp, velocity value and steering angle value is generated. Finally, the images attached with their labels are transmitted to the PC through the transmission function in ROS for retraining the CNN model. As for the messages from the Xbox360, the status of the buttons in the Data Recorder node is updated with the controller.update() method, and the button events are acquired to launch the Data Recorder mode with the controller.buttonEvents() method.
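A hedged sketch of such a Data Recorder node is shown below; the topic names are assumptions, and the save step is left as a stub standing in for the ROS transmission to the GPU-equipped PC.

```python
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import TwistStamped
from cv_bridge import CvBridge

class DataRecorder(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.steering, self.velocity = 0.0, 0.0
        # topic names are assumptions for illustration
        rospy.Subscriber('camera/rgb/image_raw', Image, self.on_image)
        rospy.Subscriber('car_cmd', TwistStamped, self.on_cmd)

    def on_cmd(self, msg):
        """Cache the latest decision values for labelling incoming images."""
        self.velocity = msg.twist.linear.x
        self.steering = msg.twist.angular.z

    def on_image(self, msg):
        img = self.bridge.imgmsg_to_cv2(msg, 'bgr8')   # ROS image -> OpenCV
        # label sequence from the paper: header, time stamp, velocity, steering
        label = (msg.header.seq, msg.header.stamp.to_sec(),
                 self.velocity, self.steering)
        self.save(img, label)

    def save(self, img, label):
        pass  # e.g. republish on a recording topic consumed by the GPU PC
```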


Fig. 11 The hardware framework of the autonomous car: monocular camera, depth camera, liDAR, 6-DoF IMU, 7-port USB hub, nVidia Jetson TX1, Teensy3.6, stock ESC, brushed motor, steering servo, monitor, portable battery (12 V & 19 V) and motor battery

4 Experiment and conclusion

4.1 Hardware framework

In this paper, a Traxxas car is adopted as the research platform because its Brushed Motor, Steering Servo, and Stock ESC are to a great extent the same as those of a commercial car.

Fig. 12 (a) is Hector SLAM by Teleoperation on the campus of CUIT (outdoor) and (b) is Autonomous Hector SLAM in the laboratory (indoor). (b) demonstrates that the system can autonomously plan and draw the path without human interference until the whole global map is obtained


Fig. 13 The recognition rate of CNN

With regard to the selection of sensors, a Logitech webcam, an Intel RealSense camera, a Neato XV-11 laser liDAR, and a 6-DoF IMU are adopted to collect the raw data, which include the images, point cloud and pose values used for mapping, pose estimation, localization, and obstacle avoidance. The collected raw messages are transmitted through a 7-port USB hub to the nVidia Jetson TX1, which has high computing performance, especially in image processing and recognition. The navigation and steering messages are generated on the Jetson TX1 and then sent to the Teensy3.6, where they are parsed into a format that can be recognized by the car's Steering Servo and Stock ESC. Meanwhile, a PC is used for monitoring the road condition in front of the car. A portable battery with output voltages of 12 V and 19 V, as well as the motor battery, provide power to the whole system. The framework is demonstrated in Fig. 11.

4.2 Result of SLAM

As for the SLAM experiments, tests were conducted on the campus and in the master's dormitory of Chengdu University of Information Technology. The results are demonstrated in Fig. 12.

Fig. 14 The success rate of obstacle avoidance using CNN and liDAR images


Fig. 15 The trajectories of the CNN evasion mode, the liDAR evasion mode, and the fusional strategy mode. The experiments involve 60 tests (20 tests for CNN mode, 20 tests for liDAR evasion mode, and 20 tests for fusional approach mode) and demonstrate that the fusional strategy has better performance and robustness in cruise mode than the liDAR-only and CNN-only modes

4.3 Obstacle avoidance result

The CNN recognition model is trained using 10,000 steering images, and a random batch (100 images) of steering images is tested. The recognition rate of the CNN, the obstacle avoidance results and the trajectories are demonstrated in Figs. 13, 14, and 15.

4.4 Navigation result

With regard to the navigation experiment, in the first place, the Path Plan node is initiated by the Xbox360. In addition, the goal destination is given in the interactive interface of RViz and the planned path is drawn in RViz. Simultaneously, the car is driven to the destination precisely at high speed and avoids the obstacles in the scheduled path accurately and robustly. The results of navigation with a goal, and of autonomous SLAM and navigation without human interference, are demonstrated in Fig. 16.

5 Discussion and acknowledgment

5.1 Discussion about security and future works

The security of autonomous cars has been a hot and essential research field in recent years. Security can be divided into two components: the first is robustness and accuracy when running; the second is whether the car is vulnerable to attack by hackers over the internet. The design and implementation of the autonomous car presented above concerns robustness and


Fig. 16 (a) is a successful case of navigation with a goal in cruise mode and (b) is autonomous SLAM and navigation without interference in the laboratory

accuracy at high speed. So, in this discussion of security, attention is focused on the network security of the autonomous car. The communication of the autonomous car is based on the Internet Protocol (IP). In view of this, it is vulnerable to malicious packets crafted by hackers that target decision commands such as steering, stopping, and speeding. So, research on preventing the car from being attacked has aroused the interest of scientists and engineers all over the world. In the summer of 2015, two American hackers succeeded in hacking into a car and taking over vital functions such as the engine and the brakes [22]. In the same year, Mevlut Turker Garip conducted a VANET-based botnet attack in an autonomous vehicle scenario that can cause serious congestion by targeting hot-spot road segments [9], and there have been other attack cases which attract researchers' attention. In view of these cases, some measures have been taken to prevent cars from being hacked, for instance, an intrusion detection system against malicious attacks on the communication network of driverless cars [2]. However, the relevant security measures are based on traditional methods or algorithms, which have a low intelligence level, and their performance is not very decent. So, future work will introduce neural networks such as Recurrent Neural Networks (RNN) into the autonomous car system. In addition, the experiments of the autonomous system in the indoor scenario in this paper will be conducted in outdoor scenes, and some sensors (e.g. GPS) will be imported to suit the outdoor scenario and navigate the car in more complex situations (e.g. urban blocks, expressways).

Acknowledgements The research of the autonomous car is funded by the Sci-Tech Support Plan of Sichuan Province, China [Grant Number: 2016GZ0343].

References

1. Aldibaja M, Suganuma N, Yoneda K (2017) Robust intensity based localization method for autonomous driving on snow-wet road surface. IEEE Trans Ind Inform

2. Alheeti KMA, Gruebler A, McDonald-Maier KD (2015) An intrusion detection system against malicious attacks on the communication network of driverless cars. Consumer Communications and Networking Conference, 916–921
3. Bag S (2017) Deep learning localization for self-driving cars
4. Carlone L, Ng MK, Du J, Bona B, Indri M (2011) Simultaneous localization and mapping using Rao-Blackwellized particle filters in multi robot systems. J Intell Robot Syst 63:283–307
5. Chu K, Kim J, Jo K, Sunwoo M (2015) Real-time path planning of autonomous vehicles for unstructured road navigation. Int J Automot Technol 16:653–668
6. Dolgov D, Thrun S, Montemerlo M, Diebel J (2009) Path planning for autonomous driving in unknown environments. Experimental Robotics, The Eleventh International Symposium, ISER 2008, Athens, Greece, 55–64
7. Endres F, Hess J, Sturm J, Cremers D, Burgard W (2017) 3-D mapping with an RGB-D camera. IEEE Trans Robot 30:177–187
8. Fernandes LC, Souza JR, Pessin G, Shinzato PY, Sales D, Mendes C, Prado M, Klaser R, Magalhães AC, Hata A (2014) CaRINA intelligent robotic car: architectural design and applications. J Syst Archit 60:372–392
9. Garip MT, Gursoy ME, Reiher P, Gerla M (2015) Congestion attacks to autonomous cars using vehicular botnets. The Workshop on Security of Emerging Networking Technologies
10. Grisetti G, Stachniss C, Burgard W (2007) Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Trans Robot 23:34–46
11. SAE International (2014) Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems
12. Jalalmaab M, Fidan B, Jeon S, Falcone P (2015) Model predictive path planning with time-varying safety constraints for highway autonomous driving. International Conference on Advanced Robotics, 213–217
13. Kang MC, Chae SH, Sun JY, Lee SH, Ko SJ (2017) An enhanced obstacle avoidance method for the visually impaired using deformable grid. IEEE Trans Consum Electron 63:169–177
14. Kohlbrecher S, Meyer J, Graber T, Petersen K, Klingauf U, von Stryk O (2013) Hector open source modules for autonomous mapping and navigation with rescue robots. Springer, Berlin
15. Kohlbrecher S, von Stryk O, Meyer J, Klingauf U (2011) A flexible and scalable SLAM system with full 3D motion estimation. IEEE Int Symp on Safety, Security and Rescue Robotics, 155–160
16. Lee SH, Chung CC (2017) Robust multirate on-road vehicle localization for autonomous highway driving vehicles. IEEE Trans Control Syst Technol 25:577–589
17. Mur-Artal R, Tardos JD (2015) Probabilistic semi-dense mapping from highly accurate feature-based monocular SLAM. Robotics: Science and Systems
18. Oliveira M, Santos V, Sappa AD, Dias P, Moreira AP (2016) Incremental texture mapping for autonomous driving. Robot Auton Syst 84:113–128
19. Pink O, Becker J, Kammel S (2015) Automated driving on public roads: experiences in real traffic. it - Inf Technol 57:223–230
20. Rosolia U, De Bruyne S, Alleyne AG (2016) Autonomous vehicle control: a nonconvex approach for obstacle avoidance. IEEE Trans Control Syst Technol 1–16
21. Sallab AAA, Abdou M, Perot E, Yogamani S (2016) Deep reinforcement learning framework for autonomous driving. NIPS 2016 Workshop - MLITS
22. Schellekens M (2016) Car hacking: navigating the regulatory landscape. Comput Law Secur Rev 32:307–315
23. Taş ÖŞ, Kuhnt F, Zöllner JM, Stiller C (2016) Functional system architectures towards fully automated driving. Intelligent Vehicles Symposium
24. Tijmons S, de Croon GCHE, Remes BDW, De Wagter C, Mulder M (2017) Obstacle avoidance strategy using onboard stereo vision on a flapping wing MAV. IEEE Trans Robot 33:858–874
25. Ulbrich S, Reschka A, Rieken J, Ernst S, Bagschik G, Dierkes F, Nolte M, Maurer M (2017) Towards a functional system architecture for automated vehicles
26. Wolcott RW, Eustice RM (2017) Robust LIDAR localization using multiresolution Gaussian mixture maps for autonomous driving. Int J Robot Res 36

Chengmin Zhou is an AI researcher in the School of Cyber Security of Chengdu University of Information Technology, Chengdu, China. His research interests involve various fields of computer science, for instance, Computer Vision, Deep Learning, parallel computing (CUDA) and the Robot Operating System. He is currently the main leader of the projects Implementation of an Autonomous Car based on ROS, Face Recognition in Caffe, and Intrusion Detection & Prevention (IDP) based on RNN (LSTM) neural networks.

Fei Li is a professor and the dean of the School of Cyber Security of Chengdu University of Information Technology. His research interests involve Machine Learning and the security of cars.

Wen Cao is an associate professor in the School of Information Engineering of Southwest University of Science and Technology in Mianyang, China. His research interests involve Deep Learning and Autonomous Cars.
