Development Of Autonomous Downscaled Model Car

Development Of Autonomous Downscaled Model Car Using Neural Networks And Machine Learning

Uvais Karni 1, S. Shreyas Ramachandran 2, K. Sivaraman 3, A. K. Veeraraghavan 4

1,3,4 UG Student, Department of Electrical and Electronics Engineering, Sri Sairam Engineering College, West Tambaram, Chennai
2 UG Student, Department of Computer Science and Engineering, Meenakshi College of Engineering, Chennai
Email: 1 [email protected], 2 [email protected], 3 [email protected], 4 [email protected]

Abstract: The number of road accidents has increased drastically in recent years, leading to a rise in fatalities. This is mostly caused by driver distraction, for example texting while driving, or a short attention span. Autonomous cars are therefore a better option, as they take driver error out of the equation. The concept proposed in this paper is an autonomous downscaled model car built on a generic RC car as its base. We achieve this using image processing and a model trained with neural networks. The hardware components used in this project are a Raspberry Pi 3 Model B microcomputer, a camera module and an HC-SR04 ultrasonic sensor. The model provides the following features: (a) lane detection, (b) traffic signal identification, (c) road sign identification, (d) obstacle detection and avoidance, and (e) pedestrian detection. The user interfaces through an application running on the Raspberry Pi 3 Model B, which can be accessed from another computer via a graphical desktop-sharing system based on the Remote Frame Buffer (RFB) protocol.

I. INTRODUCTION

Automation in vehicles is a major focus of today's research [1]. The objective of this project is to achieve autonomy in a generic RC toy car so as to eliminate the disadvantages that come with a human driver, and to illustrate that the same can be done on a regular automobile. This is achieved using image processing and a model trained with neural networks and machine learning.

Keywords: image processing; neural networks; machine learning; camera; autonomous; microcomputer; microcontroller.

Fig 1 Down-scaled RC Model Car

A VNC server running on the Raspberry Pi 3 Model B microcomputer can be used as the application to interface with it. The other hardware components, besides the RC toy car shown in Figure 1, are the camera module and the HC-SR04 ultrasonic sensor.

Fig. 2 Block diagram

Fig 3 Convolution Neural Network Model & Random Forest Tree

II. NEURAL NETWORK MODEL

The neural system used is a convolutional neural network (CNN), because we are working on a classification-oriented output, and it has a very high accuracy of around 95%. A CNN is similar to any other neural network; the difference is that it processes the data in chunks, so it can analyse detailed patterns. CNNs make use of filters to detect features that appear throughout an image. The main advantage of using a neural network is that, once the model is trained, it only needs to load the prepared parameters afterwards; prediction can therefore be fast, and output can be produced progressively from the input. The whole picture is used for training the model. There are 38,400 nodes in the input layer and 32 nodes in the hidden layer; the number of nodes in the input layer is determined entirely by the number of pixels in the image. There are four nodes in the output layer, where each node corresponds to one of the steering control commands: left, right, forward and reverse (Figure 3) [2].
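The layer sizes above can be sketched as a single forward pass. This is a minimal illustration only: the weights are random placeholders rather than trained parameters, and the activation choices are assumptions, not details from the paper.

```python
import numpy as np

# Sketch of the fully connected part of the network described above:
# 38,400 input nodes (one per pixel of a 240 x 160 frame), 32 hidden
# nodes, and 4 output nodes (left, right, forward, reverse).
# The weights are random placeholders, not trained parameters.

rng = np.random.default_rng(0)

INPUT_NODES, HIDDEN_NODES, OUTPUT_NODES = 38_400, 32, 4

W1 = rng.normal(scale=0.01, size=(INPUT_NODES, HIDDEN_NODES))
b1 = np.zeros(HIDDEN_NODES)
W2 = rng.normal(scale=0.01, size=(HIDDEN_NODES, OUTPUT_NODES))
b2 = np.zeros(OUTPUT_NODES)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(pixels):
    """Forward pass: flattened image -> probabilities over 4 directions."""
    hidden = sigmoid(pixels @ W1 + b1)
    return softmax(hidden @ W2 + b2)

frame = rng.random(INPUT_NODES)  # stand-in for one flattened camera frame
probs = predict(frame)
direction = ["left", "right", "forward", "reverse"][int(probs.argmax())]
```

The command with the highest output probability is taken as the steering decision for that frame.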

For training the model, an ensemble method is used, combining the output of the neural network with that of a random forest tree classifier; using an ensemble lets us attain higher accuracy. In order to train the model on images, we first need to convert them into a format the computer can understand, such as an n-array or matrix, and use this as the training data. The car also collects information with the ultrasonic sensor. The data from the camera is fed to the microprocessor used in the car. A Raspberry Pi 3 Model B is used as the processor for all the data in the car. It is a single-board computer with a Broadcom BCM2837 64-bit quad-core ARM processor running at 1.2 GHz, 1 GB RAM, on-board BCM43143 Wi-Fi, on-board Bluetooth Low Energy (BLE), a 40-pin extended GPIO, 4 USB 2.0 ports, a 4-pole stereo output and composite video port, full-size HDMI, a CSI camera port for connecting the Raspberry Pi camera, a DSI port for connecting the Raspberry Pi touch screen, and a micro SD port for loading the operating system and storing data [3].
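The image-to-matrix conversion mentioned above can be sketched as follows. The frame here is a synthetic array standing in for a Pi camera capture; the 240 × 160 size and [0, 1] scaling follow the paper, while the use of NumPy is an assumption.

```python
import numpy as np

# Sketch of converting one grayscale frame into the n-array format the
# model trains on: a 240 x 160 image flattened to a 38,400-element
# vector, one value per pixel, scaled to [0, 1].
# The frame below is synthetic; in the real pipeline it would come
# from the Pi camera.

frame = np.random.default_rng(0).integers(
    0, 256, size=(160, 240), dtype=np.uint8
)

vector = frame.astype(np.float32).flatten() / 255.0
```

Each such vector becomes one row of the training matrix, paired with the key press recorded for that frame.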

Fig 4 Raspberry Pi 3

The camera used is the Pi camera module, interfaced with the central controller, the Raspberry Pi 3. It captures 2592 × 1944 pixel static images and supports video recording at 1080p 30 fps, 720p 60 fps, and 640 × 480 at 60/90 fps.

Fig 5 Pi Camera Module

Once all the data required for training has been collected, it may need to be run through a process called data cleaning, to remove bias from the data. A rule of thumb in machine learning is that the more attributes, the better the model. The next stage is simply passing the data into a training function, such as one provided by sklearn, and the developed neural model structure.

A. Setting up the Neural Network Model
The images for training are captured while the car is controlled using the direction arrow keys, and all the images are recorded in the same folder along with the corresponding key press. The resolution of the image determines the number of nodes in the input layer, which in turn determines how much time and computing resource training requires. We therefore capture images of size 240 × 160 and convert each into a 38,400-element array, where each value represents an individual pixel. Data cleaning is done before segregating the images into their respective class folders, based on the key press indicated in their filenames.

B. Training and Operation of the Neural Network Model
After segregating the images into their corresponding class folders, the neural network is trained using the generated data set. Since this is a supervised method, it is crucial to add a target, or label, to the training data. Two pieces of information are needed to train a model: the attributes and the target. In our case the attributes are the image arrays and the target is the label, which in our scenario is the driving direction. The images are loaded from the corresponding class folders and assigned the class values indicated in the configuration file. The generated model is stored in the optimized_thetas folder as a pickle file. Once we have the trained model, the RC car is run autonomously using the generated model, which takes an optional argument for the trained model [4].

C. Training the Random Forest Tree
As this is an ensemble method, more than one classifier is used. The random forest tree is a well-known machine learning algorithm that is good at classification. The same data is passed to train this model; the only differences are the method and the parameters, such as the number of trees and the learning rate, which must be specified before training. Using a cost function, we can find the most effective learning rate for the model. As with the neural network, the images used to train this model must be converted to n-arrays and labelled; the CNN is likewise a supervised method, so all its training data must be labelled. Once the model is trained, all that remains is to classify the output generated when prediction is run, which is straightforward and can be achieved using the sigmoid function. The sigmoid is a gradient curve that classifies the output as forward, backward, right, and so on. As all the processing is done locally, there is no risk of data loss, and the data hence maintains its integrity.
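The random-forest half of the ensemble can be sketched with scikit-learn, which the text names as the training library. The data below is synthetic and shortened (64 features instead of 38,400) so the example runs quickly; the parameter values are illustrative, not the ones used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch of training the random forest on the same attribute/target
# pairs as the neural network. Features stand in for flattened camera
# frames; labels stand in for the four driving directions.

rng = np.random.default_rng(0)
X = rng.random((200, 64))            # 200 fake frames, 64 "pixels" each
y = rng.integers(0, 4, size=200)     # 0=left, 1=right, 2=forward, 3=reverse

# n_estimators is the "number of trees" parameter mentioned above.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

pred = clf.predict(X[:5])            # direction class for 5 frames
```

At prediction time, each frame yields one class label from the forest, which is then combined with the neural network's output.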

D. Combining and Classification of Output
The outputs are now combined and classified using majority voting over the set of outputs for a given instance, and further refined using the sigmoid and cost functions.
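The majority-vote step can be sketched in a few lines. The classifier outputs below are hypothetical examples, not recorded results.

```python
from collections import Counter

# Sketch of the majority-voting combination described above: each
# classifier in the ensemble predicts a direction for the same frame,
# and the most common prediction wins.

def majority_vote(predictions):
    """Return the most common label among the classifiers' outputs."""
    return Counter(predictions).most_common(1)[0][0]

# e.g. the CNN and two trees say "forward", one tree says "left"
decision = majority_vote(["forward", "forward", "left", "forward"])
```

With an odd number of voters, ties are avoided; with an even number, `Counter.most_common` simply returns the first of the tied labels.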

III. ENVIRONMENTAL ANALYSIS

A. Object Detection Using Haar-Cascade
For the detection of objects such as the stop sign, a shape-based approach using Haar cascade classifiers is employed for recognition. Since each object requires its own classifier and follows the same procedure for training and detection, this project focuses only on the stop sign.

Figure 6 Haar Cascade

The process for object detection is represented in Fig 6. Positive samples containing the target object were acquired using the Pi camera and cropped so that only the desired object is visible. Negative samples, without the target object, were collected randomly. The same negative sample data set was used for the stop sign [5].

B. Collision Avoidance Using Sound Waves
The ultrasonic sensor used is the HC-SR04, a 4-pin module whose pins are Vcc, Trigger, Echo and Ground. It is used for obstacle detection: the sensor transmits ultrasonic waves from its sensor head and receives the waves reflected back from an object. This sensor is very popular in applications where measuring distance or detecting objects is required. The module has two eye-like projections at the front, which form the ultrasonic transmitter and receiver.

Fig 7 HC-SR04 Ultrasonic (US) sensor

The transmitter emits an ultrasonic wave, which travels through the air and, when obstructed by any material, is reflected back toward the sensor, where it is picked up by the receiver module as shown in Figure 7 [6]. The sensor works on the simple high-school equation

Distance = Speed × Time

where the speed is the speed of sound; since the wave travels to the obstacle and back, the one-way distance is half this product.

IV. CONTROLLING THE CAR

The controlling process consists of 4 parts:

• The sensor interface layer includes the various software modules concerned with receiving and time-stamping all sensor data.

• The perception layer maps sensor data into internal models. The primary module in this layer is the Pi camera, which determines the vehicle's orientation and location. Two distinct modules enable the car to navigate based on the ultrasonic sensor and the camera. A road-finding module uses the images from the Pi camera to find the boundary of the road, so the vehicle can centre itself laterally. Finally, a surface assessment module extracts parameters of the current road to determine safe vehicle speeds.

• The control layer is in charge of managing the steering, throttle, and brake response of the vehicle. A key module is the path planner, which sets the trajectory of the vehicle in steering and speed space.

• The vehicle interface layer serves as the interface to the robot's drive-by-wire system. It contains all interfaces to the vehicle's brakes, throttle, and steering wheel. It also includes the interface to the vehicle's power server, a circuit that governs the physical power to many of the system components [7].
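The HC-SR04 distance calculation described above can be sketched directly from the equation. The echo time below is a made-up reading, not a measured one.

```python
# Sketch of the ultrasonic distance calculation: the echo pulse covers
# the round trip to the obstacle and back, so the one-way distance is
# half of speed-of-sound x echo time.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def distance_m(echo_time_s):
    """One-way distance to the obstacle, in metres."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

d = distance_m(0.01)  # a 10 ms echo corresponds to 1.715 m
```

On real hardware, the echo time would be measured as the width of the pulse on the sensor's Echo pin.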

In the proposed system, the Raspberry Pi controls an L293D board, which allows the motors to be driven through pulses provided by the Raspberry Pi. Based on the images obtained, the Raspberry Pi provides PWM pulses to the L293D controller. The L293D is a 16-pin motor driver IC, as shown in Figure 9, designed to provide bidirectional drive currents at voltages from 5 V to 36 V.

Fig 9 L293D Breakout Board

V. RESULT
The RC Autonomous Downscaled Model Car, built with the Raspberry Pi 3 Model B microcomputer, camera module and HC-SR04 ultrasonic sensor, and using the Haar cascade method described in this paper, has been developed and evaluated by testing it on a map. First, the car is driven by the user while the Pi camera takes pictures of its surroundings, from which a map is built. Once the neural network model is built, the car drives correctly on the map without user input, as the model tells it which direction to take and how to control the motors based on the pictures taken by the Pi camera [10]. Figure 10 shows lane detection and the corresponding motor control, such that the car moves along the lane.

The L293D also allows the speed of the motor to be controlled using PWM, a signal that is a series of highs and lows. The durations of the high and low periods determine the average voltage supplied to the motor, and hence its speed. The speed of a DC motor is in general directly proportional to the supply voltage, so if the voltage is reduced from 9 V to 4.5 V, the speed drops to half of what it originally was. However, the supply voltage of a DC motor cannot be varied continuously, so a PWM speed controller instead works by changing the average voltage delivered to the motor. The input to the PWM controller may be an analogue or digital signal, depending on the controller's design; the controller accepts the control signal and adjusts the duty cycle of the PWM signal as required. In these waveforms the frequency stays the same, but the ON and OFF times differ [8][9]. A rechargeable power bank of any capacity (here 2800 mAh, with an operating voltage of 5 V DC) supplies power to the central controller, which distributes the required amount of power to each hardware component. This battery pack can be recharged and reused repeatedly.
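The duty-cycle relation above can be sketched as a small calculation. The 9 V figure matches the halving example in the text; pin numbers and actual GPIO access are omitted.

```python
# Sketch of the PWM speed-control relation described above: the motor's
# average voltage (and hence, roughly, its speed) scales with the duty
# cycle of the PWM signal.

SUPPLY_VOLTAGE = 9.0  # volts, matching the example in the text

def average_voltage(duty_cycle):
    """Average voltage delivered to the motor for a duty cycle in [0, 1]."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be between 0 and 1")
    return SUPPLY_VOLTAGE * duty_cycle

half = average_voltage(0.5)  # 4.5 V, i.e. roughly half speed
```

On the actual car, the duty cycle would be set on the GPIO pin feeding the L293D's enable input.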

Figure 10 RC car on map riding autonomously

Identification of the stop sign and traffic signal lights was achieved via Haar cascade, and the car stops when it comes to a signal. Figure 11 shows the RC car stopping at a signal [11].

Figure 11 Haar Cascade Stop Sign Working

Finally, the car stops when an obstacle is placed in front of it, as expected. Obstacle detection using the ultrasonic sensor was executed, and the car stopped at the obstacle.

VII. FUTURE SCOPE
1. The neural network model can be generated on a laptop instead of the Raspberry Pi, to reduce training time and increase the processing power available for generating the model.
2. The number of laps run on the map can be increased to improve the accuracy of autonomous driving, towards an effective level-5 autonomous car system [12].

VIII. CONCLUSION
This paper demonstrates the viability of the RC Autonomous Downscaled Model Car. It can be a path changer, as it will reduce the number of accidents that take place and in turn reduce time wasted during transport, without hindrances. When implemented, this idea will also be a boon to blind people, who cannot drive themselves at present.

ACKNOWLEDGEMENT
The authors would like to acknowledge that the concept explained in this paper has been made into a prototype and was presented at the IEEE SS12 Maker Fair 2018 Pilot at Jeppiaar Institute of Technology, Chennai, India, where it was recognised as a noteworthy project and concept.

REFERENCES
[1] Q. Memon, M. Ahmed, S. Ali, A. R. Memon and W. Shah, "Self-driving and driver relaxing vehicle," 2016 2nd International Conference on Robotics and Artificial Intelligence (ICRAI), Rawalpindi, 2016, pp. 170-174.
[2] O. Dürr, Y. Pauchard, D. Browarnik, R. Axthelm and M. Loeser, "Deep Learning on a Raspberry Pi for Real Time Face Recognition," 2015. doi: 10.2312/egp.20151036.
[3] K. Jo, Y. Jo, J. K. Suhr, H. G. Jung and M. Sunwoo, "Precise Localization of an Autonomous Car Based on Probabilistic Noise Models of Road Surface Marker Features Using Multiple Cameras," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 6, pp. 3377-3392, 2015.
[4] O. Dürr, Y. Pauchard, D. Browarnik, R. Axthelm and M. Loeser, "Deep Learning on a Raspberry Pi for Real Time Face Recognition," 2015. doi: 10.2312/egp.20151036.
[5] L. Cuimei, Q. Zhiliang, J. Nan and W. Jianhua, "Human face detection algorithm via Haar cascade classifier combined with three additional classifiers," 2017 13th IEEE International Conference on Electronic Measurement & Instruments (ICEMI), Yangzhou, 2017, pp. 483-487.
[6] A. Iqbal, S. S. Ahmed, M. D. Tauqeer, A. Sultan and S. Y. Abbas, "Design of multifunctional autonomous car using ultrasonic and infrared sensors," 2017 International Symposium on Wireless Systems and Networks (ISWSN), 2017, pp. 1-5.
[7] N. Kehtarnavaz and W. Sohn, "Steering Control of Autonomous Vehicles by Neural Networks," 1991 American Control Conference, Boston, MA, USA, 1991, pp. 3096-3101.
[8] I. G. A. P. R. Agung, S. Huda and I. W. A. Wijaya, "Speed control for DC motor with pulse width modulation (PWM) method using infrared remote control based on ATmega16 microcontroller," 2014 International Conference on Smart Green Technology in Electrical and Information Systems (ICSGTEIS), Kuta, 2014, pp. 108-112.
[9] V. Chitra and P. Rontala Subramaniam, "BLDC Motor Speed and Distance Control Using Raspberry Pi," International Journal of Applied Engineering Research, 2018. 10.
[10] B. T. Nugraha, S. Su and Fahmizal, "Towards self-driving car using convolutional neural network and road lane detector," 2017 2nd International Conference on Automation, Cognitive Science, Optics, Micro Electro-Mechanical System, and Information Technology (ICACOMIT), Jakarta, 2017, pp. 65-69.
[11] R. de Charette and F. Nashashibi, "Traffic Light Recognition using Image Processing Compared to Learning Processes," 2009, pp. 333-338. doi: 10.1109/IROS.2009.5353941.
[12] M. V. Rajasekhar and A. K. Jaswal, "Autonomous vehicles: The future of automobiles," 2015 IEEE International Transportation Electrification Conference (ITEC), Chennai, 2015, pp. 1-6.