KHULNA UNIVERSITY OF ENGINEERING & TECHNOLOGY

EE 3200 ELECTRICAL & ELECTRONIC PROJECT DESIGN

Submitted to:
Name: Dr. Md. Abdur Rafiq
Designation: Professor
Khulna University of Engineering & Technology

Submitted by:
Name: Md. Samiul Haque Sunny
Roll: 1203046
Khulna University of Engineering & Technology

AVOIDING ROAD ACCIDENT APPLYING ARTIFICIAL NEURAL NETWORK IN THE CAR

WHAT HAVE I DONE:

STEP 1: Collect data and resources about artificial neural networks and machine learning.

STEP 2: Gather some electrical and electronic instruments and components to build a prototype.

STEP 3: Build the prototype with distance-measuring sensors and motors.

STEP 4: Get the sensor reading through Arduino serial communication.

STEP 5: Add a Bluetooth module to the prototype to make the data-transfer system wireless.

STEP 6: Collect the sensor data from the Arduino serial monitor wirelessly via Bluetooth.

STEP 7: Run the prototype manually with Arduino and Processing code.

STEP 8: Connect the Arduino serial port with MATLAB for acquiring input data for neural network training.

STEP 9: Identify the most critical accident-prone situations from the gathered data and set the corresponding target values for neural network training, so that the trained network can act to avoid the accident.

STEP 10: Complete the training of the network in MATLAB.

STEP 11: Check the network with unknown inputs to verify whether it works properly.

STEP 12: Extract the network weights from MATLAB and implement them in the Arduino (a sketch of this step follows this list).
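For STEP 12, the sketch below illustrates one way the weights exported from MATLAB could be hard-coded into the Arduino and evaluated as a forward pass over the five sensor distances. The placeholder zero weights, the sigmoid() and forwardPass() helper names, the bias layout, and the use of sigmoid activations are assumptions for illustration; the actual values and activation come from the trained network.

#include <math.h>

// Network dimensions matching the MATLAB-trained net: 5 inputs, 10 hidden, 4 outputs.
const int InputNodes  = 5;
const int HiddenNodes = 10;
const int OutputNodes = 4;

// Placeholder weights: in practice these arrays are filled with the values
// exported from MATLAB (for example net.IW{1}, net.LW{2,1}, net.b{1}, net.b{2}).
// The extra row in each array holds the bias terms.
float HiddenWeights[InputNodes + 1][HiddenNodes]  = {0};
float OutputWeights[HiddenNodes + 1][OutputNodes] = {0};

float sigmoid(float x) {
  return 1.0 / (1.0 + exp(-x));
}

// Forward pass: five sensor distances in, four steering targets out.
void forwardPass(const float input[InputNodes], float output[OutputNodes]) {
  float hidden[HiddenNodes];
  for (int j = 0; j < HiddenNodes; j++) {
    float accum = HiddenWeights[InputNodes][j];          // hidden bias
    for (int i = 0; i < InputNodes; i++) {
      accum += input[i] * HiddenWeights[i][j];
    }
    hidden[j] = sigmoid(accum);
  }
  for (int k = 0; k < OutputNodes; k++) {
    float accum = OutputWeights[HiddenNodes][k];         // output bias
    for (int j = 0; j < HiddenNodes; j++) {
      accum += hidden[j] * OutputWeights[j][k];
    }
    output[k] = sigmoid(accum);
  }
}

The index of the largest of the four outputs can then select the manoeuvre (left, front, right or back), much as the motordirection() routine does later in this report.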

WHY IS THIS PROJECT IMPORTANT? Every year people die because of road accidents. Nearly 1.3 million people die in road crashes each year, on average 3,287 deaths a day. An additional 20-50 million are injured or disabled. More than half of all road traffic deaths occur among young adults aged 15-44. We lose lives, property and time to this cause, so this whole project is about eradicating that problem.

Figure: Accident caused by falling off the road

Figure: Accident caused by hitting other cars

I came up with a solution to reduce road accidents by giving the car some intelligence, alongside the driver, through artificial intelligence. This will help the driver if he or she fails to control the car in certain critical situations. If we can train the car on accident-causing critical situations, the car can recognize an accident situation and take a decision to save the car and its passengers. In this project I made a prototype and trained it with an artificial neural network. To do this I focused on two main training objectives:
1. Not to fall off the road.
2. Not to hit other cars.

SO, WHAT IS ARTIFICIAL INTELLIGENCE? The intelligence exhibited by machines or software is known as artificial intelligence. I used an artificial neural network to train the prototype. The central nervous system of animals, particularly the brain, is the main inspiration for the family of statistical learning models in machine learning and cognitive science known as artificial neural networks. These networks are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. Artificial neural networks are generally presented as systems of interconnected "neurons" which exchange messages with each other. The connections have numeric weights that can be tuned based on experience, making neural nets adaptive to inputs and capable of learning. A single neuron's computation is sketched below.
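To make the idea of weighted connections concrete, here is a minimal sketch of a single artificial neuron: it forms a weighted sum of its inputs plus a bias and squashes the result with a sigmoid activation. The weights, bias and inputs shown are arbitrary illustrative values, not values from the trained network.

#include <math.h>

// One artificial neuron: weighted sum of inputs plus bias, passed
// through a sigmoid activation. Values here are arbitrary examples.
float neuron(const float inputs[], const float weights[], int n, float bias)
{
  float sum = bias;
  for (int i = 0; i < n; i++) {
    sum += weights[i] * inputs[i];
  }
  return 1.0 / (1.0 + exp(-sum));      // sigmoid squashes the sum into (0, 1)
}

void setup()
{
  Serial.begin(9600);
  float inputs[3]  = {0.5, 0.2, 0.9};  // example inputs
  float weights[3] = {0.8, -0.4, 0.3}; // example connection weights
  Serial.println(neuron(inputs, weights, 3, 0.1));  // prints the neuron's activation
}

void loop() {}

Learning then amounts to adjusting those weights and biases so that the neuron's (and the whole network's) outputs move closer to the desired targets.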

PROTOTYPE CONSTRUCTION

1. Arduino Mega:

Figure: Arduino Board.

The MEGA 2560 is designed for more complex projects. With 54 digital I/O pins, 16 analog inputs and a larger sketch space, it is the recommended board for 3D printers and robotics projects, giving your designs plenty of room and opportunity. The Mega 2560 is a microcontroller board based on the ATmega2560. It has 54 digital input/output pins (of which 15 can be used as PWM outputs), 16 analog inputs, 4 UARTs (hardware serial ports), a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started. The Mega 2560 board is compatible with most shields designed for the Uno and the former Duemilanove or Diecimila boards.

Technical specs:
Microcontroller: ATmega2560
Operating Voltage: 5 V
Input Voltage (recommended): 7-12 V
Input Voltage (limit): 6-20 V
Digital I/O Pins: 54 (of which 15 provide PWM output)
Analog Input Pins: 16
DC Current per I/O Pin: 20 mA
DC Current for 3.3V Pin: 50 mA
Flash Memory: 256 KB (of which 8 KB used by bootloader)
SRAM: 8 KB
EEPROM: 4 KB
Clock Speed: 16 MHz
Length: 101.52 mm
Width: 53.3 mm
Weight: 37 g

2. Ultrasonic Sonar Sensor (HC-SR04): The HC-SR04 ultrasonic sensor uses sonar to determine the distance to an object, much like bats do. It offers excellent non-contact range detection with high accuracy and stable readings in an easy-to-use package, from 2 cm to 400 cm (about 1 inch to 13 feet). Its operation is not affected by sunlight or black material, as Sharp IR rangefinders are (although acoustically soft materials like cloth can be difficult to detect). It comes complete with an ultrasonic transmitter and receiver module.
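To make the ranging principle concrete, here is a minimal sketch of how a single HC-SR04 could be read directly with trigger/echo timing. The pin numbers are placeholders; in the actual prototype the NewPing library (used in the code later in this report) wraps this logic.

// Minimal HC-SR04 read using raw trigger/echo timing (pins are placeholders).
const int trigPin = 8;
const int echoPin = 9;

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop() {
  // A 10 us trigger pulse starts one ultrasonic burst.
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  // The echo pin stays HIGH for the round-trip time of the burst.
  long duration = pulseIn(echoPin, HIGH, 30000);   // timeout ~30 ms (beyond 400 cm)
  float distanceCm = duration / 58.0;              // ~58 us of round trip per cm

  Serial.print("Distance: ");
  Serial.print(distanceCm);
  Serial.println(" cm");
  delay(100);
}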

Features (standard HC-SR04 datasheet values):
Power Supply: +5 V DC
Quiescent Current: < 2 mA
Working Current: 15 mA
Effectual Angle: < 15 degrees
Ranging Distance: 2 cm - 400 cm
Resolution: 0.3 cm
Measuring Angle: 30 degrees
Trigger Input Pulse Width: 10 us
Dimensions: 45 mm x 20 mm x 15 mm

Arduino code to run the prototype manually (motor-drive routine and main loop):

void setMotors(int l, int r)
{
  if (l >= 0 && r >= 0)
  {
    analogWrite(leftMotor[0], l);
    digitalWrite(leftMotor[1], LOW);
    analogWrite(rightMotor[0], r);
    digitalWrite(rightMotor[1], LOW);
  }
  else if (l < 0 && r < 0)
  {
    r = -r;
    l = -l;
    digitalWrite(leftMotor[0], LOW);
    analogWrite(leftMotor[1], l);
    digitalWrite(rightMotor[0], LOW);
    analogWrite(rightMotor[1], r);
  }
  else if (l > 0 && r < 0)
  {
    r = -r;                              // use the magnitude for PWM
    analogWrite(leftMotor[0], l);
    digitalWrite(leftMotor[1], LOW);
    digitalWrite(rightMotor[0], LOW);
    analogWrite(rightMotor[1], r);
  }
  else if (l < 0 && r > 0)
  {
    l = -l;                              // use the magnitude for PWM
    digitalWrite(leftMotor[0], LOW);
    analogWrite(leftMotor[1], l);
    analogWrite(rightMotor[0], r);
    digitalWrite(rightMotor[1], LOW);
  }
  else
  {
    digitalWrite(leftMotor[0], LOW);
    digitalWrite(leftMotor[1], LOW);
    digitalWrite(rightMotor[0], LOW);
    digitalWrite(rightMotor[1], LOW);
  }
}

void loop()
{
  updateDistance();
  printdistance();
  motordirection();
}

Processing Code to run Manually:

MATLAB code to receive the sensor data wirelessly over Bluetooth:

clear all
clc

a = Bluetooth('SUNNY', 1);          % connect to the Bluetooth module paired as 'SUNNY'
fopen(a);

alltogether = [0 0 0 0 0];
for i = 1:1000
    y = fscanf(a);                  % one line: five space-separated distances
    b = strsplit(y, ' ');
    front(i)  = str2double(b(1));
    left(i)   = str2double(b(2));
    right(i)  = str2double(b(3));
    leftd(i)  = str2double(b(4));
    rightd(i) = str2double(b(5));
    alltogether = [front(i) left(i) right(i) leftd(i) rightd(i)]
end
all_sensor_reading = [front; left; right; leftd; rightd];
fclose(a);

Sensor Reading For training (input and target): each sample consists of the five measured distances (Front, Left, Right, Leftdown, Rightdown, in cm) and four manually assigned 0/1 targets (Front, Left, Right, Back). A representative excerpt of the recorded samples is shown below; the remaining rows follow the same nine-column format.

Front  Left  Right  Leftdown  Rightdown  |  Front t.  Left t.  Right t.  Back t.
13     9     73     5         19         |  0         1        0         0
13     9     182    5         5          |  0         0        1         0
13     9     233    5         5          |  0         0        1         0
13     9     231    5         5          |  0         0        1         0
13     9     231    5         5          |  0         0        1         0
13     9     232    5         5          |  0         0        1         0
13     9     232    5         5          |  0         0        1         0
13     9     231    5         5          |  0         0        1         0
13     9     232    5         5          |  0         0        1         0
13     9     233    5         5          |  0         0        1         0
14     9     233    5         5          |  0         0        1         0
13     22    220    5         5          |  0         0        1         0
27     9     219    5         5          |  1         0        0         0

TRAINING OF THE NETWORK
Before describing the training process, it is worth introducing the network and the training algorithm. I used the backpropagation algorithm, with Levenberg-Marquardt optimization, to train a feedforward neural network.

Backpropagation was created by generalizing the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate way as defined by the user. Networks with biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities.

Standard backpropagation is a gradient descent algorithm, as is the Widrow-Hoff learning rule, in which the network weights are moved along the negative of the gradient of the performance function. The term backpropagation refers to the manner in which the gradient is computed for nonlinear multilayer networks. There are a number of variations on the basic algorithm that are based on other standard optimization techniques, such as conjugate gradient and Newton methods. The Neural Network Toolbox implements a number of these variations.

Properly trained backpropagation networks tend to give reasonable answers when presented with inputs that they have never seen. Typically, a new input leads to an output similar to the correct output for input vectors used in training that are similar to the new input being presented. This generalization property makes it possible to train a network on a representative set of input/target pairs and get good results without training the network on all possible input/output pairs. Two features of the Neural Network Toolbox are designed to improve network generalization: regularization and early stopping.

The primary objective here is to use the backpropagation training functions in the toolbox to train a feedforward neural network to solve a specific problem. There are generally four steps in the training process (a MATLAB sketch of these steps follows):
1. Assemble the training data
2. Create the network object
3. Train the network
4. Simulate the network response to new inputs
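The following is a minimal MATLAB sketch of those four steps using the Neural Network Toolbox. Here all_sensor_reading is the 5xN matrix produced by the acquisition script earlier in this report, target_matrix stands for the 4xN matrix of 0/1 targets assembled from the training table (the name is illustrative), and the 10 hidden neurons match the network size used in the Arduino code later on; this is a sketch of the workflow, not the project's exact script.

% 1. Assemble the training data (samples as columns).
inputs  = all_sensor_reading;     % 5 x N matrix: front, left, right, leftdown, rightdown
targets = target_matrix;          % 4 x N matrix of 0/1 labels: front, left, right, back

% 2. Create the network object: one hidden layer of 10 neurons,
%    trained with Levenberg-Marquardt backpropagation.
net = feedforwardnet(10, 'trainlm');

% 3. Train the network (train/validation/test split and early stopping
%    are handled by the toolbox defaults).
[net, tr] = train(net, inputs, targets);

% 4. Simulate the network response to new inputs and measure performance.
outputs = net(inputs);
perf    = perform(net, targets, outputs);

% Weights and biases to be carried over to the Arduino implementation.
IW = net.IW{1,1};   % input-to-hidden weights
LW = net.LW{2,1};   % hidden-to-output weights
b1 = net.b{1};      % hidden-layer biases
b2 = net.b{2};      % output-layer biases

The extracted weight and bias matrices are what get hard-coded into the Arduino for the on-board forward pass described in STEP 12.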

Error derivation of the backpropagation algorithm

Notation:
$x_i$ = input component for an input-layer PE ($M$ units)
$h_j$ = output component for a hidden-layer PE ($N$ units)
$o_k$ = output component for an output-layer PE
$t_k$ = target vector component
$W_{kj}$ = weight from the hidden layer to the output layer
$w_{ji}$ = weight from the input layer to the hidden layer
$\mu$ = pattern index
$t_k^{\mu}$ = target vector component for training pattern $\mu$
$f(\cdot)$ = activation (threshold) function
$\Upsilon$ = learning parameter
$\alpha$ = smoothing (momentum) parameter

Error from the input layer to the hidden layer

The activation value of a hidden PE is given by
$$h_j = f\Big(\sum_i w_{ji}\, x_i\Big) \qquad (1)$$
where $h_j$ is the activation of a hidden-layer PE. The activation value for an output PE is
$$o_k = f\Big(\sum_j W_{kj}\, h_j\Big) = f\Big(\sum_j W_{kj}\, f\big(\textstyle\sum_i w_{ji}\, x_i\big)\Big) \qquad (2)$$
where $o_k$ is the activation value of an output-layer PE.

Now we examine the weight changes on the connections from the input layer to the hidden layer over the input-output pairs:
$$\Delta w_{ji} = -\Upsilon\,\frac{\partial E}{\partial w_{ji}}
= -\Upsilon\,\frac{\partial E}{\partial h_j}\cdot\frac{\partial h_j}{\partial w_{ji}}
= \Upsilon\,\Big[\sum_k (t_k - o_k)\, f'\big(\textstyle\sum_j W_{kj} h_j\big)\, W_{kj}\Big]\, f'\Big(\sum_i w_{ji} x_i\Big)\, x_i
= \Upsilon\, \delta_j\, x_i$$
where
$$\delta_j = f'\Big(\sum_i w_{ji}\, x_i\Big) \sum_k \delta_k\, W_{kj}$$
and $\delta_k$ is the output-layer error term derived in the next subsection. The weight change can therefore be written as $\Delta w_{ji} = \Upsilon\, \delta_j\, x_i$. Once again, if the threshold function is a sigmoid, we get
$$\Delta w_{ji} = \Upsilon\,\Big[h_j (1 - h_j)\, x_i \sum_k o_k (1 - o_k)(t_k - o_k)\, W_{kj}\Big].$$

Error from the hidden layer to the output layer

We look first at the weight changes on the connections from the hidden layer to the output layer over the input-output pairs:
$$\Delta W_{kj} = -\Upsilon\,\frac{\partial E}{\partial W_{kj}}
= \Upsilon\,(t_k - o_k)\, f'\Big(\sum_j W_{kj}\, h_j\Big)\, h_j
= \Upsilon\, \delta_k\, h_j$$
where
$$\delta_k = (t_k - o_k)\, f'\Big(\sum_j W_{kj}\, h_j\Big).$$
The weight change can be written as $\Delta W_{kj} = \Upsilon\, \delta_k\, h_j$. If the threshold function is a sigmoid, we get
$$\delta_k = o_k (1 - o_k)(t_k - o_k), \qquad \Delta W_{kj} = \Upsilon\, o_k (1 - o_k)(t_k - o_k)\, h_j.$$

The error between the calculated output $o_k$ and the targeted value $t_k$ of an output-layer PE can be defined as
$$E = \frac{1}{2}\sum_k (t_k - o_k)^2 = \frac{1}{2}\sum_k \Big(t_k - f\big(\textstyle\sum_j W_{kj}\, f(\sum_i w_{ji}\, x_i)\big)\Big)^2 \qquad (3)$$
This is a continuous, differentiable function, and therefore we can perform gradient descent on it.
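To connect these update rules to the implementation, the fragment below sketches how the two weight-update equations could look in code for a single training pattern with sigmoid activations. The array names follow the Arduino listing later in this report, simplified to one pattern and with the bias and momentum terms omitted; this is an illustration of the equations, not the project's exact training routine.

// Sketch of one backpropagation step with sigmoid activations,
// using the same dimensions and names as the Arduino listing below.
const int InputNodes = 5, HiddenNodes = 10, OutputNodes = 4;
const float LearningRate = 0.3;

float Input[InputNodes], Hidden[HiddenNodes], Output[OutputNodes], Target[OutputNodes];
float HiddenWeights[InputNodes][HiddenNodes], OutputWeights[HiddenNodes][OutputNodes];
float HiddenDelta[HiddenNodes], OutputDelta[OutputNodes];

void backpropStep() {
  // Output-layer error terms: delta_k = o_k (1 - o_k)(t_k - o_k)
  for (int k = 0; k < OutputNodes; k++) {
    OutputDelta[k] = Output[k] * (1.0 - Output[k]) * (Target[k] - Output[k]);
  }
  // Hidden-layer error terms: delta_j = h_j (1 - h_j) * sum_k delta_k * W_kj
  for (int j = 0; j < HiddenNodes; j++) {
    float accum = 0.0;
    for (int k = 0; k < OutputNodes; k++) {
      accum += OutputWeights[j][k] * OutputDelta[k];
    }
    HiddenDelta[j] = Hidden[j] * (1.0 - Hidden[j]) * accum;
  }
  // Weight updates: dW_kj = LearningRate * delta_k * h_j  and  dw_ji = LearningRate * delta_j * x_i
  for (int k = 0; k < OutputNodes; k++) {
    for (int j = 0; j < HiddenNodes; j++) {
      OutputWeights[j][k] += LearningRate * OutputDelta[k] * Hidden[j];
    }
  }
  for (int j = 0; j < HiddenNodes; j++) {
    for (int i = 0; i < InputNodes; i++) {
      HiddenWeights[i][j] += LearningRate * HiddenDelta[j] * Input[i];
    }
  }
}

A momentum (smoothing) term would additionally add a fraction of the previous weight change to each update, which is what the Momentum constant in the listing below is for.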

Training and Running on the Arduino Platform
We can run the prototype with continuous training at every input with the following Arduino code.

#include <math.h>        // header name lost in the source; math.h assumed (exp() is needed)
#include "NewPing.h"

/******************************************************************
 * Network Configuration - customized per network
 ******************************************************************/
const int PatternCount = 1;
const int InputNodes = 5;
const int HiddenNodes = 10;
const int OutputNodes = 4;
const float LearningRate = 0.3;
const float Momentum = 0.9;
const float InitialWeightMax = 0.5;
const float Success = 0.0004;

float Input[PatternCount][InputNodes + 1];
float Target[PatternCount][OutputNodes];
double O;                // network output value compared against Output[] in motordirection()

/******************************************************************
 * Get distance and calculate for input
 ******************************************************************/
#define MAX_DISTANCE 400

double leftDistance, frontDistance, rightDistance, leftdownDistance, rightdownDistance;
int lefttarget, righttarget, fronttarget, backtarget;

int leftMotor[]  = {2, 3};
int rightMotor[] = {5, 4};

NewPing leftSonar(9, 8, MAX_DISTANCE);
NewPing frontSonar(28, 29, MAX_DISTANCE);
NewPing rightSonar(13, 12, MAX_DISTANCE);
NewPing rightdownSonar(50, 51, MAX_DISTANCE);
NewPing leftdownSonar(52, 53, MAX_DISTANCE);

void updateDistance()
{
  Serial.println("Updating distances: ");
  leftDistance      = leftSonar.ping() / US_ROUNDTRIP_CM;
  frontDistance     = frontSonar.ping() / US_ROUNDTRIP_CM;
  rightDistance     = rightSonar.ping() / US_ROUNDTRIP_CM;
  leftdownDistance  = leftdownSonar.ping() / US_ROUNDTRIP_CM;
  rightdownDistance = rightdownSonar.ping() / US_ROUNDTRIP_CM;
  Serial.println("Front: " + String(frontDistance) + " cm");
  Serial.println("Left: " + String(leftDistance) + " cm");
  Serial.println("Right: " + String(rightDistance) + " cm");
  Serial.println("Leftdown: " + String(leftdownDistance) + " cm");
  Serial.println("Rightdown: " + String(rightdownDistance) + " cm");
}

void calculateinputdistance()
{
  /* Most of the if/else chain that converts the five distances into the
     four 0/1 targets was lost in the source (its comparison operators were
     stripped during conversion). It began with a test on leftdownDistance,
     and only its final branch, below, survived intact. */
  if (leftdownDistance < 10)          // placeholder condition; the original thresholds were lost
  {
    // ... lost branches setting fronttarget / lefttarget / righttarget ...
  }
  else
  {
    lefttarget = 0;
    righttarget = 0;
    fronttarget = 0;
    backtarget = 1;
  }
}

void inputtarget()
{
  updateDistance();
  calculateinputdistance();

  Input[0][0] = leftDistance;
  Input[0][1] = frontDistance;
  Input[0][2] = rightDistance;
  Input[0][3] = leftdownDistance;
  Input[0][4] = rightdownDistance;
  /**********************
   Bias for Hidden layer
  ***********************/
  Input[0][5] = 1;

  Target[0][0] = lefttarget;
  Target[0][1] = fronttarget;
  Target[0][2] = righttarget;
  Target[0][3] = backtarget;
}
/******************************************************************
 * End Network Configuration
 ******************************************************************/
int i, j, p, q, r;
int ReportEvery1000;
int RandomizedIndex[PatternCount];
long TrainingCycle;
float Rando;
float Error;
float Accum;
float Hidden[HiddenNodes + 1];
float Output[OutputNodes];
float HiddenWeights[InputNodes + 1][HiddenNodes];
float OutputWeights[HiddenNodes + 1][OutputNodes];
float HiddenDelta[HiddenNodes];
float OutputDelta[OutputNodes];
float ChangeHiddenWeights[InputNodes + 1][HiddenNodes];
float ChangeOutputWeights[HiddenNodes + 1][OutputNodes];

void setup()
{
  Serial.begin(9600);
  randomSeed(analogRead(3));
  ReportEvery1000 = 1;
  for (p = 0; p < PatternCount; p++)
  {
    RandomizedIndex[p] = p;
  }
  for (int i = 0; i < 2; i++)
  {
    pinMode(leftMotor[i], OUTPUT);
    pinMode(rightMotor[i], OUTPUT);
  }
}

void setMotors(int l, int r)
{
  if (l >= 0 && r >= 0)
  {
    analogWrite(leftMotor[0], l);
    digitalWrite(leftMotor[1], LOW);
    analogWrite(rightMotor[0], r);
    digitalWrite(rightMotor[1], LOW);
  }
  else if (l < 0 && r < 0)
  {
    r = -r;
    l = -l;
    digitalWrite(leftMotor[0], LOW);
    analogWrite(leftMotor[1], l);
    digitalWrite(rightMotor[0], LOW);
    analogWrite(rightMotor[1], r);
  }
  else if (l > 0 && r < 0)
  {
    r = -r;                           // use the magnitude for PWM
    analogWrite(leftMotor[0], l);
    digitalWrite(leftMotor[1], LOW);
    digitalWrite(rightMotor[0], LOW);
    analogWrite(rightMotor[1], r);
  }
  else if (l < 0 && r > 0)
  {
    l = -l;                           // use the magnitude for PWM
    digitalWrite(leftMotor[0], LOW);
    analogWrite(leftMotor[1], l);
    analogWrite(rightMotor[0], r);
    digitalWrite(rightMotor[1], LOW);
  }
  else
  {
    digitalWrite(leftMotor[0], LOW);
    digitalWrite(leftMotor[1], LOW);
    digitalWrite(rightMotor[0], LOW);
    digitalWrite(rightMotor[1], LOW);
  }
}

void motordirection()
{
  if (O == Output[0])                 // left target won: turn left
  {
    setMotors(0, 0);
    delay(1000);
    setMotors(0, 150);
    delay(600);
    setMotors(0, 0);
    delay(1000);
    updateDistance();
  }
  else if (O == Output[1])            // front target won: keep going forward
  {
    setMotors(90, 90);
  }
  else if (O == Output[2])            // right target won: turn right
  {
    setMotors(0, 0);
    delay(1000);
    setMotors(150, 0);
    delay(600);
    setMotors(0, 0);
    delay(1000);
    updateDistance();
  }
  else if (O == Output[3])            // back target won: stop and reverse
  {
    setMotors(0, 0);
    delay(1000);
    setMotors(-150, -150);
  }
}

void toTerminal()
{
  for (p = 0; p < PatternCount; p++)
  {
    Serial.println();
    Serial.print(" Training Pattern: ");
    Serial.println(p);
    Serial.print(" Input ");
    for (i = 0; i < InputNodes; i++)
    {
      Serial.print(Input[p][i], DEC);   // reconstructed from context; the source listing breaks off here
      Serial.print(" ");
    }
    // The remainder of the listing (the rest of the reporting routine, the weight
    // initialization and the training/decision loop) was cut off in the source
    // document and is not reproduced here.
  }
}