Development of An Intelligent Border Management System For Bangladesh
Supervised by: Dr. Nafees Mansoor Assistant Professor Department of Computer Science and Engineering University of Liberal Arts Bangladesh (ULAB)
Submitted by: Mujtahid Alam ID: 132014001 Department of Computer Science & Engineering University of Liberal Arts Bangladesh
Date of Submission: 08-24-2017
Declaration
I, the undersigned, hereby declare that this internship report on “Development of An Intelligent Border Management System For Bangladesh” has been prepared by me under the guidance of Dr. Nafees Mansoor for the partial fulfillment of the degree of Bachelor of Science in Computer Science & Engineering from the Department of Computer Science and Engineering (CSE), University of Liberal Arts Bangladesh (ULAB). I assure that this report is original in nature and has not been submitted elsewhere for any other purpose.
________________________ (Mujtahid Alam) ID: 132014001 Dept. of Computer Science and Engineering University of Liberal Arts Bangladesh Date: 08-24-2017
Letter of Transmittal Date: 05-01-2017 Dr. Nafees Mansoor Assistant Professor Department of Computer Science and Engineering University of Liberal Arts Bangladesh
Subject: Submission of the project report.
Dear Sir,
With great gratification, I am submitting my internship report on “Development of An Intelligent Border Management System For Bangladesh”, which fulfills a partial requirement of the Computer Science and Engineering (CSE) degree. It was an interesting opportunity for me to work on these projects and enhance my knowledge and skills in the practical field, which will be very useful in my future career. I have tried my best to prepare this report, to gather all relevant information from the different available sources, and to follow your guidelines.
Therefore, I sincerely hope that you will appreciate my effort and accept my project report despite any shortcomings it may have. I shall be grateful if my report is accepted for the applicable purpose. Thank you for your kind supervision.
Sincerely Yours,
Mujtahid Alam ID: 132014001 Dept. of Computer Science and Engineering University of Liberal Arts Bangladesh
Certificate of Approval
This is to certify that the report on “Development of An Intelligent Border Management System for Bangladesh” has been submitted by Mujtahid Alam, ID No. 132014001, University of Liberal Arts Bangladesh (ULAB), and has been accepted as satisfactory for the partial fulfillment of the requirements for the degree of Bachelor of Science (B.Sc.) in Computer Science and Engineering (CSE).
_____________________________
Head of the Department
Dr. Sifat Momen
Assistant Professor
Dept. of Computer Science and Engineering
University of Liberal Arts Bangladesh

_____________________________
Supervisor
Dr. Nafees Mansoor
Assistant Professor
Dept. of Computer Science and Engineering
University of Liberal Arts Bangladesh
Acknowledgement
The gratification that accompanies the successful completion of any task would be incomplete without mentioning the people whose cooperation made it possible, and whose constant guidance and encouragement crown all efforts with success. First, I wish to express my deepest sense of gratitude to almighty Allah, who has blessed me with the ability to finish my project report in due time. This project report is an important part of the program criteria required to fulfill the Bachelor program in Computer Science and Engineering at the University of Liberal Arts Bangladesh. I would like to give my heartiest salutations to my supervisor, Dr. Nafees Mansoor, for his encouragement, supervision, support, and time. His constant guidance and kind nature helped me gain the maximum amount of knowledge from my internship experience. I would also like to thank our department head, Dr. Sifat Momen, and once again my supervisor, Dr. Nafees Mansoor, for approving my report.
Abstract
This project focuses on the development of an intelligent border management system for Bangladesh. In this research, a framework for an intelligent border in the context of Bangladesh is proposed, and a prototype for such a system is then developed. The prototype requires several hardware components, namely an Arduino UNO, an ultrasonic sensor, a PIR sensor, a monitor, an LED light, a servo motor, etc. Initially, the author has to plan and prepare several C programs for the Arduino microcontroller, since the Arduino needs to control the ultrasonic sensor, the servo motor, and the PIR sensor. Afterwards, the programs are uploaded to the Arduino device. Next, the ultrasonic sensor is attached to the servo motor, where the implemented program ensures and tracks a 180-degree angular rotation of the servo. Thus, once the servo motor is attached to the ultrasonic sensor, the ultrasonic sensor is able to move along the same angle. The ultrasonic sensor sends out a high-frequency sound pulse and counts the waiting time for the reflected echo. The sensor has two openings on its front: one opening transmits ultrasonic waves, and the other receives the echo. This is how the ultrasonic sensor detects an object. The whole process can be presented in a computing environment, where a monitor shows the presence of any object on a radar map. A precise object detection system has been developed in this project, and the result is presented on the radar map. The developed system not only shows the existence of an object under the radar but also points out the position of the object. The PIR sensor, a passive infrared sensor, is used in this project to detect motion. If a human or an animal passes through the PIR sensor’s monitoring zone, the sensor detects it and an LED light is lit to indicate the situation.
In this project, a video camera is also used to identify humans, where image processing is conducted using OpenCV and Darknet. Experimental results from the prototype show the feasibility of implementing this concept on a larger scale along the border of Bangladesh.
Table of Contents
CHAPTER 1  Introduction .......................................... 1
  1.1  Background and motivation of research ..................... 1
  1.2  Research Objective ........................................ 2
  1.3  Research Scope ............................................ 2
CHAPTER 2  Literature Review ..................................... 3
  2.1  Introduction .............................................. 3
  2.2  Image Processing .......................................... 3
  2.3  Purpose of Image processing ............................... 4
  2.4  Types ..................................................... 4
  2.5  Why image processing is needed ............................ 4
  2.6  Requirements .............................................. 5
    2.6.1  Linux Operating System ................................ 5
    2.6.2  NVidia CUDA Core Graphics Processor ................... 6
    2.6.3  OpenCV ................................................ 7
    2.6.4  YOLO - real-time object detection ..................... 8
    2.6.5  How it works .......................................... 8
    2.6.6  Darknet with Pre-trained models ....................... 10
CHAPTER 3  Methodology ........................................... 13
  3.1  Introduction .............................................. 13
  3.2  Research Activity ......................................... 13
CHAPTER 4  Proposed System Design and Analysis ................... 14
  4.1  Introduction .............................................. 14
  4.2  Model ..................................................... 14
  4.3  Radar ..................................................... 14
    4.3.1  Arduino UNO ........................................... 15
    4.3.2  Ultrasonic Sensor ..................................... 15
    4.3.3  PIR Sensor ............................................ 16
    4.3.4  LED light ............................................. 16
    4.3.5  Servo Motor ........................................... 17
    4.3.6  Monitor ............................................... 17
  4.4  Arduino Code .............................................. 18
  4.5  Basic Design .............................................. 19
  4.6  Radar Body Design ......................................... 21
  4.7  Introduction of Processing ................................ 22
  4.8  What is Processing ........................................ 22
  4.9  Working with Processing ................................... 22
  4.10 Processing Code ........................................... 22
  4.11 Analysis .................................................. 28
  4.12 Results ................................................... 29
CHAPTER 5  Conclusion ............................................ 30
Reference ........................................................ 31
List of Figures
Figure 2.1   How to process an image ............................. 9
Figure 2.2   Final output of image processing .................... 9
Figure 4.3   Arduino UNO ......................................... 15
Figure 4.4   Ultrasonic Sensor ................................... 15
Figure 4.5   PIR sensor .......................................... 16
Figure 4.6   LED light ........................................... 16
Figure 4.7   Servo Motor ......................................... 17
Figure 4.8   Monitor ............................................. 17
Figure 5.9   Design of Radar ..................................... 20
Figure 5.10  Radar Body Design ................................... 21
Figure 5.11  Digital Visualization of Radar ...................... 28
CHAPTER 1
Introduction

1.1 Background and motivation of research
This is a generation in which everything depends on technology and gadgets. Every day, many of us try to improve our lifestyles and our security. Every nation's top-priority concern is to protect its people, and that security begins at the border. Depending on the border's security, the government can protect its people from unauthorized incursions. If we look at the present scenario of India, India is currently trying to develop the security system of its border so that it can ensure proper safety for its people. A short excerpt from "Automatic Intruder Combat System: A way to Smart Border Surveillance" reads: “Borders in Indian scenarios have enormous problems of illegal intrusions in terms of terrorism. Indian Army takes care of patrolling these border fences all along day and night. The patrolling becomes even more typical during winter in Kashmir and summer in Rajasthan. Fog, hue, mist and sand storms create inhuman conditions for patrolling parties doing surveillance. Intrusions leading to the militant activities take benefit of these severe climatic conditions. The harsh climatic conditions create the increasing demand of man power for patrolling and as well have consequences of loss of life of soldiers. To overcome these problems being faced by BSF and Indian Army in surveillance, an automatic surveillance mechanism could be a solution. The task of automatic surveillance involves automatic detection of human intruders continuously in the real-time surveillance scene. In automatic detection, the real-time surveillance video from the camera is processed for detecting the human intruder and then checking its position relative to the fences. Intruder position could be behind the fence, on the fence or have crossed the fence.” From this, it can easily be determined that India is developing its border management system.
Basically, this is a radar system combined with a digital image processing system. In this paper, I propose a design for a digital radar system with image processing via camera, and I describe the installation of the open-source packages needed to develop the design. The idea, the proposed model, the model design, descriptions, diagrams, and results are also included in this paper.
1.2 Research Objective
The objective is to provide the right information along with a successfully operating prototype, since there are many different ways to complete this project. Automated surveillance technology is the main object of this research; it can be applied nationally and globally and can provide an ultimate solution for a border security system in a smart way.
1.3 Research Scope
This report is mainly prepared for academic purposes and for the fulfillment of the partial requirement of the CSE program of the Department of Computer Science and Engineering (CSE), University of Liberal Arts Bangladesh (ULAB). This report covers the proposed intelligent border management framework, the radar prototype, and camera-based image processing for human detection, along with their challenges and limitations.
CHAPTER 2
Literature Review

2.1 Introduction
In the twenty-first century, everything is changing with the help of technology, which has improved our lives with greater ease and security. Considering the advantages of modern security, my project is all about a smart defense system which can be used for the border surveillance of a country. This system will provide 24/7 security services, performed by machines. At present, border surveillance is done by humans. Problems arise when surveillance is not carried out properly because of the limited monitoring area, and some areas cannot be patrolled at all for specific reasons. Keeping these problems in mind, the decision was made to build the defense system in a smart way, to reduce manpower and achieve better accuracy. As a result, we can expect advanced threat protection. The full-scale process is costly, and I need to show positive results for a smart defense system that is achievable within a short budget. To detect objects properly, image processing is highly recommended to improve border security. In this section, I describe my research on image processing: the information that can be used to train image-processing models, how image processing works, what types of pre-trained models are available, and the corresponding results.
2.2
Image Processing
Image processing is a method to convert an image into digital form and perform some operations on it, in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. Usually, an image processing system treats images as two-dimensional signals while applying already-established signal processing methods to them.
It is among rapidly growing technologies today, with its applications in various aspects of a business. Image Processing forms core research area within engineering and computer science disciplines too.
Image processing basically includes the following three steps.
· Importing the image with an optical scanner or by digital photography.
· Analyzing and manipulating the image, which includes data compression, image enhancement, and spotting patterns that are not visible to human eyes, as in satellite photographs.
· Output, the last stage, in which the result can be an altered image or a report based on the image analysis.
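As an illustration of these three steps, here is a minimal pure-Python sketch (assumed for this report, not taken from the project code; a real system would use a library such as OpenCV) that imports a toy grayscale image, enhances it by contrast stretching, and outputs a small analysis report:

```python
# A minimal sketch of the three image-processing steps on a toy 4x4
# grayscale "image": import the pixel data, enhance it by contrast
# stretching, and output a report based on the analysis.

def contrast_stretch(image, out_max=255):
    """Step 2: enhance the image by stretching pixel values to 0..out_max."""
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    span = hi - lo or 1
    return [[(p - lo) * out_max // span for p in row] for row in image]

def report(image):
    """Step 3: output a simple analysis report instead of an altered image."""
    pixels = [p for row in image for p in row]
    return {"min": min(pixels), "max": max(pixels),
            "mean": sum(pixels) / len(pixels)}

# Step 1: "import" a toy low-contrast image (normally from a scanner or camera).
image = [[100, 110, 120, 130],
         [105, 115, 125, 135],
         [100, 120, 130, 140],
         [110, 115, 120, 125]]

enhanced = contrast_stretch(image)
print(report(enhanced))   # the stretched image now spans the full 0-255 range
```

The same import / analyze / output shape carries over directly to the OpenCV-based pipeline used later in the project, only with real camera frames instead of toy arrays.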
2.3 Purpose of Image processing
The purpose of image processing is divided into 5 groups. They are:
1. Visualization - Observe the objects that are not visible.
2. Image sharpening and restoration - To create a better image.
3. Image retrieval - Seek the image of interest.
4. Measurement of pattern - Measure various objects in an image.
5. Image Recognition - Distinguish the objects in an image.
2.4 Types
The two types of methods used for image processing are analog and digital image processing. Analog, or visual, techniques of image processing can be used for hard copies such as printouts and photographs. Image analysts use various fundamentals of interpretation while using these visual techniques. The image processing is not confined only to the area that has to be studied but also relies on the knowledge of the analyst. Association is another important tool in image processing through visual techniques, so analysts apply a combination of personal knowledge and collateral data to image processing. Digital processing techniques help in the manipulation of digital images by using computers. Since raw data from the imaging sensors of a satellite platform contains deficiencies, it has to undergo various phases of processing to remove such flaws and recover the original information. The three general phases that all types of data have to undergo while using the digital technique are pre-processing, enhancement and display, and information extraction. [1]
2.5 Why image processing is needed
Image processing is needed to identify the object: a moving object can be detected through it. Since the image processing is done on images or video captured by the camera, it can easily detect whether the object is a human or an animal. We need image processing because the main purpose is to ensure maximum security with a highly reliable detection process.
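To illustrate how a camera feed can flag a moving object, here is a toy frame-differencing sketch in pure Python (an assumed, simplified stand-in for the OpenCV/Darknet pipeline used later in the project): pixels that change between two frames beyond a threshold count as motion.

```python
# A minimal frame-differencing sketch of motion detection: compare two
# consecutive grayscale frames and flag pixels whose intensity changed
# by more than a threshold.

MOTION_THRESHOLD = 30    # minimum per-pixel change to count as motion

def motion_pixels(prev_frame, cur_frame, threshold=MOTION_THRESHOLD):
    """Return coordinates of pixels whose intensity changed by >= threshold."""
    return [(r, c)
            for r, row in enumerate(cur_frame)
            for c, p in enumerate(row)
            if abs(p - prev_frame[r][c]) >= threshold]

# Two toy 3x3 grayscale frames: an "object" brightens the centre pixel.
prev_frame = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
cur_frame  = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]

moved = motion_pixels(prev_frame, cur_frame)
print(moved)   # the centre pixel (1, 1) is flagged as motion
```

Deciding whether the flagged region is a human or an animal is the job of the classification models described in the following sections.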
2.6 Requirements
There are some basic requirements which need to be ready to do image processing as a research experiment:
• Linux operating system
• NVidia CUDA core graphics processor
• OpenCV
• YOLO real-time object detection
• Pre-trained models
2.6.1 Linux Operating System
An operating system is software that manages all of the hardware resources associated with your desktop or laptop. To put it simply, the operating system manages the communication between your software and your hardware. Without the operating system (often referred to as the “OS”), the software wouldn’t function.
The OS comprises a number of pieces:
The Bootloader: The software that manages the boot process of the computer. For most users, this will simply be a splash screen that pops up and eventually goes away as the system boots into the operating system.
The Kernel: This is the one piece of the whole that is actually called “Linux”. The kernel is the core of the system and manages the CPU, memory, and peripheral devices. The kernel is the “lowest” level of the OS.
Daemons: These are background services (printing, sound, scheduling, etc.) that start up either during boot or after you log into the desktop.
The Shell: A command process that allows the user to control the computer via commands typed into a text interface. This is what, at one time, scared people away from Linux the most
(assuming they had to learn a seemingly archaic command line structure to make Linux work). This is no longer the case. With modern desktop Linux, there is no need to ever touch the command line.
Graphical Server: This is the sub-system that displays the graphics on your monitor. It is commonly referred to as the X server or just “X”.
Desktop Environment: This is the piece of the puzzle that the users actually interact with. There are many desktop environments to choose from (Unity, GNOME, Cinnamon, Enlightenment, KDE, XFCE, etc). Each desktop environment includes built-in applications (such as file managers, configuration tools, web browsers, games, etc).
Applications: Desktop environments do not offer the full array of apps. Just like Windows and Mac, Linux offers thousands upon thousands of high-quality software titles that can be easily found and installed. Most modern Linux distributions (more on this in a moment) include App Store-like tools that centralize and simplify application installation. For example, Ubuntu Linux has the Ubuntu Software Center which allows the user to quickly search among the thousands of apps and install them from one centralized location. [2]
2.6.2 NVidia CUDA Core Graphics Processor
CUDA cores are just a small part of the larger whole when it comes to an NVidia GPU. A "CUDA core" is NVidia's equivalent of AMD's "stream processors." NVidia's proprietary parallel computing programming model, CUDA (Compute Unified Device Architecture), is a specialized programming language that can leverage the GPU in specific ways to perform tasks with greater performance. Each GPU can contain hundreds to thousands of CUDA cores. The architecture changes between generations in ways that make cross-generation comparisons nonlinear, but generally speaking (within a generation), more CUDA cores equate to more raw compute power from the GPU. The Kepler-to-Maxwell architecture jump saw nearly a 40% efficiency gain in CUDA core processing ability, illustrating the difficulty of drawing linear comparisons without proper benchmarks.
CUDA cores are parallel processors: just as your CPU might be a dual- or quad-core device, NVidia GPUs host several hundred or several thousand cores. The cores are responsible for processing all the data that is fed into and out of the GPU, performing game graphics calculations that are resolved visually to the
end-user. An example of something a CUDA core might do would include rendering scenery in-game, drawing character models, or resolving complex lighting and shading within an environment. [3]
2.6.3 OpenCV
OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code.
The library has more than 2,500 optimized algorithms, which include a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high-resolution image of an entire scene, find similar images in an image database, remove red eyes from images taken using flash, follow eye movements, recognize scenery and establish markers to overlay it with augmented reality, etc. OpenCV has a user community of more than 47 thousand people and an estimated number of downloads exceeding 14 million. The library is used extensively by companies, research groups, and governmental bodies.
Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, Toyota that employ the library, there are many start-ups such as Applied Minds, VideoSurf, and Zeitera, that make extensive use of OpenCV. OpenCV’s deployed uses span the range from stitching street view images together, detecting intrusions in surveillance video in Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at Willow Garage, detection of swimming pool drowning accidents in Europe, running interactive art in Spain and New York, checking runways for debris in Turkey, inspecting labels on products in factories around the world on to rapid face detection in Japan.
It has C++, C, Python, Java, and MATLAB interfaces and supports Windows, Linux, Android, and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces are being actively developed right now. There are over 500 algorithms and about 10 times as many functions that comprise
or support those algorithms. OpenCV is written natively in C++ and has a templated interface that works seamlessly with STL containers. [4]
2.6.4 YOLO - real-time object detection
YOLO can detect the 20 Pascal VOC object classes:
1. person
2. bird, cat, cow, dog, horse, sheep
3. aeroplane, bicycle, boat, bus, car, motorbike, train
4. bottle, chair, dining table, potted plant, sofa, tv/monitor
2.6.5 How it works
All prior detection systems repurpose classifiers or localizers to perform detection: they apply the model to an image at multiple locations and scales, and high-scoring regions of the image are considered detections.
YOLO uses a totally different approach. It applies a single neural network to the full image. This network divides the image into regions and predicts bounding boxes and probabilities for each region. These bounding boxes are weighted by the predicted probabilities.
The model has several advantages over classifier-based systems. It looks at the whole image at test time, so its predictions are informed by the global context of the image. It also makes predictions with a single network evaluation, unlike systems like R-CNN, which require thousands of evaluations for a single image. This makes it extremely fast: more than 1000x faster than R-CNN and 100x faster than Fast R-CNN. [5]
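The grid idea behind this single-pass approach can be sketched in a few lines of pure Python (a toy illustration with made-up confidence values; the real YOLO produces its per-cell predictions with a deep convolutional network):

```python
# Toy sketch of YOLO's single-pass detection: the image is divided into
# an S x S grid, each cell predicts one bounding box with a confidence,
# and detection is just one filtering pass over those predictions --
# no sliding windows and no separate region proposals.

S = 3                      # grid size: the image is split into S x S regions
CONF_THRESHOLD = 0.5       # keep predictions above this confidence

# Hypothetical per-cell predictions: (x, y, w, h, confidence), as a
# network head might emit them in a single forward pass.
predictions = {
    (0, 0): (0.1, 0.1, 0.2, 0.3, 0.05),
    (1, 1): (0.5, 0.5, 0.4, 0.6, 0.90),   # a confident "person" box
    (2, 0): (0.8, 0.2, 0.1, 0.1, 0.30),
}

def detect(preds, threshold):
    """Single evaluation over all grid cells: filter the weighted boxes."""
    return [(cell, box) for cell, box in preds.items()
            if box[4] >= threshold]

detections = detect(predictions, CONF_THRESHOLD)
print(detections)   # only the box from grid cell (1, 1) survives
```

The speed advantage quoted above follows directly from this structure: all cells are scored in one network evaluation, instead of re-running a classifier per candidate region.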
Figure 2.1: How to process an image
Figure 2.2: Final output of Image Processing
2.6.6 Darknet with Pre-trained models
Darknet ("Open Source Neural Networks in C") is an open-source neural network framework written in C and CUDA. It is fast, easy to install, and supports both CPU and GPU computation. [6]
a. ImageNet
ImageNet is an image data set organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet, the majority of them nouns (80,000+). In ImageNet, we aim to provide on average 1,000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated. Upon its completion, we hope ImageNet will offer tens of millions of cleanly sorted images for most of the concepts in the WordNet hierarchy.
Pre-Trained Models
Here is a variety of pre-trained models for ImageNet classification. Accuracy is measured as single-crop validation accuracy on ImageNet. GPU timing is measured on a Titan X, CPU timing on an Intel i7-4790K (4 GHz).
Table 2.6.6-1: Pre-trained models and their differences

Model              Top-1   Top-5   Ops        GPU       CPU      Cfg   Weights
AlexNet            57.0    80.3    2.27 Bn    1.5 ms    0.3 s    cfg   285 MB
Darknet Reference  61.1    83.0    0.81 Bn    1.5 ms    0.16 s   cfg   28 MB
VGG-16             70.5    90.0    30.94 Bn   10.7 ms   4.9 s    cfg   528 MB
Extraction         72.5    90.8    8.52 Bn    6.4 ms    0.95 s   cfg   90 MB
b. AlexNet
The model that started a revolution! The original model was crazy with the split GPU thing, so this is the model from some follow-up work.
Top-1 Accuracy: 57.0%
Top-5 Accuracy: 80.3%
Forward Timing: 1.5 ms/img
CPU Forward Timing: 0.3 s/img
cfg file / Weight file (285 MB)
c. Darknet Reference Model
This model is designed to be small but powerful. It attains the same top-1 and top-5 performance as AlexNet but with 1/10th the parameters. It uses mostly convolutional layers without the large fully connected layers at the end. It is about twice as fast as AlexNet on CPU, making it more suitable for some vision applications.
Top-1 Accuracy: 61.1%
Top-5 Accuracy: 83.0%
Forward Timing: 1.5 ms/img
CPU Forward Timing: 0.16 s/img
cfg file / Weight file (28 MB)
d. VGG-16 The Visual Geometry Group at Oxford developed the VGG-16 model for the ILSVRC-2014 competition. It is highly accurate and widely used for classification and detection. I adapted this version from the Caffe pre-trained model. It was trained for an additional 6 epochs to adjust to Darknet-specific image preprocessing (instead of mean subtraction Darknet adjusts images to fall between -1 and 1).
Top-1 Accuracy: 70.5%
Top-5 Accuracy: 90.0%
Forward Timing: 10.7 ms/img
CPU Forward Timing: 4.9 s/img
cfg file / Weight file (528 MB)
e. Extraction I developed this model as an offshoot of the GoogleNet model. It doesn't use the "inception" modules, only 1x1 and 3x3 convolutional layers.
Top-1 Accuracy: 72.5%
Top-5 Accuracy: 90.8%
Forward Timing: 6.4 ms/img
CPU Forward Timing: 0.95 s/img
cfg file / Weight file (90 MB)
CHAPTER 3
Methodology

3.1 Introduction
First of all, before completing the project, there should be a plan. So I started with the project idea, to learn the availability of the research products. The actual project in development is costly, but I needed to complete it with materials within budget. I also needed to know which devices would be required to complete the task. In this section, I describe how I gathered the research information and the process I followed.
3.2 Research Activity
Firstly, I took up image processing as my experimental project, since it is a major part of this research. I needed to understand the whole image-processing pipeline and how it works. After a deep search into image-processing systems, I found that an API is required for the task; several are available, such as Google's TensorFlow and OpenCV. Users can also train their own models, but not every model's dataset contains all the information necessary to process an image accurately, and some models come with restrictions and long delay periods. The image processing was carried out on a Linux operating system.

After completing the image-processing experiments, I focused my development on the radar, which is built around an Arduino UNO board. The Arduino UNO was chosen because it is affordable and easy to handle, the code is simple and written in C, and its components are also cost-friendly and readily available in the market. For these reasons this device is highly recommended for this research. In my setup, the Arduino UNO board is connected to the necessary components: an ultrasonic sensor, a servo motor, a PIR sensor, and an LED light. The Arduino board acts as the controller through its microcontroller and drives the rest of the components.

To check the research analysis, I needed to visualize the output. Searching the internet, I found the Processing software, which offers an easy way to render the result; its sketch code is written in Java, and example radar sketches are already categorized on the Processing website, which gave me the information needed to sketch the radar. After completing the sketch, I connected the Arduino UNO board through the USB serial port; the sketch runs once the board is connected, and the sweep movement depends on the ultrasonic sensor.
CHAPTER 4
Proposed System Design and Analysis

4.1 Introduction
The objective of this section is to present a model design together with a short description of the components used in the research. A design for the radar and its sensors is proposed, and the output and results of the experiment are analyzed.
4.2 Model
To propose a suitable design, the necessary components were selected from products currently available in the market.
Components:
• Arduino UNO
• Ultrasonic Sensor
• PIR Sensor
• LED Light
• Servo Motor
• Monitor

4.3 Radar
The smart defense system's basic requirement is a digital radar, which can be built simply around an Arduino UNO. The necessary components are described below.
4.3.1 Arduino UNO
The Arduino Uno is a microcontroller board based on the ATmega328 (datasheet). It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz ceramic resonator, a USB connection, a power jack, an ICSP header, and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable or power it with an AC-to-DC adapter or battery to get started. [7]
Figure 4.3: Arduino UNO
4.3.2 Ultrasonic Sensor An Ultrasonic sensor is a device that can measure the distance to an object by using sound waves. It measures distance by sending out a sound wave at a specific frequency and listening for that sound wave to bounce back. By recording the elapsed time between the sound wave being generated and the sound wave bouncing back, it is possible to calculate the distance between the sonar sensor and the object. [8]
Figure 4.4: Ultrasonic Sensor
4.3.3 PIR Sensor A passive infrared sensor (PIR sensor) is an electronic sensor that measures infrared (IR) light radiating from objects in its field of view. They are most often used in PIR-based motion detectors. An individual PIR sensor detects changes in the amount of infrared radiation impinging upon it, which varies depending on the temperature and surface characteristics of the objects in front of the sensor. When an object, such as a human, passes in front of the background, such as a wall, the temperature at that point in the sensor's field of view will rise from room temperature to body temperature, and then back again. The sensor converts the resulting change in the incoming infrared radiation into a change in the output voltage, and this triggers the detection. Objects of similar temperature but different surface characteristics may also have a different infrared emission pattern, and thus moving them with respect to the background may trigger the detector as well. [9]
Figure 4.5: PIR Sensor

4.3.4 LED Light
LED lights are the latest technology in energy-efficient lighting. LED stands for 'Light Emitting Diode', a semiconductor device that converts electricity into light. [10]
Figure 4.6: LED Light
4.3.5 Servo Motor A servomotor is a rotary actuator or linear actuator that allows for precise control of angular or linear position, velocity, and acceleration. It consists of a suitable motor coupled to a sensor for position feedback. It also requires a relatively sophisticated controller, often a dedicated module designed specifically for use with servo motors. [11]
Figure 4.7: Servo Motor

4.3.6 Monitor
The monitor is a digital display that shows the graphical representation of the radar. Through it, the user can track the sweep and determine whether the radar signal is working correctly.
Figure 4.8: Monitor
4.4 Arduino Code
#include <Servo.h>    // servo control library
#include <NewPing.h>  // ultrasonic sensor library

#define TRIGGER_PIN 11   // Arduino pin tied to the trigger pin on the ultrasonic sensor.
#define ECHO_PIN 12      // Arduino pin tied to the echo pin on the ultrasonic sensor.
#define MAX_DISTANCE 150 // Maximum distance we want to ping for (in centimeters). Maximum sensor distance is rated at 400-500 cm.
#define SERVO_PWM_PIN 9  // Servo signal on Arduino pin 9.

// Sweep range, meaning -ANGLE_BOUNDS .. ANGLE_BOUNDS degrees.
#define ANGLE_BOUNDS 80
#define ANGLE_STEP 1

int sensor = 7;   // PIR sensor input pin
int led = 13;     // LED output pin
int state = LOW;  // last reported motion state
int val = 0;      // current PIR reading
int angle = 0;    // current sweep angle

// Direction of servo movement: -1 = back, 1 = forward.
int dir = 1;

Servo myservo;
NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE);

void setup() {
  Serial.begin(9600);            // initialize the serial port
  pinMode(sensor, INPUT);
  pinMode(led, OUTPUT);
  myservo.attach(SERVO_PWM_PIN); // attach servo on pin 9
}

void loop() {
  val = digitalRead(sensor);
  if (val == HIGH) {
    digitalWrite(led, HIGH);
    delay(1000);
    if (state == LOW) {
      Serial.println("motion detected");
      state = HIGH;
    }
  } else {
    digitalWrite(led, LOW);
    delay(100);
    if (state == HIGH) {
      Serial.println("motion ended");
      state = LOW;
    }
  }
  delay(50);

  // We must renormalize to positive values, because angle runs from
  // -ANGLE_BOUNDS to ANGLE_BOUNDS and the servo value must be positive.
  myservo.write(angle + ANGLE_BOUNDS);

  // Read distance from the sensor and send it to the serial port.
  getDistanceAndSend2Serial(angle);

  // Update the sweep angle, reversing direction at either bound.
  // (The listing was truncated here in the original; the reversal
  // below is the standard completion of this sweep pattern.)
  if (angle >= ANGLE_BOUNDS || angle <= -ANGLE_BOUNDS) {
    dir = -dir;
  }
  angle += dir * ANGLE_STEP;
}

// Helper used above; reconstructed, since its definition was not
// included in the excerpt. It sends the current angle and measured
// distance over serial for the Processing sketch to draw.
void getDistanceAndSend2Serial(int a) {
  unsigned int cm = sonar.ping_cm(); // distance in cm (0 = out of range)
  Serial.print(a + ANGLE_BOUNDS);
  Serial.print(",");
  Serial.print(cm);
  Serial.print(".");
}