Global Sci-Tech

ISSN 0975-9638 (Print) ISSN 2455-7110 (Online)

Global Sci-Tech: Al-Falah's Journal of Science & Technology

Volume 8, Number 1, January-March 2016

CHIEF EDITOR Prof. Z.H. ZAIDI

Global Sci-Tech, 8, 1 (1-60) January-March 2016

EDITORS

Prof. Khalil Ahmad Prof. Z.A. Jaffery Prof. Saoud Sarwar

EDITORIAL BOARD Prof. Abdullah M. Jarrah, Jordan Prof. Akhtar A. Khan, USA Prof. Ash Mohd Abbas, India Prof. Carlos Castro, USA Prof. D. Bahuguna, India Prof. H.P. Dikshit, India Prof. H.R. Khan, Germany Prof. Ishwar Singh, India Prof. Lovely Agarwal, USA Prof. M.S. Jamil Akhtar, India Prof. Mohd Zulfiquar, India Prof. Mohd. Sharif, India Prof. Mursalin, India Prof. Pankaj Maheshwari, USA Prof. R.K. Pandey, India Prof. R.M. Mehra, India Prof. Tabrez Alam Khan, India Prof. Vikram Kumar, India Prof. Zahid A. Khan, India

Available at: www.alfalahuniversity.edu.in www.kurra.co.in

Published by

Al-Falah Charitable Trust, New Delhi

EDITORS

Prof. Khalil Ahmad, Al-Falah University, Faridabad
Prof. Z.A. Jaffery, Jamia Millia Islamia, New Delhi
Prof. Saoud Sarwar, Al-Falah University, Faridabad

Global Sci-Tech: Al-Falah's Journal of Science & Technology
Volume 8, Number 1, January-March 2016

CONTENTS

1. A Review on Virtual Metrology Laboratory, by Tasleem Ahmad and Ankita Agarwal ... 1
2. Literature Review on Design, Analysis and Fatigue Life of a Mechanical Spring, by S K Jha and Mohd. Parvez ... 7
3. Software Cost Estimation, by Huda Saif ... 15
4. Security of Routing Protocols in MANETs: A Survey, by Sher Jung and Rajinder Kumar ... 22
5. Comparative Study of Agile Business Intelligence and Agile Data Warehouse, by Md. Deedar Shamsi ... 37
6. Intelligent Web Agent through Web Text Mining Techniques with Machine Learning, by Md Barique Quamar ... 50

Owned and Published by J.A. Siddiqui (Chairman, Al-Falah Charitable Trust) Global Sci-Tech- 274-A, Al-Falah House, Jamia Nagar, Okhla, New Delhi-110 025. Printed at Alpha Printers, WZ-35/C, Naraina, Ring Road, New Delhi-110 028. Editors: Khalil Ahmad, Z.A. Jaffery and Saoud Sarwar, 274-A, Al-Falah House, Jamia Nagar, New Delhi-110 025.

DOI No.: 10.5958/2455-7110.2016.00001.X
Global Sci-Tech, 8 (1) January-March 2016; pp. 1-6

A Review on Virtual Metrology Laboratory

TASLEEM AHMAD1* and ANKITA AGARWAL2

1 Mechanical and Automation Engineering Department, Al-Falah University, Faridabad
2 Mechanical Engineering Department, CET-IILM-AHL, Greater Noida
*E-mail: [email protected]

ABSTRACT

Measuring problems in engineering need no introduction. Recent developments in computing have made it clear that lack of awareness is one of the contributing factors to large measurement errors. In order to increase awareness of the effects of measurement among professionals in industry and construction, it is necessary to make them understand the concepts of measurement in engineering. To address this issue, Davidson County Community College (DCCC) and The North Carolina Advance Manufacturing Alliance have developed software tools named the "Virtual Micrometer Training Tool" and the "Virtual Caliper Training Tool" for the academic laboratory. Using these tools, a person with some knowledge of measurement in engineering can learn the fundamentals of measurement on his own. For convenience in explaining the concepts, we divide the tool into two modules. Each module contains a few experiments with a good G.U.I. In these experiments, the user finds explanations with virtual instruments, and at the end of each module the user understands the measurement technique for machine parts. This software tool aims to help students and mechanical engineers understand the principles of measurement.

Key words: measurement, virtual caliper training tool, virtual micrometer.

1. INTRODUCTION

1.1. What Is Measurement and Virtual Instrumentation?

You take measurements with instruments. Instrumentation helps science and technology progress. Scientists and engineers around the world use instruments to observe, control and understand the physical universe. Our quality of life depends on the future of instrumentation: from basic research in the life sciences and medicine, to the design, test and manufacturing of electronics, to machine and process control in countless industries. Virtual instrumentation is defined as the combination of hardware and software with industry-standard computer technologies to create user-defined instrumentation solutions.

1.2. System Components for Taking Measurements with Virtual Instruments

Different hardware and software components can make up your virtual instrumentation system; many of these options are described in more detail in the manuals. There is a wide variety of hardware components you can use to monitor or control a process or test a device. As long as you can connect the hardware to the computer and understand how it makes measurements, you can incorporate it into your system.

1.3. History of Instrumentation

As a first step in understanding how instruments are built, consider the history of instrumentation. Instruments have always made use of widely available technology. In the 19th century, the jewelled movement of the clock was first used to build analog meters. In the 1930s, the variable capacitor, the variable resistor and the vacuum tube from radios were used to build the first electronic instruments. Display technology from the television has contributed to modern oscilloscopes and analyzers. And finally, modern personal computers contribute high-performance computation and display capabilities at an ever-improving performance-to-price ratio.

2. TRAINING TOOL MODULES

For ease of understanding, the virtual training tool is divided into two modules:

2.1. The Virtual Micrometer Training Tool
2.2. The Virtual Caliper Training Tool

2.1. The Virtual Micrometer Training Tool

This simulation application is designed to help educate and familiarize students with the use of a micrometer. Simulating the operation of its real-world counterpart to exacting standards, the virtual micrometer (Fig. A1) provides the opportunity to operate a micrometer and learn its functions.

In addition to simulating the physical function of the micrometer, it also provides digital readouts of the current measured location in eighths and tenths of an inch. It features both manual operation and a snap-to-input system (Fig. A2 a), along with a barrel camera (Fig. A2 b), which allows the user to specify an exact decimal measurement. A built-in mini manual (Fig. A4) provides all the basic information needed to operate the Micrometer Training Tool successfully, making it a great stand-alone training utility or an excellent in-class educational aid.

Fig. A1. Virtual Micrometer Training Tool
Fig. A2 (a) Input Control
Fig. A2 (b) Barrel Camera


2.1.1. Basic Functions

The Manual Control panel (Fig. A3 b) lets you use the virtual micrometer as you would in real life: Coarse moves fast, while Fine moves slowly but accurately. The reset buttons snap the virtual micrometer to the fully open or fully closed position. The Cycle Object button (Fig. A3 a) runs through a list of objects you can measure, and then back to nothing (empty).

Special restrictions:
• To cycle an object, the micrometer must be fully open.
• Do not use Reset to Closed while an object is present.
• The input measure must be between 0.0 and 1.0.
• The input measure must be greater than or equal to the size of the object loaded into the micrometer.

Requirement: Compatible with iPad. Requires iOS 4.3 or later. Size: 22.1 MB.

Fig. A3 (a) Object Control Panel
Fig. A3 (b) Manual Control Panel
Fig. A4 Mini Manual

2.1.2. Least Count

The working of all micrometers is based on a screw-and-nut mechanism, which transforms the rotary movement of the micrometer screw into axial movement. The pitch of the screw thread in these micrometers is 1/40". The tapered end of the thimble carries a circular scale with 25 divisions, so when rotated the thimble moves along the barrel by 1/40" per revolution. When the thimble is rotated by one division (i.e. 1/40 of a revolution), the spindle moves axially by 1/40 × 1/25 = 0.001". This value of one division on the thimble is the smallest dimension that can be read correctly with a simple micrometer, and is called the least count.

2.2. The Virtual Caliper Training Tool

The second tool in this series, the Caliper Training Tool (CTT), is designed to help a student learn the proper use of a real caliper measuring instrument. As with the Micrometer Training Tool, the CTT is a realistic simulation of an actual caliper, working in the same manner as its real-world counterpart.

Fig. B1 Virtual Caliper Training Tool
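The least-count arithmetic of Section 2.1.2 (1/40" pitch, 25 thimble divisions) is easy to verify programmatically. The sketch below is illustrative only; the function name is ours, not part of the training tool. Exact rational arithmetic avoids any floating-point ambiguity:

```python
from fractions import Fraction

def least_count(pitch: Fraction, thimble_divisions: int) -> Fraction:
    """Axial spindle travel per thimble division.

    One full thimble revolution advances the spindle by one screw
    pitch, so one division advances it by pitch / divisions.
    """
    return pitch / thimble_divisions

# Imperial micrometer from Section 2.1.2: 1/40" pitch, 25 thimble divisions.
lc = least_count(Fraction(1, 40), 25)
print(lc)         # 1/1000
print(float(lc))  # 0.001 -> the 0.001" least count stated in the text
```

The same function applies to a metric micrometer (e.g. 0.5 mm pitch and 50 divisions giving a 0.01 mm least count).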

A free camera system allows the user to pan around the work area and zoom in or out, giving a better feeling of connection to the tool, as if it were in a real work area. The fixed gauge camera (Fig. B2 a) and rule camera (Fig. B2 b) provide consistent and accurate measurement information, no matter where the viewer's main view is focused.

Usage is simulated by free interactive measurement as well as a simulation mode in which the student can interact with two objects, each measured in different ways. A "Ready-Check" system is in place to avoid overlapping inputs and prevent errors, making the simulated caliper easy to use and more intuitive to learn.

Fig. B2 (a) Gauge Camera
Fig. B2 (b) Rule Camera

Requirement: Compatible with iPad. Requires iOS 4.3 or later. Size: 20.7 MB.

2.2.1. Least Count

The dial caliper consists of a main scale with a circular dial. The dial carries 100 divisions, equivalent to 0.1" on the main scale. The dial caliper is primarily intended for measuring inside and outside diameters of shafts, thicknesses of parts, etc., to an accuracy of 0.001".

3. MODULE "B": THE VIRTUAL CALIPER TRAINING TOOL

EXPERIMENT B1: Outer diameter measurement of a disk.

4. CONCLUSIONS

In order to increase awareness of virtual instrumentation among professionals involved in measurement, it is necessary to make them understand the concepts of virtual measurement. To address this issue, Virtual Measuring Training Tools can be developed in-house or purchased from professional suppliers. These tools are easy to use and cost-effective for trainees and users learning virtual measurement in a virtual metrology lab. In total, eight experiments were explained to illustrate the fundamentals of virtual measurement.
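As a footnote to the dial caliper's least count in Section 2.2.1: since the dial's 100 divisions span 0.1" of the main scale, each division is worth 0.001", and a full reading is a one-line composition of the two scales. The sketch below is illustrative only (the function name is hypothetical; the rounding merely suppresses binary floating-point noise):

```python
def dial_caliper_reading(main_scale_tenths: int, dial_division: int) -> float:
    """Combine the dial caliper's two scales into one reading, in inches.

    main_scale_tenths : whole 0.1" marks passed on the main scale
    dial_division     : dial pointer position, 0-99; the dial's 100
                        divisions span 0.1", so each one is 0.001"
    """
    if not 0 <= dial_division < 100:
        raise ValueError("the dial has 100 divisions, numbered 0-99")
    # round() only suppresses binary floating-point noise in the sum
    return round(main_scale_tenths * 0.1 + dial_division * 0.001, 3)

# A part that passes 7 main-scale marks with the dial pointing at 42:
print(dial_caliper_reading(7, 42))  # 0.742
```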

5. REFERENCES

[1] The Virtual Micrometer Training Tool, Davidson County Community College (DCCC) and The North Carolina Advance Manufacturing Alliance (NCAMA). Updated: Jan 23, 2013. http://advancedmanufacturingalliance.org/ Accessed: 20/03/2015 (10:30 pm).

[2] The Virtual Caliper Training Tool, Davidson County Community College (DCCC) and The North Carolina Advance Manufacturing Alliance (NCAMA). Updated: Mar 19, 2013. http://advancedmanufacturingalliance.org/ Accessed: 20/03/2015 (10:30 pm).

[3] R. K. Jain, Engineering Metrology, Khanna Publishers, Delhi, 2009. ISBN-13 9788174091536.

[4] http://www.youtube.com/watch?v=jNwiRLM3STA

[5] http://www.business-standard.com/article/pti-stories/super-technical-metrology-labinuagurated-in-delhi115032700956_1.html Accessed: 20/03/2015 (10:40 pm).

[6] http://nptel.ac.in/courses/112106138/2 Accessed: 20/03/2015 (10:40 pm).

[7] https://www.gov.uk/national-measurement-system--2 Accessed: 20/03/2015 (10:50 pm).

DOI No.: 10.5958/2455-7110.2016.00002.1
Global Sci-Tech, 8 (1) January-March 2016; pp. 7-14

Literature Review on Design, Analysis and Fatigue Life of a Mechanical Spring

S K JHA1* and MOHD. PARVEZ2

1 Project Scientist, Indian Institute of Technology Delhi, India
2 Mechanical Engineering Department, Al-Falah University, Faridabad
*E-mail: [email protected]

ABSTRACT

This paper reviews work on the design and analysis of spring performance and on fatigue-life prediction of springs, together with the analysis of spring failure. The aim is to present a general study of spring analysis. Compression springs are commonly used in I.C. engine valves, two-wheeler horns and many other applications, where they are subjected to large numbers of stress cycles leading to fatigue failure. A great deal of research has been done on improving spring performance, and the automobile industry has shown interest in replacing steel springs with composite springs. In general, fibreglass materials are found to have better strength characteristics and lower weight than steel for springs. Product development cost and time can thereby be reduced while improving the safety, comfort and durability of the vehicles produced. CAE tools have reached the point where much of the design verification is now done using computer simulation rather than physical prototype testing.

Key words: spring, finite element analysis, FEM, CAE tool.

1. INTRODUCTION

Springs are mainly used in industry as members absorbing shock energy, as well as for restoring the initial position of a part upon displacement, for initiating a given function. Compression springs are coil springs that resist a compressive force applied axially. Compression springs may be cylindrical, conical, tapered, concave or convex in shape. Coil compression springs are wound in a helix, usually out of round wire.

Every two-wheeler has a provision for sounding a horn, used to warn passers-by of the approaching vehicle, to signal for maintaining a safe distance, or to communicate for other safety reasons. The warranty/maintenance department has reported frequent complaints of these springs failing well within their intended life span. The springs must therefore be designed for reliability and to withstand cyclic loading during operation. In this work, it is proposed to carry out the design and fatigue analysis of the compression spring used in a two-wheeler horn, so as to achieve better performance in terms of longer life.

2. SOME OF THE IMPORTANT DESIGN CONSIDERATIONS IN SPRING WORK

To adhere to proper procedures, some of the important design considerations in spring work are outlined here.

2.1. Selection of Material for Spring Construction

a. Space limitations: Do you have adequate space in the mechanism to use economical materials such as oil-tempered ASTM A229 spring wire? If your space is limited by design and you need maximum energy and mass, consider materials such as music wire (ASTM A228), chrome vanadium or chrome silicon steel wire.

b. Economy: Will economical materials such as ASTM A229 wire suffice for the intended application?

c. Corrosion resistance: If the spring is used in a corrosive environment, you may select materials such as 17-7 PH stainless steel or the other stainless steels (301, 302, 303, 304, etc.).

d. Electrical conductivity: If you require the spring to carry electric current, materials such as beryllium copper and phosphor bronze are available.

e. Temperature range: Whereas low temperatures induced by weather are seldom a consideration, high-temperature applications call for materials such as 301 and 302 stainless steel, nickel chrome A286, 17-7 PH, Inconel 600 and Inconel X750. Design stresses should be as low as possible for springs designed for use at high operating temperatures.

f. Shock loads, high endurance limit and high strength: Materials such as music wire, chrome vanadium, chrome silicon, 17-7 stainless steel and beryllium copper are indicated for these applications.

2.2. General Spring Design Recommendations

a. Ends: Try to keep the ends of the spring, where possible, within such standard forms as closed loops, full loops to centre, closed and ground, and open loops.

b. Pitch: Keep the coil pitch constant unless you have a special requirement for a variable-pitch spring.

c. Spring index: Keep the spring index (mean coil diameter divided by wire diameter) between 6.5 and 10 wherever possible. Stress problems occur when the index is too low, and entanglement and waste of material occur when the index is too high.

d. Electroplating: Do not electroplate the spring unless it is required by the design application. The spring will be subject to hydrogen embrittlement unless it is processed correctly after electroplating. Hydrogen embrittlement causes abrupt and unexpected spring failures; plated springs must be baked at a specified temperature for a definite time interval immediately after electroplating to prevent it. For cosmetic and minimal corrosion protection, zinc electroplating is generally used, although other platings such as chromium, cadmium and tin are also used as the application requires. Die springs usually come from the manufacturer with coloured enamel paint finishes for identification purposes.

2.3. Special Processing Either During or After Manufacture

a. Shot peening improves surface qualities by reducing stress-concentration points on the spring wire material. This process can also improve the endurance limit and the maximum allowable stress on the spring.

b. Subjecting the spring to a certain amount of permanent set during manufacture eliminates the set problem of high energy versus mass on springs that have been designed with stresses in excess of the recommended values. This practice is not recommended for springs used in critical applications.

2.4. Stress Considerations

Design the spring to stay within the allowable stress limit when the spring is fully compressed, or "bottomed." This can be done when there is sufficient space available in the mechanism and economy is not a consideration. When space is not available, design the spring so that its maximum working stress at its maximum working deflection does not exceed 40 to 45 percent of its minimum yield strength for compression and extension springs, and 75 percent for torsion springs. Remember that the minimum allowable tensile strength differs for differing wire diameters; higher tensile strengths apply to smaller wire diameters.

3. LITERATURE REVIEW

K. Michalczyk [1] presented an analysis of the influence of an elastomeric coating on dynamic resonant stress values in a spring. The appropriate equations determining the effectiveness of dynamic stress reduction in resonant conditions, as a function of the coating parameters, were derived. It was proved that a rubber coating will not perform satisfactorily, owing to its low modulus of elasticity in shear. It was also demonstrated that, near resonance, the areas of increased stresses become wider along the successive resonances and achieve significant values even at large distances from the resonance frequencies.

B. Pyttel, I. Brunner, et al. [2] conducted long-term fatigue tests on shot-peened helical compression springs by means of a special spring fatigue testing machine at 40 Hz. Test springs were made of three different spring materials: oil-hardened and tempered SiCr- and SiCrV-alloyed valve spring steel, and stainless steel. With a special test strategy, in a single test run up to 500 springs with a wire diameter of d = 3.0 mm, or 900 springs with d = 1.6 mm, were tested simultaneously at different stress levels. Based on fatigue investigations of springs with d = 3.0 mm up to a number of cycles N = 10^9, an analysis was done after the test was continued to N = 1.5 × 10^9, and the results were compared. The influence of different shot-peening conditions was investigated in springs with d = 1.6 mm. Fractured test springs were examined under an optical microscope, a scanning electron microscope (SEM) and by means of metallographic microsections, in order to analyse the fracture behaviour and the failure mechanisms. The paper includes a comparison of the results for the different spring sizes, materials, numbers of cycles and shot-peening conditions, and outlines further investigations in the VHCF region. For comparison, the results for the springs with d = 1.6 mm and d = 3.0 mm at Ps = 98% are summarised in Fig. 1. Except for springs made of the stainless steel wire, the fatigue strength of springs with d = 3.0 mm is higher than for springs with d = 1.6 mm, although the size effect would imply higher fatigue strength for smaller wire diameters.
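Stepping back from the individual studies for a moment, the rule-of-thumb limits in Sections 2.2 and 2.4 (spring index between 6.5 and 10; working stress below 40-45 percent of minimum yield strength for compression and extension springs, 75 percent for torsion springs) are simple enough to automate as a design screening check. The sketch below is an illustration only; the function name and the example numbers are our assumptions, not taken from any of the reviewed papers:

```python
def check_spring_design(mean_coil_dia: float, wire_dia: float,
                        working_stress: float, min_yield_strength: float,
                        spring_type: str = "compression") -> list[str]:
    """Return warnings against the rule-of-thumb limits of Secs. 2.2/2.4."""
    warnings = []
    # Spring index C = D/d; recommended band 6.5 <= C <= 10 (Sec. 2.2c).
    index = mean_coil_dia / wire_dia
    if index < 6.5:
        warnings.append(f"index {index:.2f} < 6.5: expect stress problems")
    elif index > 10:
        warnings.append(f"index {index:.2f} > 10: expect entanglement/waste")
    # 45% of minimum yield for compression/extension, 75% for torsion (Sec. 2.4).
    limit = 0.75 if spring_type == "torsion" else 0.45
    if working_stress > limit * min_yield_strength:
        warnings.append(
            f"working stress {working_stress} exceeds "
            f"{limit:.0%} of yield ({limit * min_yield_strength:.0f})")
    return warnings

# Hypothetical example: D = 20 mm, d = 2.5 mm (index 8.0),
# working stress 400 MPa against a 1000 MPa minimum yield strength.
print(check_spring_design(20.0, 2.5, 400.0, 1000.0))  # [] -> passes both checks
```

A non-empty return value flags the design for a closer look; it is a screening aid, not a substitute for the detailed analyses reviewed below and above.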

Fig. 1. Comparison of fatigue strength of springs made of various spring steel wires with d = 1.6 mm and d = 3.0 mm at Ps = 98%.

Touhid Zarrin-Ghalami and Ali Fatemi [3] noted that elastomeric components have wide usage in many industries, with typical service loading that is variable-amplitude and multiaxial. In this study, a general methodology for life prediction of elastomeric components under such loading conditions was developed and illustrated for a passenger-vehicle cradle mount. Crack-initiation life prediction was performed using different damage criteria. The methodology was validated with component testing under different loading conditions, including constant- and variable-amplitude, in-phase and out-of-phase axial-torsion experiments. The optimum method for crack-initiation life prediction under complex multiaxial variable-amplitude loading was found to be a critical-plane approach based on the maximum normal strain plane, with damage quantified by the cracking energy density on that plane. The rainflow cycle-counting method and Miner's linear damage rule were used for predicting fatigue life under variable-amplitude loadings. A fracture-mechanics approach was used for total fatigue-life prediction of the component, based on specimen crack-growth data and FE simulation results. The total fatigue-life predictions showed good agreement with experiments for all of the loading conditions considered.

Wei Li, Tatsuo Sakai et al. [4] experimentally examined the very high cycle fatigue (VHCF) properties of a newly developed clean spring steel under rotating bending and axial loading. This steel exhibits a duplex S-N property only for surface-induced failure under rotating bending, whereas it exhibits a single S-N property for surface-induced failure and interior inhomogeneous-microstructure-induced failure under axial loading. Failure induced by small surface grinding defects is the predominant failure mode of this steel in the VHCF regime. The surface morphology of the interior inhomogeneous microstructure, with distinct plastic deformation, is much rougher than that of the ambient matrix, which means that the stress concentration resulting from the strain inconsistency between the microstructural inhomogeneity (soft phase) and the ambient matrix (hard phase) plays a key role in causing interior crack initiation. Considering the effect of surface compressive residual stress, the threshold stress intensity factor for surface small-defect-induced crack propagation of this steel is evaluated to be 2.04 MPa·m^1/2, which means that the short-crack effect plays a key role in causing surface small-defect-induced failure in the VHCF regime. From the viewpoint of defect distribution, surface and interior failure probabilities are equivalent under a fixed characteristic value of defect density. If the interior defect size is less than or equal to the surface defect size, surface defect-induced failure will become the predominant failure mode in the VHCF regime, especially under rotating bending.

Sid Ali Kaoua, Kamel Taibi et al. [5] presented a 3D geometric model of a twin helical spring and its finite element analysis, to study the spring's mechanical behaviour under tensile axial loading. The spiralled-shape graphic design is achieved using Computer Aided Design (CAD) tools, from which a finite element model is generated. 3D 18-dof pentahedral elements are employed to discretise the complex "wired shape" of the spring, allowing analysis of the mechanical response of the twin spiralled helical spring under an axial load. The study provides a clear match between the evolution of the theoretical and the numerical tensile and compressive normal stresses, both of sinusoidal behaviour. The overall equivalent stress isovalues increase radially from 0° to 180°, being maximal on the internal radial zone at the 180° section; on the other hand, the minimum stress level is located in the centre of the filament cross-section.

B. Pyttel, D. Schwerdt, et al. [6] give an overview of the present state of research on fatigue strength and failure mechanisms at very high numbers of cycles (Nf > 10^7). Testing facilities are listed, and a classification of materials with typical S-N curves and influencing factors such as notches, residual stresses and environment is given. Different failure mechanisms that occur especially in the VHCF region, such as subsurface failure, are explained; there, microstructural inhomogeneities and statistical conditions play an important role. A double S-N curve is suggested to describe fatigue behaviour considering the different failure mechanisms. The investigated materials are various metals with a body-centred cubic lattice, such as low- and high-strength steels and quenched and tempered steels, but also materials with a face-centred cubic lattice, such as aluminium alloys and copper. Recommendations for the fatigue design of components are given.

Stefanie Stanzl-Tschegg [7] notes that, ever since high-strength steels were found to fail below the traditional fatigue limit when loaded with more than 10^8 cycles, the investigation of the very high cycle fatigue properties of metals and alloys has received increased attention, and much research has been invested in developing methods and machinery to reduce testing times. This overview outlines the principles and testing procedures of very high cycle fatigue tests and reports findings in the areas of crack formation, non-propagating small cracks, long crack propagation and thresholds. Furthermore, superimposed and variable-amplitude loading, as well as frequency effects, are reported.

Yuxin Peng, Shilong Wang, et al. [8] describe a stranded-wire helical spring (SWHS), a unique cylindrically helical spring reeled from a strand formed of 2 to 16 wires. A parametric modelling method and the corresponding 3D model of a closed-end SWHS are presented, based on the forming principle of the spring. Using a PC + PLC based model as the motion-control system, a prototype machine tool was designed and constructed, improving the manufacture of the SWHS. Via the commercial CAD package Pro/Engineer, numerical simulation was carried out to test the validity of the parametric modelling method and the performance of the machine tool. The scheme of the tension-control system was analysed and the control mechanism set up, achieving constant tension in each wire. A human-machine interface is also proposed for the motion control and the tension control. Experimental results show that the tension-control system is well qualified, with high control precision.

A. Gonzalez Rodríguez, J.M. Chacon, et al. [9] propose an adjustable-stiffness actuator composed of two antagonistic non-linear springs. The elastic device consists of two pairs of leaf springs working in bending under large displacements. Owing to this geometric non-linearity, the global stiffness of the actuator can be adjusted by modifying the shape of the leaf springs. A mathematical model was developed to predict the mechanical behaviour of the proposal. The non-linear differential equation derived from the model is solved, obtaining large stiffness variations. A prototype of the actuator was fabricated and tested for different load cases. Experimental results were compared with numerical simulations for model verification, showing excellent agreement over a wide range of operation.

Matjaz Mrsnik, Janko Slavic, et al. [10] observe that the characterization of vibration-fatigue strength is one of the key parts of mechanical design. It is closely related to structural dynamics, which is generally studied in the frequency domain, particularly when working with vibratory loads. Fatigue-life estimation in the frequency domain can therefore prove advantageous compared with time-domain estimation, especially given the significant performance gains it offers in numerical computation. Several frequency-domain methods for vibration-fatigue-life estimation have been developed based on numerically simulated signals. This research focuses on a comparison of different frequency-domain methods with respect to real experiments that are typical in structural dynamics and the automotive industry. The methods examined are: Wirsching-Light, the α0.75 method, Gao-Moan, Dirlik, Zhao-Baker, Tovo-Benasciutti and Petrucci-Zuccarello. The experimental comparison examines the resistance to close modes, to increased background noise, to the influence of spectral width, and to multi-vibration-mode influences. Additionally, typical vibration profiles in the automotive industry are examined. For the experiment, an electro-dynamic shaker with a vibration controller was used; the reference life estimate is the rainflow-counting method with the Palmgren-Miner summation rule. It was found that the Tovo-Benasciutti method gives the best estimate for the majority of experiments, the only exception being the typical automotive spectra, for which the enhanced Zhao-Baker method is best suited. This research shows that, besides the Dirlik approach, the Tovo-Benasciutti and Zhao-Baker methods should be considered the preferred methods for fatigue analysis in the frequency domain.

Nenad Gubeljak, Mirco D. Chapetti, et al. [11] studied high-strength steel grade 51CrV4 in the thermo-mechanically treated condition, used as the bending parabolic spring of heavy vehicles. Several investigations show that the fatigue threshold for very high cycle fatigue depends on inclusion size and material hardness. In order to determine the allowed size of inclusions in spring steel, Murakami's and Chapetti's models were used. The stress loading limit with respect to inclusion size and applied stress was determined for a loading ratio R = -1, in the form of S-N curves. Experimental results and the S-N curve predicted by the model for a given inclusion size and R ratio show very good agreement. Pre-stressing and shot peening cause a higher compressive stress magnitude and, consequently, a change of the loading ratio to a more negative value, additionally extending the life time of the spring.

4. CONCLUSIONS

From the above papers it can be concluded that rubber is not a suitable coating material, owing to the too-low value of its modulus of elasticity in shear; an elastomeric coating has a positive impact on the reduction of dynamic stresses in the spring, but also contributes to a lowering of the resonant frequencies. Shorter total life was observed for out-of-phase loading compared to in-phase loading at the same level, for both constant- and variable-amplitude loadings. A finite element analysis of the mechanical behaviour of the twin helical spring under uniaxial tensile load was conducted, and the findings were compared against those obtained from a theoretical approach based on a transformation of curvilinear coordinates.

The characterization of the fatigue properties of materials and components at very high numbers of cycles necessitates a careful selection of fatigue loading machinery and measuring devices, as well as a diligent application of testing and evaluation procedures.

Among the reviewed works, a new model of an adjustable-stiffness spring was proposed: the device has four leaf springs with non-linear elastic deformations, whose geometry can be modified by means of an electric motor that adjusts the stiffness of the spring to the desired value. A mathematical model allows the leaf springs to be dimensioned for every specific purpose, and a prototype of the spring has been built and tested.

5. REFERENCES

[1] K. Michalczyk, Analysis of the influence of elastomeric layer on helical spring stresses in longitudinal resonance vibration conditions, Archives of Civil and Mechanical Engineering, (2013).

[2] B. Pyttel, I. Brunner, B. Kaiser, C. Berger and M. Mahendran, Fatigue behaviour of helical compression springs at a very high number of cycles - investigation of various influences, International Journal of Fatigue, (2013).

[3] Touhid Zarrin-Ghalami and Ali Fatemi, Multiaxial fatigue and life prediction of elastomeric components, International Journal of Fatigue, (2013).

S K Jha and Mohd. Parvez

[4] Wei Li, Tatsuo Sakai, Masami Wakita and Singo Mimura, "Influence of microstructure and surface defect on very high cycle fatigue properties of clean spring steel", International Journal of Fatigue, (2013).

[5] Sid Ali Kaoua, Kamel Taibi, Nacera Benghanem, Krimo Azouaoui and Mohammed Azzaz, "Numerical modelling of twin helical spring under tensile loading", Applied Mathematical Modelling, (2011).

[6] B. Pyttel, D. Schwerdt and C. Berger, "Very high cycle fatigue - Is there a fatigue limit?", International Journal of Fatigue, (2011).

[7] Stefanie Stanzl-Tschegg, "Very high cycle fatigue measuring techniques", International Journal of Fatigue, (2013).

[8] Yuxin Peng, Shilong Wang, Jie Zhou and Song Lei, "Structural design, numerical simulation and control system of a machine tool for stranded wire helical springs", Journal of Manufacturing Systems, (2012).

[9] A. González Rodríguez, J.M. Chacón, A. Donoso and A.G. González Rodríguez, "Design of an adjustable-stiffness spring: Mathematical modeling and simulation, fabrication and experimental validation", (2011).

[10] Matjaz Mrsnik, Janko Slavic and Miha Boltezar, "Frequency-domain methods for a vibration-fatigue-life estimation - Application to real data", International Journal of Fatigue, (2013).

[11] Nenad Gubeljak, Mirco D. Chapetti, Jozef Predan and Bojan Sencic, "Variation of Fatigue Threshold of Spring Steel with Prestressing", Procedia Engineering, (2011).

Global Sci-Tech, 8 (1) January-March 2016; pp. 15-21
DOI No.: 10.5958/2455-7110.2016.00003.3

Software Cost Estimation

HUDA SAIF
Dux Concept, B-21, Jasola Vihar, New Delhi 110025
*E-mail: [email protected]

ABSTRACT

The development of successful software depends upon accurate estimation, as various factors are responsible for the overall assessment of any project. Software cost estimation is one of the most complicated and challenging tasks in the software industry. Many estimation models have been introduced over time, which shows that estimation is not a precise science and that new methodologies need to be proposed continually. The software cost estimate of a project is vital to the acceptance or rejection of the development of a software project. Various techniques have been introduced. This paper discusses two categories of software cost estimation techniques, i.e. algorithmic and non-algorithmic. It evaluates algorithmic models such as SLIM, COCOMO and Function Points, and non-algorithmic models such as Analogy and Expert Judgement, which are used to estimate software costs. Poor planning frequently leads to project failure and considerable consequences for the project team, and software project managers should be aware of the increasing number of project failures. In this paper, several existing methods for software cost estimation are illustrated and their aspects are discussed.

Key words: COCOMO, SLIM, Algorithmic.

1. INTRODUCTION

Software engineers have long expressed concern over their inability to accurately estimate the costs associated with software development. This concern has become more crucial as development costs continue to increase. As a result, considerable research attention is now directed at gaining a better understanding of the software development process as well as constructing and developing software cost estimation techniques.

Software engineering cost estimation techniques are used for a number of purposes. These include:
• Budgeting
• Tradeoff and risk analysis
• Project planning and control
• Software improvement investment analysis

Software has one important characteristic: its cost increases with time. This section gives the introduction of the paper. Section 2 introduces estimation techniques, which consist of algorithmic and non-algorithmic models. Section 3 covers problem identification, that is, the problem definition and its solution. Section 4 presents the conclusion.

2. ESTIMATION TECHNIQUES

There are many models for software cost estimation, which are divided into two groups: algorithmic and non-algorithmic. Use of both groups is required for performing an accurate estimation: the better the requirements are known, the better the performance of the models. Some popular estimation methods are discussed below.

2.1. Algorithmic Models

These models work on a special algorithm. They take data as input and produce results by using mathematical relations. Many software estimation methods use these models. Algorithmic models are classified into several different models.

2.1.1. Source Lines of Code

SLOC is an estimation parameter that counts all commands and data definitions, but does not include instructions such as comments, blanks, and continuation lines. This parameter is usually used in an analogy-based approach to estimation: after computing the SLOC for the software, its amount is compared with other projects whose SLOC has been computed before, and the size of the project is estimated. SLOC measures the size of a project easily.
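The SLOC-based analogy approach described above can be sketched as follows. This is a minimal illustration, not the paper's method; the historical project data and the linear size scaling are invented for the example:

```python
# Illustrative sketch: estimate effort for a new project by comparing its SLOC
# against completed projects, as in SLOC-based analogy estimation.
# (size in SLOC, actual effort in person-months) -- hypothetical numbers.
HISTORY = [(12_000, 30.0), (25_000, 70.0), (48_000, 150.0), (90_000, 310.0)]

def estimate_effort(new_sloc: int) -> float:
    """Return the effort of the historical project closest in SLOC,
    scaled linearly by the size ratio."""
    closest_size, closest_effort = min(HISTORY, key=lambda p: abs(p[0] - new_sloc))
    return closest_effort * new_sloc / closest_size

print(round(estimate_effort(30_000), 1))   # nearest neighbour is the 25 kSLOC project
```

A real analogy estimator would compare several attributes of the projects, not size alone, and would examine the differences before trusting the scaled figure.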

2.1.2. Function Point Estimates

The function point method was devised in 1979 by A. J. Albrecht as a means of measuring software size and productivity. The function point estimation process consists of:

Stage 1: Compute the unadjusted (crude) function count (UFC/CFP).
Stage 2: Compute the relative complexity adjustment factor (RCAF) for the project. RCAF varies between 0 and 70.
Stage 3: Compute the number of function points (FP).

The crude function count relates to the following five software components:
• Number of user inputs.
• Number of user outputs.
• Number of user online queries.
• Number of logical files.
• Number of external interfaces.

Weighted factors are applied to each component according to its complexity.

Table 1 : CFP Calculation Form

Software system components    Simple   Average   Complex
User Inputs                      3        4         6
User Outputs                     4        5         7
User Online Queries              3        4         6
Logical Files                    7       10        15
External Interfaces              5        7        10

Relative Complexity Adjustment Factor (RCAF): grades from 0 to 5 are assigned to the 14 subjects F1-F14 that substantially affect the development effort, and RCAF = Σ si (i = 1 to 14):

F1  Reliable back-up and recovery
F2  Distributed functions
F3  Heavily used configuration
F4  Operational ease
F5  Complex interface
F6  Reusability
F7  Multiple sites
F8  Data communications
F9  Performance
F10 Online data entry
F11 Online update
F12 Complex processing
F13 Installation ease
F14 Facilitate change

The RCAF determines the technical complexity factor (TCF):

TCF = 0.65 + 0.01 × RCAF
FP = CFP × TCF = CFP × (0.65 + 0.01 × RCAF)
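The function point computation can be made concrete as below. The weights and the formulas FP = CFP × (0.65 + 0.01 × RCAF) come from Table 1 and the text; the sample component counts and the 14 RCAF grades are invented for illustration:

```python
# Function point sketch using the Table 1 weights; the sample counts are invented.
WEIGHTS = {                       # (simple, average, complex)
    "inputs":     (3, 4, 6),
    "outputs":    (4, 5, 7),
    "queries":    (3, 4, 6),
    "files":      (7, 10, 15),
    "interfaces": (5, 7, 10),
}

def crude_function_points(counts: dict) -> int:
    """counts maps component -> (n_simple, n_average, n_complex)."""
    return sum(n * w
               for comp, ns in counts.items()
               for n, w in zip(ns, WEIGHTS[comp]))

def function_points(cfp: int, rcaf_grades: list) -> float:
    rcaf = sum(rcaf_grades)        # 14 grades of 0..5, so 0 <= RCAF <= 70
    tcf = 0.65 + 0.01 * rcaf       # technical complexity factor
    return cfp * tcf

counts = {"inputs": (2, 1, 0), "outputs": (1, 2, 0),
          "queries": (3, 0, 0), "files": (1, 1, 0), "interfaces": (0, 1, 0)}
cfp = crude_function_points(counts)   # 10 + 14 + 9 + 17 + 7 = 57
fp = function_points(cfp, [3] * 14)   # RCAF = 42, so TCF = 1.07
print(cfp, fp)
```

Note how the 14 complexity grades can swing the final figure between 0.65× and 1.35× of the crude count.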

2.1.3. COCOMO (Constructive Cost Model)

The COCOMO (COnstructive COst MOdel) cost and schedule estimation model was originally published in [Boehm 1981]. It became one of the most popular parametric cost estimation models of the 1980s. But COCOMO '81, along with its 1987 Ada update, experienced difficulties in estimating the costs of software developed to new life-cycle processes and capabilities. The COCOMO II research effort was started in 1994 at USC to address issues of non-sequential and rapid development process models, reengineering, reuse-driven approaches, object-oriented approaches, etc.

COCOMO II was initially published in the Annals of Software Engineering in 1995 [Boehm et al. 1995]. The model has three submodels:
• Applications Composition
• Early Design
• Post-Architecture

These can be combined in various ways to deal with the current and likely future software practices marketplace. The Applications Composition model is used to estimate effort and schedule on projects that use Integrated Computer Aided Software Engineering tools for rapid application development. These projects are too diversified, but sufficiently simple to be rapidly composed from interoperable components. Typical components are GUI builders, database or object managers, middleware for distributed processing or transaction processing, etc., as well as domain-specific components such as financial, medical or industrial process control packages. The Applications Composition model is based on Object Points [Banker et al. 1994; Kauffman and Kumar 1993]. Object Points are a count of the screens, reports and 3GL language modules developed in the application, each count weighted by a three-level (simple, medium, difficult) complexity factor. This estimating approach is commensurate with the level of information available during the planning stages of Application Composition projects.

COCOMO II has some special features which distinguish it from other models. Its usage is very wide and its results are usually accurate.

2.1.4. Putnam's Software Life-cycle Model (SLIM)

Larry Putnam of Quantitative Software Measurement developed the Software Life-cycle Model (SLIM) in the late 1970s [Putnam and Myers 1992]. SLIM is based on Putnam's analysis of the software life-cycle in terms of a so-called Rayleigh distribution of project personnel level versus time. It supports most of the popular size estimating methods, including ballpark techniques, source instructions, function points, etc. It makes use of the Rayleigh curve to estimate project effort, schedule and defect rate. A Manpower Buildup Index (MBI) and a Technology Constant or Productivity Factor (PF) are used to influence the shape of the curve. SLIM can record and analyze data from previously completed projects, which are then used to calibrate the model; if data are not available, a set of questions can be answered to get values of MBI and PF from the existing database.
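The Rayleigh staffing curve that SLIM fits can be sketched numerically. K = 1.0 and a = 0.02 follow the example values used in the text; the formulas below are the standard Rayleigh relations implied by dy/dt = 2Kat·e^(−at²), and everything else is illustrative rather than SLIM's actual calibrated implementation:

```python
import math

# Rayleigh manpower curve (sketch). With total effort K and shape parameter a,
# staffing is dy/dt = 2*K*a*t*exp(-a*t^2) and the cumulative effort spent by
# time t is y(t) = K*(1 - exp(-a*t^2)).

def staffing(t: float, K: float = 1.0, a: float = 0.02) -> float:
    return 2.0 * K * a * t * math.exp(-a * t * t)

def cumulative_effort(t: float, K: float = 1.0, a: float = 0.02) -> float:
    return K * (1.0 - math.exp(-a * t * t))

# Staffing peaks at t = 1/sqrt(2a); SLIM associates the peak with the
# development time td.
td = 1.0 / math.sqrt(2 * 0.02)
print(round(td, 2))
```

Larger a values front-load the staffing (a faster manpower buildup), which is the behaviour SLIM's Manpower Buildup Index controls.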

In SLIM, productivity is used to link the basic Rayleigh manpower distribution model to the software development characteristics of size and technology factors. Productivity P is the ratio of the software product size S to the development effort E, that is, P = S/E. The Rayleigh curve used to define the distribution of effort is modeled by the differential equation dy/dt = 2Kat·e^(−at²). An example is given in Fig. 1, where K = 1.0, a = 0.02 and td = 0.18; Putnam assumes that the peak staffing level in the Rayleigh curve corresponds to the development time td. Different values of K, a and td give different sizes and shapes of the Rayleigh curve. Some of the Rayleigh curve assumptions do not always hold in practice (e.g., flat staffing curves for incremental development; less than t⁴ effort savings for long schedule stretch-outs). To alleviate this problem, Putnam has developed several model adjustments for these situations.

Fig. 1. The Rayleigh model

2.2. Non-Algorithmic Models

2.2.1. Analogy

In this method, several similar completed software projects are examined, and the estimation of effort and cost is made according to their actual cost and effort. Estimation based on analogy is accomplished at the total system level and at subsystem levels. By assessing the results of previous actual projects, we can estimate the cost and effort of a similar project. The steps of this method are:
• Choosing the analogy.
• Investigating similarities and differences.
• Examining the quality of the analogy.
• Providing the estimation.

2.2.2. Expert Judgment

Estimation based on expert judgment is done by getting advice from experts who have extensive experience of similar projects. This method is usually used when there are limitations in finding data and gathering requirements. Consultation is the basic issue in this method. One of the most common methods which works according to this technique is Delphi. Delphi arranges a special meeting among the project experts and tries to extract true information about the project from their debates. Delphi includes the following steps:

i. The coordinator gives an estimation form to each expert.
ii. Each expert presents his own estimation (without discussing with the others).
iii. The coordinator gathers all the forms and sums them up (including the mean or median) on a form, and asks the experts to start another iteration.
iv. Steps (ii)-(iii) are repeated until an agreement is reached.

2.2.3. Machine Learning Models

Most software cost estimation techniques use statistical methods, which are not able to present strong reasoning and results. Machine learning approaches are appropriate in this field because they can increase the accuracy of estimation by learning estimation rules and repeating the run cycles. Machine learning methods can be categorized into two methods, namely:

2.2.3.1. Neural Networks

Neural networks include several layers, where each layer is composed of several elements called neurons. Neurons, by applying the weights defined for the inputs, produce the outputs. The output is the estimated effort, which is the main goal of the estimation. A back-propagation neural network is a good choice for the software estimation problem because it adjusts the weights by comparing the network outputs and the actual results; in addition, training is done effectively.

2.2.3.2. Fuzzy Method

All systems which work on the basis of fuzzy logic try to simulate human behavior and reasoning. In many problems where decision making is very difficult and conditions are vague, fuzzy systems are an efficient tool. This technique also takes account of facts that might otherwise be ignored. There are four stages in the fuzzy approach:

Stage 1: Fuzzification: to produce trapezoidal numbers for the linguistic terms.
Stage 2: To develop the complexity matrix by producing a new linguistic term.
Stage 3: To determine the productivity rate and the effort for the new linguistic terms.
Stage 4: Defuzzification: to determine the effort required to complete a task and to compare with the existing method.

3. PROBLEM IDENTIFICATION

3.1. Problem Definition

Estimation always remains a difficult and challenging task. The review shows that the selection of techniques, models and metrics is responsible for inaccuracy and for budget and schedule overruns. These overruns affect the estimates, and that directly affects the environment. Software possesses the characteristic that the cost of any product increases with time. Various models have been developed, but no single one guarantees a correct estimate. Programmers and managers pass through the phases of the SDLC, where each process affects the others. Beyond this, there can be escape factors, namely rework on the same project, extension of the project, and maintenance.

3.2. Solution Domain

The identification of the problem clearly defines the difficulties faced by researchers. To overcome this problem there are some solutions, as reuse can become the key factor for the reduction of cost and effort.

1. The reusability concept of OOP can be utilized by using a suitable technique and metric.
2. For reusability, estimation by analogy is an appropriate technique: the past project is compared with the new project, and if the code is matched then it can be reused.
3. The LOC metric can be used to estimate software size, as other metrics can cause schedule overruns.
4. The analogy concept [13] needs a searching technique in the form of a search engine, as past projects should be placed in a code repository; through this method the historical projects can be matched with new projects.
5. If the code is matched, then it can be used in the new project to reduce the effort, which reduces the cost.
6. As the tentative cost of the new project has to be calculated and the costs of the old projects are already known, this helps to assess the reduced cost in terms of the number of lines used by the new project. Through this approach the problems can be reduced by estimating by analogy with LOC as the software size measure, and retrieval techniques like cosine similarity and Euclidean distance can be used to reduce cost and effort.

4. CONCLUSION

Software development in this era is in a demanding phase, and the estimation of cost and effort in this field always remains an open challenge and is considered a complex task. Software engineering and the SDLC have a significant presence in estimation. The review also shows that many reviewers and researchers state that the assessment of cost gradually increases or decreases; though it is an essential task, ignoring it is not acceptable. The awareness of project managers and the selection of methods are responsible for budget overruns. Each estimation technique or model, such as COCOMO or SLIM, has its own prospects of being good, and at the same time suffers from pitfalls. Estimation by algorithmic or non-algorithmic methods, top-down or bottom-up approaches, etc., shows its own significance in different ways in the field of software cost estimation. A combination of different techniques and models can be much more efficient, as a single model or method alone is not very effective in estimation. The concept of reusability helps in reducing cost and effort with the use of analogy estimation and suitable searching and retrieval techniques. Lastly, the gradual addition of new approaches and hybrid schemes of methods with models can be used.

6. As tentative cost of new project should be calculated and the old projects cost are already known, it helps to asses reduced cost in case of no. of lines used by new project. Through this approach the problems can be reduced as estimating by analogy with use of LOC as software size. And the retrieval techniques like cosine similarity Euclidean distance can be used such that reduction of cost and effort. 4. CONCLUSION Software development in this era is at demanding phase, and estimation of cost and effort in this filed always remains an open challenge and considered to be a complex task. Software engineering and SDLC have their significance presence in the estimation. The review also shows that many reviewers and researchers state that assessment of cost gradually increases or decreases. Though it is an essential task the ignorance is not acceptable. Awareness of project managers and selection of methods are responsible for over budget. Each estimation techniques like COCOMO, SLIM model have it own prospects to be good and at the same time suffered with pitfalls. Estimation by algorithmic, non algorithmic, top-bottom approach or bottom-up approach etc shows their own 20

[1] Geetika Batra and Kuntal Barua, "A Review on Cost and Effort Estimation Approach for Software Development", Department of Computer Science, L.N.C.T, Indore.

[2] Vahid Khatibi and Dayang N. A. Jawawi, "Software Cost Estimation Methods: A Review".

[3] I. Attarzadeh and Siew Hock Ow, "Proposing a New Software Cost Estimation Model Based on Artificial Neural Networks", IEEE International Conference on Computer Engineering and Technology (ICCET), 3, V3-487 (2010).

[4] I. Attarzadeh and Siew Hock Ow, "Improving the accuracy of software cost estimation model based on a new fuzzy logic model", World Applied Sciences Journal, 8(2), 117 (2010).

[5] A.J. Albrecht and J.E. Gaffney, "Software function, source lines of code, and development effort prediction: a software science validation", IEEE Transactions on Software Engineering, SE-9, 639-648 (1983).

[6] M.R. Braz and S.R. Vergilio, "Software Effort Estimation Based on Use Cases", Computer Software and Applications Conference (COMPSAC '06), 30th Annual International, (2006).

[7] B.W. Boehm and R. Valerdi, "Achievements and challenges in COCOMO-based software resource estimation", IEEE Software, 25(5), 74 (2008).

[8] B.W. Boehm, "Software Engineering Economics", Prentice Hall, (1981).

[9] N.H. Chiu and S.J. Huang, "The adjusted analogy-based software effort estimation based on similarity distances", Journal of Systems and Software, 80(4), 628 (2007).

[10] L. Fischman, K. McRitchie and D.D. Galorath, "Inside SEER-SEM", CrossTalk, The Journal of Defense Software Engineering, (2005).

[11] J.J. Cuadrado-Gallego, et al., "Analogies and Differences between Machine Learning and Expert Based Software Project Effort Estimation", Software Engineering Artificial Intelligence Networking and Parallel/Distributed Computing (SNPD), 11th ACIS International Conference on, (2010).

[12] D.D. Galorath and M.W. Evans, "Software Sizing, Estimation, and Risk Management: When Performance is Measured Performance Improves", Boca Raton, FL: Auerbach, (2006).

[13] A. Idri, S. Mbarki, et al., "Validating and understanding software cost estimation models based on neural networks", Information and Communication Technologies: From Theory to Applications, 2004 International Conference on, (2004).

[14] C. Jones, "Estimating Software Costs: Bringing Realism to Estimating (2nd ed.)", New York, NY: McGraw-Hill, (2007).

[15] W. Jianfeng, L. Shixian, et al., "Improve Analogy-Based Software Effort Estimation Using Principal Components Analysis and Correlation Weighting", Software Engineering Conference (APSEC '09), Asia-Pacific, (2009).

Global Sci-Tech, 8 (1) January-March 2016; pp. 22-36
DOI No.: 10.5958/2455-7110.2016.00004.5

Security of Routing Protocols in MANETs: A Survey

SHER JUNG* and RAJINDER KUMAR
Department of Computer Science and Engineering, Al Falah University, Faridabad, India
*E-mail: [email protected]

ABSTRACT

A mobile ad hoc network, or MANET, is a wireless network that is dynamic and can be formed without any fixed, pre-existing infrastructure, in which each node can play the role of a router. Security is an important requirement in mobile ad hoc networks. This paper deals with two aspects of security in MANETs: one is security attacks, and the other is security in the routing protocols of MANETs. The main purpose of describing the security aspects of MANETs is to implement security including authentication, confidentiality, integrity, anonymity, and availability of services for mobile users. In MANETs there are both legitimate and malicious nodes. In this paper, various important security issues in MANETs are also analyzed. The various routing attacks, such as black hole, gray hole, impersonation and worm hole, are reviewed. These attacks are among the causes of major problems in MANETs.

Key words: MANET, Security, Routing Protocol, Security attack.

1. INTRODUCTION

Many improvements in the field of wireless networks have been introduced over the last decades. As people are eager to use more secure and robust networks, it is essential to establish MANETs equipped with secure and reliable mechanisms; in order to give current and future generations access to this technology, we must facilitate a secure and reliable MANET. Ad hoc networks provide ubiquitous connectivity without the need for a fixed infrastructure [12]. This makes them a very suitable choice when communication has to be provided temporarily, such as on a battlefield, in a disaster-hit area, or to create a network between the members of an interim group. This type of network is made up of mobile nodes which are powered by a battery. The traffic in an ad hoc network passes through relay nodes, so it is desirable that a participating node forwards the packets which it receives but which are meant for some other node as destination. These nodes may have two reasons for their non-cooperation: a malicious attitude or a selfish attitude [3]. The malicious attitude of a node can be due to an opponent's intervention in the network, where it intends to sabotage the network activity. The selfish attitude may be due to various reasons: a legitimate node in the network starts avoiding the forwarding activity due to its current low power status, or it feels so over-utilized in the forwarding activity that it fears that it will drain so much power that


Fig. 1. Architecture of MANET Example

A more powerful and efficient encryption algorithm can be used to encrypt the data under transmission, so that it is practically impossible for an attacker to obtain useful information.

it will not have enough energy to send or receive its own packets in the future. The architecture of a MANET is given in Fig. 1.

2. CLASSIFICATION OF ATTACKS

According to the behavior of the ad hoc network, attacks are classified as passive attacks and active attacks, and according to their source, attacks are classified as internal attacks and external attacks. A brief description of attacks can be found in [5].

2.1 Passive and Active Attacks

A passive attack is an attack in which the original message content and context are not changed and the normal operations of the network are not disrupted; however, the requirement of confidentiality is violated [8]. Detecting this kind of attack is difficult because neither the system resources nor the critical network functions are physically affected in a way that would reveal the intrusion [5].

An active attack attempts to change the data which is being exchanged in the network, thereby destroying the functioning of the network. Active attacks can be internal or external. Most often, external attacks are initiated from outside the network, while internal attacks are initiated from a node belonging to the network itself, through impersonation, modification, fabrication or replication. Since the attacker is part of the network, internal attacks are more difficult to detect than external attacks [8]. Examples of active attacks are actions such as message modification, message replay, message fabrication and denial of service attacks [12]. There are four types of active attacks, as mentioned below:

1. Masquerade - To pretend to be someone else. In this attack a user logs in with different account credentials to get access to some authorized account. For example, if a user knows the password and user name of the system administrator, the user can pretend that he or she is the administrator.

2. Replay - To acquire information in order to resend it, or copy it elsewhere.

3. Modification - To make changes in the information or data being sent or received.

4. Denial of service - To cause a disruption of all services, or some services, of the existing network.

2.2 Internal and External Attacks

The other severe attack in ad hoc networks, known as an internal attack, originates from the second source of attacks. Internal attacks are initiated by both compromised and misbehaving nodes in the network. The objective of the mobile ad hoc network is to provide security such as data integrity, confidentiality and authentication of the data. On the other hand, misbehaving nodes can use system resources when authorized, and fail to use resources properly when they are not authorized [5]. Internal nodes might misbehave to save resources such as power, processing capability, and communication bandwidth.

External attacks are those attacks where the users are not initially authorized to participate in the network operations. These attacks cause network congestion, deny access to particular network functions, or aim at destroying the whole network

Fig. 2. Classification of Different Types of Attack



operations. Various external attacks include bogus packet injection, denial of service, and impersonation.

2.3 Other Attacks

There are various other attacks associated with MANETs. These are explained below:

1. Snooping Attacks - This type of attack arises when someone digs through your files, whether electronic or paper, in the hope of finding something interesting. People might inspect recycle bins or file cabinets looking for useful information.

2. Distributed Denial-of-Service Attacks - This attack is similar to a denial-of-service attack, except that the environment is distributed. These attacks amplify the DoS attack by using multiple computer systems to execute the attack against an organization. The attacker can load the attack tools onto dozens of computer systems that use cable modems. The attack signal is sent from the master computer to the different systems; this signal triggers those systems, which then launch an attack on the target network.

3. Back Door Attacks - This type of attack has two different meanings. The original term back door refers to troubleshooting: back doors allow examining the operation inside the program code while the program is running. In the other type of back door, the attacker gains access to the network and inserts a program or some other utility in order to create an entrance to the network. With the help of this program the user can log into the system without an authentication test, or gain administrative privileges.

4. Man-in-the-Middle Attacks - This attack is a type of access attack, and it can also be used as the starting point of a modification attack. A piece of software is placed between the server and the client; neither the server-side administrator nor the client user is aware of it. This software first intercepts the data and then forwards the information to the server. The server sends a response message to the software, thinking that it is communicating with the client.

5. Password Guessing Attacks - This type of attack occurs when an account is attacked repeatedly, by sending candidate passwords to the account in a systematic manner. Password guessing attacks are of two types:

(a) Brute Force Attacks - A brute force attack tries passwords until a successful guess occurs. It is time consuming, and a strong password makes it much harder to succeed.

(b) Dictionary Attacks - These attacks use a dictionary of commonly used words to attempt to find the user's password. They are executed using dedicated programs and tools which exist in the public domain.

3. SECURITY ATTACKS

Security is an important requirement in



malicious nodes [5]. Classic examples of attacks under message modification are impersonation attacks and packet misrouting.

mobile ad hoc networks (MANETs). This paper deals with two aspects of security in MANETs: one is security attacks, and the other is routing protocols in MANETs. The main purpose of security solutions for MANETs is to provide services such as authentication, anonymity, integrity, confidentiality and availability to the mobile user. An attacker may aim to do one of the following:

(1) Impersonation attacks - Impersonation attacks are also known as spoofing attacks. The identity of a node in the network is found out by the attacker, who can then receive the messages sent to that node. This is the first step of an intrusion into a network, carried out to mount further attacks that disrupt operations. The attacker may also reconfigure the network so that other attackers can join it, or remove security measures from the network to allow a number of subsequent attempts at various types of attacks. A captured node can also be used to obtain encryption keys and authentication information. In most cases, a malicious node can deny proper routing by injecting false routing packets into the path or by falsely updating routing information [2]. Impersonation attacks commonly occur in mobile ad hoc networks because routing packets are not authenticated, and as a result the original content can be changed.

3.1 Modification This attack is used to modify information which is sent through the communication channel between two or more users. 3.2 Interception Interception is an attack to gain unauthorized access to a system. It is a simple eavesdropping during communication such as packet sniffing or copying of information. 3.2 Fabrication It is also known as counterfeiting. It bypasses authentication checks, and is like impersonating and adds new information in a file. 3.3 Interruption

(2) Packet misrouting attacks - In this attack, malicious nodes again route traffic from their original path to reach to the undesirable destinations [9]. Here Attackers can misroute the packet in such a way so that a packet can stay longer in the network than its original lifetimes, this will results retransmission of the lost packets from the source which will required extra bandwidth and because of that the overhead in the network will increase [5].

Interception attack is an attack that is achieved due to unauthorized access to the routing messages. Attacker tries to gain access to some of the confidential information available in the network. 3.4 Modification Modification attack is used to modify information which is sent through the communication channel between two or more users. In modification attack, attacker modifies the routing messages, and hence the packets integrity in the networks is endangered. Nodes in the ad hoc networks are free to move and might include the

3.5 Interception Interception attackers are those attacks 26

Security of Routing Protocols in MANETs: A Survey

and it threatens all its neighboring nodes [5].

which are achieved due to unauthorized access to the routing messages that are not sent. In this type of attack, attacker wants to access some of the confidential information in the network. The secret information may be public key, private key or password etc. This information should be kept secure from the unauthorized user. The packet could be analyzed before passing to the destination which depicts confidentiality. Classified example under the interception attacks are wormhole attacks and black hole attacks.

3.6 Fabrication This is also known as counterfeiting. It bypasses authenti-cation checks, and is like impersonating and adds new information in a file. Attackers could launch the message fabrication attacks by introducing some big packets into the network in the same manner as in the sleep deprivation attack. The difference is that the message fabrication attacks are not only launched by malicious nodes but such attacks also may come from the misbehaving nodes within the network such as in the route salvaging attacks [5].

(1) Wormhole attacks - The wormhole attacks are those attacks in which shortcut is created with the external attacker in the ad hoc network. By using the shortcut, they could play the trick of the source node to win over the route discovery process and launch the interception attacks later on [5]. The attackers transmit the packet through the wired medium in order to create the fastest route from source to destination. If the bogus routes are consistently maintained by the wormhole node then they could deny other routes from being established. These results in a denied route and intermediate nodes are unable to participate in the network operations [5].

(1) Sleep deprivation attacks - Sleep deprivation attacks are those attacks which aim to drain off limited resources in the mobile ad hoc nodes (e.g. the battery powers), by con-stantly make them busy. These attacks are more specific to the mobile ad hoc network. In a routing protocol, sleep deprivation attacks might be launched by flooding the targeted node with unnecessary routing packets. Flooding in the sleep deprivation attacks is done by sending a huge number of route request (RREQ), route replies (RREP) or route error (RERR) packets to the targeted node. As a result, that parti-cular node is unable to participate in the routing mechanisms and rendered unreachable by the other nodes in the networks [5].

(2) Black hole attacks - Black hole attacks are those attack in which malicious nodes tricks all their neighboring nodes to attract all the routing packets to them. In the wormhole attacks, malicious nodes could launch the black hole attacks by advertising themselves to the neighboring nodes as having the most optimal route to the requested destinations. However, unlike in the wormhole attacks where multiple attackers colluded to attack one neighboring node, in the black hole attacks, only one attacker is involved
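The resource-drain logic behind the sleep deprivation attack described above can be illustrated with a toy simulation; the battery capacity and per-packet processing cost are hypothetical numbers chosen only for illustration:

```python
# Toy model of a sleep deprivation attack: the victim spends energy
# processing every routing packet, legitimate or not (numbers hypothetical).

class Node:
    def __init__(self, battery=100.0, cost_per_packet=0.5):
        self.battery = battery
        self.cost_per_packet = cost_per_packet

    def handle_rreq(self):
        """Process one route request; return False once the battery is drained."""
        if self.battery <= 0:
            return False          # node is dead: unreachable to the network
        self.battery -= self.cost_per_packet
        return True

victim = Node()
flood = 0
while victim.handle_rreq():       # attacker floods unnecessary RREQs
    flood += 1
print(f"victim exhausted after {flood} bogus RREQs")
```

The point of the model is that the victim cannot distinguish bogus route requests from real ones, so every flooded packet moves it closer to exhaustion.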

(2) Route salvaging attacks - Route salvaging attacks are launched by greedy internal nodes in the network. In a mobile ad hoc network there is no guarantee that each transmitted packet will successfully reach the desired destination node [14]; packets might not reach the destination because of network failures. Misbehaving internal nodes therefore retransmit their packets even though no sending-error message has been received.

3.7 Interruption

Interruption attacks are those in which the routing packet does not reach the destination. Routing messages are used to convey network messages, and mobile nodes are used for this purpose. Such attacks are initiated through modification, interception and fabrication attacks, and interrupt the normal operations of the ad hoc network. For instance, adversaries aiming to interrupt the availability of services in the network might destroy all paths to a particular victim node by using message modification attacks [5]. Attacks that can be classified under the interruption category are packet dropping attacks, flooding attacks and lack of cooperation attacks.

(1) Flooding attacks - Intermediate nodes might interrupt the normal packet-forwarding process by flooding the targeted destination nodes with a huge amount of unnecessary packets. Flooded nodes become unable to receive or forward any packets in the network.

(2) Lack of cooperation attacks - In these attacks, internal nodes refuse to cooperate in network operations that do not benefit them, because participating in such operations would drain their resources. Misbehaving internal nodes may use different strategies to save their limited resources: they may refuse to forward packets to other nodes, fail to send a route error report back to the sender when they cannot forward a packet, or simply turn off their devices when not sending any packets of their own [5].

4. MANET VULNERABILITIES

A system is said to be vulnerable when unauthorized data-manipulation access is granted to users, i.e. the system does not verify a user's identity before allowing data access. Some of the vulnerabilities of MANETs are described below.

4.1 Lack of centralized management

A centralized monitoring server is not available in MANETs. The absence of centralized management makes the detection of attacks very difficult, because the traffic in the network is highly dynamic and very large. Lack of centralized management also impedes trust management for nodes.

4.2 Resource availability

Availability of resources is a major issue in MANETs. Providing secure communication in an environment with a changing network topology, while protecting the network against particular attacks, has led to the development of many security schemes and corresponding architectures.

4.3 Scalability

Scalability is required in ad hoc networks because, due to the mobility of nodes, the topology, and hence the connectivity, keep changing all the time. Scalability is therefore one of the major issues concerning security: security mechanisms should be able to handle a network of varying size [22].

4.4 Cooperativeness

Various routing algorithms for MANETs assume that the nodes are cooperative and non-malicious. This provides an opportunity for a malicious attacker to become a routing agent, enabling the attacker to interrupt network operations by disobeying the specifications given in the protocol.

4.5 Dynamic topology

The dynamic topology of MANETs may disturb the trust relationship among the nodes. The trust relationship may also be disturbed if some of the nodes are detected as compromised. Distributed and adaptive security mechanisms are used to protect a network with this dynamic behavior [22].

4.6 Limited power supply

Nodes in MANETs must be aware of their limited power availability, which creates many problems. As a result, a node in a MANET may behave in a selfish manner.

4.7 Bandwidth constraint

Variable-capacity links exist in wireless networks, which are prone to noise, interference and signal attenuation effects [22].

4.8 Adversary inside the network

Mobile nodes can freely join or leave a MANET, and nodes within the network may behave maliciously. Malicious behavior of an internal node is harder to detect, so this kind of attack is more dangerous than an external attack.

4.9 No predefined boundary

In a MANET no physical boundary can be enforced. Nodes operate in a roaming fashion, joining and leaving the network, and communication takes place whenever an opponent comes within the radio range of a node. Possible attacks include tampering, replay, denial of service, eavesdropping and impersonation.

5. MANET APPLICATIONS

In ad hoc networks, devices can easily be added to and removed from the network while maintaining connectivity. The applications of MANETs are varied, ranging from small-scale to large-scale mobile networks. Typical applications include the following [22, 24, 25].

5.1 Military

Military equipment may contain some type of computer equipment. Ad hoc network (MANET) technology can maintain an information network between soldiers and military information headquarters.

5.2 Commercial Sector

MANETs can be used in emergency or rescue operations during natural calamities, e.g. fire, flood or earthquake. Rescue operations take place where a communication network must be deployed rapidly and the existing infrastructure is damaged; information must be transferred between members of a rescue team. Other commercial scenarios include, e.g., ship-to-ship communication through MANETs.

5.3 Local Level

Local-level applications are mainly seen in home networks, where information is exchanged directly, and similarly in other environments such as a sports stadium, a taxi cab or a small aircraft [22].

5.4 Personal Area Network (PAN)

Ad hoc networks can enable intercommunication between various portable devices (such as a laptop, a cellular phone and a PDA). The PAN is an application field of MANETs in the context of future pervasive computing.



5.5 MANET-VOVON

A MANET-enabled version of JXTA, a peer-to-peer open platform, is used to support a JXTA virtual network. Using MANET-JXTA, a user can set up a call when a path is available; XML messages are exchanged over a MANET-JXTA communication channel [22, 17].

6. ROUTING PROTOCOLS IN MANET

MANET routing protocols are broadly divided into two main categories: table-driven routing protocols and on-demand driven routing protocols.

6.1 Table-driven routing protocols

Table-driven routing protocols are also called proactive protocols, because they maintain up-to-date and consistent routing information about each and every node in the network. Several tables are maintained to store this routing information, and the protocols differ in the number of tables they use and in the methods by which changes are propagated through the network structure. The various protocols in this category are described as follows:

A. Destination-Sequenced Distance Vector (DSDV) - DSDV is a table-driven routing algorithm based on the Bellman-Ford routing mechanism [16], improved so as to guarantee freedom from loops in the routing tables. Each node maintains its own routing table of possible destinations in the network, recording the number of hops to each destination. A sequence number, assigned by the destination node, accompanies each entry, and routing tables are updated periodically to maintain consistency.

Each broadcast route contains the address of the destination, the number of hops required to reach the destination, the sequence number of the information received regarding that destination, and a new unique sequence number for the broadcast [20].

Fig. 3. Classification of Routing Protocols



If routes arrive with the same sequence number, the route with the smaller metric (hop count) is used, so that the path remains optimal.
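The DSDV table-update rule described above can be sketched as follows; this is a simplified toy model, and the dictionary-based table and field names are illustrative, not taken from the protocol specification:

```python
# Simplified DSDV routing-table entry for a destination: next hop, hop count
# and destination sequence number. A fresher (higher) sequence number always
# wins; on a tie, the route with fewer hops is preferred.

def update_route(table, dest, next_hop, hops, seq):
    current = table.get(dest)
    if (current is None
            or seq > current["seq"]                                  # fresher info
            or (seq == current["seq"] and hops < current["hops"])):  # shorter path
        table[dest] = {"next_hop": next_hop, "hops": hops, "seq": seq}

table = {}
update_route(table, "D", next_hop="B", hops=3, seq=10)
update_route(table, "D", next_hop="C", hops=5, seq=10)   # same seq, more hops: ignored
update_route(table, "D", next_hop="C", hops=5, seq=12)   # fresher seq: replaces entry
print(table["D"])   # {'next_hop': 'C', 'hops': 5, 'seq': 12}
```

The even/odd sequence-number convention and periodic full-table dumps of real DSDV are omitted here; the sketch only shows the preference rule.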

B. Optimized Link State Routing (OLSR) Protocol - OLSR is a proactive routing protocol. The main idea behind OLSR is the use of multipoint relays (MPRs), which provide an efficient flooding mechanism by reducing the number of transmissions [8]. OLSR uses two types of routing messages: HELLO messages and topology control (TC) messages [8].

C. Wireless Routing Protocol (WRP) - The Wireless Routing Protocol (WRP) [13, 11] is a path-finding, loop-free routing protocol. Each node in the network maintains four tables: a link-cost table, a message retransmission list table, a distance table and a routing table. Update messages, propagating link information from the link-cost table, are exchanged between neighboring nodes, and HELLO messages are exchanged periodically between neighbors.

D. Cluster Head Gateway Switch Routing (CGSR) Protocol - The idea behind the CGSR protocol is a clustered, mobile wireless network with different heuristic routing schemes [18]. In CGSR, a distributed algorithm selects a node within each cluster to act as the cluster head, and routing performance depends on how cluster heads change. Using the clustering algorithm, a cluster head changes only when two cluster heads come into contact with each other, or when a node moves out of contact of all other cluster heads [8]. Between two cluster heads there are gateway nodes that lie within the coverage of both. A packet is first sent to the source's cluster head and then forwarded to another cluster head with the help of a gateway node, and this process continues until the cluster head of the destination node is reached.

6.2 On-Demand Driven Protocols

The second category is the on-demand driven routing protocols, also known as reactive protocols. These protocols find a route to the destination only when it is needed; once a route is found, the possible route permutations are examined, and the route is then maintained by a route maintenance procedure.

A. Ad Hoc On-Demand Distance Vector (AODV) - AODV is an improvement of the DSDV algorithm described above. To minimize the number of broadcasts, a route is created on demand. For example, when source A wants to deliver a packet to destination B and no route from A to B is available, A broadcasts a route request (RREQ) message to its neighboring nodes, and the neighboring nodes rebroadcast the message to all their neighbors. This process continues until the destination node is reached. On receiving the first arriving RREQ, the destination node sends a route reply (RREP) message back to the source node along the reverse path through which the RREQ arrived.

B. Dynamic Source Routing (DSR) - In this protocol the destination is identified on the basis of source routing [19]. There are two phases: route discovery and route maintenance. When a mobile node wants to send a packet to a destination, it first checks its route table to see whether a route to the destination is available. If a route is present in the table, the mobile node uses this route to send the packet. If the node does not have such a route, it broadcasts a route request message, which contains the address of the destination node along with the address of the source node and a unique identification (UID) number. Each receiving node checks whether it knows a route to the destination; if it does not, it adds its own address to the route record and forwards the packet. A route reply is generated when the route request reaches the destination.

C. Temporally Ordered Routing Algorithm (TORA) - TORA is designed to operate in highly dynamic mobile networking environments. It provides multiple routes for each source/destination pair. The main idea of TORA is the localization of control messages to a small set of nodes near a topological change [8]; to achieve this goal, each node maintains routing information about its adjacent nodes.

D. Relative Distance Micro-discovery Ad hoc Routing (RDMAR) - The RDMAR protocol determines the distance between two nodes on the basis of a relative-distance estimation algorithm. RDMAR is a source-initiated protocol with features similar to associativity-based routing. It is based on the concept of searching for a route only within a restricted range, in order to save the flooding cost of route request messages in the network [17, 19, 20]. RDMAR assumes that all ad hoc hosts migrate at the same fixed speed; this assumption allows a good practical estimation of relative distance.

6.3 Hybrid Protocols

Hybrid routing protocols are based on the concept of aggregating nodes into groups, assigning the nodes different functionalities within a zone of the network, and partitioning the network into such zones. The most popular way of building a hierarchy is to group nodes that are geographically close to each other into clusters; each cluster has a cluster head through which the other nodes communicate. The Zone Routing Protocol (ZRP) and the Zone-based Hierarchical Link State protocol (ZHLS) belong to this category. Such protocols can provide a better trade-off between communication overhead, the total number of nodes and the frequency of topology changes.

7. CONCLUSION

In this paper we have introduced some types of security attacks in mobile ad hoc networks; in today's world, security is the major concern in the field of networking, and individuals want to keep their information secure so that no unauthorized user can access it. We then discussed the classification of attacks and the various characteristics of attacks to be considered in designing any security measure for ad hoc networks. After analyzing the behavior of the attacks, one can determine the various attacks that could be launched against an ad hoc network. We have discussed most of the common attacks against ad hoc network routing protocols, and further classified the routing protocols of mobile ad hoc networks. In future work, the various security solutions that have been proposed to secure routing protocols will be implemented and verified; the investigation will include the various techniques that might be employed in protecting against, detecting, and responding to attacks on routing messages [5].


8. FUTURE SCOPE

The proactive approach attempts to prevent security threats in the first place. Proactive protocols provide lower latency than on-demand protocols, because they maintain routes to all the nodes in the network at all times, but they suffer from excessive routing overhead. The reactive protocols, on the other hand, identify routes only when they are needed: the reactive approach seeks to detect threats a posteriori (reasoning from observed facts) and react accordingly.

• Cybercriminals continue to develop new ways to monetize victims, while nation-state hackers compromise companies, government agencies and non-governmental organizations to create espionage networks and steal information. To better understand and combat the threats associated with these changes, developed and developing countries must continue to support investigative and defensive research. Researchers from academia, the private sector and government must continue to work together, share information on emerging threats and innovate ways to combat them.

Before an attack: To defend their network, organizations must be aware of what is on it: devices, operating systems, services, applications, users and more. Additionally, they must implement access controls, enforce security policies, and block applications and overall access to critical assets. However, policies and controls are only a small piece of a bigger picture. These measures can help to reduce the attack surface, but there will always be gaps that attackers will find and exploit to achieve their objectives.

During an attack: Organizations must address a broad range of attack vectors, with solutions that operate everywhere a threat can manifest itself: on the network, on endpoints, on mobile devices and in virtual environments. With effective solutions in place, security professionals will be better positioned to block threats and help to defend the environment.

After an attack: Invariably, many attacks will be successful. This means organizations need to have a formal plan in place that will allow them to determine the scope of the damage, contain the event, remediate, and bring operations back to normal as quickly as possible.

9. REFERENCES

[1] Pradip M. Jawandhiya et al., "A Survey of Mobile Ad Hoc Network Attacks", International Journal of Engineering Science and Technology, 2(9), 4063 (2010).

[2] Latha Tamilselvan and V. Sankaranarayanan, "Prevention of Impersonation Attack in Wireless Mobile Ad hoc Networks", IJCSNS International Journal of Computer Science and Network Security, 7(3), (2007).

[3] Shin Yokoyama, Yoshikazu Nakane, Osamu Takahashi and Eiichi Miyamoto, "Evaluation of the Impact of Selfish Nodes in Ad Hoc Networks and Detection and Countermeasure Methods", Proceedings of the 7th International Conference on Mobile Data Management (MDM'06), (2006).

[4] S. Bouam and J.B. Othman, "Data Security in Ad hoc Networks using Multipath Routing", in Proc. of the 14th IEEE PIMRC, 1331 (2003).

[5] S.A. Razak, S.M. Furnell and P.J. Brooke, "Attacks against Mobile Ad Hoc Networks Routing Protocols", Network Research Group, University of Plymouth, (2003).

[6] S.Y. Ni, Y.C. Tseng, Y.S. Chen and J.P. Sheu, "The broadcast storm problem in a mobile ad hoc network", in Proc. of the 5th Annual ACM/IEEE International Conference on Mobile Computing and Networking, 151 (1999).

[7] S. Ghazizadeh, O. Ilghami, E. Sirin and F. Yaman, "Security-Aware Adaptive Dynamic Source Routing Protocol", in Proc. of the 27th Conference on Local Computer Networks, 751 (2002).

[8] Rashid Hafeez Khokhar, Md Asri Ngadi and Satria Mandala, "A Review of Current Routing Attacks in Mobile Ad Hoc Networks", (2002).

[9] C.K. Toh, "Ad Hoc Mobile Wireless Networks: Protocols and Systems", Prentice Hall, (2002).

[10] S. Rajavaram, H. Shah, V. Shanbhag, J. Undercoffer and A. Joshi, "Neighborhood Watch: An Intrusion Detection and Response Protocol for Mobile Ad Hoc Networks", Student Research Conference, University of Maryland at Baltimore County (UMBC), (2002).

[11] Y. Yorozu, M. Hirano, K. Oka and Y. Tagawa, "Electron spectroscopy studies on magneto-optical media and plastic substrate interface", IEEE Transl. J. Magn. Japan, 2, 740 (1987) [Digests 9th Annual Conf. Magnetics Japan, 301, (1982)].

[12] Joseph Macker and Scott Corson, "Mobile ad-hoc networks (MANET)", http://www.ietf.org/proceedings/01dec/183.htm, (2001).

[13] Jyoti Raju and J.J. Garcia-Luna-Aceves, "A Comparison of On-Demand and Table-Driven Routing for Ad Hoc Wireless Networks", in Proceedings of IEEE ICC, (2000).

[14] C. Perkins and E. Royer, "Ad Hoc On-Demand Distance Vector Routing", 2nd IEEE Workshop on Mobile Computing Systems and Applications, (1999).

[15] C. Perkins, E. Belding-Royer and S. Das, "Ad Hoc On-Demand Distance Vector (AODV) Routing", IETF RFC 3561, (2003).

[16] L.R. Ford Jr. and D.R. Fulkerson, "Flows in Networks", Princeton Univ. Press, (1962); David Doermann, "The Indexing and Retrieval of Document Images: A Survey", (1998).

[17] V.D. Park and M.S. Corson, "A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks", Proc. INFOCOM '97, (1997).

[18] C.-C. Chiang, "Routing in Clustered Multihop, Mobile Wireless Networks with Fading Channel", Proc. IEEE SICON '97, 197-211 (1997).

[19] D. Johnson and D. Maltz, "Dynamic Source Routing in Ad Hoc Wireless Networks", in Mobile Computing, T. Imielinski and H. Korth, Eds., 153 (1996).

[20] C.E. Perkins and P. Bhagwat, "Highly Dynamic Destination-Sequenced Distance-Vector Routing (DSDV) for Mobile Computers", Comp. Commun. Rev., 234 (1994).

[21] Chethan Chandra S. Basavaraddi and N.B. Geetha, "Performance Analysis of Mesh and Position Based Hybrid Routing in MANET: A Comprehensive Study", Int. J. Computer Technology & Applications, 3(2), 804-812.

[22] K. Lakshmi, S. Manju Priya, A. Jeevarathinam, K. Rama and K. Thilagam, "Modified AODV Protocol against Blackhole Attacks in MANET", International Journal of Engineering and Technology, 2(6), 444 (2010).

[23] Priyanka Goyal, Vinti Parmar and Rahul Rishi, "MANET: Vulnerabilities, Challenges, Attacks, Application", IJCEM International Journal of Computational Engineering & Management, 11 (2011), ISSN (Online) 2230-7893, www.IJCEM.org.

[24] Imrich Chlamtac, Marco Conti and Jennifer J.-N. Liu, "Mobile ad hoc networking: imperatives and challenges", School of Engineering, University of Texas at Dallas, Dallas, TX, USA, (2003).

[25] M. Frodigh, P. Johansson and P. Larsson, "Wireless ad hoc networking: the art of networking without a network", Ericsson Review, 4, 248 (2000).

[26] Hao Yang, Haiyun Luo and Fan Ye, "Security in mobile ad-hoc networks: Challenges and solutions", IEEE Wireless Communications, 11(1), 38 (2004).

[27] Luis Bernardo, Rodolfo Oliveira, Sérgio Gaspar, David Paulino and Paulo Pinto, "A Telephony Application for Manets: Voice over a MANET-Extended JXTA Virtual Overlay Network".

[28] A. Mishra and K.M. Nadkarni, "Security in wireless ad hoc networks", in The Handbook of Ad Hoc Wireless Networks (Chapter 30), CRC Press LLC, (2003).

[29] Gianni A. Di Caro, Frederick Ducatelle and Luca M. Gambardella, "A simulation study of routing performance in realistic urban scenarios for MANETs", in Proceedings of ANTS 2008, 6th International Workshop on Ant Algorithms and Swarm Intelligence, Brussels, Springer, LNCS 5217, (2008).

[30] C. Perkins, "Request for Comments (RFC) 3561", Category: Experimental, Network Working Group, (2003).

[31] Satoshi Kurosawa, Hidehisa Nakayama, Nei Kato, Abbas Jamalipour and Yoshiaki Nemoto, "Detecting Blackhole Attack on AODV-based Mobile Ad-Hoc Networks by Dynamic Learning Method", International Journal of Network Security, 5(3), 338 (2007).

[32] H. Deng, W. Li and D.P. Agrawal, "Routing Security in Ad hoc Networks", IEEE Communications Magazine, 40(10), 70 (2002).

[33] M.A. Shurman, S.M. Yoo and S. Park, "Black hole attack in wireless ad hoc networks", in Proceedings of the 42nd ACM Southeast Conference (ACMSE'04), 96 (2004).

[34] Sanjay Ramaswamy, Huirong Fu, Manohar Sreekantaradhya, John Dixon and Kendall Nygard, "Prevention of Cooperative Black Hole Attack in Wireless Ad Hoc Networks", International Conference on Wireless Networks (ICWN'03), Las Vegas, Nevada, USA, (2003).

[35] Y. Hu, A. Perrig and D. Johnson, "Ariadne: A Secure On-demand Routing Protocol for Ad Hoc Networks", in Proceedings of ACM MOBICOM'02, (2002).

[36] K. Sanzgiri, B. Dahill, B.N. Levine, C. Shields and E.M. Belding-Royer, "A Secure Routing Protocol for Ad Hoc Networks", in Proceedings of ICNP'02, (2002).

[37] Y. Hu, D. Johnson and A. Perrig, "SEAD: Secure Efficient Distance Vector Routing for Mobile Wireless Ad Hoc Networks".

[38] D. Johnson and D. Maltz, "Dynamic Source Routing in Ad Hoc Wireless Networks", in Mobile Computing, T. Imielinski and H. Korth, Eds., 153 (1996).

[39] J. Broch, D.A. Maltz and D.B. Johnson, "A Performance Comparison of Multi-hop Wireless Ad Hoc Network Routing Protocols", Proc. ACM/IEEE MOBICOM'98, 85 (1998).

[40] C.E. Perkins and P. Bhagwat, "Highly Dynamic Destination-Sequenced Distance-Vector Routing (DSDV) for Mobile Computers", Comp. Commun. Rev., 234 (1994).

Global Sci-Tech, 8 (1), January-March 2016; pp. 37-49; DOI No.: 10.5958/2455-7110.2016.00005.7

Comparative study of Agile Business Intelligence and Agile Data Warehouse

MD. DEEDAR SHAMSI*

G.B. Pant Institute of Technology, Okhla, New Delhi-110 020
*E-mail: [email protected]

ABSTRACT

In a rapidly changing, business-oriented economy, Business Intelligence solutions have to become more agile. This paper attempts to compare and contrast Agile BI and Agile DW on the basis of different analytical Agile-driven technologies. Agile software development is a set of principles and practices that was influenced by practitioners of Extreme Programming, SCRUM, DSDM, Adaptive Software Development and others. It was driven by the need for an alternative to documentation-driven, heavyweight software development processes. Agile development processes can take a lot of the pain out of building data warehouses and enable project teams to deliver functionality, and business value, on a rolling basis. Rapidly gaining in popularity, the Agile approach to data warehousing solves many of the thorny problems typically associated with data warehouse development, most notably high costs, low user adoption, ever-changing business requirements and the inability to adapt rapidly as business conditions change. The Agile approach can be used to develop any analytical database; the two mechanisms that use the Agile approach are the Data Warehouse and Business Intelligence. This paper also briefly looks at technologies that can be used to enable an agile BI solution.

Key words: Agile Analytics, Agile Project Management Methodology, Agile techniques, Data integration, Scrum.

1. INTRODUCTION

documentation driven, heavyweight software development processes.

Agile methodologies are becoming increasingly popular for software development projects of all kinds, but what considerations must be made when developing business intelligence applications? Agile Analytics is defined as an approach to Business Intelligence and Data Warehousing. Agile software development is a set of principles and practices that was influenced by practitioners of Extreme Programming, SCRUM, DSDM, Adaptive Software Development and others. It was driven out of the need for an alternative to

Agile software development refers to a group of software development methodologies that are based on similar principles[1]. Agile methodologies generally promote: 1. A project management process that encourages frequent inspection and adaptation; 2. A leadership philosophy that encourages team work, self-organization and accountability; 3. A set of engineering best practices that allow for rapid delivery of high-quality 37

Md. Deedar Shamsi

software; 4. And a business approach that aligns development with customer needs and company goals.

traditional BI architecture are: ETL tools, an enterprise data warehouse with metadata repository and business analytics (Figure1).

Business Intelligence (BI) was defined in different ways. The Data-Warehousing Institute has defined Business Intelligence as "the tools, technologies and processes required to turn data into information and information into knowledge and plans that optimize business actions" [6]. Turban has defined BI as "a broad category of applications and techniques for gathering, storing, analyzing and providing access to data to help enterprise user make better business and strategic decisions." [16]. The range of capabilities that can be defined as business intelligence is very broad. Most enterprises have hundreds of internal and external data sources such as: databases, e-mail archives, file systems, spreadsheets, digital images, audio files and more. Traditional Business Intelligence systems use a small fraction of all the data available. Also, traditional BI systems use only structured data. The core components of a

1.1 Performance Characteristics

The main barriers reported are: 49% lack understanding of the benefits, 47% lack IT resources, and 43% find that end-user needs are not clearly defined [5]. Aberdeen's Maturity Class Framework uses three key performance criteria to distinguish the Best-in-Class in the industry:

1. Availability of timely management information: IT should be able to provide the right and accurate information in a timely manner so that business managers can make sound business decisions. "This performance metric captures the frequency with which business users receive the information they need in the timeframe they need it" [5].

2. Average time required to add a column to an existing report: sometimes new columns need to be added to an existing report to see the required information. "If that information cannot be obtained within the time required to support the decision at hand, the information has no material value. This metric measures the total elapsed time required to modify an existing report by adding a column" [5].

3. Average time required to create a new dashboard: this metric considers the time required to access any new or updated information and measures the total elapsed time required to create a new dashboard [5].

Fig. 1. Agile Business Intelligence Architecture, showing the Information Quality Management (IQM) components
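As an illustration, the second Aberdeen criterion can be measured directly from a change-request log. The paper does not prescribe any implementation, so the sketch below is a hypothetical one and the timestamps are invented:

```python
from datetime import datetime

# Hypothetical change-request log: (requested, delivered) timestamps for
# "add a column to an existing report" requests. Illustrative data only.
requests = [
    (datetime(2016, 1, 4, 9, 0), datetime(2016, 1, 4, 16, 30)),
    (datetime(2016, 1, 11, 10, 0), datetime(2016, 1, 13, 12, 0)),
    (datetime(2016, 1, 18, 14, 0), datetime(2016, 1, 19, 9, 0)),
]

# Aberdeen's second criterion: total elapsed time to modify an existing
# report, averaged over all requests.
elapsed_hours = [(done - asked).total_seconds() / 3600 for asked, done in requests]
avg_hours = sum(elapsed_hours) / len(elapsed_hours)
print(f"Average time to add a column: {avg_hours:.1f} hours")
```

The third criterion (time to create a new dashboard) can be tracked the same way, with a different event log.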

Comparative study of Agile Business Intelligence and Agile Data Warehouse

2. PERFORMANCE ANALYSIS

2.1 Five Steps to Agile BI

1. Agile Development Methodology: there is a "need for an agile, iterative process that speeds the time to market of BI requests by shortening development cycles" [8].

2. Agile Project Management Methodology: continuous planning and execution. Planning is done at the beginning of each cycle, rather than once at the beginning of the project as in traditional projects. In an Agile project, the scope can be changed at any time during the development phase.

3. Agile Infrastructure: the system should have virtualization and horizontal scaling capability. This provides the flexibility to modify the infrastructure easily and makes it possible to maintain near-real-time BI more easily than the standard Extract, Transform, Load (ETL) model [8].

4. Cloud & Agile BI: many organizations are now adopting cloud technology as a cheaper alternative for storing and transferring data. Companies in the initial stages of implementing Agile BI should consider the cloud, as cloud services can now host BI and ETL software [8].

5. IT Organization & Agile BI: to achieve agility and maximum effectiveness, the IT team should interact with the business, address business problems directly, and operate as a strong, cohesive team [8].

Fig. 2. BI performance analysis and efficiency (source: The Aberdeen Group BI performance analysis)


3. MEASURES OF POPULARITY

3.1 How is Agile different from Waterfall and Spiral methodologies?

But before we answer the question of whether an Agile approach can be used with EDW/BI, let us first examine the major differences among these three categories of methodologies.

Waterfall methodologies were developed in the 1970s for managing operational systems projects. These methodologies are organized by phases that follow traditional engineering practices: planning, requirements, analysis, design, construction, and deployment. Each phase must be completed before the next phase can begin. The majority of development time is spent on paper, creating a requirements document, external design models, internal design specifications, and so on. Even with operational stovepipe systems, this type of methodology has been a problem because estimates are highly unreliable: each system is different, each project team is different, and each set of users is different. In addition, users do not see their system until acceptance testing, at which time they frequently notice errors and omissions that have to be corrected with future enhancements.

Spiral methodologies became popular in the 1990s to support building large systems iteratively. They are popular in enterprise data warehousing, where the EDW is built one BI application at a time. This type of methodology has an enterprise perspective, which means that spiral EDW methodologies include many additional tasks, some of which involve stakeholders other than the primary user of the BI application. But, with the exception of developing the EDW in iterations, spiral methodologies basically still follow a waterfall approach within the iterations.

Agile methodologies started to become widely published and promoted in the 2000-2001 timeframe by developers of operational systems. These methodologies do not treat a service request for a new system as the final set of requirements. Instead, the developers view the service request as a vision for a system that may or may not end up looking the same when it is finally delivered. With the participation of the user, the developers dissect the requirements into desired features, which are put on a product backlog. The user (not IT) controls the product backlog and can add or remove features at will. The user is also responsible for prioritizing the features on the product backlog. The developers select a few features from the prioritized list for the first (or next) sprint (software release). Rather than producing estimates that are cast in concrete, the developers speculate how long it might take to turn the selected features into working code based on what is known to them at that point in time. Progress is measured by the number of features delivered, not by the number of tasks performed. When it becomes evident that the trajectory of effort used so far will miss the deadline, the project is immediately re-scoped.
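The backlog mechanics described above can be sketched in a few lines. The class, feature names and priorities below are illustrative assumptions, not taken from the paper:

```python
# The user owns and prioritizes the backlog; developers pull a few features
# per sprint; progress is counted in delivered features, not completed tasks.

class ProductBacklog:
    def __init__(self):
        self.features = []          # (priority, feature), kept sorted

    def add(self, feature, priority):
        """The user (not IT) adds features and sets their priority."""
        self.features.append((priority, feature))
        self.features.sort()        # lower number = higher priority

    def pull_for_sprint(self, n):
        """Developers select the top-n features for the next sprint."""
        selected = [f for _, f in self.features[:n]]
        del self.features[:n]
        return selected

backlog = ProductBacklog()
backlog.add("export to spreadsheet", 2)
backlog.add("login screen", 1)
backlog.add("audit trail", 3)

sprint = backlog.pull_for_sprint(2)
delivered = ["login screen"]        # suppose one feature ships this sprint
progress = len(delivered)           # measured in features, not tasks
print(sprint, "->", progress, "feature(s) delivered")
```

The point of the sketch is the ownership split: only the user mutates priorities, and the progress figure ignores how many internal tasks were performed.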

3.2 Scrum and XP

Two of the most popular Agile software development methodologies are Scrum and XP.

Scrum is a term borrowed from rugby, and XP stands for eXtreme Programming. The authors of these methodologies, as well as most other prominent Agile practitioners, are project managers and seasoned developers with decades of experience in developing stand-alone operational systems, most written with object-oriented code. They are not EDW/BI practitioners, and thus Scrum and XP were not developed specifically for enterprise data warehousing. Writing software to create stand-alone operational systems does not require data integration efforts such as data standardization, enterprise data modeling, ratification of business rules by major business stakeholders, coordinated ETL data staging, common metadata, and collectively architected (designed) databases. Instead, the basic premise behind Scrum and XP is to write and deliver quality software (code) in short prescribed intervals, inherently without significant regard for data standardization and data architecture from an enterprise perspective.

3.3 Can Agile be used for BI?

That brings us to the next question: can Agile be used for BI? Well, that depends on what you call BI. A growing number of companies claim to be using Agile methodologies on BI projects. This analysis shows that most of those companies restrict their development effort mostly to writing code for stand-alone BI applications. In other words, the BI application developers do not deal with data standardization and integration, or at least not very effectively and rarely from an enterprise perspective. Many complain about dirty data negatively affecting their aggressive deadlines, evidently not realizing that cleaning up dirty data, standardizing data, and integrating data across the enterprise are, or should be, three key objectives of delivering BI.

However, as long as the primary goal is to build separate BI solutions for individual users or departments, the popular Agile software development methodologies like Scrum or XP can certainly be made to work. Some BI teams wait for the data to be ready in the EDW (placed there by a separate EDW team) before they develop selected BI features using Scrum or XP. Many companies using this approach have gone so far as to separate their BI team from their EDW team and have the two teams report to different managers. This organizational change not only disrupts the cohesion of the total EDW/BI effort, but also creates unfair competition and ill feeling between the two teams. I hear BI teams complain bitterly about their EDW team being too slow, and I hear EDW teams complain bitterly about their BI team not understanding their data efforts and thus having unreasonable expectations about the speed at which cleansed, standardized, and integrated data can be loaded into the EDW. I also see many BI teams trying to force their counterpart EDW teams to adopt Scrum or XP. Most EDW teams resist, recognizing that their projects are data-intensive rather than code-intensive and that the prescribed Agile rules in Scrum and XP cannot work for them. Other EDW teams try to adhere to the strict rules of these Agile methodologies and fail.

3.4 Can Agile be used for EDW?

Can Agile be used for enterprise data warehousing? Let us first agree on what we mean by enterprise data warehousing. If your definition of BI includes building or expanding the necessary EDW components and having that effort be part of every project that delivers BI applications, and if you want to apply an Agile method to building the entire end-to-end solution (including data cleansing, data standardization, enterprise data modeling, coordinated EDW ETL, and a metadata repository), then, in my opinion, the popular Agile software development methodologies Scrum and XP will not work. Remember that these methodologies were never designed for data-centric business integration projects. However, that does not mean that you cannot go Agile.

3.5 Extreme Scoping™

Extreme Scoping™ uses all of the Agile principles that can be applied to business integration projects and discards those that do not apply. It does not seek to replace the Agile coding methodologies Scrum and XP. Instead, it provides the necessary Agile EDW umbrella for the entire project effort, not just the coding.

Extreme Scoping™ has several distinct project planning steps, which are performed by a four- to five-member core team, not by a single project manager. The core team members start by reviewing their EDW methodology and selecting tasks into a preliminary work breakdown structure (WBS). Using this WBS as a guide, they create a high-level project roadmap to give an understanding of the overall effort, resources, cost, schedule, risks, and assumptions for the entire new BI application. This is necessary in order to arrive at the right number of software releases, the right sequence of those releases, the dependencies among the requirements, and thus the deliverables and scope for each release. Without this crucial step, the process of breaking an application into software releases would be completely arbitrary.

Once the core team members are comfortable with the scope and sequence of the proposed software releases and are confident that each software release can be accomplished within the allotted timebox (deadline), they create a detailed project plan with weekly milestones for the first software release. Starting with the deadline and working backwards, the core team members determine how far along they must be the week before the deadline in order to make the deadline. Put another way, they determine what state the project or deliverable must be in the week before the deadline. They repeat this process by backing up another week, and another, and so on. If they pass the project start date, the core team members must determine whether the scope is too large for the release deadline or whether the activities between the milestones are overestimated.

After the project activities for the first software release are organized into weekly milestones, the core team members self-organize into the appropriate number of work teams. Knowing the makeup of the work teams and the weekly milestones, the core team members decide on the detailed tasks and task deliverables for each milestone, referring to the work breakdown structure they created earlier. They also decide which tasks and deliverables are assigned to which person on which work team. The detailed daily task assignments and task deliverables are documented on a whiteboard, a flip chart, a spreadsheet, or other informal media, which can be modified quickly and easily. The core team members use this informal detailed project plan on a daily basis to guide the day-to-day work activities, manage the change control process during prototyping, and monitor the progress of the project. They do not use this detailed plan to report project status to management. Instead, they create a short one-page Milestone Chart showing whether weekly milestones have been completed, delayed, or eliminated.
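The backward milestone planning used in Extreme Scoping™ can be sketched as follows. The dates and milestone states are illustrative assumptions, not taken from the paper:

```python
from datetime import date, timedelta

# Start at the release deadline and step back one week at a time, recording
# the state the project must be in at each weekly milestone.
deadline = date(2016, 3, 25)
weekly_states = [          # state required 1, 2, 3, ... weeks before deadline
    "acceptance testing complete",
    "ETL jobs loaded into test environment",
    "target tables and mappings built",
    "source data profiled and cleansed",
]

milestones = []
for weeks_back, state in enumerate(weekly_states, start=1):
    milestones.append((deadline - timedelta(weeks=weeks_back), state))

project_start = date(2016, 2, 22)
earliest_milestone = min(d for d, _ in milestones)
if earliest_milestone < project_start:
    # Per the method: the scope is too large for the release deadline,
    # or the activities between the milestones are overestimated.
    print("Scope too large for the release deadline")
for d, state in sorted(milestones):
    print(d, "-", state)
```

If backing up week by week crosses the project start date, the sketch reports exactly the decision point the text describes: shrink the scope or re-estimate the activities.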


If the first software release was completed on time and without problems, the core team members can plan the second software release in the same manner. However, if there were problems with the first software release, such as underestimated tasks, incomplete deliverables, friction on the core team, or constant adjustments to the scope, the core team members must review and adjust the high-level project roadmap produced in the first step. They must revisit their understanding of the overall effort, resources, cost, schedule, risks, and assumptions for the entire application, and then make the necessary adjustments to the remaining software releases. That can include changing the scope for the second software release, changing the number of software releases, reprioritizing and changing the sequence of the software releases, changing the deliverables for one or more software releases, changing the deadlines, or changing resources. Only then can the core team proceed with the detailed planning of the second software release.

3.6 Twelve Agile Principles

The twelve principles of the Agile Manifesto can be grouped under Process, People, and Other.

3.6.1 Process

1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software [9].

2. Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage [9].

3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale [9].

4. Working software is the primary measure of progress [9].

3.6.2 People

5. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely [9].

6. Business people and developers must work together daily throughout the project [9].

7. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done [9].

3.6.3 Other

8. The most efficient and effective method of conveying information is face-to-face conversation [9].

9. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior [9].

10. Continuous attention to technical excellence and good design enhances agility [9].

11. Simplicity, the art of maximizing the amount of work not done, is essential [9].

12. The best architectures, requirements, and designs emerge from self-organizing teams [9].

3.7 Data Virtualization

Data virtualization increases agility by decoupling reports from data structures, by integrating data in an on-demand fashion, and by managing metadata specifications centrally without having to replicate them. This makes data virtualization an ideal technology for developing agile business intelligence systems.
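The decoupling behind data virtualization can be illustrated with a toy layer that hides each store's native API behind one query interface. This is a sketch of the concept only, not of any particular data virtualization product; all names are invented:

```python
# Two "stores" with deliberately different native APIs.
class CsvStore:
    def __init__(self, rows): self.rows = rows
    def read(self): return self.rows

class SqlStore:
    def __init__(self, rows): self.rows = rows
    def select_all(self): return self.rows      # different native API

class VirtualizationLayer:
    """Unified, abstracted, encapsulated view over heterogeneous stores."""
    def __init__(self):
        self.tables = {}
    def register(self, name, store, reader):
        self.tables[name] = (store, reader)
    def query(self, name):
        store, reader = self.tables[name]
        return reader(store)                    # consumer never sees the store

layer = VirtualizationLayer()
layer.register("customers", CsvStore([{"id": 1, "name": "Acme"}]),
               lambda s: s.read())
layer.register("orders", SqlStore([{"id": 7, "customer_id": 1}]),
               lambda s: s.select_all())

# To the data consumer it feels as if one large database is being accessed:
print(layer.query("customers"), layer.query("orders"))
```

Reports written against `layer.query(...)` keep working if a store is swapped for another implementation, which is the decoupling the text credits for the increased agility.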


When data virtualization is applied, an abstraction and encapsulation layer is provided that hides from applications most of the technical aspects of how and where data is stored. Because of that layer, applications do not need to know where all the data is physically stored, how the data should be integrated, where the database servers run, what the required APIs are, which database language to use, and so on. When data virtualization technology is deployed, every application feels as if one large database is being accessed. The concepts of data consumer and data store are key to the definition: data virtualization is the technology that offers data consumers a unified, abstracted, and encapsulated view for querying and manipulating data stored in a heterogeneous set of data stores. Typical business intelligence application areas that experience an increased level of agility are virtual data marts, self-service reporting and analytics, operational reporting and analytics, interactive prototyping, virtual sandboxing, collaborative development, and disposable reports.

To summarize, deploying data virtualization in BI systems leads to more lightweight architectures with a smaller data footprint, resulting in more agile systems.

4. COMPLEXITY ANALYSIS

4.1 BI Performance Reports and Analysis

Most companies today use Business Intelligence (BI) reports and dashboards to measure business performance at strategic, tactical and operational levels. However, business now demands much more than just descriptive BI. Many organizations want to go beyond this by implementing predictive and prescriptive analytics. To that end, many companies are establishing advanced analytics teams in business departments to help develop new advanced and predictive analytics that can be deployed in real-time and historical environments to produce new insights for competitive advantage. This is happening both in traditional Data Warehouse environments and in new Big Data environments, where data scientists are analyzing new multi-structured data sources to produce new models and insights. Business analysts are also using these analytics in visual data discovery tools to help predict and forecast the future. In addition, analytics are being embedded in applications to deliver recommendations, alerts and forward-looking insights within processes and applications in order to optimize business operations.

Another challenge is the number of data sources that companies are now accessing to capture data for analysis and produce deeper insights. Clickstream data, social network interaction data, weather data, sensor data, location data and news feeds are just a few of these. The question is what companies should do with this data. How should it be organized and stored? The emergence of Hadoop has seen data cleansing and integration being offloaded from Data Warehouses to cheaper Hadoop environments, but is this at the expense of data governance? How do you govern data in Big Data environments and traditional Data Warehouses with confidence? And what if structured data is brought into Hadoop?

4.2 Recreating BI and DW: new architecture and advanced technologies

The original Data Warehouse architecture of the 1980s separated "decision support" from day-to-day business operations [15]. This supported the decision-making needs of the time and was easily implemented on then-emerging technologies, such as relational databases. However, today's business needs fully integrated processes, closely linking information and activities from all areas of the enterprise. Decision-making and action-taking are tightly bound. Business cycles are dramatically shorter and span company boundaries. So far, enterprise IT, including Business Intelligence, has responded slowly and incoherently. Business Integrated Insight (BI2) is a new architecture that reintegrates all decision-making and action-taking into the overall processes of the business. Starting from the Data Warehouse, it incorporates a variety of technological advances, such as SOA, distributed access, Web technologies, content management and specialised relational databases. BI2 thus provides a comprehensive structure for the full enterprise IT integration demanded by modern businesses. In addition, it directly addresses current Data Warehousing issues such as operational BI, executive decision support, comprehensive information discovery and innovation, and enterprise-wide decision management. And, although novel, BI2 is designed as an evolution from current Data Warehouse, operational and collaborative technologies.

According to some interpretations, the end of the Mayan Calendar on 21 December, 2012 signified the end of the world. And how business, and the world in general, has changed in the last decade! Dotcom boom to bust in the early years of the millennium, and the subsequent emergence of the web mega-fauna: Google, Facebook, eBay and more. The financial crisis of 2008 and the subsequent Euro sovereign debt crisis. The Arab Spring. General Motors, the world's largest automobile maker for 77 years, going bankrupt in 2008 and re-emerging at the top spot in 2011. Three companies, Nokia, Research in Motion, and Motorola, dominating the smartphone market in 2006, with none of them among the leaders by last year. If there is one single, common factor in all this tumultuous change, it is technology and its intersection with business and people. A new species of business is emerging. It has been vaguely visible for some time, but its time has come: a new environment in which people live and work.

The initial technological signs can be seen as far back as the 1980s. The difference now, 30 years later, is that these seeds have matured and grown into a highly diverse and interdependent technology jungle that today's business users dare to enter in significant numbers. And they are ready and able to do so, having grown up with computing and communications technology that, thirty years ago, would have been seen as magic by the vast majority of business people.

There are three key characteristics of the biz-tech ecosystem as it has emerged in the past few years: 1. Interdependence: business and technology each drive the other in a tight loop. New technology enables new business possibilities; new business opportunities drive advances in technology. 2. Reintegration: the silos within and across both business and IT have grown increasingly uncomfortable to maintain; they produce inefficiencies, miscommunications and errors. The cracks can no longer be hidden from Web-savvy customers; coherence is becoming mandatory. 3. Cross-over: business people need enough understanding of technology to envision how new advances could be used to recreate the business. Similarly, IT people need the business acumen to see how business needs can be satisfied in new ways by emerging technology.

In many ways, these characteristics are diametrically opposed to the way business and IT have operated for five decades: business determines its requirements, hands them over to IT, waits for an application to be delivered, determines that it was incomplete or incorrect, and goes around the loop again. That era is coming to a close and, with it, our entire approach to business intelligence is set to change. In the coming few years, successful businesses will be those that create a high degree of synergy between business and IT. They will break down the barriers between business intelligence, operational systems and office support (collaborative) environments. They will integrate external data directly into their business systems. Business intelligence will become Business Integrated Insight (BI2).

Business Integrated Insight in action: just as BI set the scope for much of the advances in information usage in the 1990s, and the Web in the 2000s, big data looks set to dominate information thinking in the 2010s. The term big data covers a multitude of sins, but here I focus on one aspect of the topic: sensor-generated data and, specifically, vehicle telematics, the process of transmitting and receiving computer-generated data derived from electronic sensors, typically through an on-board controller/computer, and the use of this data to remotely monitor a range of conditions and events occurring in the vehicle.

4.3 Business Integrated Insight: What It Means for 2012 and Beyond

From the point of view of the biz-tech ecosystem and BI2, a number of messages are clear. First, we have seen a consistent move in BI towards operational BI (analysis and action-taking based on near-real-time data) over the past decade. While this will continue, it will, at best, only partially meet demands for extreme business innovation. New data sources, big data in most cases, will be used operationally to drive new business processes. Business and IT must cooperate closely to understand how new information and technologies can be used to create new business processes, and how such changes can be incorporated into existing IT systems: the interdependence, reintegration and cross-over characteristics described above.


Second, the distinction between what we traditionally call "operational" and "informational" processing is becoming increasingly unclear as the business demands real-time reaction to a rapidly changing environment. This demand, together with exploding data volumes, severely restricts the old approach of copying and cleansing data into a data warehouse and data mart environment. Of course, that approach will remain valid for statutory and regulatory reporting, where the highest level of accuracy and consistency is mandatory. But, beyond that, minimizing the number of copies of data and accessing one original data set for multiple purposes will be the norm. This is discussed in a number of papers on BI2 available on my website.

Third, not only are data volumes increasing rapidly, but much of the "new" data we will be handling differs in some key characteristics from that which we have traditionally used. The new data sources are external and of ill-defined quality. They have very different and often less formalized structures. They have been collected for purposes other than those for which we want to use them. Ownership and privacy will be serious concerns.

In summary, Extreme Scoping™ is an EDW-specific Agile project planning process based on the robust methodology Business Intelligence Roadmap. It uses all of the Agile principles that work for EDW/BI projects, and it does not force you to use other Agile principles that do not work for EDW/BI projects.

The biggest challenge facing the business intelligence industry today is how to develop business intelligence systems whose agility matches the speed with which the business evolves. If the industry fails in this, current business intelligence systems will slowly become obsolete and will weaken the organization's decision-making strength. Now that the economic recession is not going to pass soon and businesses have to operate more competitively, the need to increase the agility of business intelligence systems should be number one on every organization's list of business intelligence requirements. Agility is becoming a crucial property of each and every business intelligence system.

Most current Business Intelligence systems are not agile. It is not one single aspect that makes them static, but undoubtedly one of the dominant reasons is the database-centric solution that forms the heart of so many Business Intelligence systems. The architectures of most Business Intelligence systems are based on a chain of data stores. Examples of such data stores are production databases, a data staging area, a data warehouse, data marts, and some personal data stores (PDS). The latter can be a small file or a spreadsheet used by one or two business users. In some systems an operational data store (ODS) is included as well. These data stores are chained by transformation logic that copies data from one data store to another. ETL and replication are the technologies commonly used for copying. In this article, we call systems with this architecture classic Business Intelligence systems.

The reason why so many Business Intelligence systems have been designed and developed in this way has to do with the state of software and hardware over the last twenty years. These technologies had their limitations with respect to performance and scalability; therefore, on the one hand, the reporting and analytical workload had to be distributed over multiple data stores, and on the other hand, transformation and cleansing processing had to be broken down into multiple steps.

Data virtualization is a technology that can help make business intelligence systems more agile. It simplifies the development process of reports through aspects such as unified data access, data store independence, centralized data integration, transformation and cleansing, consistent reporting results, data language translation, minimal data store interference, simplified data structures, and efficient distributed data access.

5. CONCLUSION

This paper examines trends in Business Analytics and Business Intelligence and examines how organizations should manage and govern all this data going forward.

6. REFERENCES

[1]

[2] M. Mircea, B. Ghilic-Micu and M. Stoica, "Combining Business Intelligence with Cloud Computing to Delivery Agility in Actual Economy", Journal of Economic Computation and Economic Cybernetics Studies, 45(1), 39-54 (2011).

[3] http://www.agiledata.org/essays/dataWarehousingBestPractices.html

[4] http://en.wikipedia.org/wiki/Agile_Business_Intelligence#cite_noteWhat_Agile_Business_Intelligence_Really_Means-10

[5] M. Mircea, B. Ghilic-Micu and M. Stoica, "Combining Knowledge, Process and Business Intelligence to Delivering Agility in Collaborative Environment". In: L. Fischer, ed. (2010).

[6] BPM and Workflow Handbook, Spotlight on Business Intelligence. Florida: Future Strategies Inc. & Workflow Management Coalition, 99-114 (2010).

[7] M. Mircea and A. I. Andreescu, "Agile Systems Development for the Management of Service Oriented Organizations". In: 11th International Conference on Computer Systems and Technologies, CompSysTech'10, Sofia, Bulgaria, 17-18 June, 341-346 (2010).

[8] A. Andreescu and M. Mircea, "Actual Trends in Software Systems for Business Management", CompSysTech'08, The Bulgarian Academic Society of Computer Systems and Information Technologies (2008).

[9] BI_Principles_for_Agile_Development_12-0; http://www.allaboutagile.com/what-is-agile-10-key-principles/

[10] M. Bruni, "5 Steps To Agile BI", InformationWeek.com, June 13, 2011; http://en.wikipedia.org/wiki/Agile_Business_Intelligence#cite_ref5_Steps_To_Agile_BI_8-1

[11] http://en.wikipedia.org/wiki/Agile_Business_Intelligence#cite_refThree_Steps_to_Analytic_Heaven_50

[12] M. Mircea and A. I. Andreescu, "Extending SOA to Cloud Computing in Higher Education". In: K. S. Soliman (ed.), The 15th IBIMA Conference on Knowledge Management and Innovation: A Business Competitive Edge Perspective, Cairo, Egypt, 6-7 November, 602-615 (2010).

[13] http://www.agiledata.org/essays/dataWarehousingBestPractices.html

[14] M. Cunningham, "Cloud Computing Enables Self-serve BI", 2010, http://www.dashboardinsight.com/articles/business-performance-management/cloud-computing-enables-self-serve-bi.aspx

[15] B. Ghilic-Micu, M. Mircea and M. Stoica, The Audit of Business.

[16] http://www.technologytransfer.eu/event/1365/Recreating_BI_and_DW_new_architecture_and_adva.

Global Sci-Tech, 8 (1) January-March 2016; pp. 50-55

DOI No.: 10.5958/2455-7110.2016.00006.9

Intelligent Web Agent through Web Text Mining Techniques with Machine Learning

MD BARIQUE QUAMAR
CyberQ Consulting Private Limited, #622, DLF Tower A, Jasola, New Delhi-110025, India
*E-mail: [email protected]

ABSTRACT

The Web is a highly dynamic environment: it has grown steadily in recent years, its content changes every day, and it is recognized as the largest data source in the world. In this paper, we present a Web Mining process able to discover knowledge in a distributed and heterogeneous multi-organization environment. The Web Text Mining process is based on a flexible architecture and is implemented in four steps able to examine web content and to extract useful hidden information through mining techniques. An important role in Web Mining is played by the automation of extraction rules with proper algorithms. Machine Learning techniques have been successfully applied to Web Mining and Information Extraction tasks thanks to the generalization and adaptation capabilities that are a key requirement on general-content, heterogeneous web pages. In order to keep the recognition speed high enough for real-world applications, an additional algorithm is proposed which allows the approach to improve in both speed and quality.

Key words: Web mining, machine learning, unstructured data, intelligent web agent.

1. INTRODUCTION

The advent of the World Wide Web (WWW) has overwhelmed home computer users with an enormous flood of information. On almost any topic one can think of, one can find pieces of information made available by other internet citizens, ranging from individual users who post an inventory of their record collection to major companies that do business over the Web.

Many of these systems are based on machine learning and Data Mining techniques. Just as Data Mining aims at discovering valuable information that is hidden in conventional databases, the emerging field of web mining aims at finding and extracting relevant information that is hidden in Web-related data, in particular in (hyper-text) documents published on the Web. Like Data Mining, web mining is a multi-disciplinary effort that draws techniques from fields such as information retrieval, statistics, machine learning, natural language processing, and others.

Web mining is commonly divided into the following three sub-areas:



• Web Content/Text Mining: application of Data Mining techniques to unstructured or semi-structured text, typically HTML documents.



• Web Structure Mining: use of the hyperlink structure of the Web as an (additional) information source.

• Web Usage Mining: analysis of user interactions with a Web server.

The Web is a critical channel of communication and for promoting a company's image, and e-commerce sites are important sales channels. It is therefore important to use data mining methods to analyze the data generated by the activities of visitors on websites.

2. WEB MINING

Web mining is the use of data mining techniques for automatic discovery and extraction of knowledge from Web documents and services. This area of research has been defined as an interdisciplinary (or multidisciplinary) field that borrows techniques from data mining, text mining, databases, statistics, machine learning, multimedia, and others.
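One of these borrowed techniques, link analysis, underlies the hyperlink-based page ranking discussed in Section 2.2 and can be made concrete with a short sketch. The following is a simplified PageRank-style power iteration; the tiny link graph, the damping factor and the fixed iteration count are assumptions made for the example, not material from the paper.

```python
# Minimal link-analysis sketch: scoring pages by hyperlink structure
# with a simplified PageRank-style power iteration. The link graph
# below is hypothetical example data.

def page_scores(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    scores = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Sum the contributions of every page q that links to p;
            # each q spreads its score evenly over its outgoing links.
            incoming = sum(scores[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        scores = new
    return scores

if __name__ == "__main__":
    graph = {
        "home": ["about", "products"],
        "about": ["home"],
        "products": ["home", "about"],
    }
    for page, score in sorted(page_scores(graph).items(),
                              key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```

A production implementation would also handle pages with no outgoing links and stop on convergence rather than after a fixed number of iterations.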

Web mining methods are divided into the same three categories (web content mining, web structure mining and web usage mining), which are discussed in Sections 2.1 to 2.3 below.

Web mining has three operations of interest: clustering (finding natural groupings of users, pages, etc.), associations (which URLs tend to be requested together), and sequential analysis (the order in which URLs tend to be accessed). As in most real-world problems, the clusters and associations in Web mining do not have crisp boundaries and often overlap considerably. In addition, bad exemplars (outliers) and incomplete data can easily occur in the data set, due to a wide variety of reasons inherent to web browsing and logging. Thus, Web Mining and Personalization require modeling an unknown number of overlapping sets in the presence of significant noise and outliers (i.e. bad exemplars). Moreover, the data sets in Web Mining are extremely large.
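The "associations" operation described above (which URLs tend to be requested together) can be sketched by counting URL co-occurrences within user sessions. The session log below is hypothetical example data, not from the paper.

```python
# Sketch of association analysis on web usage data: count which URLs
# are requested together in the same session. The sessions below are
# hypothetical example data.

from itertools import combinations
from collections import Counter

def url_pair_counts(sessions):
    """Count, over all sessions, how often each unordered URL pair co-occurs."""
    pairs = Counter()
    for session in sessions:
        # Every unordered pair of distinct URLs seen in one session.
        for a, b in combinations(sorted(set(session)), 2):
            pairs[(a, b)] += 1
    return pairs

if __name__ == "__main__":
    sessions = [
        ["/home", "/products", "/cart"],
        ["/home", "/products"],
        ["/home", "/about"],
    ]
    for (a, b), count in url_pair_counts(sessions).most_common(3):
        print(a, b, count)
```

Real association mining (e.g. Apriori) would additionally prune pairs below a support threshold and derive directed rules with confidence values.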

2.1 Web Content Mining: Web Content Mining is the process of retrieving useful information from the Web. The retrieved information may contain text, images, audio and video.

Text Mining: Text Mining is a type of mining where data is extracted from databases in text format only; it can be seen as an extension of Data Mining in which the data are retrieved by specifying attributes or keywords. Text mining, or text data mining, the process of finding useful or interesting patterns, models, directions, trends or rules from unstructured text, is used to describe the application of data mining techniques to the automated discovery of knowledge from text [5].
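As a minimal illustration of pattern discovery in unstructured text, the sketch below extracts the most frequent terms after removing a few stop words. The sample document and the stop-word list are assumptions made for the example; they do not reproduce the paper's own method.

```python
# Minimal text-mining sketch: find the most frequent terms in
# unstructured text after stop-word removal. The sample text and the
# stop-word list are hypothetical.

import re
from collections import Counter

STOP_WORDS = {"the", "is", "a", "of", "and", "in", "to"}

def top_terms(text, k=3):
    """Return the k most frequent non-stop-word terms in the text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(k)

if __name__ == "__main__":
    doc = ("Web mining is the application of data mining to the web. "
           "Text mining extracts patterns from unstructured text.")
    print(top_terms(doc))
```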

In this paper, the web mining process is divided into the following five subtasks: (1) Resource finding and retrieving; (2) Information selection and pre-processing; (3) Patterns analysis and recognition; (4) Validation and interpretation; (5) Visualization.

Image Mining: Image mining is the concept used to detect unusual patterns and to extract implicit and useful data from images stored in large databases. Therefore, we can say that image mining deals with making associations between different images from large image databases, as


shown in Fig. 3. Image mining is used in a variety of fields such as medical diagnosis, space research, remote sensing, agriculture and industry, and also in handling hyperspectral images. Images include maps, geological structures and biological structures, and image mining is applied even in the educational field, as explained in [6].

Video Mining: Mining video data is more complicated than mining image data. Video is a collection of moving images, like animation. Three types of video are distinguished in video mining:

1. Produced video (includes movies, news videos and dramas).
2. Raw video (includes traffic videos, surveillance videos, etc.).
3. Medical video (includes ultrasound videos, such as echocardiograms).

Fig. 4 clearly represents how video mining takes place in multimedia retrieval using web mining concepts.

Audio Mining: Audio, like video, is a continuous media type, and the techniques used for audio are similar to those for video data extraction. Audio can be in the form of radio, speech, etc. To mine audio data, it first has to be converted into text using speech transcription techniques. Audio data can also be mined directly by using audio information retrieval techniques and then mining the selected audio data. Audio mining is much simpler to design than video mining.

2.1.1 Methods of Web Content Mining: Figure 2 shows the web content mining process and the information retrieved in structured format.

Fig. 2. The Progress of Web Content Mining

Based on the documents found on the web, the traditional methods are partitioned into four parts [3] [7]. The techniques used for the four types of web documents are listed in Table 1.

2.2 Web Structure Mining: Web Structure Mining is the process of discovering structure information from the Web; it is further divided into two types based on the kind of structure information used. Web structure mining aims at developing techniques to identify the quality of a web page, which can be determined with the help of hyperlinks. For example, from the links we can discover important Web pages, which, incidentally, is a key technology used in search engines. We can also discover communities of users who share common interests. Traditional data mining does not perform such tasks because there is usually no link structure in a relational table.

2.3 Web Usage Mining: Web Usage Mining is the application of Data Mining techniques to obtain useful patterns from the Web, which is a huge repository of different patterns.


Table 1 : Techniques for Web Content Mining

Unstructured documents:
- Information Extraction: extracts information from unstructured data and converts it into structured data; pattern matching and transformation are used.
- Topic Tracking: tracks the topics searched by the user and predicts further documents of interest to present to the user; prediction techniques are used.
- Summarization: reduces the length of documents; analyzes the semantics and interprets the meaning of words.
- Categorization: documents are placed into a predefined group.
- Clustering: used to group similar documents; grouping is based on identified properties.
- Information Visualization: builds a graphical representation for the user; feature extraction and indexing techniques are used.

Structured documents:
- Web Crawlers: traverse the hypertext structure of the web; internal crawlers go through the internal web pages of a site, while external crawlers follow unknown links or sites.
- Wrapper Generation: a set of information extraction rules to extract the useful data from web pages; provides a lot of meta-information; page ranking is used.
- Page Content Mining: extracts the content of a page; page ranking is used to display the results according to rank.

Semi-structured documents:
- OEM (Object Exchange Model): used to understand the information structure of the web; a self-describing structure of the data is produced.
- Top-Down Extraction: traverses the hypertext structure of the web; internal crawlers go through the internal web pages of a site, while external crawlers follow unknown links or sites.
- Web Data Extraction Language: a set of information extraction rules to extract the useful data from web pages; provides a lot of meta-information; page ranking is used.
- SKICAT: based on an astronomical data analysis and cataloging system.

Multimedia documents:
- Color Histogram Matching: finds the correlation between the color components; unwanted artifacts are removed using smoothing techniques.
- Multimedia Miner: extraction of images and videos for feature extraction, and feature comparison for matching queries.
- Shot Boundary Detection: automatic detection of boundaries.
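The "Wrapper Generation" entry in Table 1 describes rule-based extraction of useful data from web pages. A minimal wrapper might consist of one hand-written regular-expression rule, as sketched below; the HTML fragment, class names and field names are hypothetical, and wrappers in the literature are often induced automatically rather than hand-coded.

```python
# Minimal wrapper sketch: a single hand-written extraction rule pulls
# (name, price) records out of an HTML fragment. The HTML below is a
# hypothetical e-commerce snippet, not data from the paper.

import re

# The "wrapper": one extraction rule for product listings.
PRODUCT_RULE = re.compile(
    r'<li class="product">\s*'
    r'<span class="name">(?P<name>[^<]+)</span>\s*'
    r'<span class="price">(?P<price>[\d.]+)</span>'
)

def extract_products(html):
    """Apply the wrapper rule and return structured records."""
    return [m.groupdict() for m in PRODUCT_RULE.finditer(html)]

if __name__ == "__main__":
    page = '''
    <ul>
      <li class="product"> <span class="name">Blue pen</span>
          <span class="price">1.50</span> </li>
      <li class="product"> <span class="name">Notebook</span>
          <span class="price">3.25</span> </li>
    </ul>
    '''
    print(extract_products(page))
```

Regex-based wrappers are brittle against layout changes, which is exactly why the automation of extraction rules through machine learning, as discussed in this paper, is attractive.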

3. CONCLUSIONS

Data mining techniques used for web information extraction form a powerful system and are recommended for the maintenance of highly confidential data. They provide a rich, intelligent resource extractor and are useful for maintaining historical data. A vast amount of data is maintained by web sources and can be clearly extracted by web mining techniques when the techniques are applied accurately, based on the requirements of the users. The proposed solution has been engineered into a complete web crawling system for automatic extraction of e-commerce offers, providing a working proof of the ideas proposed in this paper. The e-commerce and web news scenarios considered during the experimental analysis are quite different, making the proposed approach a good candidate for the more general Web Content Mining problem.

4. REFERENCES

[1] J. Abonyi and B. Feil, "Cluster Analysis for Data Mining and System Identification", Springer Science & Business Media: Germany.
[2] R. Alhajj, "Advanced Data Mining and Applications: Third International Conference, ADMA 2007, Harbin, China, August 6-8, 2007, Proceedings", Springer Science & Business Media: Germany (2007).
[3] M. J. Berry and G. S. Linoff, (2000).
[4] C. M. Bishop, "Mastering Data Mining: The Art and Science of Customer Relationship Management", Wiley Computer Publishing: New York (2006).
[5] Y. Chang, "Pattern Recognition and Machine Learning", Springer: USA (2008).
[6] S. Chatterjee and A. S. Hadi, "Robustifying Regression and Classification Trees in the Presence of Irrelevant Variables", ProQuest: USA (2013).
[7] R. Christensen, "Regression Analysis by Example", John Wiley: USA (2013).
[8] A. Gelman and J. Hill, "Log-Linear Models and Logistic Regression", Springer New York: USA (2006).
[9] G. Shmueli, N. R. Patel and P. C. Bruce, "Data Analysis Using Regression and Multilevel/Hierarchical Models", Cambridge University Press: USA (2006).
[10] F. Gorunescu, "Data Mining for Business Intelligence: Concepts, Techniques, and Applications in Microsoft Office Excel with XLMiner", John Wiley: USA (2011).
[11] J. Han and M. Kamber, "Data Mining: Concepts, Models and Techniques", Springer Science & Business Media: USA (2011).
[12] J. Han, M. Kamber and J. Pei, "Data Mining: Concepts and Techniques", Morgan Kaufmann: San Francisco (2012).
[13] K. L. Haynes, "Data Mining: Concepts and Techniques", The Morgan Kaufmann Series in Data Management Systems, Third Edition, Elsevier (2006).
[14] J. Hilbe, "Object Recognition Using Rapid Classification Trees", Florida State University: USA (2009).
[15] M. F. Hornick, E. Marcade and S. Venkayala, "Logistic Regression Models", CRC Press: USA (2010).
[16] A. Furnas, "Java Data Mining: Strategy, Standard, and Practice: A Practical Guide for Architecture, Design, and Implementation", (2012).
[17] "Everything You Wanted to Know About Data Mining but Were Afraid to Ask", http://www.theatlantic.com/technology/archive/2012/04/everything-you-wanted-to-know-about-data-mining-but-were-afraid-to-ask/255388/, April 3, 2012, 11:33 am ET.
[18] Bui Thi Thuy Dung, "Program Introduction to Data Mining", M.Sc. Candidate in Computer and Information Sciences, Tokyo University of Agriculture and Technology, Education Program of IT Engineers for Advanced Manufacturing, a member of Nakagawa's Lab, Hanoi City, Vietnam (2008).
[19] A. Cavoukian, "Tag, You're It: Privacy Implications of Radio Frequency Identification (RFID) Technology", (2004).
[20] S. Sukumaran, "A Study on Classification Techniques in Data Mining", 4th ICCCNT, Tiruchengode, India, IEEE-31661 (2013).
[21] G. Degu and T. Yigzaw, "Research Methodology", Ethiopia Public Health Training Initiative, University of Gondar, Ethiopia (2006).
[22] John Silltow, "Data Mining 101: Tools and Techniques", Managing Director, Security Control and Audit Ltd, August 2006.
[23] Faustina Johnson and Santosh Kumar Gupta, "Web Content Mining Techniques: A Survey", International Journal of Computer Applications (0975-8887), 47(11), 44-50 (2012).
[24] Govind Murari Upadhyay and Kanika Dhingra, "Web Content Mining: Its Techniques and Uses", International Journal of Advanced Research in Computer Science and Software Engineering, 3(11), 610-613 (2013).

GUIDELINES FOR CONTRIBUTORS

Global Sci-Tech: Journal of Science & Technology is a quarterly journal published by Al-Falah Charitable Trust. Its objective is to present new knowledge and understanding of current topics in the area of Science & Technology. Some issues of the journal may be based on specific themes. It focuses on original full-length papers and short communications of urgent interest, as well as contemporary review articles and emerging issues in Science & Technology. Requirements for acceptance include originality, breadth of scope, careful documentation of experimental results, analysis and clarity of presentation.

SUBMISSION OF MANUSCRIPT - The manuscript should be in English only, on one side of good quality paper, with adequate margins on all four sides. It must be complete in all respects, including abstract, illustrations, appendices, etc. A manuscript for consideration may be submitted as a soft copy (MS Word or PDF format) through email as an attachment to the Editor ([email protected]). The manuscript must neither have been published nor be under consideration elsewhere.

PREPARATION OF MANUSCRIPT - The manuscript should be presented in as concise a form as possible. Pages should be numbered consecutively and arranged in the following order:

COVER SHEET - A cover sheet consisting of a short title and the names, affiliations and addresses of all the authors.

TITLE - The title should be neither too brief or general nor unnecessarily long. It should reflect the content of the paper so as to derive the maximum advantage in indexing.

ABSTRACT - The abstract, usually not exceeding 200 words, should indicate the scope and significant content of the paper, highlighting the principal findings and conclusions. It should be in such a form that abstracting periodicals can use it without modification.

INTRODUCTION - A long and elaborate introduction should be avoided. It should be brief and state the exact scope of the study in relation to the present status of knowledge in the field.

FIGURES - Figures should be numbered consecutively with Arabic numerals in order of mention in the text; each figure should have a descriptive legend. Legends should be presented separately, double-spaced like the text.

MATHEMATICAL EXPRESSIONS - Wherever possible, mathematical expressions should be typewritten, with subscripts and superscripts clearly shown. It is helpful to identify unusual or ambiguous symbols in the margin when they first occur. To simplify typesetting, please use the "exp" form of the complex exponential function and use fractional exponents instead of root signs. Equations must be displayed exactly as they should appear in print and numbered in parentheses placed at the right margin. References to equations in the text should use the form "Eq. (5)".

TABLES - Tables should be typed on separate sheets, numbered consecutively with Arabic numerals, and have a short descriptive caption at the top. Extensive and/or complex tables must be typed carefully in the exact format desired. Computer printouts will normally be reproduced as illustrations. Tables should be placed together at the end of the manuscript.

REFERENCES - References must be prepared in the proper format (examples of various types are given below) and numbered consecutively in the order in which they are cited in the text.

Books: Author(s) name, title of the book, publisher, pp. first and last page no. (year).

Periodicals: Author(s) name, title of article, name of journal, vol. no., pp. first and last page no. (year).
1. M. Yeung and I. Mintzer, Invisible Watermarking for Image Verification, Journal of Electronic Imaging, 7, 578 (1998).

Conference records: Author(s) name, title of article, name of conference, place where held, vol. no., pp. first and last page no. (year).

Unpublished conference presentations: Author(s) name, title of article, name of conference, place where held, (year).

Technical reports: Author(s) name, title of article, report no., published by, (year).
The editors and publisher of Global Sci-Tech are not in any way responsible for the views expressed by the authors. The material published in Global Sci-Tech should not be reproduced or reprinted in any form without prior written permission from the Editor/Publisher.

Global Sci-Tech Subscription Rates

Category                 1 year        3 years
Academic Institutions    400 ($40)     1000 ($100)
Individuals              200 ($20)     500 ($50)

For subscription enquiries please mail : Saoud Sarwar Editor Global Sci-Tech 274-A, Al-Falah House Jamia Nagar, Okhla New Delhi-110025

SUBSCRIPTION ORDER FORM

Please accept the enclosed cheque/demand draft, No.________________, dated______________, drawn on ___________________________________________________________ Bank, favoring Global Sci-Tech, for Rs. ________/- US$_________ towards subscription of Global Sci-Tech, payable at Delhi, for one year/three years. Name : ______________________________________________________________________________ Organisation : ________________________________________________________________________ Mailing Address : ____________________________________________________________________ _____________________________________________________________________________________ _____________________________________________________________________________________ City __________________________ PIN/ZIP _______________ Country ______________________ E-mail________________________________________________

AL-FALAH UNIVERSITY

SCHOOL OF ENGINEERING & TECHNOLOGY
UG Programme (B.Tech. Courses): Mechanical & Automation Engineering; Manufacturing Process & Automation Engineering; Mechanical Engineering; Civil Engineering; Electrical & Electronic Engineering; Electronic & Communication Engineering; Computer Science & Engineering; Bachelor of Architecture
PG Programme (M.Tech. Courses):
Department of Mechanical Engineering: Machine Design; Industrial Production & Engineering; Thermal Engineering
Department of Electronic & Communication Engineering: Electronic & Communication Engineering; VLSI Design; Communication Technology and Management
Department of Computer Science & Engineering: Computer Science & Engineering
Department of Electrical & Electronic Engineering: Power System
Department of Civil Engineering: Structural & Foundation Engineering; Environmental Engineering
Research Programme (Ph.D. Courses): Mechanical Engineering; Civil Engineering; Electrical & Electronic Engineering; Electronic & Communication Engineering; Computer Science & Engineering

SCHOOL OF PHYSICAL & MOLECULAR SCIENCE
UG Programme (B.Sc. Courses): B.Sc. (Hon.) Chemistry; B.Sc. (Hon.) Physics; B.Sc. (Hon.) Mathematics
PG Programme (M.Sc. Courses): M.Sc. (Chemistry); M.Sc. (Physics); M.Sc. (Mathematics)
Doctor of Philosophy (Ph.D.)

SCHOOL OF HUMANITIES & LANGUAGES
B.A. (Hon.) English; M.A. (English); B.A. (Hon.) Urdu; M.A. (Urdu); Doctor of Philosophy (Ph.D.)

SCHOOL OF SOCIAL SCIENCES
Bachelor of Social Work (BSW); Master of Social Work (MSW); B.A. (Hon.) Economics; M.A. (Economics); B.A. (Hon.) History; M.A. (History); B.A. (Hon.) Geography; M.A. (Geography)

SCHOOL OF EDUCATION & TRAINING
Bachelor of Education (B.Ed.); Master of Education (M.Ed.); Diploma in Education (D.Ed.); Doctor of Philosophy (Ph.D.)

SCHOOL OF COMPUTER SCIENCE
BCA; MCA; B.Sc. (I.T.); Doctor of Philosophy (Ph.D.)

UNIVERSITY POLYTECHNIC
Diploma in Civil Engineering; Diploma in Mechanical Engineering; Diploma in Electrical Engineering

SCHOOL OF COMMERCE & MANAGEMENT
Bachelor of Business Administration (BBA); Master of Business Administration (MBA); B.Com.; M.Com.; Master of Finance & Control; Doctor of Philosophy (Ph.D.)

AL-FALAH HOSPITAL
Faridabad (Haryana), India

Abstracted/Indexed by: Scribd; Advanced Science Index (ASI); Qwant; ResearchBib

AL-FALAH UNIVERSITY
Faridabad (Haryana), India