CONTENTS

COMPARATIVE AVAILABILITY AND PROFIT ANALYSES OF STOCHASTIC MODELS ON HARDWARE-SOFTWARE SYSTEM (Monu Kumar and Rajeev Kumar) 1-10
COMPARATIVE PERFORMANCE EVALUATION OF PUBLIC, PRIVATE AND FOREIGN BANKS THROUGH CAMEL RATING (Dr. Kamlesh) 11-18
A NEW APPROACH FOR DENOISING ULTRASONOGRAPHIC IMAGES USING DUAL TREE COMPLEX WAVELET TRANSFORM (Anil Dudy, Dr. Subham Gandhi) 19-20
ECONOMIC PRODUCTION QUANTITY MODELS FOR TWO-ECHELON SUPPLY CHAIN WITH REWORK A DEFECTIVE ITEM UNDER CAP-AND-TRADE POLICY (W. Ritha and S. Poongodisathiya) 21-26
INVERSION OF MATRIX BY ELEMENTARY TRANSFORMATION: REPRESENTATION OF NUMERICAL DATA BY A POLYNOMIAL CURVE (Biswajit Das and Dhritikesh Chakrabarty) 27-32
A COMPARATIVE STUDY OF TWO STOCHASTIC MODELS DEVELOPED FOR TWO-UNITS COLD STANDBY CENTRIFUGE SYSTEM (Shakeel Ahmad and Vinod Kumar) 33-40
PERFORMANCE AND EVALUATION OF ROUTING PROTOCOL IN MANET BY VARYING QUEUE SIZE (Gautam Gupta & Komal Badhran) 41-46
DEMONETISATION: NORMS, INITIATIVE AND IMPACTS (Parul Rani) 47-54
CONFIDENCE INTERVAL OF ANNUAL EXTREMUM OF AMBIENT AIR TEMPERATURE AT GUWAHATI (Rinamani Sarmah Bordoloi, Dhritikesh Chakrabarty) 55-62
TWO ECHELON SUPPLY CHAIN WITH INVENTORY SHRINKAGE & SMS ADVERTISING UNDER CREDIT PERIOD (W. Ritha and S. Infancy Vimal Priya) 63-68
AN INNOVATION TECHNIQUE OF DENOISING OF ULTRASONOGRAPHIC IMAGES USING DUAL TREE COMPLEX WAVELET TRANSFORM (Anil Dudy, Dr. Subham Gandhi) 69-72
MINIMIZING TOTAL WAITING TIME OF JOBS IN SPECIALLY STRUCTURED TWO STAGE FLOW SHOP SCHEDULING PROCESSING TIME SEPARATED FROM SET UP TIME (Dr. Deepak Gupta, Bharat Goyal) 73-76
OPTIMAL SEQUENCE ON MINIMIZING A LATENESS FUNCTION IN A SPECIAL CLASS OF N-JOBS M-MACHINES FLOW SHOP SCHEDULING (Geleta Tadele Mohammed) 77-82
IMPACT OF SWITCHING OF COLD STANDBY UNIT WITHIN REDUNDANT SUBSYSTEMS OF A WATER SYSTEM: A STOCHASTIC ANALYSIS (Sunita Rani and Rajeev Kumar) 83-92
RELIABILITY ANALYSIS OF A TWO UNITS STANDBY SYSTEM FOR ROBOTIC MACHINES (Renu, Pooja Bhatia and Vinod Kumar) 93-100
STEADY-STATE ANALYSIS OF FINITE WAITING SPACE QUEUING SYSTEM HAVING MULTIPLE PARALLEL CHANNELS IN SERIES ATTACHED TO NON-SERIAL SERVERS WITH BALKING, RENEGING AND SOME URGENT MESSAGE (Meenu Gupta, Man Singh, Deepak Gupta) 101-110
ANALYSIS OF A TWO UNIT SYSTEM WITH TWO KINDS OF FAILURES, USING REGENERATIVE POINT GRAPHICAL TECHNIQUE (Sheetal & Vanita) 111-116

Dr. Jai Singh (the founder Chief Editor of JMASS): The Man and the Mathematician

I met Dr. Jai Singh about 22 years back at a conference. From then onwards we have been in constant touch with each other in one way or the other, and we often discuss research problems and mathematical concepts. In my opinion Dr. Jai Singh is a hard-working person, devoted to his profession, research and social obligations. He is a thorough gentleman and is always willing to help young scholars.

Academic Achievements: Dr. Jai Singh did his M.Sc. (Applied Mathematics) from Roorkee University (presently IIT Roorkee) and obtained his Ph.D. degree in 1976 from Kurukshetra University, Kurukshetra. His teaching career started at Samrat Ashok Institute of Technology (SAIT), Vidisha, M.P. (3 years); he then moved to Gurukul Kangri Vishwa Vidyalaya, Haridwar, for 2 years, and later, in 1983, to Regional Engineering College (REC, now NIT) Kurukshetra, where he served till his retirement in 2007. After his retirement from NIT Kurukshetra, Dr. Jai Singh associated himself with other institutes, as Professor at Modern Institute of Engineering & Technology, Mohri (2 years), and as Principal at Punjab College of Engineering & Technology, Lalru, Punjab (4 years), where he utilized his experience of teaching and research for the benefit of B.Tech., M.Tech. and MBA students.

As a Human Being: As a person Dr. Jai Singh is unassuming, soft-spoken, simple, balanced, always conscious of professional, moral and ethical values, and willing to help needy students at all times. He is a born teacher who seems to inhale and exhale mathematics. He remains deeply connected with his roots, i.e. his rural background, and spends nearly 2 to 3 months a year serving his native village, where he has helped publish magazines and newsletters regularly for community development and to create awareness of various socio-economic and agricultural issues and problems.

As a Researcher: Dr. Jai Singh started his research work with inventory models and published 3 papers in OPSEARCH, but soon shifted to queueing theory and finally to reliability technology. From 1980-82 he worked on a project titled "Reliability study of Agro industry problem using heuristic and differential dynamic programming approach", approved by CSIR, New Delhi. Dr. Jai Singh applied reliability technology to various industries such as sugar, paper, fertilizer, plywood, distillery and power transmission, and published a number of research papers. It is only because of his motivation that I was attracted to reliability analysis and published six research papers, though my thrust areas of research are queueing and scheduling theory and the application of fuzzy logic. He has 3 books, 140 research papers and 10 scholarly articles to his credit, and has successfully guided 16 scholars for the Ph.D. He has participated in a number of conferences at the national and international level, presented papers and chaired technical sessions, organized many conferences at the national level, and serves as referee and reviewer for a number of reputed Indian journals. His first research scholar, Dr. Dinesh Kumar, is currently Professor and Head of the Mechanical and Industrial Engineering Department at IIT Roorkee. One of his Ph.D. scholars, Dr. V. K. Gupta (a just-retired principal of a PG college), introduced a new and simpler technique, the Regenerative Point Graphical Technique (RPGT), to evaluate reliability.

Starting a Research Journal: His commitment to research made him start a research journal in 2005, entitled 'Journal of Mathematics and Systems Sciences', ISSN: 0975-5454. He is deeply involved with this journal, and it was only due to his hard work and dedication that it gained acceptance by the research community. In November 2016, Dr. Jai Singh assigned me the responsibility of Chief Editor of this journal. I am obliged to him for giving me such an opportunity and for his trust in my ability. I hope the cooperation of research scholars and authors will continue in the same fashion as for the Aryabhatta Journal of Mathematics & Informatics.

Dr. T.P. Singh
Chief Editor

Journal of Mathematics & Systems Sciences (JMASS) Vol. 12, No. 1-2, 2016

ISSN : 0975-5454

COMPARATIVE AVAILABILITY AND PROFIT ANALYSES OF STOCHASTIC MODELS ON HARDWARE-SOFTWARE SYSTEM Monu Kumar and Rajeev Kumar Department of Mathematics, M. D. University, Rohtak-124001, INDIA E-mail : [email protected], [email protected]

ABSTRACT
The present paper deals with the comparison of two stochastic models developed for a single-unit hardware-software system having various kinds of hardware, software or hardware-software interaction failures. The first model considered is that given by Kumar and Kumar [7]; it is extended to a new model by considering the aspects that the system may transit from the total hardware failure mode or the fail unsafe mode to the total hardware-software failure mode on hardware/software failures. Moreover, in the second model, the various failure rates from the total hardware failure mode to the total hardware-software failure mode may be different, and the repair time distributions may be general. Various measures of system performance are computed for the second model using the Markov process and regenerative point technique. As the models give the same reliability of the system, they are compared with respect to availability and cost considerations. Various conclusions are drawn through graphical analysis to judge which model is better than the other, and when.
Keywords: Hardware-software system, hardware-software interaction failures, availability, profit, Markov process, regenerative point technique.

INTRODUCTION
These days hardware-software systems, that is, systems involving both hardware and software components, such as computers, mobiles, robots, missiles, rockets and radars, have numerous applications, and failure of such systems can be costly and hazardous. In fact, for failure-free and effective performance of the system, both its hardware and software sub-components must function with considerable reliability. To judge the reliability of such systems, researchers in the past dealt separately with hardware reliability and software reliability. For the reliability analysis of the whole system, combined reliability models, i.e. models including both hardware and software subsystems, were discussed by a few researchers, including Friedman and Tran [2], Hecht and Hecht [3], Welke et al. [12] and Kumar and Malik [8], generally assuming that these components are independent of each other; that is, interactions between the hardware and software components were not taken up by them. However, Boyd et al. [1] discussed the difficulties in modeling hardware-software interactions. Several researchers, including Iyer and Velardi [5], Martin and Mathur [9], Kanoun and Ortalo-Borrel [6] and Huang et al. [4], had observed that remarkable interactions exist between hardware and software components and have a significant effect on the reliability of the system. Keeping this in view, Teng et al. [10] established a reliability model of the system considering hardware, software and hardware-based software interaction failures. There, the reliability of the combined system was obtained by considering different models for hardware, software and hardware-based software failures, and not in an integrated way. Recently, Kumar and Kumar [7] extended the above work by considering a combined reliability model for systems having hardware, software and hardware-based software failures with different types of recovery methods.

In that model it is considered that hardware and software failures occur purely due to the respective hardware and software components, whereas hardware-based software interaction failures occur whenever hardware degradation is not detected/repaired by the software method. It is assumed that, in addition to the software methods, recovery of the hardware and software subsystems upon their failure may be carried out by external hardware/software engineers respectively. It is considered that the system from its initial normal mode may go either to the total hardware failure mode, to the partial hardware (hardware degradation) mode, or to the total software failure mode. The repair at the total hardware failure mode is carried out by the hardware engineer and the system returns to its normal mode, whereas at the total software failure mode the repair is done by the software engineer. It is also assumed that from the partial hardware mode the system may go to the software fail safe mode or to the fail unsafe mode due to hardware-software interaction. Recovery from the fail safe mode to the normal mode is possible by re-execution of the software, whereas recovery from the fail unsafe mode to the partial hardware failure mode is through re-installation of the software by the software engineer. In practical situations, a system may transit from the fail safe mode to the normal mode or to the partial hardware failure mode on re-execution of the software. Further, from the total hardware failure or fail unsafe mode, a system may transit to the total hardware-software failure mode. Moreover, the failure rates from the various partial hardware failure mode states to the total hardware failure mode may not necessarily be the same, and the various repair time distributions may be arbitrary. These aspects have not been taken up till now.
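The mode structure just described can be sketched as a small adjacency map. This is an illustrative encoding only: the state names are invented labels for the modes named in the text, not notation from the paper.

```python
# Sketch of the failure/repair mode structure described above (Model-2),
# encoded as an adjacency map. State names and transitions are
# illustrative labels for the modes named in the text.
TRANSITIONS = {
    "normal": ["total_hw_failed", "partial_hw", "total_sw_failed"],
    "partial_hw": ["normal", "fail_safe", "fail_unsafe", "total_hw_failed"],
    "total_hw_failed": ["normal", "total_hw_sw_failed"],
    "total_sw_failed": ["normal"],
    "fail_safe": ["normal", "partial_hw"],
    "fail_unsafe": ["partial_hw", "total_hw_sw_failed"],
    "total_hw_sw_failed": ["total_sw_failed"],
}

def reachable(start):
    """All states reachable from `start` by following transitions."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(TRANSITIONS.get(s, []))
    return seen

# Every mode can be reached from the normal mode and can return to it,
# which is what makes every state regenerative in the analysis below.
assert reachable("normal") == set(TRANSITIONS)
assert all("normal" in reachable(s) for s in TRANSITIONS)
```

The mutual reachability checked here is the structural property behind treating every epoch of entry as a regeneration point.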


Keeping the above in view, in the present paper the model given in Kumar and Kumar [7] (say, Model-1) has been extended to a new model (say, Model-2) by incorporating the aspects that the system may transit from the total hardware failure mode or the fail unsafe mode to the total hardware-software failure mode on hardware/software failures. Moreover, in Model-2, the various failure rates from the total hardware failure mode to the total hardware-software failure mode may be different, and the repair time distributions may be general. Various measures of system performance are computed for the model using the Markov process and regenerative point technique. As the two models give the same reliability of the system, they are compared with respect to availability and cost considerations. Various conclusions are drawn through graphical analysis to judge which model is better than the other, and when.

Notations
The notations, in addition to those given in Kumar and Kumar [7], are as under:
λh4 : Hardware failure rate from the fail safe mode to the total hardware-software failure mode.
p3 : Probability that the system transits from the fail safe mode to the normal mode.
q3 : Probability that the system transits from the fail safe mode to the hardware degradation mode.
gh1(t) : Hardware repair rate when degradation is detected and recovered by software methods.
gh2(t) : Hardware repair rate when degradation is detected but not recovered by software methods.
gh3(t) : Hardware repair rate from the total hardware failure mode to the normal working mode.
gh4(t) : Hardware repair rate from the total hardware-software failure mode to the normal working state.
gs1(t) : Software repair rate from the total software failure mode to the normal working mode.
gs2(t) : Software repair rate from the fail unsafe mode to the hardware degraded mode.
gs3(t) : Software repair rate from the fail safe mode to the hardware degraded mode.

States of the System
The system may be in one of the following modes: the normal mode; the partial (degraded) hardware mode; the total (complete) hardware failed mode; the total (complete) software failed mode; the fail unsafe mode; the fail safe mode; and the total (complete) hardware-software failed mode.

State Transition Diagram
Fig. 1 shows the transition diagram with the various states corresponding to Model-1, whereas fig. 2 shows the transition diagram corresponding to Model-2. The epochs of entry into all the states are regenerative points and thus every state is a regenerative state. States 4, 5, 6, 7 and 8 are the failed states.

Model-1

Fig. 1

Model-2

Fig. 2

Transition Probabilities and Mean Sojourn Times
The results given in Kumar and Kumar [7] for Model-1 are reproduced as under.
For Model-1: The transition probabilities pij are given by
p01 = p1 p2 λh1/S, p02 = q1 λh1/S, p03 = p1 q2 λh1/S, p04 = λh3/S, p07 = λs1/S,
p10 = β1/S1, p14 = λh2/S1, p24 = λh2/S2, p25 = λs2/S2,
p30 = β2/S3, p34 = λh2/S3, p36 = λs3/S3, p40 = p52 = p60 = p70 = 1,
where S = λh1 + λs1 + λh3, S1 = β1 + λh2, S2 = λh2 + λs2, S3 = β2 + λs3 + λh2.
Clearly, it can be verified that
p01 + p02 + p03 + p04 + p07 = 1, p10 + p14 = 1, p24 + p25 = 1, p30 + p34 + p36 = 1.
The mean sojourn time μi in the ith state, i.e. the expected first passage time spent by the unit in the ith state before transiting to any other state, is given by
μ0 = 1/S, μ1 = 1/S1, μ2 = 1/S2, μ3 = 1/S3, μ4 = 1/β3, μ5 = 1/γ2, μ6 = 1/γ3, μ7 = 1/γ1.
The unconditional mean time taken by the system to transit to any regenerative state j, when time is counted from the epoch of entrance into state i, is
mij = ∫0^∞ t qij(t) dt = −qij*′(0).
Thus
m01 = p1 p2 λh1/S², m02 = q1 λh1/S², m03 = p1 q2 λh1/S², m04 = λh3/S², m07 = λs1/S².
It is clear that m01 + m02 + m03 + m04 + m07 = μ0, and similarly
m10 + m14 = μ1, m24 + m25 = μ2, m30 + m34 + m36 = μ3, m40 = μ4, m52 = μ5, m60 = μ6, m70 = μ7.
For Model-2: The results obtained for Model-2 are as under.


The transition probabilities pij are given by
p01 = p1 p2 λh1/(λh1 + λs1 + λh3), p02 = q1 λh1/(λh1 + λs1 + λh3), p03 = p1 q2 λh1/(λh1 + λs1 + λh3),
p04 = λh3/(λh1 + λs1 + λh3), p07 = λs1/(λh1 + λs1 + λh3),
p10 = g*h1(λh21), p14 = 1 − g*h1(λh21),
p24 = λh22/(λs2 + λh22), p25 = λs2/(λs2 + λh22),
p30 = g*h2(λh23 + λs3), p34 = [λh23/(λs3 + λh23)][1 − g*h2(λh23 + λs3)], p36 = [λs3/(λs3 + λh23)][1 − g*h2(λh23 + λs3)],
p40 = g*h3(λs4), p48 = 1 − g*h3(λs4), p52 = g*s2(λh4), p58 = 1 − g*s2(λh4),
p60 = p3 g*s3(0), p63 = q3 g*s3(0), p70 = 1 and p87 = 1.
Clearly, it can be verified that
p01 + p02 + p03 + p04 + p07 = 1, p10 + p14 = 1, p24 + p25 = 1, p30 + p34 + p36 = 1, p40 + p48 = 1, p52 + p58 = 1, p60 + p63 = 1.
The mean sojourn times μi are obtained as under:
μ0 = 1/(λs1 + λh1 + λh3), μ1 = [1 − g*h1(λh21)]/λh21, μ2 = 1/(λs2 + λh22), μ3 = [1 − g*h2(λh23 + λs3)]/(λh23 + λs3),
μ4 = [1 − g*h3(λs4)]/λs4, μ5 = [1 − g*s2(λh4)]/λh4, μ6 = −g*′s3(0), μ7 = −g*′s1(0), μ8 = −g*′h4(0).
The unconditional mean times mij are obtained as under:
m01 = p1 p2 λh1/(λh1 + λs1 + λh3)², m02 = q1 λh1/(λh1 + λs1 + λh3)², m03 = p1 q2 λh1/(λh1 + λs1 + λh3)²,
m04 = λh3/(λh1 + λs1 + λh3)², m07 = λs1/(λh1 + λs1 + λh3)².
It is clear that m01 + m02 + m03 + m04 + m07 = μ0, and similarly
m10 + m14 = μ1, m24 + m25 = μ2, m30 + m34 + m36 = μ3, m40 + m48 = μ4, m52 + m58 = μ5, m60 + m63 = μ6, m70 = μ7, m87 = μ8.

Other Measures of System Performance
By probabilistic arguments for the regenerative process, recursive relations for the various measures of system performance are obtained for Model-2. On solving these recursive relations using Laplace and Laplace-Stieltjes transforms, the following measures are obtained in the steady state.
For Model-1: The results for Model-1 are taken from Kumar and Kumar [7] as under:
Mean time to system failure (T0) = N1/D1

Steady-state availability (A0) = N11/D11
Expected down time due to:
(a) pure hardware failures (DH0) = N21/D11
(b) pure software failures (DS0) = N31/D11
(c) pure hardware-software interaction failures (DI0) = N41/D11
Expected number of:
(a) software repairs (RS0) = N51/D11
(b) hardware repairs (RH0) = N61/D11
(c) hardware repairs by software (HC0) = N91/D11
(d) software repairs by re-execution (SC0) = N101/D11
Expected number of visits by:
(a) the hardware engineer (VH0) = N71/D11
(b) the software engineer (VS0) = N81/D11
where
D1 = 1 − p01 p10 − p03 p30
D11 = p24[μ0 + μ1 p01 + μ3 p03 + μ7 p07 + μ4(p01 p14 + p03 p34 + p02 + p07) + μ6 p03 p36] + μ2 p02 + μ5 p02 p25
N11 = p24[μ0 + μ1 p01 + μ3 p03] + μ2 p02
N21 = p24 μ4 [p02 + p01 p14 + p03 p34]
N31 = p24 p07 μ7
N41 = p02 p25 μ5 + p24 p03 p36 μ6
N51 = p02 p25 p52 + p24 p07 p70
N61 = p24[p03 p30 + p04 p40]
N71 = p24[p01 p14 + p03 p34 + p03 + p04] + p02 p24
N81 = p02 p25 + p07 p24
N91 = p01 p10 p24
N101 = p03 p36 p24
For Model-2: The results obtained for the model are as under:
Mean time to system failure (T0) = N2/D2

Steady-state availability (A1) = N12/D12
Expected down time due to:
(a) pure hardware failures (DH1) = N22/D12
(b) pure software failures (DS1) = N32/D12
(c) hardware-software interaction failures (DI1) = N42/D12
Expected number of:
(a) software repairs (RS1) = N52/D12
(b) hardware repairs (RH1) = N62/D12
(c) hardware repairs by software (HC1) = N72/D12
(d) software repairs by re-execution (SC1) = N82/D12
Expected number of visits by:
(a) the hardware engineer (VH1) = N92/D12
(b) the software engineer (VS1) = N102/D12
where
N2 = μ0 + μ1 p01 + μ2 p02 + μ3 p03 = N1
D2 = 1 − p01 p10 − p03 p30 = D1
D12 = [1 − p25 p52][1 − p36 p63][μ0 + μ1 p01 + μ4(p01 p14 + p04) + μ7 p07 + (μ7 + μ8)(p01 p14 p48 + p04 p48)] + p03[1 − p25 p52][μ3 + μ4 p34 + μ6 p36 + (μ7 + μ8) p34 p48] + p02[1 − p36 p63][μ2 + μ4 p24 + μ5 p25 + (μ7 + μ8)(p24 p48 + p25 p58)]
N12 = [1 − p25 p52][1 − p36 p63][μ0 + μ1 p01] + [1 − p36 p63] μ2 p02 + [1 − p25 p52] μ3 p03
N22 = [1 − p25 p52][1 − p36 p63] μ4(p01 p14 + p04) + [1 − p36 p63] μ4 p02 p24 + [1 − p25 p52] μ4 p03 p34
N42 = [1 − p25 p52][1 − p36 p63][p01 p14 p48 + p04 p48] μ8 + [1 − p36 p63][p02 p24 p48 μ8 + p02 p25 p58 μ8 + p02 p25 μ5] + [1 − p25 p52][p03 p34 p48 μ8 + p03 p36 μ6]
N52 = [1 − p25 p52][1 − p36 p63][p01 p14 p48 + p04 p48 + p07] + [1 − p36 p63][p02 p24 p48 + p02 p25 p52 + p02 p25 p58] + [1 − p25 p52][p03 p34 p48 + p03 p36 p63]
N62 = [1 − p25 p52][1 − p36 p63][p01 p14 p40 + p01 p14 p48 p87 + p04 p48 p87 + p04 p40] + [1 − p36 p63][p02 p24 p40 + p02 p24 p48 p87 + p02 p25 p58 p87] + [1 − p25 p52][p03 p30 + p03 p34 p48 p40 + p03 p34 p48 p87]
N72 = [1 − p25 p52][1 − p36 p63] p01 p10
N82 = [1 − p25 p52][1 − p36 p63] p03 p36 p60
N92 = [1 − p25 p52][1 − p36 p63][p01 p14 + p01 p14 p48 + p04 p48 + p03 + p04] + [1 − p36 p63][p02 p24 + p02 p24 p48 + p02 p25 p58] + [1 − p25 p52][p03 p34 + p03 p36 p63]
N102 = [1 − p25 p52][1 − p36 p63][p01 p14 p48 p87 + p04 p48 p87 + p07] + [1 − p36 p63][p02 p24 p48 p87 + p02 p25 p58 p87 + p02 p25] + [1 − p25 p52][p03 p34 p48 p87 + p03 p36]

Profit Analysis
The expected total profit incurred by the system in the steady state for the models is given by

For Model-1:
P0 = C0 A0 − C1(DH0 + DS0 + DI0) − C2 RH0 − C3 RS0 − C4 VH0 − C5 VS0 − C6 HC0 − C7 SC0 − Ch − Cs
For Model-2:
P1 = C0 A1 − C1(DH1 + DS1 + DI1) − C2 RH1 − C3 RS1 − C4 VH1 − C5 VS1 − C6 HC1 − C7 SC1 − Ch − Cs
where
C0 = revenue per unit up time of the system,
C1 = cost per unit down time of the system,
C2 = cost per unit hardware repair,
C3 = cost per unit software repair,
C4 = cost per visit of the hardware engineer,
C5 = cost per visit of the software engineer,
C6 = cost per unit hardware repair by software,
C7 = cost per unit software repair by re-execution,
Ch = cost per unit hardware installation,
Cs = cost per unit software installation.

Comparative Analysis
The expressions for the mean time to system failure are identical for the two models, and hence both models give the same reliability for the system. Model-1 and Model-2 are therefore compared on the basis of availability and profit to judge which, and when, one model is better than the other. For the purpose of the comparative analyses, the following particular case is considered:
ghi(t) = βi e^(−βi t) for i = 1, 2, 3, 4, and gsi(t) = γi e^(−γi t) for i = 1, 2, 3, with λh21 = λh22 = λh23 = λh2.
The values of the various failure rates and of the probability of hardware degradation detection are taken as given in Teng et al. [10] and Trivedi et al. [11], i.e.
p1 = 0, p2 = .95, q1 = 1, q2 = .05, p3 = .4, q3 = .6, λh1 = .000526, λh2 = .000432, λh3 = .000112, λh4 = .000784, λs1 = .000002, λs2 = .000263, λs3 = .00001, λs4 = 20.
Various costs and repair rates are assumed as under:
C0 = 25000, C1 = 1000, C2 = 500, C3 = C4 = C5 = 200, C6 = 500, C7 = 200, Ch = 50000, Cs = 150000, β1 = 2, β2 = .19, β3 = .18, γ1 = 0.8, γ2 = 0.9, γ3 = 5, β4 = 1.
Several graphs are plotted for the difference of the availabilities and profits of the two models with respect to various failure rates and costs.
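The particular case can be checked numerically. The sketch below (our variable names, using the parameter values just listed and the Model-1 closed forms reproduced earlier in the paper) verifies that the transition probabilities out of state 0 sum to one and evaluates the common mean time to system failure T0 = N1/D1:

```python
# Model-1 quantities evaluated at the parameter values quoted above.
# The closed forms (S, S1..S3, p_ij, mu_i, N1, D1) are those reproduced
# from Kumar and Kumar [7] earlier in the paper.
lam_h1, lam_h2, lam_h3 = 0.000526, 0.000432, 0.000112
lam_s1, lam_s2, lam_s3 = 0.000002, 0.000263, 0.00001
beta1, beta2 = 2.0, 0.19
p1, p2, q1, q2 = 0.0, 0.95, 1.0, 0.05

S = lam_h1 + lam_s1 + lam_h3
S1, S2, S3 = beta1 + lam_h2, lam_h2 + lam_s2, beta2 + lam_s3 + lam_h2

p01, p02, p03 = p1 * p2 * lam_h1 / S, q1 * lam_h1 / S, p1 * q2 * lam_h1 / S
p04, p07 = lam_h3 / S, lam_s1 / S
p10, p30 = beta1 / S1, beta2 / S3

# Verification relation stated in the paper:
assert abs(p01 + p02 + p03 + p04 + p07 - 1.0) < 1e-9

mu0, mu1, mu2, mu3 = 1 / S, 1 / S1, 1 / S2, 1 / S3

# Mean time to system failure, identical for both models: T0 = N1 / D1.
N1 = mu0 + mu1 * p01 + mu2 * p02 + mu3 * p03
D1 = 1 - p01 * p10 - p03 * p30
T0 = N1 / D1
assert T0 >= mu0 > 0   # MTSF cannot be shorter than the initial sojourn
```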


Figs. 3, 4 and 5 depict the behavior of the difference of the availabilities of the models (A0 − A1) with respect to the failure rates λh1, λh2 and λs2. It can be concluded from the graphs that the difference (A0 − A1) decreases with increase in the value of the hardware failure rate λh1, whereas it increases with increase in the values of λh2 and λs2. Moreover, it can also be observed that the availability (A0) of Model-1 is greater than the availability (A1) of Model-2 for fixed values of the hardware and software failure rates.
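The availabilities compared in these graphs come from the steady-state semi-Markov formulas above. A generic way to compute such an availability is to solve the balance equations of the embedded chain and weight the stationary probabilities by the mean sojourn times. The sketch below does this for the Model-1 state space; the failure/repair rates follow the values quoted earlier, but the branching probabilities p1, p2 are illustrative assumptions, not the paper's tabulated values.

```python
# Steady-state availability of the Model-1 semi-Markov process via the
# embedded-chain stationary distribution nu (nu = nu P), weighted by the
# mean sojourn times: A0 = sum_{i up} nu_i mu_i / sum_i nu_i mu_i.
lam_h1, lam_h2, lam_h3 = 0.000526, 0.000432, 0.000112
lam_s1, lam_s2, lam_s3 = 0.000002, 0.000263, 0.00001
beta1, beta2, beta3 = 2.0, 0.19, 0.18
gam1, gam2, gam3 = 0.8, 0.9, 5.0
p1, p2 = 0.9, 0.95                 # assumed detection probabilities
q1, q2 = 1 - p1, 1 - p2

S = lam_h1 + lam_s1 + lam_h3
S1, S2, S3 = beta1 + lam_h2, lam_h2 + lam_s2, beta2 + lam_s3 + lam_h2

p01, p02, p03 = p1*p2*lam_h1/S, q1*lam_h1/S, p1*q2*lam_h1/S
p04, p07 = lam_h3/S, lam_s1/S
p10, p14 = beta1/S1, lam_h2/S1
p24, p25 = lam_h2/S2, lam_s2/S2
p30, p34, p36 = beta2/S3, lam_h2/S3, lam_s3/S3

# Balance equations of the embedded chain, taking nu0 = 1 and
# normalizing at the end (nu2 absorbs the 5 -> 2 feedback loop).
nu = [0.0] * 8
nu[0] = 1.0
nu[1] = p01
nu[2] = p02 / p24                  # from nu2 = p02 + nu2 * p25
nu[3] = p03
nu[5] = nu[2] * p25
nu[4] = p04 + nu[1]*p14 + nu[2]*p24 + nu[3]*p34
nu[6] = nu[3] * p36
nu[7] = p07
total = sum(nu)
nu = [x / total for x in nu]

mu = [1/S, 1/S1, 1/S2, 1/S3, 1/beta3, 1/gam2, 1/gam3, 1/gam1]

up_time = sum(nu[i] * mu[i] for i in (0, 1, 2, 3))   # operative states
A0 = up_time / sum(n * m for n, m in zip(nu, mu))
assert 0.0 < A0 < 1.0
```

With fast repairs and the small failure rates above, A0 comes out close to (but strictly below) one, matching the near-unity availabilities the figures compare.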

-7-

JMASS 12 (1-2), 2016

Monu Kumar and Rajeev Kumar 

The behaviors of the difference of the profits of the models (P0 − P1) with respect to the failure rates and revenue are shown in figs. 6, 7 and 8, from which the following conclusions are made. Fig. 6 depicts the behavior of (P0 − P1) with respect to the hardware failure rate λh2 for different values of the hardware failure rate λh1. It is concluded from the graph that (P0 − P1) decreases with increase in the value of λh1. It can also be observed that for λh1 = 0.001 the value of (P0 − P1) is positive, zero or negative according as λh2 >, = or < .0008119. Therefore, Model-1 is better than Model-2 whenever λh2 > .0008119. Similarly, for λh1 = 0.002 and 0.003, Model-1 is better than Model-2 whenever λh2 > .0009066 and .000985 respectively.
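The cut-off behavior described here and in the next two figures follows from the profit expressions being linear in C0: the installation costs Ch and Cs cancel, so P0 − P1 = C0(A0 − A1) − Δ, where Δ collects the remaining cost-term differences, and the sign flips at a single break-even value. A minimal sketch, with Δ and the availability difference as placeholders rather than the paper's computed values:

```python
# Break-even revenue rate for the model comparison. P0 - P1 is linear in
# C0, so the sign flips at C0* = d_cost / dA. Both inputs below are
# placeholders for illustration, not values computed in the paper.
dA = 1.0e-5       # A0 - A1, the availability difference (placeholder)
d_cost = 0.04     # net difference of all non-revenue terms (placeholder)

def profit_diff(C0):
    """P0 - P1 as a function of revenue per unit up time C0."""
    return C0 * dA - d_cost

C0_star = d_cost / dA            # break-even revenue rate
assert abs(profit_diff(C0_star)) < 1e-9
assert profit_diff(C0_star + 1) > 0.0 > profit_diff(C0_star - 1)
```

The same one-crossing structure is why each curve in figs. 7 and 8 yields exactly one cut-off value of C0.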


The behavior of the difference of profits (P0 − P1) with respect to the revenue per unit up time of the system (C0) for different values of λh2 is presented in fig. 7. It can be concluded from the graph that (P0 − P1) increases with increase in the value of λh2. It can also be observed that for λh2 = 0.0006, (P0 − P1) is positive, zero or negative according as C0 >, = or < Rs. 3785.875. Therefore, Model-1 is better than Model-2 whenever C0 > Rs. 3785.875. Similarly, for λh2 = 0.0018 and .005, Model-1 is better than Model-2 whenever C0 > Rs. 3189.294 and Rs. 5217.106 respectively.

Fig. 8 depicts the behavior of the difference of profits (P0 − P1) with respect to the revenue per unit up time of the system (C0) for different values of λs2. It can be interpreted from the graph that (P0 − P1) increases with increase in the value of λs2. It can also be observed that for λs2 = 0.0001, (P0 − P1) is positive, zero or negative according as C0 >, = or < Rs. 3785.875. Therefore, Model-1 is better than Model-2 whenever C0 > Rs. 3785.875. Similarly, for λs2 = 0.005 and 0.0018, Model-1 is better than Model-2 whenever C0 > Rs. 3189.294 and Rs. 5217.106 respectively.

Conclusion
From the comparative analyses of the model given in Kumar and Kumar [7] and the extended new model, we conclude that both models give the same reliability for the hardware-software system. Also, the model given in Kumar and Kumar [7] is


better than the new model in terms of availability for fixed values of the hardware/software failure rates; this is because in the new model the practical possibility of total hardware-software failure is also incorporated. However, in terms of profit, either model may be better than the other, depending on the cut-off values of the failure rates λh1, λh2, λs2 and the revenue per unit up time of the system (C0). These results may be very significant and useful for system developers and engineers.

References
1. Boyd, M. A. and Monahan, C. M., 1995. Developing integrated hardware-software system reliability models: difficulties and issues [for digital avionics]. Proceedings of the 14th Digital Avionics Systems Conference (DASC), pp. 193-198, Cambridge, USA.
2. Friedman, M. A. and Tran, P., 1992. Reliability techniques for combined hardware/software systems. Proceedings of the Annual Reliability and Maintainability Symposium, pp. 290-293.
3. Hecht, H. and Hecht, M., 1986. Software reliability in the system context. IEEE Transactions on Software Engineering, Vol. 12, pp. 51-58.
4. Huang, B., Li, X., Li, M., Bernstein, J. and Smidts, C., 2005. Study of the impact of hardware fault on software reliability. Proceedings of the 16th IEEE International Symposium on Software Reliability Engineering.
5. Iyer, R. K., 1985. Hardware related software errors: measurement and analysis. IEEE Transactions on Software Engineering, Vol. 11, pp. 223-230.
6. Kanoun, K. and Ortalo-Borrel, M., 2000. Fault-tolerant systems dependability: explicit modeling of hardware and software component-interactions. IEEE Transactions on Reliability, Vol. 49, No. 4, pp. 363-376.
7. Kumar, R. and Kumar, M., 2012. Performance and cost benefit analysis of a hardware-software system considering hardware based software interaction failures and different types of recovery. International Journal of Computer Applications, Vol. 53, No. 17, pp. 25-32.
8. Kumar, A. and Malik, S. C., 2012. Stochastic modeling of a computer system with priority to PM over S/W replacement subject to maximum operation and repair. International Journal of Computer Applications, Vol. 43, No. 3, pp. 27-34.
9. Martin, R. and Mathur, A. P., 1990. Software and hardware quality assurance: towards a common platform for increasing the usability of this methodology. Proceedings of the IEEE Conference on Communications.
10. Teng, X., Pham, H. and Jeske, D. R., 2006. Reliability modeling of hardware and software interactions, and its applications. IEEE Transactions on Reliability, Vol. 55, pp. 571-577.
11. Trivedi, K. S., Vasireddy, R., Trindade, D., Nathan, S. and Castro, R., 2006. Modeling high availability systems. Proceedings of the Pacific Rim International Symposium on Dependable Computing (PRDC).
12. Welke, S. R., Johnson, B. W. and Aylor, J. H., 1995. Reliability modeling of hardware/software systems. IEEE Transactions on Reliability, Vol. 44, pp. 413-418.


  Journal of Mathematics & Systems Sciences (JMASS) Vol. 12, No. 1-2, 2016

ISSN : 0975-5454

COMPARATIVE PERFORMANCE EVALUATION OF PUBLIC, PRIVATE AND FOREIGN BANKS THROUGH CAMEL RATING Dr. Kamlesh Assistant Professor (Commerce), G.B.N. Kanya Mahavidyalaya, Anjanthali (Karnal)–132001 E-mail : [email protected]

ABSTRACT
The CAMEL rating system is a recognized international rating system that bank supervisory authorities use to rate financial institutions, including commercial banks, on five factors represented by the acronym "CAMEL": capital adequacy, asset quality, management capability, earnings ability and liquidity. Supervisory authorities assign each bank a score on a scale on which a rating of one is considered the best and a rating of five the worst for each factor. The time period selected for the study is 2002-2015. In this paper, an effort has been made to compare the performance of public, private and foreign banks operating in India. The study found that all the Indian banks are financially solvent. The findings further revealed that the foreign banks have performed well and are financially sounder than the public and private sector banks.
Key words: Commercial Banks, Capital Adequacy, Asset Quality, Management Capability, Earnings Analysis, Financial Performance.

I. INRODUCTION The banking sector is one of the most important instrument of the national development, occupies a unique place in a nation’s economy. Its importance as the “lifeblood” of economic activity, in collecting deposits and providing credits to states and people, households and businesses is undisputable. Indian banks are the dominant financial intermediaries in India and have made good progress during the global financial crisis. The improvement in the performance of has been achieved despite several hurdles. The Indian banking sector has undergone structural changes during the post liberalization era with the implementation of prudential norms for income recognition, provisioning and asset classification (Siraj and Pillani, 2011). The banking sector is ready to implement Basel III accord in the phase April 2013 and March 2018. Basel III aims to build robust capital base for banks and ensure sound liquidity and leverage ratios in order to weather away any banking crises in the future and ensure financial stability. In this paper, an effort has been made to compare the performance of public, private and foreign banks operating in India. Majority of the studies examine the productivity of public and private banks. A few research works have found which compare the all 3 sectors (public, private and foreign banks). Therefore the present study will humbly examine the productivity of all public, private and foreign banks. The present study is divided into six sections as introduction, literature review, research methodology, analysis and interpretation of data, conclusion and references. II. LITERATURE REVIEW Bhattacharya et al (1997) found that during the post liberalization era efficiency of public sector banks declined whereas that of private and foreign banks has improved overtime. Barr et al. (2002) viewed that “CAMEL rating criteria has become a concise and indispensable tool for examiners and regulators”. 
The results of Kumbhakar and Sarkar (2003) revealed that the performance of private sector banks, but not public sector banks, improved in response to deregulation measures. Nurazi and Evans (2005) investigated whether CAMEL(S) ratios can be used to predict bank failure in Indonesia. The results showed that logistic regression, in tandem with multiple discriminant analysis, could function as an early warning system for identifying bank failure and as a complement to on-site examination. Satish and Bharathi (2006) undertook a study for the year 2005-06 using CAMEL and suggested that ongoing developments in the Indian economy should improve the size and quality of service of banks. Grier (2007) recommended that management be considered the single most important element in the CAMEL rating system because it plays a substantial role in a bank's success; however, it is as difficult to measure as asset quality. Kumar and Gulati (2008) showed that the overall technical inefficiency of Indian commercial banks was due to both poor input utilization and failure to operate at the most productive scale size. Enhanced profitability and efficiency have become vital for the survival and growth of banks in the era of globalization and are significantly affected by the asset quality, capital adequacy and liquidity of the banks (Manoj, 2010). Siva and Natarajan (2011) empirically tested the applicability of CAMEL norms and their consequential impact on the performance of SBI Groups. The study concluded that annual CAMEL scanning helps a commercial bank diagnose its financial health and alerts the bank to take preventive steps for its sustainability. Chaudhry and Singh (2012) analyzed the impact of the financial reforms on the soundness of Indian banking through their impact on asset quality. The study identified the key factors as risk management, NPA levels, effective cost

 

JMASS 12 (1-2), 2016

Dr. Kamlesh 

management and financial inclusion. Anita and Shveta (2013) investigated, through a comparative analysis, public sector banks and private banks for the period from 2006-07 to 2010-11. The study concluded that, on average, there is no statistically significant difference in the financial performance of the public and private sector banks in India. Ruchi (2014) evaluated the performance of public sector banks in India using the CAMEL approach for a five-year period from 2009-13. The results concluded that there is a statistically significant difference between the CAMEL ratios of the Public Sector Banks in India, signifying that their overall performance differs. Subha and Vishal (2015) evaluated the financial performance of new age private sector banks operating in India for the period 2009-2014 and inferred that Kotak Mahindra Bank occupies the top position among new private sector banks. III. RESEARCH METHODOLOGY CAMEL is basically a ratio-based model for evaluating the performance of banks. The components of a bank's condition that are assessed are: (C)→ Capital adequacy (A)→ Asset quality (M)→ Management capability (E)→ Earnings (L)→ Liquidity (also called asset-liability management) Ratings are given from 1 (best) to 5 (worst) in each of the above categories. Objective of the Study The main objective of the study is to compare the performance of the public, private and foreign banks in India using the CAMEL model during 2002 to 2015. Hypothesis of the Study The present study tested the following hypotheses: • Null Hypothesis (H0): There is no significant difference in the performance of public, private and foreign banks in India as assessed by the CAMEL model during 2001-02 to 2014-15. • Alternate Hypothesis (H1): There is a significant difference in the performance of public, private and foreign banks in India as assessed by the CAMEL model during 2001-02 to 2014-15. 
Source of Data and Scope of the Study As far as the scope of the study is concerned, the performance analysis covers all 27 public sector banks, 22 private sector banks and 47 foreign banks functioning in India. The secondary data have been collected from journals, the IBA bulletin, statistics published by the Reserve Bank of India and annual reports published by the banks. IV. ANALYSIS AND INTERPRETATION The data are classified, tabulated and analyzed using the CAMEL model. CAPITAL ADEQUACY PARAMETER Capital adequacy is important for a bank to maintain depositors' confidence and to prevent the bank from going bankrupt. It reflects the overall financial condition of banks and also the ability of management to meet the need for additional capital. The following ratios measure capital adequacy: Capital Adequacy Ratio

Advance to Assets Ratio

Return on equity

Capital Adequacy Ratio (CRAR): The capital adequacy ratio (CAR) is a measure of the amount of a bank's core capital expressed as a percentage of its risk-weighted assets. It is defined as: CAR = (Tier I Capital + Tier II Capital) / Risk-weighted Assets. TIER I CAPITAL = (paid-up capital + statutory reserves + disclosed free reserves) - (equity investments in subsidiaries + intangible assets + current and b/f losses). TIER II CAPITAL = undisclosed reserves + general loss reserves + hybrid debt capital instruments and subordinated debts. The CRAR threshold (10% in this case, a common requirement for regulators conforming to the Basel accords) is set by the national banking regulator of each country. Under Basel III norms, being implemented in phases between April 2013 and March 2018, banks need to have a core capital ratio (Tier I capital) of 8% and a total capital adequacy ratio of 11.5%, against 9% now. The CRAR of public banks remained below the industry average, while private and foreign banks were above it. Comparing the mean value of the ratio for the period 2002-15, it has been noted

Comparative Performance Evaluation of Public, Private and Foreign Banks Through Camel Rating

that the CRAR is highest for foreign banks, which ranked first. Private banks ranked second, and the CRAR is lowest for public sector banks among the three sectors.

Advance to Assets Ratio (Adv/Ast): This ratio indicates a bank's aggressiveness in lending, which ultimately results in better profitability. In the case of advances to assets, public banks are in first position with the highest average of 54.7, followed by private banks, while foreign banks are in the bottommost position with the least average of 42.8. Return on Equity or Return on Net Worth (RONW): This is a measure of the profitability of a bank; here, PAT is expressed as a percentage of average net worth. The return on equity of public banks is the highest in comparison to private and foreign banks. On the basis of group averages of the sub-parameters of the capital adequacy parameter, public banks were in the top position, followed by private banks, with foreign banks in the last position due to their poor performance on return on equity. TABLE 1: CAPITAL ADEQUACY OF PUBLIC, PRIVATE AND FOREIGN BANKS (Percent)

Note: SCBs - Scheduled Commercial Banks. Source: Data compiled and calculated from Statistical Tables Relating to Banks in India, RBI, Mumbai, issues of relevant years.
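The capital adequacy sub-parameters above are simple ratio arithmetic. As a minimal sketch (the input figures below are illustrative, not taken from the RBI tables), the three ratios can be computed as:

```python
def capital_adequacy_ratio(tier1, tier2, risk_weighted_assets):
    """CRAR = (Tier I + Tier II) capital as a percentage of risk-weighted assets."""
    return 100.0 * (tier1 + tier2) / risk_weighted_assets

def advances_to_assets(total_advances, total_assets):
    """Adv/Ast: aggressiveness in lending, as a percentage."""
    return 100.0 * total_advances / total_assets

def return_on_net_worth(profit_after_tax, average_net_worth):
    """RONW: PAT as a percentage of average net worth."""
    return 100.0 * profit_after_tax / average_net_worth

# Illustrative figures only (e.g. in crore rupees), not actual bank data.
print(capital_adequacy_ratio(tier1=8000, tier2=3000, risk_weighted_assets=100000))  # 11.0
```

A bank with Tier I capital of 8,000, Tier II capital of 3,000 and risk-weighted assets of 100,000 would thus report a CRAR of 11%, comfortably above the 9% regulatory floor mentioned above.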


ASSETS QUALITY The quality of assets is an important parameter to gauge the strength of a bank. The prime motive behind measuring asset quality is to ascertain the component of non-performing assets as a percentage of total assets. The ratios necessary to assess asset quality are: Net NPAs to Total Assets

Net NPAs to Net Advances

Percentage Change in NPAs



Net NPAs to Total Assets (NNPAs/TA): This ratio discloses the efficiency of a bank in assessing credit risk and, to an extent, recovering debts. • Net NPAs to Net Advances (NNPAs/NA): This is the most standard measure of asset quality, measuring net non-performing assets as a percentage of net advances. • Percentage Change in NPAs: This measure tracks the movement in net NPAs over the previous year; the higher the reduction in the net NPA level, the better for the bank. Foreign banks were in the top position with an average Net NPA/Advances of 1.1, followed by public sector banks (2), while private banks were in the last position with an average of 2.2. TABLE 2 : ASSETS QUALITY OF PUBLIC, PRIVATE AND FOREIGN BANKS (Percent)

Note: SCBs-Scheduled Commercial Banks, SBGs- State Bank of India & its associates, NBs- Nationalized Banks, OPSBs-Old Private Sector Banks,NPSBs- New Private Sector Banks

Source : Data compiled & calculated from Statistical Table Relating to Banks in India, RBI, Mumbai, Issues of relevant years.


In the case of Net NPA/TA, foreign banks were again in the top position with the least average of 0.5, followed by private banks (0.9), while public banks (1.2) were in the last position. Private banks were in the first position in percentage change in NPAs with an average of 15.4, followed by foreign banks (15.6), while public banks (16.2) stood last. On the basis of group averages of the sub-parameters of asset quality, foreign banks were in the top position with a group average of 1.3, followed by private banks (1.7), with public banks (3) positioned last. MANAGEMENT EFFICIENCY Management efficiency is another important element of the CAMEL model. The ratios in this segment involve subjective analysis to measure the efficiency and effectiveness of management. The ratios used to evaluate management efficiency are described as: Total Advances to Total Deposits


Profit per Employee

Business per Employee

Total Advances to Total Deposits (TA/TD): This ratio measures the efficiency and ability of the bank's management in converting the deposits available with the bank (excluding other funds like equity capital, etc.) into high-earning advances. Profit per Employee (PPE): This shows the surplus earned per employee. It is obtained by dividing the profit after tax earned by the bank by the total number of employees. Business per Employee (BPE): Business per employee shows the productivity of a bank's human force. It is used as a tool to measure the efficiency of the employees of a bank in generating business for the bank.
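These three management efficiency sub-parameters can be sketched directly from their definitions; the convention that business equals deposits plus advances is the usual one, and all example figures are hypothetical:

```python
def total_advances_to_total_deposits(total_advances, total_deposits):
    """TA/TD: share of deposits converted into advances, as a percentage."""
    return 100.0 * total_advances / total_deposits

def profit_per_employee(profit_after_tax, num_employees):
    """PPE: surplus earned per employee."""
    return profit_after_tax / num_employees

def business_per_employee(total_deposits, total_advances, num_employees):
    """BPE: business (deposits + advances) generated per employee."""
    return (total_deposits + total_advances) / num_employees

# Hypothetical figures for one bank group.
print(total_advances_to_total_deposits(70.0, 100.0))   # 70.0
print(business_per_employee(100.0, 70.0, 10))          # 17.0
```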

On the basis of group averages of the sub-parameters, foreign banks were in the topmost position, followed by private banks, with public banks positioned last due to their poor performance in all three sub-parameters of management (TA/TD, PPE and BPE). TABLE 3: MANAGEMENT EFFICIENCY OF PUBLIC, PRIVATE AND FOREIGN BANKS (%)

Source: Data compiled and calculated from Statistical Tables Relating to Banks in India, RBI, Mumbai, issues of relevant years. EARNING QUALITY The quality of earnings is a very important criterion that determines the ability of a bank to earn consistently. It basically determines the profitability of a bank and explains the sustainability and growth of its earnings in future. The following ratios explain the quality of income generation: Net interest income/Total assets

Burden to working fund

Operating profit/Total assets

Return on Assets

Net Interest Income to Total Asset: Net interest income is defined as the difference between interest earned and interest expended as a proportion of average total assets. It is the ratio between spread and total assets. This ratio shows how much a


bank can earn for every rupee of investment made in assets; the higher the ratio, the better the performance of the bank. Burden to working fund: The burden to working funds ratio is calculated as non-interest expenditure less non-interest income, divided by working funds. Operating profit to total assets: This ratio indicates how much a bank can earn from its operations, after meeting operating expenses, for every rupee of investment made in assets. It is arrived at by dividing operating profit by total assets. Better utilization of funds results in higher operating profit; the higher the ratio, the better the performance of the bank. Return on Assets: This is the ratio of net profit after tax to total assets. A higher return on assets means greater returns earned on the assets deployed by the bank. The public sector banks score the first position in burden to working funds while positioned last in the remaining three ratios. On the basis of group averages of the sub-parameters, foreign banks were in the topmost position, followed by private banks, with public banks positioned last due to their poor performance in the three sub-parameters of earning quality (OP/TA, NII/TA and return on assets). TABLE 4 : EARNING QUALITY OF PUBLIC, PRIVATE AND FOREIGN BANKS (Percent)

Source: Data compiled and calculated from Statistical Tables Relating to Banks in India, RBI, Mumbai, issues of relevant years. LIQUIDITY RATIO A bank has to take proper care to hedge liquidity risk while ensuring that a good percentage of funds is invested in high-return-generating securities, so that it is in a position to generate profit while providing liquidity to depositors. The following ratios are used to measure liquidity: Cash deposit ratio

Investment to deposit

Total Investments to Total Assets

Cash deposit ratio: This ratio captures the liquidity position by considering cash in hand, balances with the RBI in current accounts, balances with banks in India, and money at call and short notice against total deposits. Investment to deposit: This ratio measures the ability of the bank in the form of investment; it explains the proportion of deposits placed in investments. Total Investments to Total Assets (TI/TA): It indicates the extent of deployment of assets in investments as against advances. On the basis of group averages of the sub-parameters, foreign banks were in the topmost position, followed by private banks, with public banks positioned last due to their poor performance in two sub-parameters of liquidity (cash deposit ratio and investment to deposit ratio).


TABLE 5: LIQUIDITY PARAMETER OF PUBLIC, PRIVATE AND FOREIGN BANKS (Per cent)

Source: Data compiled and calculated from Statistical Tables Relating to Banks in India, RBI, Mumbai, issues of relevant years. OVERALL RANKING The CAMEL model is used to rank the banks according to their performance. It is clear from the table that foreign banks are ranked in the top position with a composite average of 1.4, followed by private banks (2), with public banks in the bottommost position. So the null hypothesis is rejected: there is a significant difference in the performance of public, private and foreign banks in India as assessed by the CAMEL model during 2001-02 to 2014-15. Table 6: Overall performance of Public, Private and Foreign Banks
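The composite ranking works by averaging each group's rank across the five CAMEL parameters. Since Table 6 itself is not reproduced here, the per-parameter ranks below are hypothetical, chosen only so that the foreign-bank composite works out to the 1.4 reported in the text:

```python
# Hypothetical per-parameter ranks (1 = best) for the three bank groups;
# C = capital adequacy, A = asset quality, M = management, E = earnings, L = liquidity.
ranks = {
    "public":  {"C": 1, "A": 3, "M": 3, "E": 3, "L": 3},
    "private": {"C": 2, "A": 2, "M": 2, "E": 2, "L": 2},
    "foreign": {"C": 3, "A": 1, "M": 1, "E": 1, "L": 1},
}

# Composite = simple average of the five parameter ranks.
composite = {group: sum(r.values()) / len(r) for group, r in ranks.items()}
ranking = sorted(composite, key=composite.get)  # best (lowest composite) first
print(composite["foreign"], ranking)  # 1.4 ['foreign', 'private', 'public']
```

With these illustrative ranks, foreign banks come first on the composite (1.4), private banks second (2.0) and public banks last, matching the ordering the study reports.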

V. CONCLUSION Due to radical changes in the banking sector in recent years, central banks all around the world have improved their supervision quality and techniques. Bank supervisory authorities use the CAMEL model to rate financial institutions, including commercial banks, according to five factors. The current study compares the performance of public, private and foreign banks in India using the CAMEL model during 2001-02 to 2014-15. The study revealed that: • Foreign banks dominate in all the parameters of the CAMEL model except the capital adequacy parameter. • Private sector banks ranked second; regarding net NPA on an average basis, however, their performance is equal to that of public sector banks. • Public sector banks score the first position on the capital adequacy parameter but lag in all four remaining parameters of the CAMEL rating. Commercial banks dominate the sector, comprising more than 35% of the financial system's assets. The sound financial health of banks is the guarantee to their depositors, shareholders, employees and the whole economy as well. REFERENCES 1. Bhattacharya, L. (1997). The Impact of Liberalisation on Productive Efficiency of Indian Commercial Banks. European Journal of Operational Research, Vol. 98, pp. 332-345. 2. Chaudhry, Sahila and Singh, Sultan (2012). Impact of Reforms on the Asset Quality in Indian Banking. International Journal of Multidisciplinary, Vol. 5(2), pp. 17-24. 3. Grier, Waymond A. (2007). Credit Analysis of Financial Institutions, Euromoney Institutional Investor PLC, United Kingdom.


4. Gupta, Ruchi (2014). An Analysis of Indian Public Sector Banks Using Camel Approach. IOSR Journal of Business and Management (IOSR-JBM), Vol. 16(1), pp. 94-102.
5. Kumar, S. and Gulati, R. (2008). An Examination of Technical, Pure Technical and Scale Efficiencies of Indian Public Sector Banks using DEA. European Journal of Business and Economics, Vol. 1(2), pp. 33-69.
6. Kumbhakar, S. C. and Sarkar, S. (2003). Deregulation, Ownership and Productivity Growth in the Banking Industry: Evidence from India. Journal of Money, Credit and Banking, Vol. 35, pp. 403-424.
7. Makkar, Anita and Singh, Shveta (2013). Analysis of the Financial Performance of Indian Commercial Banks: A Comparative Study. Indian Journal of Finance, Vol. 7(5), pp. 1-10.
8. Manoj, P. K. (2010). Financial Soundness of All OPBs in India and Benchmarking Kerala Based OPB. Journal of Scientific Research, Vol. 11, pp. 132-149.
9. Nurazi, Ridwan and Evans, Michael (2005). An Indonesian Study of the Use of CAMEL(S) Ratios as Predictors of Bank Failure. Journal of Economic and Social Policy, Vol. 10(1), pp. 1-23.
10. Satish, D. and Bharathi, Y. Bala (2006). Indian Banking Coming of Age – Performance Snapshot 2005-2006. Chartered Financial Analyst, Special Issue, pp. 6-30.
11. Siraj, K. K. and Pillani, P. S. (2011). Asset Quality and Profitability of Indian Scheduled Commercial Banks during Global Financial Crisis. International Research Journal of Finance and Economics, Vol. 80, pp. 55-65.
12. Siva, S. and Natarajan, P. (2011). CAMEL Rating Scanning (CRS) of SBI Groups. Journal of Banking Financial Services and Insurance Research, Vol. 1(7), pp. 1-17.
13. Subha, M. V. and Kumar, R. Vishal (2015). Health Check Up of New Private Sector Banks in India Using Camel Model. Serials Publications, Vol. 12(3), pp. 805-814.

Websites 1. Indian Banks’ Association (http://www.iba.org.in/). 2. Reserve Bank of India (http://dbie.rbi.org.in/DBIE/ dbie.rbi? site=home).


Journal of Mathematics & Systems Sciences (JMASS) Vol. 12, No. 1-2, 2016

ISSN : 0975-5454

A NEW APPROACH FOR DENOISING ULTRASONOGRAPHIC IMAGES USING DUAL TREE COMPLEX WAVELET TRANSFORM Anil Dudy*, Dr. Subham Gandhi** *Ph.D. Scholar, Baba Mastnath University, Asthal Bohar, Rohtak **Associate Professor, S.B.M.N. Engineering College, Asthal Bohar, Rohtak E-mail : [email protected], [email protected]

ABSTRACT Ultrasound imaging is a non-invasive, non-destructive and low-cost technique used for imaging organs and soft-tissue structures in the human body. Colour image processing systems are used for a variety of purposes, ranging from capturing scenes to processing imagery for feature extraction, and these systems often rely on filtering operations. Noise suppression is one of the most important concerns in digital image processing: medical images are corrupted by noise in their acquisition and transmission processes. Speckle has a negative impact on ultrasound images, as the texture does not reflect the local echogenicity of the underlying scatterers. Ultrasound images are very noisy; in addition to system noise, a significant noise source is the speckle phenomenon, created by complex interference of ultrasound echoes from reflectors spaced closer together than the ultrasound system's resolution limit. In this paper, an efficient method based on the Dual Tree Complex Wavelet Transform (DTCWT) is proposed to denoise ultrasound images. Testing shall be made on a set of medical images. It is proposed that results achieved with the DTCWT will be better than those of other existing methods such as the Discrete Wavelet Transform (DWT). Keywords : DTCWT, filter, medical, speckle, ultrasound

1. INTRODUCTION

Coherent imaging systems such as ultrasound suffer from speckle noise, creating images that appear inferior to those generated by other medical imaging modalities. Speckle is a random interference pattern formed by coherent radiation in a medium containing many sub-resolution scatterers. Speckle has a negative impact on ultrasound images, as the texture does not reflect the local echogenicity of the underlying scatterers. In medical image processing, image denoising has become an essential exercise for accurate diagnosis. The ultimate goal of any image denoising technique is to strike a compromise between noise suppression and preservation of image details, so that the imaging modality can provide the best possible information to the clinician for an accurate diagnosis. The main objective of image denoising techniques is to remove such noise while retaining the maximum possible signal information. Ultrasonic imaging is a widely used medical-imaging procedure. It is economical, comparatively safe, and adaptable. One of its main shortcomings is the poor quality of images, which are affected by speckle noise. Speckle filtering of medical ultrasound images represents a critical pre-processing step, providing clinicians with enhanced diagnostic ability. The adaptive weighted median filter can reduce speckle, but it does not properly preserve useful detail such as the edges of the image. Conventional linear filtering methods based on first-order statistical models

Fig. 1: Tree diagram of the dual-tree complex wavelet transform
are not optimal tools for reducing speckle noise: they tend to suppress the noise at the expense of overly smoothing the image details [1], [2]. Over time, various speckle reduction techniques have been developed [3], [4]. Since speckle cannot be directly correlated with specific reflectors, or cells, in the body, it is necessary to analyse an ultrasound system to understand the origins of speckle. In this regard, the dual-tree complex wavelet transform (DTCWT) is considered the most promising solution. In this paper, a study of the DTCWT technique has been carried out.

2. TOOLS AND METHODOLOGY Dual-tree complex wavelet transform: The DTCWT is a relatively recent enhancement to the discrete wavelet transform (DWT) with important additional properties: it is nearly shift-invariant and directionally selective in two and higher dimensions. It achieves this with a redundancy factor of only 2^d for d-dimensional signals, which is substantially lower than that of the undecimated DWT. The multidimensional (M-D) dual-tree CWT is non-separable but is based on a computationally efficient, separable filter bank (FB). The DTCWT calculates the complex transform of a signal using two separate DWT decompositions (tree a and tree b), as shown in Fig. 1. If the filters used in one tree are specifically designed


differently from those in the other, it is possible for one DWT to produce the real coefficients and the other the imaginary ones. The redundancy of the two trees provides extra information for analysis, at the expense of extra computational power. It also provides approximate shift-invariance (unlike the DWT) and allows perfect reconstruction of the signal. In this paper, a method based on the Dual Tree Complex Wavelet Transform (DTCWT) is proposed to denoise ultrasound images. MATLAB, a very powerful toolbox with many additional functions, is used for the ultrasound image denoising. Testing shall be made on a set of medical images. An algorithm for the proposed method is given in Table 1.

TABLE 1 : ALGORITHM OF THE PROPOSED METHOD
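The algorithm of Table 1 is not reproduced in this excerpt. As a minimal, self-contained stand-in for the wavelet-shrinkage step that both the DWT and DTCWT pipelines share, here is a one-level 2-D Haar transform with soft universal thresholding in NumPy. This is a plain separable DWT baseline, not the dual-tree transform itself; in practice the authors' MATLAB dual-tree implementation (or a dedicated DTCWT library) would replace the transform.

```python
import numpy as np

SQ2 = np.sqrt(2.0)

def haar2d(x):
    # One level of the orthonormal 2-D Haar DWT (rows, then columns).
    lo, hi = (x[0::2] + x[1::2]) / SQ2, (x[0::2] - x[1::2]) / SQ2
    ll, lh = (lo[:, 0::2] + lo[:, 1::2]) / SQ2, (lo[:, 0::2] - lo[:, 1::2]) / SQ2
    hl, hh = (hi[:, 0::2] + hi[:, 1::2]) / SQ2, (hi[:, 0::2] - hi[:, 1::2]) / SQ2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d (perfect reconstruction).
    lo = np.empty((ll.shape[0], 2 * ll.shape[1]))
    hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = (ll + lh) / SQ2, (ll - lh) / SQ2
    hi[:, 0::2], hi[:, 1::2] = (hl + hh) / SQ2, (hl - hh) / SQ2
    x = np.empty((2 * lo.shape[0], lo.shape[1]))
    x[0::2], x[1::2] = (lo + hi) / SQ2, (lo - hi) / SQ2
    return x

def soft(c, t):
    # Soft thresholding: shrink coefficients toward zero by t.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(image, sigma):
    # Threshold only the detail sub-bands; keep the approximation intact.
    ll, lh, hl, hh = haar2d(image)
    t = sigma * np.sqrt(2.0 * np.log(image.size))  # universal threshold
    return ihaar2d(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

The design choice mirrored here is the one the paper's conclusion alludes to: shrinkage happens in the detail sub-bands, so a shift-invariant, orientation-selective transform such as the DTCWT yields fewer artifacts than this decimated DWT at the same thresholding rule.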

3. CONCLUSION For the denoising of ultrasound images, the wavelet transform has proved a more effective tool than the Fourier transform. The discrete wavelet transform, however, lacks the shift-invariance property, and in multiple dimensions it does a poor job of distinguishing orientations, which is important in image processing. For these reasons, to obtain improvements in such applications, the separable DWT is replaced by the dual-tree complex DWT, implemented here using self-built functions. This method not only reduces the speckle noise but also preserves the detailed features of the image. The Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE) parameters shall be used to compare the performance of the DWT and DT-CWT.
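The two comparison metrics named above have standard definitions; a small sketch, assuming 8-bit images with peak value 255:

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

A better-denoised image has lower MSE against the clean reference and correspondingly higher PSNR, which is how the DWT and DT-CWT results would be ranked.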

REFERENCES
1. C. B. Burckhardt, "Speckle in ultrasound B-mode scans," IEEE Trans. Sonics Ultrason., vol. 25(1), pp. 273-278, 1978.
2. R. F. Wagner, S. W. Smith, J. M. Sandrik, and H. Lopez, "Statistics of speckle in ultrasound B-scans," IEEE Trans. Sonics Ultrason., vol. 30(7), pp. 156-163, 1983.
3. Sindhu Ramachandran S. and Manoj G. Nair, "Ultrasound Speckle Reduction using Nonlinear Gaussian filters in Laplacian Pyramid domain," 3rd International Congress on Image and Signal Processing, vol. 2, pp. 771-776, 2010.
4. Parisa Gifani, Hamid Behnam, Ahmad Shalbaf, and Zahra, "Noise Reduction of Echocardiography Images Using Isomap Algorithm," 1st Middle East Conference on Biomedical Engineering, pp. 150-153, 2011.
5. Y. Yu and S. T. Acton, "Speckle reducing anisotropic diffusion," IEEE Trans. Image Process., vol. 11, pp. 1260-1270, 2002.
6. Y. Yu, J. A. Molloy and S. T. Acton, "Three-dimensional speckle reducing anisotropic diffusion," in Proc. 37th Asilomar Conf. Signals, Systems and Computers, vol. 2, pp. 1987-1991, 2003.
7. Deka and P. K. Bora, "Despeckling of Medical Ultrasound Images using Sparse Representation," International Conference on Signal Processing and Communication, pp. 1-5, 2010.
8. Di Lai, Navalgund Rao, Chung-hui Kuo, Shweta Bhatt and Vikram Dogra, "An Ultrasound Image Despeckling Method Using Independent Component Analysis," IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 658-661, 2009.
9. Sheng Yan, Jianping Yuan, Minggang Liu, and Chaohuan Hou, "Speckle Noise Reduction of Ultrasound Images Based on an Undecimated Wavelet Packet Transform Domain Non-homomorphic," BMEI, pp. 1-5, 2009.
10. M. I. H. Bhuiyan, M. O. Ahmad and M. N. S. Swamy, "Spatially adaptive thresholding in wavelet domain for despeckling of ultrasound images," Image Processing, vol. 3, pp. 147-162, 2009.
11. Arash Vosoughi and Mohammad B. Shamsollahi, "Speckle Noise Reduction of Ultrasound Images Using M-band Wavelet Transform and Wiener Filter in a Homomorphic Framework," International Conference on Biomedical and Informatics, vol. 2, pp. 510-515, 2008.
12. R. K. Mukkavilli, J. S. Sahambi and P. K. Bora, "Modified Homomorphic Wavelet Based Despeckling of Medical Ultrasound Images," Canadian Conference on Electrical and Computer Engineering, pp. 887-890, 2003.
13. S. Kother Mohideen, S. Arumuga Perumal and M. Mohamed Sathik, "Image De-noising using Discrete Wavelet Transform," IJCSNS, International Journal of Computer Science and Network Security, vol. 8, no. 1, pp. 213-215, Jan. 2008.
14. Ricardo G. Dantas and Eduardo T. Costa, "Ultrasound Speckle Reduction Using Modified Gabor Filters," IEEE Trans. Ultrasonics, Ferroelectrics and Frequency Control, vol. 54, pp. 530-538, 2007.
15. B. Zhang, J. M. Fadili, and J.-L. Starck, "Wavelets, Ridgelets and Curvelets for Poisson Noise Removal," IEEE Transactions on Image Processing, 2007.
16. P.-C. Li and M.-J. Chen, "Strain compounding: A new approach for speckle reduction," IEEE Trans. Ultrason. Ferroelect. Freq. Contr., vol. 49, pp. 39-46, Jan. 2002.
17. Mandeep Singh, Sukhwinder Singh and Savita Kansal, "Comparative analysis of spatial filters for speckle reduction in Ultrasound Images," 2009 World Congress on Computer Science and Information Engineering.


ECONOMIC PRODUCTION QUANTITY MODELS FOR TWO-ECHELON SUPPLY CHAIN WITH REWORK OF A DEFECTIVE ITEM UNDER CAP-AND-TRADE POLICY W. Ritha and S. Poongodisathiya Department of Mathematics, Holy Cross College (Autonomous), Tiruchirappalli – 620002, Tamilnadu, India. E-mail : [email protected]

ABSTRACT In recent years, regulating carbon emissions has become more challenging. Carbon tax and cap-and-trade are two emission regulations used to curb carbon emissions. In this paper, we consider a cap-and-trade policy and develop an economic production quantity (EPQ) model for imperfect items arising from faulty production, transportation and storage conditions. In this model, a firm has two separate divisions; defective items are returned by the buyer and must be screened out by the vendor through a 100% screening process, which finds that some products are reworkable besides the majority of good items, with carbon emissions considered from production, transportation and inventory holding. A numerical study is given to illustrate the solution procedure, and sensitivity analyses are performed to examine the impact of the various parameters on the total cost and total emissions. Keywords: Inventory; production; cap-and-trade policy; EPQ; vendor-buyer model; imperfect quality.

Introduction: Nowadays, economic conditions and environmental regulations force organizations to limit greenhouse gas emissions. In many production-inventory systems, it is observed that two types of products, namely useful products and harmful products, are produced. In any production process, waste is generated, and significant quantities of fossil fuels are consumed in transporting and producing within the production system. The fact that human activities are responsible for the increase in global temperature and environmental pollution has driven governments and different regulatory bodies to take initiatives and set realistic targets to curb emissions. To materialize these targets, regulatory bodies have imposed several laws and initiated different carbon policies. The existing carbon policies are the carbon cost/tax policy, the carbon cap-and-trade policy and the strict carbon cap policy. The biggest international scheme for trading greenhouse gases (GHG) is the European Union Emissions Trading System (EU-ETS), which follows the cap-and-trade concept, in which companies get an upper limit on greenhouse gas emissions and are penalized beyond this limit. Hence, only the firms that manage their emissions well can sustain themselves in the market under a cap-and-trade policy. Initially, all environmental initiatives were organization-centric. With increasing environmental awareness among the public and the implementation of new regulations in some countries, there is increased pressure on organizations to improve environmental performance. The integration of environmental thinking into supply chain management, including product design, production processes, transportation, delivery of the product and end-of-life management of the product after its use, can be defined as green supply chain management (GSCM). 
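The paper's full model is developed later (beyond this excerpt). For orientation, here is a minimal sketch of the classic EPQ baseline that such models extend, with a simple linear emission term priced under cap-and-trade: emissions above the cap are bought at the carbon price, emissions below it are sold. All symbols and figures are hypothetical, not the paper's notation or data.

```python
import math

def epq(D, P, K, h):
    """Classic EPQ lot size: demand rate D, production rate P (P > D),
    setup cost K, holding cost h per unit per unit time."""
    return math.sqrt(2.0 * D * K / (h * (1.0 - D / P)))

def total_cost_cap_and_trade(Q, D, P, K, h, e_setup, e_hold, cap, carbon_price):
    """Operational cost plus net carbon-trading cost (negative when under the cap)."""
    setups = D / Q                       # setups per unit time
    avg_inv = Q * (1.0 - D / P) / 2.0    # average inventory level
    operational = K * setups + h * avg_inv
    emissions = e_setup * setups + e_hold * avg_inv  # simple linear emission model
    return operational + carbon_price * (emissions - cap)

Q_star = epq(D=1000, P=4000, K=100, h=2)
print(round(Q_star, 1))  # 365.1
```

With a positive carbon price, the emission term shifts the cost-minimizing lot size away from the classical Q*, which is precisely the kind of trade-off the sensitivity analyses mentioned in the abstract explore.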
The rest of the paper is organized as follows: Section 2 presents a review of related literature, Section 3 defines assumptions and notations, Section 4 develops the mathematical model and solution techniques, Section 5 provides a numerical example, and finally we conclude the paper in Section 6.

2. Literature review: Many useful modifications of the classical economic order quantity (EOQ) and economic production quantity (EPQ) models are

available in the literature. Porteus (1986) and Rosenblatt and Lee (1986) were the pioneers in incorporating the concept of imperfect items in the production process for inventory management. Zhang and Gerchak (1990) considered a joint lot-sizing and inspection policy where a random proportion of the lot size was defective. Salameh and Jaber (2000) assumed that defective items could be sold at a discounted price in a single batch at the end of a 100% screening process, and found that the economic lot size tends to increase as the average percentage of imperfect-quality items increases. Goyal and Cardenas-Barron

JMASS 12 (1-2), 2016

W. Ritha and S. Poongodisathiya

(2002) made some changes in the model of Salameh and Jaber (2000) to calculate order quantity more accurately. In another paper, Goyal, Huang and Chen (2003) developed optimal integrated vendor-buyer inventory policy for imperfect quality items to minimize the total joint annual cost incurred by the vendor-buyer. The cost minimization optimal policy was considered by Tsou, Hejazi and Barzoki (2012) when the produced item of imperfect production system with its being perfect, imperfect or defective items. Recently, Mukhopadhyay and Goswami(2014) considered EPQ model with imperfect items where imperfect items were reworked at a cost and learning is setup was considered. An integrated model of Mehmood Khan, Jaber and Rahim Ahmad (2013) described errors in quality inspection and learning in production of EPQ model. Managing carbon emissions in inventory handling under the carbon emission trading policy has been discussed by Hua, Cheng and Wang (2011). Bouchery et al. (2012) redesigned the classical EOQ model taking sustainability criteria into cap and carbon price and mechanism. Hoen et al. (2012 investigated the effect of different types of regulations with respect to emission on the selected transport mode. Abdallah, Diabat and Simchi-Levi (2010) to determine lot size for production and shipment for raw materials and finished products taking carbon emission constraints into consideration. Absi et al. (2012) developed multi-sourcing lot-sizing problems with periodic, cumulative, global and rolling carbon emission constraints. Jaber et al. (2013) developed a model to determine the optimal manufacturer’s production rate and vendor-buyer coordination multiplier, while considering carbon tax, emission penalty, emission allowance trading

and

combination of carbon tax and emission penalty. Benjaafar, Li and Daskin(2013) considered carbon tax, strict emission cap, cap and trade, carbon offers while determining the optimal production, inventory, back order quantity and amount of carbon traded to minimize the total supply chain cost for a single firm. Hammami, Nouira and Frein(2015) describes carbon cap and carbon tax policies with lead time constraint in finite planning horizon for a supply chain. Ghosh, J.K.Jha and S.P.Sarmah(2016) determining a two-echelon supply chain with different carbon policies. While Dong et al. (2016) developed a profit maximization multi-stage supply chain considering cap and trade policy and stochastic demand. In this paper, we develop a two-echelon supply chain of EPQ model to determine optimal order quantity and number of shipments under cap and trade policy. We assumed that vehicle moves to and fro between the buyer and vendor. Imperfect products are 100% screened by the vendor which results some products are rework able. We have assumed the vendor and the buyer are separate entities of a same firm and they work to obtain a common goal tries to reduce total cost and total emission for the entire operations. 3.

3. Assumptions and Notation

Notation:
D     Demand rate at the buyer
S     The vendor's set-up cost per production set-up
P     The vendor's production rate
A     The buyer's ordering cost per order
d     Distance between the vendor and the buyer
v     Velocity of the vehicle
p     Production cost per unit item produced
γ     Percentage of defective items supplied by the vendor
x     Screening rate
c_p   Unit purchase cost
c_s   Screening cost per unit
c_r   Rework cost per unit for defective items
p_2   (Random) fraction of reworkable imperfect items
h_b   Holding cost for the buyer per unit item per unit time
h_v   Holding cost for the vendor per unit item per unit time
t_0   Transportation cost per unit time when the vehicle is empty
t_Q   Transportation cost per unit item per unit time when the vehicle is loaded
π     Carbon emission per unit item due to production at the vendor
τ_0   Carbon emission per unit time due to transportation when the vehicle is empty
τ_Q   Carbon emission per unit item per unit time due to transportation when the vehicle is loaded
α_b   Carbon emission per unit item per unit time due to inventory at the buyer
α_v   Carbon emission per unit item per unit time due to inventory at the vendor
r     Buying/selling price per unit of carbon
Ĉ     Cap (maximum limit) on carbon emission per unit time
Q     The buyer's ordering quantity per order

Decision variables:
m     Number of shipments from the vendor to the buyer per production cycle (a positive integer)

Assumptions:
1) We consider a two-echelon supply chain comprising a single vendor and a single buyer who deal with a single product.
2) The buyer orders a lot of size Q, and the vendor produces mQ units with a finite production rate P (P > D) in one production set-up but ships them to the buyer in quantity Q, m times.
3) Demand is deterministic.
4) Defective products are returned by the buyer. Imperfect products are due to faulty production, transportation and storage conditions; they are screened out by the vendor through 100% screening, as a result of which some products are reworkable.
5) The screening process and demand proceed simultaneously, and the screening rate x is higher than the demand rate, i.e. x > D.
6) Carbon emission is considered from production, inventory holding and transportation.

4. Mathematical models and solution techniques

A mathematical model, named the cap-and-trade model, is developed for the two-echelon supply chain. Under the cap-and-trade policy, firms are allowed to emit carbon up to a specified level, called the cap. If a firm exceeds the cap during its operations, it has to buy carbon credits from other firms; if it emits less than the cap, it earns carbon credits that can be sold to other firms.

When the buyer orders a lot of size Q, the vendor begins production at the constant rate P and produces the item in a lot of size mQ in each production cycle of length mQ/D. The emissions per unit time are found as follows:

Carbon emission due to production per unit time = πD

Carbon emission due to inventory at the buyer and the vendor per unit time
= m α_b [Q(1 − γ)/2 + γQD/(xm)] + α_v (Q/2)[m(1 − D/P) − 1 + 2D/P]

Carbon emission due to transportation per unit time = τ_Q dD/v + τ_0 dD/(vQ)

The total emission per unit time is therefore

TE(Q, m) = πD + m α_b [Q(1 − γ)/2 + γQD/(xm)] + α_v (Q/2)[m(1 − D/P) − 1 + 2D/P] + τ_Q dD/v + τ_0 dD/(vQ) ........... (1)
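Equation (1) is straightforward to evaluate numerically. The Python sketch below is an illustration (it is not part of the paper); the parameter defaults are taken from the numerical example given later in the paper, and the screening rate x, which that example does not list, is assumed very large so that the screening term γQD/(xm) is negligible:

```python
def total_emission(Q, m, D=600, P=2000, d=100, v=50, pi=3.0,
                   gamma=0.02, alpha_b=0.80, alpha_v=0.80,
                   tau0=0.03, tauQ=0.004, x=1e12):
    """Total carbon emission per unit time, equation (1)."""
    production = pi * D
    inventory = (m * alpha_b * (Q * (1 - gamma) / 2 + gamma * Q * D / (x * m))
                 + alpha_v * (Q / 2) * (m * (1 - D / P) - 1 + 2 * D / P))
    transport = tauQ * d * D / v + tau0 * d * D / (v * Q)
    return production + inventory + transport

# At the optimum reported in the numerical example (Q = 174.787, m = 1.2337)
# this reproduces the paper's TE of about 1921.95 ton per unit time.
print(total_emission(174.787, 1.2337))
```

That the reported emission figure comes out of (1) under this large-x assumption suggests the screening term was also negligible in the authors' own computation.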

The cost components per unit time are the following:

The vendor's production set-up and production costs = SD/(mQ) + pD
The vendor's inventory holding cost = h_v (Q/2)[m(1 − D/P) − 1 + 2D/P]
The buyer's ordering and inventory holding costs = AD/Q + m h_b [Q(1 − γ)/2 + γQD/(xm)]
Screening cost = c_s D
Purchasing cost = c_p D
Rework cost = c_r E[p_2] D
Transportation cost = t_Q dD/v + t_0 dD/(vQ)
Total emission cost = r [TE(Q, m) − Ĉ] = r [πD + m α_b {Q(1 − γ)/2 + γQD/(xm)} + α_v (Q/2){m(1 − D/P) − 1 + 2D/P} + τ_Q dD/v + τ_0 dD/(vQ) − Ĉ]

The objective function to minimize the total cost of the supply chain can be formulated as follows:

TC(Q, m) = AD/Q + m h_b [Q(1 − γ)/2 + γQD/(xm)] + SD/(mQ) + pD + h_v (Q/2)[m(1 − D/P) − 1 + 2D/P] + c_s D + c_p D + c_r E[p_2] D + t_Q dD/v + t_0 dD/(vQ) + r [πD + m α_b {Q(1 − γ)/2 + γQD/(xm)} + α_v (Q/2){m(1 − D/P) − 1 + 2D/P} + τ_Q dD/v + τ_0 dD/(vQ) − Ĉ] .................. (2)
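Equation (2) can be checked numerically. The sketch below is an illustration rather than the authors' code: parameter defaults come from the numerical example given later in the paper, the screening rate x (not listed there) is taken to be very large so the screening term vanishes, and a coarse scan over Q then locates the cost-minimizing order quantity for a fixed m:

```python
def total_cost(Q, m, D=600, P=2000, A=200, S=1500, d=100, v=50, p=200,
               pi=3.0, gamma=0.02, cs=0.5, cp=25, cr=1.5, Ep2=0.00375,
               hb=5, hv=2, t0=10, tQ=5, alpha_b=0.80, alpha_v=0.80,
               tau0=0.03, tauQ=0.004, r=37, cap=2000, x=1e12):
    """Total cost per unit time, equation (2)."""
    buyer_inv = Q * (1 - gamma) / 2 + gamma * Q * D / (x * m)
    vendor_inv = (Q / 2) * (m * (1 - D / P) - 1 + 2 * D / P)
    emission = (pi * D + m * alpha_b * buyer_inv + alpha_v * vendor_inv
                + tauQ * d * D / v + tau0 * d * D / (v * Q))
    return (A * D / Q + m * hb * buyer_inv + S * D / (m * Q) + p * D
            + hv * vendor_inv + cs * D + cp * D + cr * Ep2 * D
            + tQ * d * D / v + t0 * d * D / (v * Q)
            + r * (emission - cap))

# With m fixed at the paper's value 1.2337, an integer scan over Q
# locates the minimum near the reported optimum of about 175 units.
best_Q = min(range(50, 601), key=lambda Q: total_cost(Q, 1.2337))
print(best_Q)
```

The scan is a brute-force cross-check of the closed-form optimum derived next; only the terms in Q matter for the location of the minimum, so the constant cost terms do not affect it.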

Computation of the optimal order quantity: differentiating (2) with respect to Q for a fixed m and setting the derivative to zero gives

− AD/Q² + m h_b [(1 − γ)/2 + γD/(xm)] − SD/(mQ²) + (h_v/2)[m(1 − D/P) − 1 + 2D/P] − t_0 dD/(vQ²) + r [m α_b {(1 − γ)/2 + γD/(xm)} + (α_v/2){m(1 − D/P) − 1 + 2D/P} − τ_0 dD/(vQ²)] = 0

Therefore the optimal order quantity Q is

Q = √[ 2D(A + S/m + t_0 d/v + r τ_0 d/v) / { m(α_b r + h_b)[(1 − γ) + 2γD/(xm)] + (h_v + r α_v)[m(1 − D/P) − 1 + 2D/P] } ]

Ignoring the terms in (2) which are independent of Q and m, we get

TC(Q, m) = AD/Q + m h_b [Q(1 − γ)/2 + γQD/(xm)] + SD/(mQ) + h_v (Q/2)[m(1 − D/P) − 1 + 2D/P] + t_0 dD/(vQ) + r [m α_b {Q(1 − γ)/2 + γQD/(xm)} + α_v (Q/2){m(1 − D/P) − 1 + 2D/P} + τ_0 dD/(vQ)]

After substituting the optimal Q into the above equation, we get

TC(m) = √[ 2D(A + S/m + t_0 d/v + r τ_0 d/v) { m(α_b r + h_b)[(1 − γ) + 2γD/(xm)] + (h_v + r α_v)[m(1 − D/P) − 1 + 2D/P] } ]

Squaring the above expression, neglecting higher powers of 1/m and ignoring the terms independent of m, the function to be minimized reduces to

m (A + t_0 d/v + r τ_0 d/v)[(α_b r + h_b)(1 − γ) + (h_v + r α_v)(1 − D/P)] + (S/m)[(α_b r + h_b)(2γD/x) − (h_v + r α_v)(1 − 2D/P)]

Therefore the optimal number of shipments m is

m = √[ S {(α_b r + h_b)(2γD/x) − (h_v + r α_v)(1 − 2D/P)} / { (A + t_0 d/v + r τ_0 d/v)[(α_b r + h_b)(1 − γ) + (h_v + r α_v)(1 − D/P)] } ] .................. (3)
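The closed-form order quantity can be coded directly. The sketch below is an illustration, not the authors' code; defaults are the parameter values of the numerical example given next, and the screening rate x (which that example does not list) is taken to be very large:

```python
from math import sqrt

def Q_star(m, D=600, P=2000, A=200, S=1500, d=100, v=50, gamma=0.02,
           hb=5, hv=2, t0=10, alpha_b=0.80, alpha_v=0.80,
           tau0=0.03, r=37, x=1e12):
    """Closed-form optimal order quantity for a fixed number of shipments m."""
    num = 2 * D * (A + S / m + t0 * d / v + r * tau0 * d / v)
    den = (m * (alpha_b * r + hb) * ((1 - gamma) + 2 * gamma * D / (x * m))
           + (hv + r * alpha_v) * (m * (1 - D / P) - 1 + 2 * D / P))
    return sqrt(num / den)

# With m = 1.2337 (the paper's value) this gives roughly 174.8,
# matching the Q = 174.787 reported in the numerical example.
print(Q_star(1.2337))
```

Note that formula (3) for m requires a positive radicand to yield a real value; in practice, one can also simply evaluate the cost at the neighbouring integer values of m and keep the cheaper one.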

5. Numerical examples

To illustrate the proposed model and solution procedure, a two-stage supply chain is considered with the following data:

D = 600 units/year, P = 2000 units/year, A = $200/order, S = $1500/set-up, d = 100 km, v = 50 km/h, p = $200/unit, π = 3 ton/unit, γ = 0.02, c_s = 0.5, c_p = 25, c_r = 1.5, E[p_2] = 0.00375, h_b = $5/unit/year, h_v = $2/unit/year, t_0 = $10/h, t_Q = $5/unit/h, α_b = 0.80 ton/unit/year, α_v = 0.80 ton/unit/year, τ_0 = 0.03 ton/h, τ_Q = 0.004 ton/unit/h, r = $37/ton, Ĉ = 2000 ton/unit time.

Using the above parameters, the optimal order quantity is Q = 174.787 and the number of shipments is m = 1.2337. Subsequently, the values of TC and TE can be determined from equations (2) and (1), respectively: TC(Q, m) = 141,054.9639 and TE(Q, m) = 1921.95096.

6. Conclusion:
In this paper, we have considered an economic production quantity model for a two-echelon supply chain with rework of defective items under the cap-and-trade policy. Many organizations prefer the cap-and-trade policy, since it makes it easy to trade carbon in the market while encouraging firms to emit less. A numerical example is also presented to illustrate the optimization of the vendor-buyer model.

References:
1. Abdallah, T., A. Diabat, and D. Simchi-Levi. 2010. "A Carbon Sensitive Supply Chain Network Problem with Green Procurement." Proceedings of the 40th International Conference on Computers and Industrial Engineering (CIE), Awaji, Japan, July 25–28.
2. Absi, N., S. Dauzère-Pérès, S. Kedad-Sidhoum, B. Penz, and C. Rapine. 2012. "Lot Sizing with Carbon Emission Constraints." European Journal of Operational Research 227 (1): 55–61.
3. Ghosh, A., J. K. Jha, and S. P. Sarmah. 2016. "Optimizing a Two-echelon Serial Supply Chain with Different Carbon Policies." International Journal of Sustainable Engineering.
4. Mukhopadhyay, A., and A. Goswami. 2014. "Economic Production Quantity Models for Imperfect Items with Pollution Costs." Systems Science & Control Engineering: An Open Access Journal 2 (1): 368–378.
5. Benjaafar, S., Y. Li, and M. Daskin. 2013. "Carbon Footprint and the Management of Supply Chains: Insights from Simple Models." IEEE Transactions on Automation Science and Engineering 10 (1): 99–116.
6. Bouchery, Y., A. Ghaffari, Z. Jemai, and Y. Dallery. 2012. "Including Sustainability Criteria into Inventory Models." European Journal of Operational Research 222 (2): 229–240.
7. Goyal, S. K., and L. E. Cardenas-Barron. 2002. "Note on: Economic Production Quantity Model for Items with Imperfect Quality – A Practical Approach." International Journal of Production Economics 77: 85–87.
8. Goyal, S. K., C.-K. Huang, and K.-C. Chen. 2003. "A Simple Integrated Production Policy of an Imperfect Item for Vendor and Buyer." Production Planning & Control: The Management of Operations 14 (7): 596–602.
9. Diabat, A., and D. Simchi-Levi. 2009. "A Carbon-capped Supply Chain Network Problem." Proceedings of the IEEE International Conference on Industrial Engineering and Engineering Management, Hong Kong, December 8–11.
10. Dobos, I. 2005. "The Effects of Emission Trading on Production and Inventories in the Arrow-Karlin Model." International Journal of Production Economics 93–94: 301–308.
11. Dong, C., B. Shen, P. S. Chow, L. Yang, and C. T. Ng. 2016. "Sustainability Investment under Cap-and-Trade Regulation." Annals of Operations Research 240 (2): 509–531.
12. Khan, M., M. Y. Jaber, A. L. Guiffrida, and S. Zolfaghari. 2011. "A Review of the Extensions of a Modified EOQ Model for Imperfect Quality Items." International Journal of Production Economics 132: 1–12.
13. Hammami, R., I. Nouira, and Y. Frein. 2015. "Carbon Emissions in a Multi-echelon Production-Inventory Model with Lead Time Constraints." International Journal of Production Economics 164: 292–307.
14. Hoen, K. M. R., T. Tan, J. C. Fransoo, and G. J. van Houtum. 2014. "Effect of Carbon Emission Regulations on Transport Mode Selection under Stochastic Demand." Flexible Services and Manufacturing Journal 26 (1): 170–195.
15. Hua, G., T. C. E. Cheng, and S. Wang. 2011. "Managing Carbon Footprints in Inventory Management." International Journal of Production Economics 132 (2): 178–185.
16. Jaber, M. Y., C. H. Glock, M. A. Ahmed, and E. I. Saadany. 2013. "Supply Chain Coordination with Emissions Reduction Incentives." International Journal of Production Research 51 (1): 69–82.
17. Joglekar, P. N. 1988. "Comments on 'A Quantity Discount Pricing Model to Increase Vendor Profits'." Management Science 34 (11): 1391–1398.
18. Labatt, S., and R. White. 2007. Carbon Finance: The Financial Implications of Climate Change. Hoboken: John Wiley & Sons.
19. Mitra, S., and P. P. Datta. 2013. "Adoption of Green Supply Chain Management Practices and Their Impact on Performance: An Exploratory Study of Indian Manufacturing Firms." International Journal of Production Research 52 (7): 1–23.
20. Khan, M., M. Y. Jaber, and A.-R. Ahmad. 2014. "An Integrated Supply Chain Model with Errors in Quality Inspection and Learning in Production." Elsevier (ScienceDirect).
21. Khan, M., M. Hussain, and H. Saber. 2015. "A Vendor-Buyer Supply Chain Model with Stochastic Lead Times and Screening Errors." International Journal of Operational Research.
22. Salameh, M. K., and M. Y. Jaber. 2000. "Economic Production Quantity Model for Items with Imperfect Quality." International Journal of Production Economics 64: 59–64.
23. Swami, S., and J. Shah. 2013. "Channel Coordination in Green Supply Chain Management." Journal of the Operational Research Society 64 (3): 336–351.
24. Zhang, B., and L. Xu. 2013. "Multi-item Production Planning with Carbon Cap and Trade Mechanism." International Journal of Production Economics 144 (1): 118–127.


Journal of Mathematics & Systems Sciences (JMASS) Vol. 12, No. 1-2, 2016

ISSN : 0975-5454

INVERSION OF MATRIX BY ELEMENTARY TRANSFORMATION: REPRESENTATION OF NUMERICAL DATA BY A POLYNOMIAL CURVE Biswajit Das* and Dhritikesh Chakrabarty** *Research Scholar, Assam Down Town University, Panikhaiti, Guwahati, Assam ** Department of Statistics, Handique Girls’ College, Guwahati, Assam, India & Research Guide, Assam Down Town University, Panikhaiti, Guwahati, Assam, India. E-mail : [email protected], [email protected], [email protected]

ABSTRACT Interpolation by the approach that first represents a given set of numerical data by a polynomial and then computes the value of the dependent variable from that polynomial for any given value of the independent variable requires a method for representing numerical data on a pair of variables by a polynomial curve. Such a method has been composed here with the help of inversion of a matrix by the Gauss-Jordan method, which is based on elementary row transformation. This paper describes the method developed, with a numerical example to show its application to numerical data. Key Words: Numerical data, polynomial curve, matrix inversion, Gauss-Jordan method

1. Introduction: In the existing approach to interpolation {Hummel (1947), Erdos & Turan (1938) et al.}, where a number of interpolation formulae are available {Bathe & Wilson (1976), Jan (1930), Hummel (1947) et al.}, if one wants to interpolate the values of the dependent variable corresponding to a number of values of the independent variable by a suitable existing interpolation formula, the formula must be applied for each value separately, and thus the numerical computation of the value of the dependent variable from the given data has to be performed in each case. In order to get rid of these repeated numerical computations from the given data, one can think of an approach which consists of representing the given numerical data by a suitable polynomial and then computing the value of the dependent variable from that polynomial for any given value of the independent variable. However, a method is necessary for representing a given set of numerical data on a pair of variables by a suitable polynomial. Das & Chakrabarty (2016a, 2016b, 2016c & 2016d) derived four formulae for representing numerical data on a pair of variables by a polynomial curve, from Lagrange's Interpolation Formula, Newton's Divided Difference Interpolation Formula, Newton's Forward Interpolation Formula and Newton's Backward Interpolation Formula, respectively. In another study, Das & Chakrabarty (2016e) derived a method for representing numerical data on a pair of variables by a polynomial curve based on the inversion of a square matrix by the Cayley-Hamilton theorem on the characteristic equation of a matrix [Cayley (1858, 1889) & Hamilton (1864a, 1864b, 1862)]. In this study, another method has been composed for the same purpose.
The method has been composed here with the help of inversion of a matrix by the Gauss-Jordan method [Grcar (2011a, 2011b); Kaw & Kalu (2010)], which is based on elementary row transformation. This paper describes the method developed, with a numerical example in order to show its application to numerical data.

2. Representation of Numerical Data by a Polynomial Curve:
Let y_1, y_2, ......, y_n be the values assumed by the function y = f(x) corresponding to the values x_1, x_2, ......, x_n of the independent variable x. The problem is to interpolate the value of the function corresponding to some value of x at which the value of the function is not available. Interpolation is based on the mathematical fact that n given points can be suitably represented by a polynomial of degree n − 1. In the present case, therefore, the function y = f(x) can be suitably represented by a polynomial in x of degree n − 1. Suppose that the polynomial is

y = f(x) = a_0 + a_1 x + a_2 x² + ...... + a_{n−1} x^{n−1},  a_{n−1} ≠ 0

Since the n points lie on the polynomial curve described above, we have

y_1 = a_0 + a_1 x_1 + a_2 x_1² + ...... + a_{n−1} x_1^{n−1}
y_2 = a_0 + a_1 x_2 + a_2 x_2² + ...... + a_{n−1} x_2^{n−1}
...............................................................
y_n = a_0 + a_1 x_n + a_2 x_n² + ...... + a_{n−1} x_n^{n−1}    → (2.1)

In order to obtain the polynomial we are to solve these n equations for the n unknown coefficients (parameters) a_0, a_1, ......, a_{n−1}. The equations in (2.1) can be expressed as

AC = B    → (2.2)

where A is the n × n matrix whose ith row is (1, x_i, x_i², ......, x_i^{n−1}), C = (a_0, a_1, ......, a_{n−1})ᵀ and B = (y_1, y_2, ......, y_n)ᵀ    → (2.3)

which gives

C = A⁻¹B    → (2.4)

In order to find out C, it is required to find out A⁻¹.
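As a sketch (Python is assumed here; the paper itself gives no code), the coefficient matrix A of (2.2) can be assembled directly from the data points. The nodes 0, 1, 2, 3 used below are those of the example in Section 3:

```python
def vandermonde(xs):
    """Build the coefficient matrix A of (2.2): row i is (1, x_i, x_i^2, ..., x_i^(n-1))."""
    n = len(xs)
    return [[x ** k for k in range(n)] for x in xs]

A = vandermonde([0, 1, 2, 3])
# A == [[1, 0, 0, 0], [1, 1, 1, 1], [1, 2, 4, 8], [1, 3, 9, 27]]
```

Solving AC = B for the coefficient vector C then reduces to inverting this matrix, which is what the Gauss-Jordan method described next accomplishes.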

To find out A⁻¹ one can apply the Gauss-Jordan method, which is based on elementary row transformations (operations) of a matrix. The elementary transformations (E-operations) are the following operations on a matrix:
(i) Interchange of two rows or two columns. The interchange of the ith and jth rows is denoted by R_ij; the interchange of the ith and jth columns is denoted by C_ij.
(ii) Multiplication of (each element of) a row or column by a non-zero number k. The multiplication of the ith row by k is denoted by kR_i; the multiplication of the jth column by k is denoted by kC_j.
(iii) Addition of k times the elements of a row (or column) to the corresponding elements of another row (or column), k ≠ 0. The addition of k times the jth row to the ith row is denoted by R_i + kR_j; the addition of k times the jth column to the ith column is denoted by C_i + kC_j.

If a matrix B is obtained from a matrix A by one or more E-operations, then B is said to be equivalent to A, denoted by A ~ B. The matrix obtained from a unit matrix I by applying any one of the E-operations is called an elementary matrix.

The Gauss-Jordan method of finding A⁻¹ can be summarized as follows: the elementary row (not column) operations which reduce a square matrix A to the unit matrix also give the inverse matrix A⁻¹. Working rule: to find the inverse of A by E-row operations, write A and I side by side and perform the same operations on both. As soon as A is reduced to I, I is reduced to A⁻¹.

3. An Example of Application of the Formula:
The following table shows the data on the total population of India for the years:

Year:              1971       1981       1991       2001
Total Population:  548159652  683329097  846302688  1027015247

Taking 1971 as origin and changing the scale by 1/10, one obtains the following table for the independent variable x (representing time) and f(x) (representing the total population of India):

Year:    1971       1981       1991       2001
x_i:     0          1          2          3
f(x_i):  548159652  683329097  846302688  1027015247

Here x_0 = 0, x_1 = 1, x_2 = 2, x_3 = 3 and f(x_0) = 548159652, f(x_1) = 683329097, f(x_2) = 846302688, f(x_3) = 1027015247.
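The hand computation carried out below can be sketched in Python (an illustration, not part of the paper). Exact rational arithmetic via the standard `fractions` module keeps the entries as the same fractions that appear in the worked elimination:

```python
from fractions import Fraction

def gauss_jordan_inverse(a):
    """Invert a square matrix by elementary row operations ([A | I] -> [I | A^-1])."""
    n = len(a)
    aug = [[Fraction(v) for v in row] + [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # If the pivot is zero, interchange with a lower row (E-operation R_ij).
        if aug[col][col] == 0:
            swap = next(r for r in range(col + 1, n) if aug[r][col] != 0)
            aug[col], aug[swap] = aug[swap], aug[col]
        # Scale the pivot row so the pivot becomes 1 (E-operation kR_i).
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        # Clear the rest of the column (E-operation R_i + kR_j).
        for r in range(n):
            if r != col and aug[r][col] != 0:
                k = aug[r][col]
                aug[r] = [v - k * w for v, w in zip(aug[r], aug[col])]
    # The right half of the augmented matrix is now A^-1.
    return [row[n:] for row in aug]

A = [[1, 0, 0, 0], [1, 1, 1, 1], [1, 2, 4, 8], [1, 3, 9, 27]]
B = [548159652, 683329097, 846302688, 1027015247]
A_inv = gauss_jordan_inverse(A)
# Coefficients a0..a3 of the cubic, C = A^-1 B (exact rational arithmetic).
a = [sum(A_inv[i][j] * B[j] for j in range(4)) for i in range(4)]
# The fitted cubic reproduces the data exactly at x = 0, 1, 2, 3.
fits = [sum(a[k] * x ** k for k in range(4)) for x in range(4)]
```

With exact fractions, the fitted values equal the given populations exactly; the decimal coefficients quoted in the paper are rounded versions of these rationals.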

Let

C = (a_0, a_1, a_2, a_3)ᵀ,  B = (y_0, y_1, y_2, y_3)ᵀ,

A =
( 1  0  0  0  )
( 1  1  1  1  )
( 1  2  4  8  )
( 1  3  9  27 )

We know that AC = B ⇒ C = A⁻¹B ................... (1)

Now, A = IA. Writing A and I side by side and applying the same elementary row operations to both:

( 1  0  0  0  | 1  0  0  0 )
( 1  1  1  1  | 0  1  0  0 )
( 1  2  4  8  | 0  0  1  0 )
( 1  3  9  27 | 0  0  0  1 )

R_2 → R_2 − R_1, R_3 → R_3 − R_1, R_4 → R_4 − R_1:
( 1  0  0  0  |  1  0  0  0 )
( 0  1  1  1  | −1  1  0  0 )
( 0  2  4  8  | −1  0  1  0 )
( 0  3  9  27 | −1  0  0  1 )

R_3 → R_3 − 2R_2, R_4 → R_4 − 3R_2:
( 1  0  0  0  |  1   0  0  0 )
( 0  1  1  1  | −1   1  0  0 )
( 0  0  2  6  |  1  −2  1  0 )
( 0  0  6  24 |  2  −3  0  1 )

R_3 → (1/2)R_3, then R_4 → R_4 − 6R_3:
( 1  0  0  0 |  1    0   0    0 )
( 0  1  1  1 | −1    1   0    0 )
( 0  0  1  3 |  1/2 −1   1/2  0 )
( 0  0  0  6 | −1    3  −3    1 )

R_4 → (1/6)R_4:
( 0  0  0  1 | −1/6  1/2  −1/2  1/6 )

R_3 → R_3 − 3R_4: the third row becomes ( 0  0  1  0 | 1  −5/2  2  −1/2 )
R_2 → R_2 − R_3 − R_4: the second row becomes ( 0  1  0  0 | −11/6  3  −3/2  1/3 )

Thus A has been reduced to I, and

A⁻¹ =
(  1     0    0    0   )
( −11/6  3   −3/2  1/3 )
(  1    −5/2  2   −1/2 )
( −1/6   1/2 −1/2  1/6 )

∴ (1) ⇒ C = A⁻¹B, so that

a_0 = 548159652
a_1 = −(11/6)(548159652) + 3(683329097) − (3/2)(846302688) + (1/3)(1027015247)
    = −1004959362 + 2049987291 − 1269454032 + 342338415.66 = 117912312.66
a_2 = 548159652 − (5/2)(683329097) + 2(846302688) − (1/2)(1027015247)
    = 548159652 − 1708322742.5 + 1692605376 − 513507623.5 = 18934662
a_3 = −(1/6)(548159652) + (1/2)(683329097) − (1/2)(846302688) + (1/6)(1027015247)
    = −91359942 + 341664548.5 − 423151344 + 171169207.83 = −1677529.67

∴ a_0 = 548159652, a_1 = 117912312.66, a_2 = 18934662, a_3 = −1677529.67, and the polynomial representing the given numerical data is

y = 548159652 + 117912312.66 x + 18934662 x² − 1677529.67 x³

As a check, the polynomial equations are
y_0 = a_0 + a_1 x_0 + a_2 x_0² + a_3 x_0³
y_1 = a_0 + a_1 x_1 + a_2 x_1² + a_3 x_1³
y_2 = a_0 + a_1 x_2 + a_2 x_2² + a_3 x_2³
y_3 = a_0 + a_1 x_3 + a_2 x_3² + a_3 x_3³

Now, putting the values of a_0, a_1, a_2, a_3 in the polynomial equations, we have
y_0 = 548159652
y_1 = 548159652 + 117912312.66 + 18934662 − 1677529.67 = 685006626.66 − 1677529.67

= 683329096.99 ≈ 683329097
y_2 = 548159652 + 117912312.66 × 2 + 18934662 × 4 − 1677529.67 × 8 = 548159652 + 235824625.32 + 75738648 − 13420237.36 = 846302687.96 ≈ 846302688
y_3 = 548159652 + 117912312.66 × 3 + 18934662 × 9 − 1677529.67 × 27 = 548159652 + 353736937.98 + 170411958 − 45293301.09 = 1027015246.89 ≈ 1027015247

5. Conclusion: The method composed here can be suitably used to represent a given set of numerical data on a pair of variables by a polynomial. The degree of the polynomial is one less than the number of pairs of observations. The polynomial that represents the given set of numerical data can be used for interpolation at any position of the independent variable lying within its two extreme values. The approach of interpolation described here can also be suitably applied in inverse interpolation. It has already been mentioned that, in the case of interpolation by the existing formulae, if one wants to interpolate the values of the dependent variable corresponding to a number of values of the independent variable, a suitable existing interpolation formula must be applied for each value separately, and thus the numerical computation of the value of the dependent variable from the given data has to be performed in each case. The method developed here removes the need for these repeated numerical computations from the given data.

References:
1. Abramowitz M. (1972): "Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables", 9th printing, New York: Dover, p. 880.
2. Bathe K. J. & Wilson E. L. (1976): "Numerical Methods in Finite Element Analysis", Prentice-Hall, Englewood Cliffs, NJ.
3. Biswajit Das & Dhritikesh Chakrabarty (2016a): "Lagrange's Interpolation Formula: Representation of Numerical Data by a Polynomial Curve", International Journal of Mathematics Trends and Technology, 34, part-1 (2), 23–31.
4. Biswajit Das & Dhritikesh Chakrabarty (2016b): "Newton's Divided Difference Interpolation Formula: Representation of Numerical Data by a Polynomial Curve", International Journal of Mathematics Trends and Technology, 35, part-1 (3), 26–32.
5. Biswajit Das & Dhritikesh Chakrabarty (2016c): "Newton's Forward Interpolation Formula: Representation of Numerical Data by a Polynomial Curve", International Journal of Applied Research, 1 (2), 36–41.
6. Biswajit Das & Dhritikesh Chakrabarty (2016d): "Newton's Backward Interpolation Formula: Representation of Numerical Data by a Polynomial Curve", International Journal of Statistics and Applied Mathematics, 2 (10), 513–517.
7. Biswajit Das & Dhritikesh Chakrabarty (2016e): "Matrix Inversion: Representation of Numerical Data by a Polynomial Curve", Aryabhatta Journal of Mathematics & Informatics, 8 (2), 267–276.
8. Chutia Chandra & Gogoi Krishna (2013): "Newton's Interpolation Formulae in MS Excel Worksheet", International Journal of Innovative Research in Science, Engineering and Technology, 2 (12).
9. Chwaiger J. (1994): "On a Characterization of Polynomials by Divided Differences", Aequationes Mathematicae, 48, 317–323.
10. C. F. Gerald & P. O. Wheatley (1994): "Applied Numerical Analysis", 5th ed., Addison-Wesley, MA.
11. Corliss J. J. (1938): "Note on an Extension of Lagrange's Formula", American Mathematical Monthly, 45 (2), 106–107.
12. De Boor C. (2003): "A divided difference expansion of a divided difference", Journal of Approximation Theory, 122, 10–12.
13. Dokken T. & Lyche T. (1979): "A divided difference formula for the error in Hermite interpolation", BIT, 19, 540–541.
14. David R. Kincaid & E. Ward Cheney (1991): "Numerical Analysis", Brooks/Cole, Pacific Grove, CA.
15. Endre Süli & David Mayers (2003): "An Introduction to Numerical Analysis", Cambridge University Press, Cambridge, UK.
16. Erdos P. & Turan P. (1938): "On Interpolation II: On the Distribution of the Fundamental Points of Lagrange and Hermite Interpolation", The Annals of Mathematics, 2nd Ser., 39 (4), 703–724.
17. Echols W. H. (1893): "On Some Forms of Lagrange's Interpolation Formula", The Annals of Mathematics, 8 (1/6), 22–24.
18. Fred T. (1979): "Recurrence Relations for Computing with Modified Divided Differences", Mathematics of Computation, 33 (148), 1265–1271.
19. Floater M. (2003): "Error formulas for divided difference expansions and numerical differentiation", Journal of Approximation Theory, 122, 1–9.
20. Gertrude Blanch (1954): "On Modified Divided Differences", Mathematical Tables and Other Aids to Computation, 8 (45), 1–11.
21. Grcar Joseph F. (2011a): "How ordinary elimination became Gaussian elimination", Historia Mathematica, 38 (2), 163–218.
22. Grcar Joseph F. (2011b): "Mathematicians of Gaussian elimination", Notices of the American Mathematical Society, 58 (6), 782–792.
23. Hummel P. M. (1947): "A Note on Interpolation (in Mathematical Notes)", American Mathematical Monthly, 54 (4), 218–219.
24. Herbert E. Salzer (1962): "Multi-Point Generalization of Newton's Divided Difference Formula", Proceedings of the American Mathematical Society, 13 (2), 210–212.
25. Jordan C. (1965): "Calculus of Finite Differences", 3rd ed., Chelsea, New York.
26. Jeffreys H. & Jeffreys B. S. (1988): "Divided Differences", Methods of Mathematical Physics, 3rd ed., 260–264.
27. Jan K. Wisniewski (1930): "Note on Interpolation (in Notes)", Journal of the American Statistical Association, 25 (170), 203–205.
28. John H. Mathews & Kurtis D. Fink (2004): "Numerical Methods Using MATLAB", 4th ed., Pearson Education, USA.
29. James B. Scarborough (1996): "Numerical Mathematical Analysis", 6th ed., Johns Hopkins Press, USA.
30. Kendall E. Atkinson (1989): "An Introduction to Numerical Analysis", 2nd ed., New York.
31. Lee E. T. Y. (1989): "A Remark on Divided Differences", American Mathematical Monthly, 96 (7), 618–622.
32. Kaw Autar & Kalu Egwu (2010): "Numerical Methods with Applications" (chapter on Gaussian elimination), 1st ed.
33. Mills T. M. (1977): "An introduction to analysis by Lagrange interpolation", Austral. Math. Soc. Gaz., 4 (1), 10–18.
34. Nasrin Akter Ripa (2010): "Analysis of Newton's Forward Interpolation Formula", International Journal of Computer Science & Emerging Technologies, 1 (4).
35. Neale E. P. & Sommerville D. M. Y. (1924): "A Shortened Interpolation Formula for Certain Types of Data", Journal of the American Statistical Association, 19 (148), 515–517.
36. Quadling D. A. (1966): "Lagrange's Interpolation Formula", The Mathematical Gazette, L (374), 372–375.
37. Robert J. Schilling & Sandra L. Harris (2000): "Applied Numerical Methods for Engineers", Brooks/Cole, Pacific Grove, CA.
38. Revers M. (2000): "On Lagrange interpolation with equally spaced nodes", Bull. Austral. Math. Soc., 62 (3), 357–368.
39. S. C. Chapra & R. P. Canale (2002): "Numerical Methods for Engineers", 3rd ed., McGraw-Hill, New York.
40. S. D. Conte & Carl de Boor (1980): "Elementary Numerical Analysis", 3rd ed., McGraw-Hill, New York, USA.
41. Traub J. F. (1964): "On Lagrange-Hermite Interpolation", Journal of the Society for Industrial and Applied Mathematics, 12 (4), 886–891.
42. Vertesi P. (1990): SIAM Journal on Numerical Analysis, 27 (5), 1322–1331.
43. Whittaker E. T. & Robinson G. (1967): "The Gregory-Newton Formula of Interpolation" and "An Alternative Form of the Gregory-Newton Formula", The Calculus of Observations: A Treatise on Numerical Mathematics, 4th ed., 10–15.
44. Whittaker E. T. & Robinson G. (1967): "Divided Differences & Theorems on Divided Differences", The Calculus of Observations: A Treatise on Numerical Mathematics, 4th ed., New York, 20–24.
45. Wang X. & Yang S. (2004): "On divided differences of the remainder of polynomial interpolation", www.math.uga.edu/~mjlai/pub.html.
46. Wang X. & Wang H. (2003): "Some results on numerical divided difference formulas", www.math.uga.edu/~mjlai/pub.html.
47. Whittaker E. T. & Robinson G. (1967): "Lagrange's Formula of Interpolation", The Calculus of Observations: A Treatise on Numerical Mathematics, 4th ed., §17, New York: Dover, 28–30.

-32-

Journal of Mathematics & Systems Sciences (JMASS) Vol. 12, No. 1-2, 2016

ISSN : 0975-5454

A COMPARATIVE STUDY OF TWO STOCHASTIC MODELS DEVELOPED FOR TWO-UNITS COLD STANDBY CENTRIFUGE SYSTEM Shakeel Ahmad and Vinod Kumar Department of Mathematics, S.B.M.N. Engg. College, Asthal Bohar, Rohtak-124001(Haryana) INDIA

ABSTRACT The present paper discusses a comparative study of two stochastic models developed for a two-unit cold standby centrifuge system in which faults are characterized as major or minor, as per the data collected from Jindal Drilling and Industries Ltd. It is assumed that the system passes to a partially failed state on the occurrence of a minor fault, whereas a major fault leads to complete failure. On the occurrence of a failure, the repairman either carries out the repair of the components involved or, if he is busy, the unit waits for repair. The first model considers minor/major faults, on-line repair on the occurrence of minor faults, and complete failure of the system with auto-switching of the cold standby unit in the case of major faults. The second model additionally takes into account repair and replacement after inspection, together with the real practical possibility of failure of auto-switching of the cold standby unit on complete failure. Various measures of system effectiveness are obtained by using Markov processes and the regenerative point technique. The comparative analysis of the two models is carried out on the basis of graphical studies, and conclusions are drawn regarding the profitability of the system. Keywords: Centrifuge System, MTSF, Expected Uptime, Profit, Markov Process, Regenerative Point Technique

INTRODUCTION Filtration and purification play a very important role in modern society, pertaining both to human health and to the quality of the products people use. A large number of equipment systems are employed in industry to meet the requirements for such products. One such system is the centrifuge, used for separating two substances of different densities. Centrifuge systems are used in refineries for oil purification, in milk plants to extract fats, and in laboratories for blood fractionation, wine clarification, etc. The centrifuge works on the sedimentation principle: centripetal acceleration causes denser substances to separate out along the radial direction while lighter substances tend to move outward. The reliability and cost of a centrifuge system therefore play a very significant role wherever it is used and hence need to be analyzed. The working of a centrifuge system at Jindal Drilling and Industries Ltd., BKC Bandra East, Mumbai was observed, and real data on failures/faults, inspection, maintenance, repairs, etc. were collected. It was found that the system can suffer various minor and major faults, including motor burn-out, gear damage, bearing damage, misalignment, etc., that lead to degradation or failure of the system. Further, some major faults are repairable and others are not. Many researchers in the field of reliability modeling, including Gupta and Kumar (1983), Gopalan and Murlidhar (1991), Tuteja et al. (2001), Taneja et al. (2004), Taneja and Parashar (2007), Gupta et al. (2008) and Kumar et al. (2010), have analyzed a large number of one-unit/two-unit systems. Kumar and Bhatia (2011, 2012, 2013) discussed the behaviour of a single-unit centrifuge system considering inspection, degradation, minor/major faults, neglected faults, online/offline maintenance, repair of faults, etc. To the best of our knowledge, however, no researcher has carried out such a comparative study of a two-unit cold standby centrifuge system considering various faults. To fill this gap, the present paper gives a comparative study of two stochastic models developed for a two-unit cold standby centrifuge system in which faults are characterized as minor or major, with repair and replacement after inspection. Faults such as seal leakage, motor overheating and misalignment are considered minor faults, while faults such as motor burn-out, gear damage and bearing faults are considered major faults. It is assumed that the system passes to a partially failed state on the occurrence of a minor fault, whereas a major fault leads to complete failure. On the occurrence of a failure, the single repairman reaches the system in negligible time; he first inspects the fault to judge whether it is repairable or non-repairable and accordingly carries out repair or replacement of the components involved, the unit waiting for repair if the repairman is busy. The first model considers minor/major faults, on-line repair on the occurrence of minor faults, and complete failure of the system with auto-switching in the case of major faults. The second model additionally takes into account repair and replacement after inspection, together with the real practical possibility of failure of auto-switching of the cold standby unit on complete failure. Various measures of system effectiveness such as mean sojourn time, MTSF, expected up time, expected down time of the system and busy period of the repairman are obtained using Markov processes and

 



the regenerative point technique. The comparative analysis of the system is carried out on the basis of graphical studies, and conclusions are drawn regarding the profitability of the system.

ASSUMPTIONS OF THE MODELS
1. The system consists of two identical units.
2. Each unit of the system has three modes, i.e. operative, partially failed and failed.
3. Initially the system starts operation from state 0, in which both units are in operative mode.
4. Faults are self-announcing on occurrence in the system.
5. There is a single repairman facility with the system to repair the faults.
6. After each repair the system is as good as new.
7. The failure time distributions are exponential while other time distributions are general.
8. All the random variables are mutually independent.

NOTATIONS
λ1/λ2 : rate of occurrence of a major/minor fault
a/b : probability that a fault is repairable/non-repairable
i1(t)/I1(t) : p.d.f./c.d.f. of time to inspection of the unit at failed state
g1(t)/G1(t) : p.d.f./c.d.f. of time to repair the unit at down state
g2(t)/G2(t) : p.d.f./c.d.f. of time to repair the unit at failed state
h1(t)/H1(t) : p.d.f./c.d.f. of time to replacement of the unit at failed state
Or/Ow/Ocs : operative unit under repair/waiting/cold standby
Fi/Fr/Frp/Fw : failed unit under inspection/repair/replacement/waiting

FIRST MODEL – M1

Fig. 1 State Transition Diagram

The transition probabilities are given by

dQ01(t) = λ2 e^{-(λ1+λ2)t} dt
dQ02(t) = λ1 e^{-(λ1+λ2)t} dt
dQ10(t) = g1(t) dt
dQ20(t) = e^{-(λ1+λ2)t} g2(t) dt
dQ23(t) = λ2 e^{-(λ1+λ2)t} Ḡ2(t) dt
dQ24(t) = λ1 e^{-(λ1+λ2)t} Ḡ2(t) dt
dQ31(t) = g2(t) dt
dQ22^(4)(t) = (λ1 e^{-(λ1+λ2)t} Ḡ2(t)) © g2(t) dt

where Ḡ2(t) = 1 − G2(t) and © denotes Stieltjes convolution. Taking the L.S.T. Qij**(s) and letting pij = lim_{s→0} Qij**(s), the non-zero elements pij are obtained as under:

p01 = λ2/(λ1+λ2)    p02 = λ1/(λ1+λ2)    p10 = g1*(0)
p20 = g2*(λ1+λ2)    p23 = λ2[1 − g2*(λ1+λ2)]/(λ1+λ2)
p22^(4) = λ1[1 − g2*(λ1+λ2)]/(λ1+λ2)    p31 = g2*(0)

By these transition probabilities, it can be verified that

p01 + p02 = 1    p20 + p23 + p22^(4) = 1
p20 + p23 + p24 = 1    p10 = p31 = p42 = 1

The unconditional mean time taken by the system to transit to any regenerative state 'j', when counted from the epoch of entrance into state 'i', is mathematically stated as

mij = ∫0^∞ t dQij(t) = −qij*′(0)

Thus we have

m01 + m02 = μ0    m10 = μ1    m20 + m23 + m22^(4) = k1
m31 = μ3    m20 + m23 + m24 = μ2

where k1 = −g2*′(0).

The mean sojourn time in the regenerative state 'i' (μi) is defined as the time of stay in that state before transition to any other state. If Ti is the sojourn time in state 'i', then

μi = ∫0^∞ P(Ti > t) dt

which gives the values of μi as follows:

μ0 = 1/(λ1+λ2)    μ1 = −g1*′(0)    μ2 = [1 − g2*(λ1+λ2)]/(λ1+λ2)    μ3 = −g2*′(0)

OTHER MEASURES OF SYSTEM EFFECTIVENESS
Using the arguments of the theory of regenerative processes, various measures of system effectiveness obtained in steady state are as under:

Mean Time to System Failure (MTSF) (T1) = N/D
Expected Up-Time of the System with Full Capacity (AF0) = N1/D1
Expected Up-Time of the System with Reduced Capacity (AR0) = N2/D1
Busy Period of Repairman (Repair time only) (Br) = N3/D1

where

N = μ0 + μ1(p01 + p02 p23) + p02 μ2 + p02 p23 μ3
D = 1 − p01 p10 − p02(p20 + p23 p31 p10)
N1 = μ0(1 − p22^(4)) + p02 μ2
N2 = p01(1 − p22^(4)) μ1 + p02 p23 p31 μ1 + p02 p23 μ3
N3 = p01(1 − p22^(4)) μ1 + p02 p23 p31 μ1 + p02 μ2 + p02 p23 μ3
D1 = μ0 + p01 μ1 + p02 k1 + p02 p23(μ1 + μ3) − p22^(4)(1 + p01 μ1)
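As an illustration, the steady-state measures above can be evaluated numerically once particular distributions are chosen. The sketch below assumes, as in the exponential particular cases used later for the graphs, g1(t) = β1 e^{-β1 t} and g2(t) = β2 e^{-β2 t}; the parameter values are hypothetical and chosen only to show the computation.

```python
# Numerical sketch of the Model-M1 steady-state measures (illustrative only).
# Assumes exponential repair densities g1(t) = b1*exp(-b1*t), g2(t) = b2*exp(-b2*t),
# so that g2*(s) = b2/(s+b2), -g1*'(0) = 1/b1, -g2*'(0) = 1/b2, g1*(0) = g2*(0) = 1.

def m1_measures(lam1, lam2, b1, b2):
    lam = lam1 + lam2
    g2_star = b2 / (b2 + lam)            # g2*(lam1+lam2)
    # transition probabilities
    p01, p02 = lam2 / lam, lam1 / lam
    p20 = g2_star
    p23 = lam2 * (1 - g2_star) / lam
    p22_4 = lam1 * (1 - g2_star) / lam   # p22 via state 4
    p10 = p31 = 1.0
    # mean sojourn times
    mu0 = 1 / lam
    mu1 = 1 / b1                         # -g1*'(0)
    mu2 = (1 - g2_star) / lam
    mu3 = 1 / b2                         # -g2*'(0)
    k1 = 1 / b2                          # k1 = -g2*'(0) as given in the text
    # mean time to system failure T1 = N/D
    N = mu0 + mu1 * (p01 + p02 * p23) + p02 * mu2 + p02 * p23 * mu3
    D = 1 - p01 * p10 - p02 * (p20 + p23 * p31 * p10)
    T1 = N / D
    # steady-state up-time and busy-period fractions
    D1 = mu0 + p01 * mu1 + p02 * k1 + p02 * p23 * (mu1 + mu3) - p22_4 * (1 + p01 * mu1)
    AF0 = (mu0 * (1 - p22_4) + p02 * mu2) / D1
    AR0 = (p01 * (1 - p22_4) * mu1 + p02 * p23 * p31 * mu1 + p02 * p23 * mu3) / D1
    Br = (p01 * (1 - p22_4) * mu1 + p02 * p23 * p31 * mu1 + p02 * mu2 + p02 * p23 * mu3) / D1
    return T1, AF0, AR0, Br

T1, AF0, AR0, Br = m1_measures(lam1=0.005, lam2=0.05, b1=0.5, b2=0.2)
```

Any other choice of g1 and g2 only changes the expressions for g2*(λ1+λ2) and the means; the rest of the computation is unchanged.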

PROFIT ANALYSIS
The expected profit incurred by the system is

P1 = C0 AF0 + C1 AR0 − C2 Br − C3

where
C0 = revenue per unit up-time of the system with full capacity
C1 = revenue per unit up-time of the system with reduced capacity
C2 = cost per unit time of repair of the failed unit
C3 = cost of installation

SECOND MODEL – M2
OTHER ASSUMPTIONS OF THE MODEL
1. Switching is manual and not perfect (for the two-unit model it is observed that, after a long time has passed, the cold standby unit may not come into the operative state immediately, and the system then works as a single-unit system).

Fig. 2 State Transition Diagram

The transition probabilities are given by:

dQ01(t) = λ2 e^{-(λ1+λ2)t} dt
dQ02(t) = λ1 e^{-(λ1+λ2)t} dt
dQ10(t) = g1(t) dt
dQ23(t) = a i1(t) dt
dQ24(t) = b i1(t) dt
dQ30(t) = g2(t) dt
dQ40(t) = h1(t) dt

The non-zero elements pij = lim_{s→0} Qij**(s) are given by

p01 = λ2/(λ1+λ2)    p02 = λ1/(λ1+λ2)    p10 = g1*(0)
p23 = a i1*(0)    p24 = b i1*(0)    p30 = g2*(0)    p40 = h1*(0)

By these transition probabilities, it can be verified that

p01 + p02 = 1    p23 + p24 = 1    p10 = p30 = p40 = 1


The unconditional mean time taken by the system to transit to any regenerative state 'j', when counted from the epoch of entrance into state 'i', is mathematically stated as

mij = ∫0^∞ t dQij(t) = −qij*′(0)

Thus we have

m01 + m02 = μ0    m10 = μ1    m23 + m24 = μ2    m30 = μ3    m40 = μ4

The mean sojourn time in the regenerative state 'i' (μi) is defined as the time of stay in that state before transition to any other state. If Ti is the sojourn time in state 'i', then

μi = ∫0^∞ P(Ti > t) dt

which gives the values of μi as follows:

μ0 = 1/(λ1+λ2)    μ1 = −g1*′(0)    μ2 = −i1*′(0)    μ3 = −g2*′(0)    μ4 = −h1*′(0)

OTHER MEASURES OF SYSTEM EFFECTIVENESS
Using the arguments of the theory of regenerative processes, various measures of system effectiveness obtained in steady state are as under:

Mean Time to System Failure (MTSF) (T2) = N/D
Expected Up-Time of the System with Full Capacity (AF0) = N1/D1
Expected Up-Time of the System with Reduced Capacity (AR0) = N2/D1
Busy Period of Repairman (Inspection time only) (Bi) = N3/D1
Busy Period of Repairman (Repair time only) (Br) = N4/D1
Busy Period of Repairman (Replacement time only) (Brp) = N5/D1

where

N = μ0 + p01 μ1
D = 1 − p01 p10
N1 = μ0
N2 = p01 μ1
N3 = p02 μ2
N4 = p01 μ1 + p02 p23 μ3
N5 = p02 p24 μ4
D1 = μ0 + p01 μ1 + p02(μ2 + p23 μ3 + p24 μ4)

PROFIT ANALYSIS
The expected profit incurred by the system is given by:

P2 = C0 AF0 + C1 AR0 − C2 Bi − C3 Br − C4 Brp − C5

where
C0 = revenue per unit up-time of the system with full capacity
C1 = revenue per unit up-time of the system with reduced capacity
C2 = cost per unit time of inspection of the failed unit
C3 = cost per unit time of repair of the failed unit
C4 = cost per unit time of replacement of the failed unit
C5 = cost of installation
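The Model-M2 measures and profit can be evaluated in the same way. The sketch below assumes the exponential particular cases used later for the graphs (g1 = β1 e^{-β1 t}, g2 = β2 e^{-β2 t}, i1 = α1 e^{-α1 t}, h1 = γ1 e^{-γ1 t}); all parameter and cost values are hypothetical.

```python
# Numerical sketch of the Model-M2 measures and profit P2 (illustrative only).
# With exponential densities: mu1 = 1/b1, mu2 = 1/a1, mu3 = 1/b2, mu4 = 1/r1,
# i1*(0) = 1 so p23 = a and p24 = b, and p10 = p30 = p40 = 1.

def m2_measures(lam1, lam2, b1, b2, a1, r1, a, b):
    assert abs(a + b - 1) < 1e-12          # repairable/non-repairable probabilities
    lam = lam1 + lam2
    p01, p02 = lam2 / lam, lam1 / lam
    p23, p24 = a, b
    mu0, mu1, mu2, mu3, mu4 = 1 / lam, 1 / b1, 1 / a1, 1 / b2, 1 / r1
    T2 = (mu0 + p01 * mu1) / (1 - p01)     # N/D with p10 = 1
    D1 = mu0 + p01 * mu1 + p02 * (mu2 + p23 * mu3 + p24 * mu4)
    AF0 = mu0 / D1
    AR0 = p01 * mu1 / D1
    Bi = p02 * mu2 / D1
    Br = (p01 * mu1 + p02 * p23 * mu3) / D1
    Brp = p02 * p24 * mu4 / D1
    return T2, AF0, AR0, Bi, Br, Brp

def profit_p2(measures, C0, C1, C2, C3, C4, C5):
    _, AF0, AR0, Bi, Br, Brp = measures
    return C0 * AF0 + C1 * AR0 - C2 * Bi - C3 * Br - C4 * Brp - C5

m = m2_measures(lam1=0.005, lam2=0.05, b1=0.5, b2=0.2, a1=1.0, r1=0.1, a=0.7, b=0.3)
P2 = profit_p2(m, C0=700, C1=400, C2=50, C3=100, C4=150, C5=20)
```

Sweeping such a function over λ1, λ2 or the cost parameters reproduces the kind of cut-off analysis carried out graphically in the next section.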


GRAPHICAL INTERPRETATION AND DESCRIPTION OF THE GRAPHS
The graphical analysis of the system at different operational modes has been carried out by considering the following particular cases:

g1(t) = β1 e^{−β1 t}    g2(t) = β2 e^{−β2 t}    h1(t) = γ1 e^{−γ1 t}    i1(t) = α1 e^{−α1 t}

Various graphs are plotted for MTSF, expected up-time with full/reduced capacity and profit of the system by taking different values of the rates of occurrence of minor and major faults (λ1 and λ2), the repair rates (β1 and β2), the replacement rate (γ1) and the inspection rate (α1).
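Under these exponential particular cases, the MTSF difference T2 − T1 studied in Fig. 3 can be computed directly from the formulas of the two models. The sketch below does so for a few values of λ2; the parameter values (λ1 = 0.01, β1 = 0.5, β2 = 0.2) are hypothetical and serve only to illustrate the computation.

```python
# Sketch of the comparison behind Fig. 3: the MTSF difference T2 - T1 versus the
# minor-fault rate lam2, under the exponential particular cases above.

def mtsf_m1(lam1, lam2, b1, b2):
    # Model-M1 MTSF, T1 = N/D, with p10 = p31 = 1 for exponential repair
    lam = lam1 + lam2
    g2s = b2 / (b2 + lam)                    # g2*(lam1+lam2)
    p01, p02 = lam2 / lam, lam1 / lam
    p20 = g2s
    p23 = lam2 * (1 - g2s) / lam
    mu0, mu1, mu3 = 1 / lam, 1 / b1, 1 / b2
    mu2 = (1 - g2s) / lam
    N = mu0 + mu1 * (p01 + p02 * p23) + p02 * mu2 + p02 * p23 * mu3
    D = 1 - p01 - p02 * (p20 + p23)
    return N / D

def mtsf_m2(lam1, lam2, b1):
    # Model-M2 MTSF, T2 = (mu0 + p01*mu1) / (1 - p01*p10), with p10 = 1
    lam = lam1 + lam2
    p01 = lam2 / lam
    return (1 / lam + p01 / b1) / (1 - p01)

diffs = []
for lam2 in (0.05, 0.10, 0.15, 0.20):
    d = mtsf_m2(0.01, lam2, b1=0.5) - mtsf_m1(0.01, lam2, b1=0.5, b2=0.2)
    diffs.append(d)
    print(f"lam2={lam2:.2f}  T2-T1={d:.1f}")
```

For these parameter values the difference is negative throughout, consistent with the observation in Fig. 3 that the MTSF of Model-M2 is lower than that of Model-M1.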

Fig. 3

Fig. 3 shows the graph of the difference between the mean times to system failure of Model-M1 and Model-M2, i.e. T2 − T1, with respect to the rate of occurrence of minor faults (λ2) for different values of the rate of occurrence of major faults (λ1). It is concluded from the graph that the MTSF of Model-M2 is lower than that of Model-M1 for fixed values of the rate of occurrence of minor faults (λ2). It is also concluded that the difference T2 − T1 decreases with an increase in the rate of occurrence of minor faults and takes higher values for higher values of the rate of occurrence of major faults.

The curves in Fig. 4 reveal the behaviour of the difference between the mean times to system failure of Model-M2 and Model-M1, i.e. T2 − T1, with respect to the repair rate of the system (β1) for different values of the rate of major faults (λ1). It is evident from the graph that the difference decreases with an increase in the repair rate and takes lower values for higher values of the rate of major faults. It is also concluded that Model-M2 is better than Model-M1 in terms of mean time to system failure for the given fixed values of the other parameters.

Fig. 4


Fig. 5

Fig. 5 shows the behaviour of the difference of the profits of Model-M2 and Model-M1, i.e. P2 − P1, with respect to the rate of occurrence of minor faults (λ2) for different values of the rate of occurrence of major faults (λ1). It is evident from the graph that the difference P2 − P1 increases with an increase in the rate of occurrence of minor faults and takes higher values for higher values of the rate of occurrence of major faults when the other parameters remain fixed. It may also be observed that for λ1 = 0.01 the difference P2 − P1 is negative, zero or positive according as λ2 is <, = or > 0.1824; hence Model-M1 is better than, equally good as, or worse than Model-M2 according as λ2 is <, = or > 0.1824. Similarly, for λ1 = 0.015 and λ1 = 0.02 the difference P2 − P1 is negative, zero or positive according as λ2 is <, = or > 0.1883 and 0.1997 respectively, with the corresponding conclusions about the two models.

Fig. 6

The graph in Fig. 6 shows the pattern of the difference of the profits of Model-M2 and Model-M1, i.e. P2 − P1, with respect to the rate of occurrence of major faults (λ1) for different values of the repair rate (β1). It is evident from the graph that the difference P2 − P1 decreases with an increase in the rate of occurrence of major faults and takes lower values for higher values of the repair rate when the other parameters remain fixed. It may also be observed that for β1 = 0.0001 the difference P2 − P1 is positive, zero or negative according as λ1 is <, = or > 0.001895; thus Model-M2 is better than, equally good as, or worse than Model-M1 according as λ1 is <, = or > 0.001895. Similarly, for β1 = 0.0003 and β1 = 0.0005 the difference P2 − P1 is positive, zero or negative according as λ1 is <, = or > 0.001695 and 0.001495 respectively, with the corresponding conclusions about the two models.


The graph in Fig. 7 shows the pattern of the difference between the profits of Model-M2 and Model-M1, i.e. P2 − P1, with respect to the revenue per unit up-time of the system with full capacity (C0) for different values of the rate of occurrence of major faults (λ1). It is evident from the graph that the difference P2 − P1 decreases with an increase in the revenue per unit up-time with full capacity and takes higher values for higher values of the rate of occurrence of major faults when the other parameters remain fixed.

Fig. 7

It may also be observed from Fig. 7 that for λ1 = 0.0001 the difference P2 − P1 is positive, zero or negative according as C0 is <, = or > Rs. 708.25; hence Model-M2 is better than, equally good as, or worse than Model-M1 according as C0 is <, = or > Rs. 708.25. Similarly, for λ1 = 0.0051 and λ1 = 0.0101 the difference P2 − P1 is positive, zero or negative according as C0 is <, = or > Rs. 739.35 and Rs. 770.45 respectively, with the corresponding conclusions about the two models.

CONCLUSION
The comparative analysis of the two models under different conditions/situations of the system shows which model is better than the other. The comparison between Model-M1 and Model-M2 shows that each model can outperform the other in terms of mean time to system failure and profit, depending on the values of the different parameters considered in different situations. The cut-off points help to decide in which situation which model is the better one. Further, the above study also helps in deciding the values of the different parameters of the models that are responsible for making the system profitable to the plant.

REFERENCES
1. R.C. Garg and A. Kumar (1977): A complex system with two types of failure and repair, IEEE Trans. Reliability, 26, 299-300.
2. L.R. Goel, G.C. Sharma and R. Gupta (1986): Reliability analysis of a system with preventive maintenance and two types of repair, Microelectronics Reliability, 26, 429-433.
3. M.N. Gopalan and N.N. Murlidhar (1991): Cost analysis of a one unit repairable system subject to on-line preventive maintenance and/or repair, Microelectronics Reliability, 31(2/3), 233-228.
4. A. Goyel, G. Taneja and D.V. Singh (2010): Reliability modeling and analysis of a sulphated juice pump system comprising three identical units with two types of working capacity and rest period, Pure and Applied Mathematical Sciences, 71(1-2), 133-143.
5. M.L. Gupta and A. Kumar (1983): On profit consideration of a maintenance system with minor repair, Microelectronics Reliability, 23, 437-439.
6. K. Murari and V. Goyal (1983): Reliability system with two types of repair facilities, Microelectronics Reliability, 23(6), 1015-1025.
7. G. Taneja, V.K. Tyagi and P. Bhardwaj (2004): Programmable logic controller, Pure and Applied Mathematical Sciences, LX(1-2), 55-71.
8. R. Kumar and P. Bhatia (2011): Impact of ignored faults on reliability and availability of a centrifuge system that undergoes periodic rest, Journal of International Academy of Physical Sciences, 15, 47-57.
9. R. Kumar and P. Bhatia (2011): Reliability and cost analysis of a one unit centrifuge system with single repairman and inspection, Pure and Applied Mathematika Sciences, 74(1-2), 113-121.
10. V. Kumar, P. Bhatia and S. Ahmad (2014): Profit analysis of a two-unit cold standby centrifuge system with single repairman, International Journal of Scientific & Engineering Research, 5(6), 509-513.


  Journal of Mathematics & Systems Sciences (JMASS) Vol. 12, No. 1-2, 2016

ISSN : 0975-5454

PERFORMANCE AND EVALUATION OF ROUTING PROTOCOL IN MANET BY VARYING QUEUE SIZE Gautam Gupta* & Komal Badhran** *Dept. of Electronic Communication Engg., JMIETI, Radaur (Yamuna Nagar) **Dept. of Electronic Communication Engg., JMIT ,Radaur(Yamuna Nagar) E-mail : [email protected], [email protected]

ABSTRACT In a MANET, nodes are mobile and communicate with other nodes over wireless connections. The nature of a MANET is dynamic, so route discovery is quite a challenging task. Congestion occurs at nodes in a MANET, and it is not easy to control. In this paper we provide an analysis and a technique for selecting the routing protocol most suitable for cooperating with congestion control. We also evaluate the chosen routing protocols, AODV, DSR and AOMDV, to verify their cooperation with congestion control. The experimental results show that the routing protocols most suitable for congestion control, and thereby able to improve network performance, are DSR and AODV. Cooperation between congestion control and AODV performs better than with DSR, because AODV can detect network conditions and handle congestion control more accurately. Cooperation between routing protocols and congestion control still faces uncertainty in ensuring the quality of a connection, as mobility causes frequent path breaks. To deal with dynamic changes in network conditions, the development of protocols requires additional, more precise and accurate parameters such as efficiency, throughput and queue length. Varying the queue length is a promising method for improving the cooperation between routing protocols and congestion control mechanisms. Keywords: MANET, quality of service, throughput, delivery ratio, AODV, DSR, AOMDV

INTRODUCTION
Mobile Ad-Hoc Network: A MANET is composed of mobile nodes without any fixed infrastructure. The goal of a MANET is to extend mobility into the realm of autonomous, mobile, wireless domains, where a set of nodes forms the routing backbone of an ad-hoc network. MANETs are best suited where rapid deployment and dynamic reconfiguration are important and a wired network is not available. Applications of MANETs include military and army operations, emergency search, relief-delivery sites, classrooms and conventions, where participants share information on the move using their mobile devices. These applications also rely on various multicast operations. Due to the mobility of nodes in a MANET, it is not always possible to establish stable paths for delivering messages and packets through the network; hence congestion takes place, and it is a key problem for MANETs. The routing protocols are tested first, because the routing algorithm is critical to establishing the data transmission connection before the congestion control mechanism comes into play. Furthermore, the most suitable routing protocol is selected and combined with modified congestion control mechanisms to be tested in various scenarios. The paper is organized as follows. Section II provides an overview of congestion control mechanisms in MANETs, and Section III covers routing protocols in MANETs. Section IV describes techniques for selecting routing protocols and analyzing their performance under a network mobility scenario. Section V presents the proposed work and the implementation of the routing protocol. Finally, Section VI gives the conclusion and future research opportunities.

Figure 1: The General Representation of a Mobile Ad-Hoc Network

 


II. CONGESTION CONTROL IN MANET
Congestion occurs in a communication network when there is too much traffic. In a MANET, congestion leads to delay, packet loss, bandwidth degradation, wasted time and high overhead, and many routing protocols have been proposed to overcome it. Packet loss in a MANET is basically caused by congestion. Congestion control combined with a routing protocol that adapts to mobility and failures at the network layer can reduce packet loss. With non-adaptive routing protocols, congestion leads to the following difficulties:
(a) Long delay: Most congestion control mechanisms take too much time to identify congestion, and sometimes discovering new routes is needed in critical situations. The main delay arises in route discovery for on-demand routing protocols.
(b) High overhead: A congestion control mechanism requires extra processing and communication effort to find new routes, and still more effort to maintain a multipath routing protocol.
(c) Heavy packet loss: Packets may already be lost by the time congestion is detected. Congestion control then reacts either by decreasing the sending rate at the sender or by dropping packets at intermediate nodes, both of which reduce the traffic load; high packet loss under congestion results in low throughput.

III. ROUTING PROTOCOLS IN MANET
One popular routing protocol is AODV, which is used to dispatch messages and packets over a MANET and also helps to reduce the problem of congestion, although it depends on the individual receivers to detect congestion and control their receiving rates. In this paper we investigate the performance of the routing protocols AODV, AOMDV and DSR, which are frequently used in mobile ad-hoc networks. Routing is a function of the network layer that determines the route from the source to the destination node. In wired networks route failures are infrequent, whereas in mobile ad-hoc networks they occur often. When a newly assembled route is longer or shorter than the old one, the congestion control mechanism sees the resulting fluctuations in the round-trip time. The routing protocols frequently used in ad-hoc networks are AODV (Ad hoc On-Demand Distance Vector), DSR (Dynamic Source Routing) and AOMDV (Ad hoc On-demand Multipath Distance Vector). All three are reactive (on-demand) protocols; AOMDV is a multipath extension of AODV.

IV. SELECTING TECHNIQUES AND ANALYZING THE PERFORMANCE OF ROUTING PROTOCOLS
In this section we perform experiments and testing scenarios using three parameters as the selection technique, applied to the three routing protocols. In mobile ad-hoc networks, increasing the speed of the nodes causes temporary disconnections and frequent topology changes, while AODV and DSR, being reactive routing protocols, perform a new route search when a node needs to communicate. We further investigated AODV and DSR to choose the better performer; judged by the level of performance and efficiency in terms of end-to-end delay, AODV is better than DSR.

PROPOSED WORK AND IMPLEMENTATION OF ROUTING PROTOCOL

If the number of flows a router has to handle were fixed in advance, routers could be shipped with parameters set to achieve a desired trade-off between loss rate and queuing delay. An ideal router would instead automatically adapt its queuing configuration to the load. This problem divides into three parts: first, a mechanism to count active flows; second, a choice of target queue length and drop rate based on the flow count; third, a mechanism to enforce these targets on a FIFO queue. The evaluation will be done through simulation over various network parameters such as changing topology, increased usage, speed variation and an increased number of senders.
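The three-part mechanism above can be sketched in a few lines. The following is an illustrative sketch, not the authors' implementation: the class name, the per-flow budget of 70 packets and the 280-packet cap (taken from the buffering figures discussed later), and the linear sizing rule are all assumptions made for the example.

```python
# Illustrative sketch of the three-part mechanism: (1) count active flows,
# (2) derive a target queue length from the flow count, (3) enforce the
# target on a FIFO queue by tail-dropping arrivals beyond it.

from collections import deque

class AdaptiveFifo:
    def __init__(self, per_flow_budget=70, max_len=280):
        self.queue = deque()
        self.flows = set()                  # (1) active-flow tracking
        self.per_flow_budget = per_flow_budget
        self.max_len = max_len

    def target_len(self):
        # (2) target queue length grows with the flow count, capped by the buffer
        return min(self.per_flow_budget * max(len(self.flows), 1), self.max_len)

    def enqueue(self, flow_id, packet):
        self.flows.add(flow_id)
        if len(self.queue) >= self.target_len():   # (3) enforce target: tail drop
            return False                           # packet dropped
        self.queue.append((flow_id, packet))
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

# 4 senders offering 400 packets with no draining: the queue fills to its
# 280-packet target and the remaining arrivals are dropped.
q = AdaptiveFifo()
dropped = sum(not q.enqueue(flow_id=i % 4, packet=i) for i in range(400))
```

A real implementation would also age flows out of the active set and couple the drop decision to a feedback signal toward the senders, which this sketch omits.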


Figure .2 AODV routing protocol mechanism AODV does not require any nurture of nodes from source to destination. This minimize the number of active routers that must be maintained. VI RESULTS & DISCUSSION The objective of our discourse is to design a set of congestion control mechanisms in wireless network. This appraisal will be done through mirroring on various network parameters such as alternating queue length and number of sender increased. Check the achievement of congestion control mechanisms and how mechanism operate when we increase number of sender and usages. Performance Metrics In appraise a MANET routing protocol different census or performance metrics are used. In this chapter we discuss the essential metrics required to appraise performance. a) Packet send b) Packet drop c) Packet request Packet send Packet Send is an open source expediency to allow sending and receiving TCP and UDP . It is accessible for Windows, Mac, and Linux. It is licensed General Public License versus and is unpaid software Packet. Ideal applications of Packet Sender include:



Trouble blasting network devices that use network servers such as it send a packet and then analyze the respond.



Trouble blasting network devices that use network applicant devices that "phone home" via UDP or TCP—Packet Sender can abduction these requests.



Verification and development of new network protocols such as it send a packet, see if device behaves appropriate.

Packet Sender comes with a built-in TCP and UDP server on a port number the user specifies. This remains running, listening for packets, while sending other packets.

Packet Drop
Packet drop here refers to the time lag it takes a network source to hand over a packet to its destination. Thus the end-to-end delay of packets is the total amount of delay encountered in the whole network at every hop on the way to the destination. In a MANET this kind of delay is regularly caused by a connection failure, by the signal strength among nodes becoming low, or by congestion. The accuracy of a routing protocol can be judged by its end-to-end delay on a network.

Packet request
This refers to the ratio of the total number of data packets that reach the receiver at the destination node. Here the number of packets per sender is 70 and the maximum number of allowed senders is 4, so the maximum number of packets that can be buffered in the queue is 280. The simulation results show this. The red line shows the buffering request from 50 senders approaching a maximum size of approximately 420 packets; this is because no congestion control mechanism is applied. The green line shows the buffering request from 4 senders approaching a maximum size of approximately 220 packets, which is less compared to the buffering capacity


JMASS 12 (1-2), 2016

Gautam Gupta & Komal Badhran 

of the queue, 280 packets in our case. The result demonstrates a mechanism to count active flows, derive a target queue length and drop rate, and enforce these targets on a FIFO queue.
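The delivery-ratio style metric described above reduces to a simple computation. The numbers here are illustrative, taken from the scenario in the text (4 senders, 70 packets each); the received count is an assumed value, not a measured result:

```python
senders = 4
packets_per_sender = 70
sent = senders * packets_per_sender   # 280 packets offered in total

received = 220                        # assumed packets reaching the receiver

pdr = received / sent                 # packet delivery ratio
print(f"{pdr:.2%}")                   # prints "78.57%"
```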


VII CONCLUSION
Congestion loss in best-effort networks depends on the number of active flows and the total storage in the network. Total storage includes both router buffer memory and packets in transit on long links. In this thesis a simple flow counting principle is presented. The method takes a few instructions per packet and uses one bit of state per flow. The algorithm provides congestion feedback by adjusting the number of packets per sender in proportion to the queue length. This approach has the desirable effect of reducing queuing delay; however, it yields a high loss rate as the number of flows increases, causing long and unfair timeout delays.

VIII FUTURE SCOPE
The proposed joint optimization model will be simulated and implemented for various TCP variants and analyzed in terms of performance criteria. Coordination of the two protocols should be improved in the future. The cross-layer method is one of the suitable coordination models to solve these problems. For better accuracy, this method should be able to read the conditions of the network from the physical layer.

REFERENCES
1. Xin Yu, "Improving TCP Performance over Mobile Ad Hoc Networks by Exploiting Cross-Layer Information Awareness", Sept. 26 - Oct. 1, 2004, Philadelphia, Pennsylvania, USA.

2. Copyright 2004.
3. Ye Tian, "TCP in Wireless Environments: Problems and Solutions", IEEE Radio Communications, March 2005.
4. John Papandriopoulos, "Optimal and Distributed Protocols for Cross-Layer Design of Physical and Transport Layers in MANET", October 25, 2006; revised June 12, 2007; first published February 25, 2008; current version published December 17, 2008.
5. Chai Keong Toh, "Load Balanced Routing Protocols for Adhoc Mobile Wireless Networks", IEEE Communications Magazine, August 2009.
6. S. R. Biradar, "Analysis QoS Parameters for MANETs Routing Protocols", International Journal on Computer Science and Engineering, Vol. 02, No. 03, 2010, 593-599.
7. Manish Bhardwaja, "Problem Analysis of Routing Protocols in MANET in Constrained Situation", Vol. 3, No. 7, July 2011.
8. S. Subburam, "Predictive Congestion Control Mechanism for MANET", Vol. 3, No. 5, Oct-Nov 2012.
9. Soundararajan, S., "Multipath Rate Based Congestion Control for Mobile Ad Hoc Networks", International Journal of Computer Applications (0975-8887), Volume 55, No. 1, October 2012.
10. Aparna Shrivastva, "Enhanced Recovery Scheme for TCP New Reno in MANET", International Journal of Engineering Research and Development, e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com, Volume 3, Issue 8, September 2012.
11. S. Rajeswari, "Congestion Control and QoS Improvement for AEERG Protocol in MANET", International Journal on Ad Hoc Networking Systems (IJANS), Vol. 2, No. 1, January 2012.
12. Prof. S. A. Jain, "An Improvement in Congestion Control Using Multipath Routing in MANET", International Journal of Engineering Research and Applications, Vol. 2, Issue 3, May-Jun 2012.
13. A. A. Chari, "ECDC: Energy Efficient Cross Layered Congestion Detection and Control Routing Protocol", International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, Volume-2, Issue-2, May 2012.
14. Michael Menth, "Performance of PCN-Based Admission Control Under Challenging Conditions", IEEE/ACM Transactions on Networking, Vol. 20, No. 2, April 2012.
15. K. Srinivasa Rao, "Development of Energy Efficient and Reliable Congestion Control Protocol for Multicasting in Mobile Adhoc Networks Compare with AODV Based on Receivers", International Journal of Engineering Research and Applications, Vol. 2, Issue 2, Mar-Apr 2012.
16. Lin-huang Chang, "QoS-aware Path Switching for VoIP Traffic Using SCTP", Department of Computer and Information Science, National Taichung University, 28 June 2012.
17. Senthil Kumaran, T., "Congestion Free Routing in Adhoc Networks", Journal of Computer Science 8 (6): 971-977, 2012.
18. Khuzairi Mohd Zaini, "An Interaction between Congestion-Control Based Transport Protocols and MANET Routing Protocols", Journal of Computer Science 8 (4): 468-473, 2012.
19. Kaushik R. Chowdhury, "TCP CRAHN: A Transport Control Protocol for Cognitive Radio Ad Hoc Networks", Department of Electrical and Computer Engineering, Northeastern University, IEEE, 2012.
20. Md. Imran Chowdhury, "An Energy Efficient and Cooperative Congestion Control Protocol in MANET", International Journal of Computer Applications (0975-8887), Volume 58, No. 17, November 2012.
21. S. Sheeja, "Effective Congestion Avoidance Scheme for Mobile Ad Hoc Networks", I. J. Computer Network and Information Security, 2013, 1, 33-40, published online January 2013.


  Journal of Mathematics & Systems Sciences (JMASS) Vol. 12, No. 1-2, 2016

ISSN : 0975-5454

DEMONETISATION: NORMS, INITIATIVE AND IMPACTS
Parul Rani
Assistant Professor, Department of Commerce, Guru Brahmanand Kanya Mahavidyalaya, Anjanthali, District Karnal, Haryana
e-mail: [email protected]

ABSTRACT
Demonetisation is a bold move taken by the Union Govt to tackle the menace of the growing amount of black money, corruption and fake currency, which are obstacles in the way of the growth of a country's economy. Various responses to this move have been given by rating agencies, political parties, the stock market, etc., which show that this step impacts different sectors differently. This paper studies demonetisation, its background, the exchange facility for residents as well as NRIs, the task force set up by the RBI, the norms for disclosure of money, tax provisions, incentives by the Govt, and its impact on various sectors, so as to present an overview of the situation.
Keywords: Demonetisation, Cashless economy, Currency, Digital means, Norms, Provisions.

INTRODUCTION
The demonetisation of ₹500 and ₹1,000 banknotes was a policy enacted by the Govt. of India on 8th November 2016. All ₹500 and ₹1,000 currency banknotes of the Mahatma Gandhi Series ceased to be legal tender in India from 9th November 2016. This overnight declaration was an astonishing one in Indian history. The announcement was made by the Prime Minister of India, Mr. Narendra Modi, in a live television address at 20:15 Indian Standard Time (IST) on 8th November 2016. In the announcement Modi declared that all ₹500 and ₹1,000 currency banknotes of the Mahatma Gandhi Series would be invalid from the midnight of the same day, i.e. 8th November 2016; these banknotes could no longer be used for any trade purpose or as a store of value for future usage. Along with that, the issuance of new ₹500 and ₹2,000 currency banknotes of the Mahatma Gandhi New Series in exchange for the old currency banknotes was announced. However, banknotes of the denominations ₹100, ₹50, ₹20, ₹10 and ₹5 remained legal tender, unaffected by this policy, and so continued to be accepted in circulation.

LITERATURE REVIEW
The topic of demonetisation is getting a lot of attention due to its direct relation with the life of the common man. Demonetisation is not entirely a new topic, since it has already taken place before, in the years 1946 and 1978, and studies have been made on it from time to time. Demonetisation will bring the Indian economy to a new equilibrium with a low tax regime and lower interest rates; bank deposits will go up, it would be the end of a parallel economy, and it will take India towards a cashless economy (NITI Ayog CEO, 2016).

OBJECTIVES OF THE STUDY
The main objectives of this study are as under:
1. To understand the norms and policies framed by the Govt for demonetisation.
2. To have a look at the initiatives taken by the Govt to pave a path for the cashless economy through digital means.
3. To evaluate the impact of demonetisation on different sectors.

RESEARCH METHODOLOGY
The study is primarily based upon secondary data, which has been collected from various sources such as the website of the RBI, newspapers, and some other websites, so as to present a brief and concrete picture. The study was basically undertaken between 8th and 30th November 2016.


HISTORICAL BACKGROUND OF INDIAN CURRENCY
Years back, people used to exchange goods for acquiring the goods they needed. This system is called the "Barter System". But the problem that arose was the coincidence of wants, i.e., whether what one had was really needed by the person with whom one wanted to exchange goods. To remove this biggest problem of the Barter System, money was introduced. In the 16th century, the Rupee was first issued by Sultan Sher Shah Suri and later on by the Mughals. The Indian Rupee, abbreviated as INR, is the official currency of India, accepted as a medium of exchange to eliminate the drawbacks of the Barter System. This currency is issued and managed by the apex bank of India, i.e. the Reserve Bank of India, which is governed by the Reserve Bank of India Act, 1934. Initially the symbol adopted to denote the currency was "Rs.", but this was replaced by a new one, "₹", in the year 2010. It is a combination of the Devanagari consonant "र" and the Latin capital letter "R" without its vertical line. Currency inscribed with this new symbol was first introduced into circulation on 8th July 2010.
The move of the P.M. Modi-led BJP Govt to demonetise ₹500 and ₹1,000 notes is not a thoroughly new one; demonetisation was also carried out in earlier years by previous governments, in 1946 and 1978.
• The highest denomination note ever printed by the Reserve Bank of India was the ₹10,000 currency banknote, in 1938. The currency banknotes of ₹1,000 and ₹10,000 were demonetised in January 1946, according to RBI data.
• The highest denomination banknotes of ₹1,000, ₹5,000 and ₹10,000 were reintroduced in 1954. But the then ruling Janata Party coalition Govt., in order to curb counterfeit money and black money, again demonetised banknotes of ₹1,000, ₹5,000 and ₹10,000 on 16th January 1978, under the "High Denomination Banknote (Demonetisation) Act, 1978".
• After that, the ₹1,000 currency banknote made a comeback in November 2000.
• ₹500 currency banknotes were introduced in India in 1987 for the very first time, so as to cope with the situation of high inflation.
• But this is the first time that new currency banknotes of ₹2,000 are being introduced in India, from 11th November 2016.
• A thing which cannot be forgotten is the symbolism of Indian currency. Indian currency banknotes contain symbols representing science and technology, progress, and an orientation to Indian art and culture.
• In the year 1980, the legend "Satyameva Jayate" was incorporated under the national emblem for the first time.
• In October 1987, the banknote of ₹500 was introduced with the portrait of Mahatma Gandhi, which was later on used for the other denominations also; these are called the MG Series banknotes.

EXCHANGE FACILITY
• Deposit all old currency banknotes of ₹500 and ₹1,000 denomination in a bank and get new currency banknotes in exchange.
• A maximum of ₹4,000 can be exchanged per day, so special bank teams were organised for the overtime work.
• An individual can withdraw a total of ₹10,000 daily and ₹20,000 weekly, either from banks or ATMs or both in totality.
• Exchange all your old ₹500 and ₹1,000 currency banknotes before 30th December 2016.
• ATM operations were closed on 9th and 10th November, and the same applied to all the banks in India.

Table 1: Policy for Exchanging the Currency
Action                        Maximum limit      Duration
Old Notes Exchange            ₹4,000             10th Nov - 24th Nov
Bank Note Withdrawal          ₹10,000            10th Nov onwards
Bank Note Withdrawal          ₹20,000 weekly     10th Nov onwards
ATM Notes Withdrawal          ₹2,000 daily       10th Nov - 18th Nov
ATM Notes Withdrawal          ₹4,000 daily       19th Nov onwards
Old Notes Deposit             No Limit           10th Nov - 30th Dec
Cashless Electronic Shopping  No Limit           8th Nov onwards

To exchange the old banknotes, one has to follow these steps:
• First of all, show a valid ID proof such as an Aadhaar card, Voter card, Driving licence or PAN card.
• Then visit your bank or post office.
• You can avail the exchange facility up to ₹4,000 from the financial institution on a daily basis.
• The bank or post office executives will validate your identity and give you cash in exchange.
• You can only receive banknotes of ₹2,000, ₹500, ₹100, ₹10 and ₹5 denominations for your request.
• This whole exchange facility will be valid till 30th December 2016.

WAIVER OF ATM CHARGES
The Reserve Bank of India issued instructions to all banks to waive ATM charges, irrespective of the number of transactions, from 10th November to 30th December 2016.

APPS LAUNCHED TO LOCATE CASH IN ATMS AND BANKS
Due to the shortage of cash in banks as well as ATMs, the common man had to face great difficulty. So a handful of new apps and services popped up; these include:
1. Walnut: Walnut is a finance management company which allows its users to find an ATM with cash. The company tracks the usage of over 1.8 million users, prompts its users on every ATM visit for the status of the queue, and shares the information on social networks such as WhatsApp. It provides the information in the form of:
   1. Green pin - for a short queue
   2. Orange pin - for a long queue
   3. Grey pin - for no cash or an unknown ATM.
2. CashnoCash: This website is not limited to ATMs but also provides information about banks and post offices across the country. It is run by Quikr and Nasscom. An alert about cash points in your vicinity can also be sent if you give your email.
3. ATM Search: This website has been built by @Wocharlog to assemble information. By entering a location, one can search for functional ATMs in that location, with the approximate number of people in the queue.
4. CMS ATM Finder: It helps people know whether an ATM is out of cash or working. CMS also leads in driving the cash cycle in the country, from banks' currency chests to ATMs, to retail stores, to wallets.

PAYMENT ACCEPTED IN OLD ₹500 AND ₹1,000 BANKNOTES

• Government hospitals - to clear dues or pay bills.
• Pharmacies - on producing a doctor's prescription with valid ID proof.
• Toll plazas; petrol, diesel and gas stations of registered public sector oil companies.
• Railways - for booking tickets and on-board catering services.
• Utility bills - such as electricity, water and telephone bills.
• Crematoriums and burial grounds.
• Consumer authorities - such as canteens.
• Monument tickets - under the Archaeological Survey.
• Fees, taxes and penalties - to the Central or State Govt., including local bodies and municipalities.
• Air tickets at all airports.

TASK FORCE: RECALIBRATION AND REACTIVATION OF ATMS
1. It has become necessary to re-calibrate all ATMs/cash handling machines to dispense the new design notes, following the introduction of the Mahatma Gandhi (New) Series banknotes, including a new high denomination (₹2,000) in new designs.


2. ATMs play a vital role in meeting the currency requirements of the public and have become a major channel for disbursement of cash. Re-activation of ATMs extends the availability and disbursal of notes to the customers of banks at convenient times and locations, in a judicious mix of higher and lower denominations.
3. Re-calibration of ATMs involves multiple agencies (banks, ATM manufacturers, the National Payments Corporation of India (NPCI), switch operators, etc.) and multiple activities, making it a complex operation requiring immense coordination among these agencies.
4. With a view to providing direction and guidance in this regard, it has been decided to set up a Task Force under the chairmanship of Shri S. S. Mundra, Deputy Governor, Reserve Bank of India. The Task Force will comprise:
   I. Representatives from the Government of India, Ministry of Finance, Department of Economic Affairs, Member.
   II. Representatives from the Government of India, Ministry of Finance, Department of Financial Services, Member.
   III. Representatives from the Government of India, Ministry of Home Affairs, Member.
   IV. Representatives from the four banks with the largest ATM networks, viz., State Bank of India, Axis Bank, ICICI Bank and HDFC Bank, Member/s.
   V. Representative from NPCI, Member.
   VI. Chief General Manager, Department of Currency Management, Member.
   VII. Chief General Manager, Department of Payment and Settlement Systems, as Member Secretary.
5. A representative each of ATM original equipment manufacturers (OEMs), managed service providers, cash-in-transit (CIT) companies and white label ATM (WLA) operators will be invited to the Task Force's deliberations. The Task Force may also invite others as may be needed.
6. The Terms of Reference of the Task Force would be as under:
   I. Expeditious reactivation of all ATMs in a planned manner.
   II. Any other matter germane to the above.
7. DPSS, CO will provide the secretarial support.

NORMS TO DISCLOSE MONEY DURING DEMONETISATION
1. An individual holding an amount up to ₹250,000 (i.e. the exemption limit) in old banknotes of ₹500 and ₹1,000 need not panic; he/she can deposit it in a savings bank account without any scrutiny.
2. But holding old banknotes of ₹500 and ₹1,000 above this exemption limit may attract taxation, interest as well as penalty.
3. A new scheme, the "Pradhan Mantri Garib Kalyan Yojana", 2016, was proposed in the Lower House of Parliament (Lok Sabha) by the Finance Minister Arun Jaitley with the introduction of the Taxation Laws (2nd Amendment) Bill, 2016, to declare undisclosed income in the form of cash or deposits in the account of a person with a specified identity.
4. Under this scheme a person will have to pay a sum totalling 49.9% of the income disclosed, made up as:
   i. 30% in the form of tax,
   ii. 33.33% of the tax as Pradhan Mantri Garib Kalyan Cess,
   iii. and 10% of the income disclosed as penalty.
Further, half of the remaining deposit, i.e. 25% of the undisclosed income, is not allowed to be withdrawn. This amount has to be parked in the specified Pradhan Mantri Garib Kalyan Deposit Scheme (bonds) having a lock-in period of 4 years and bearing no interest at all.
5. The person declaring income will not be subject to reopening of assessment or reassessment under the Income Tax Act, 1961 or the Wealth Tax Act, 1957, in respect of the declaration made, nor entitled to claims for any set-off, rebate or relief in any appeal or any other proceedings in relation to any such assessment or reassessment.


6. The declaration shall be treated as void if it is made by misrepresentation or suppression of facts, or if the imposed tax, cess and penalty have not been paid.
7. Further, no proceedings in respect of a declaration under the proposed scheme shall be admissible under any law, subject to exceptions, if any.
8. Amendments have also been made by the Govt in the provisions of the existing Income Tax Act, 1961, relating to tax on unexplained cash credits, investments, money, expenditure, etc., and higher tax rates and penalties have been introduced to deal with such cases.
9. If a person does not avail the proposed disclosure window, he/she will be governed by the provisions made by the amendments and will have to pay a sum amounting to about 75% of the undisclosed income, by way of:
   i. tax @ 60% on the undisclosed income,
   ii. plus surcharge @ 25% of the tax.
   iii. Further, a penalty @ 10% of the tax shall also be levied if the person fails to report such undisclosed income in the income tax return, or has not paid the tax thereon on or before the end of the relevant previous year (i.e. by way of advance tax).
10. In the case of a Jan Dhan account, the holder may deposit up to ₹50,000 per account.
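As an arithmetic check of the two routes above, the cess is taken here as 33% of the tax, which is what makes the widely quoted total of 49.9% work out (a 33.33% cess would give roughly 50%); the amount is arbitrary since only the rates matter:

```python
income = 100.0  # any undisclosed amount; rates are what matter

# Route 1: disclosure under the PMGK scheme
tax = 0.30 * income            # 30% tax on income disclosed
cess = 0.33 * tax              # PMGK cess levied on the tax (~10% of income)
penalty = 0.10 * income        # 10% penalty on income disclosed
route1 = tax + cess + penalty  # -> 49.9% of income

# Route 2: no disclosure (before any reporting penalty)
tax2 = 0.60 * income           # 60% tax on undisclosed income
surcharge = 0.25 * tax2        # 25% surcharge on the tax (15% of income)
route2 = tax2 + surcharge      # -> 75% of income

print(f"{route1:.1f} {route2:.1f}")  # prints "49.9 75.0"
```

The additional 10%-of-tax reporting penalty in route 2 would add a further 6 points on top of the 75%.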

AMENDMENTS IN THE EXISTING PROVISIONS OF THE INCOME TAX ACT
If a taxpayer does not avail this last opportunity under the scheme, the other related provisions of the Act will be applicable with the proposed amendments, summarised in Table 2.

TABLE 2: PROPOSED AMENDMENTS IN THE INCOME TAX ACT (Source: www.pwc.com)

A study conducted on the impact of a cashless economy in Nigeria shows that a cashless policy will increase employment, reduce cash-related robbery and thereby the risk of carrying cash, reduce cash-related corruption, and attract more foreign investors to the country (April 2013).

NORMS FOR NON-RESIDENT INDIANS
NRIs also have to follow the same procedure as residents of India to deposit their old ₹500 and ₹1,000 currency banknotes. They not only have limited options to deposit or exchange their Indian currency, but also have to comply with provisions for additional disclosures if the need arises.
• Foreign branches of Indian banks are not accepting the old currency banknotes of ₹500 or ₹1,000, either for deposit or for exchange.
• To exchange the currency, NRIs have either to come to India themselves or authorise someone in writing to deposit the old currency banknotes into their Non-Resident Ordinary (NRO) account.
• The authorised person will have to bring a valid ID proof along with the written authorisation letter for this purpose.
• The deadlines as well as the other norms are the same for residents and NRIs.
• If the amount of the deposit exceeds the exemption limit, i.e. ₹250,000, the NRI has to go through scrutiny of the source of income, irrespective of whether he had filed a return in India or abroad.
• In case of a high-value transaction deposited in an NRO account, the bank needs to provide a currency exchange receipt.
• While making a deposit in an NRO account, the NRI himself has to disclose the sources of income at the respective bank branch.
• If the disclosure documents are not found satisfactory, the bank can file a Suspicious Transaction Report (STR) within 7 days.
• NRIs cannot take more than ₹25,000 outside India, under the guidelines of the Foreign Exchange Management Act.

AN INITIATIVE TOWARDS A CASHLESS ECONOMY
India is a country where the overwhelming majority of transactions are done in cash; hardly 5% of transactions take place in electronic form. A survey by a U.S.A. organisation observed that the share of cash in the volume of consumer transactions is quite large in comparison with electronic payments. The number of currency notes in circulation is also far higher than in other economies of the world: 76.47 billion currency notes were in circulation in India in the year 2012-13, as compared to 34.5 billion in the U.S.A.

FIGURE 1: COUNTRY-WISE USAGE OF CASH BY CUSTOMERS

The Union Govt has announced a slew of incentives and measures for the advancement of a digital and cashless economy in India. These include the following:
• A 0.5% discount for people who buy monthly season tickets on the suburban railway networks through the digital payment mode, from 1 January 2017.
• Petroleum companies will offer a discount of 0.75% on diesel and petrol to consumers who pay through digital means.
• No service tax will be charged on digital transaction charges/MDR (Merchant Discount Rate) for transactions up to ₹2,000 per transaction.
• Licensing of payments banks.
• Promotion of e-commerce by liberalising the norms of FDI (Foreign Direct Investment) for this sector.
• Introduction of the UPI (Unified Payments Interface) to make electronic transactions easier and simpler.

IMPACT OF DEMONETISATION
• GDP: As per former Prime Minister Dr. Manmohan Singh's statement, withdrawing the ₹500 and ₹1,000 currency banknotes without a backup plan will cause India's GDP to decline by 2% in this fiscal year. Many rating agencies have also estimated India's GDP to fall by 0.5 to 1 percentage point.




• ECONOMIC ACTIVITY: Withdrawing high value notes, which accounted for approximately 86% of all currency in circulation, means that people were left with no cash to carry out dealings. This affected not only small traders, but also led to a massive dip in the production of medium-size enterprises, stopped the functioning of mandis, and forced migrant labourers to go back home due to the temporary shutdown of factories. The CMIE (Centre for Monitoring Indian Economy) has estimated the impact on various sectors as:
   • ₹61,500 crore of business lost from reduced consumption.
   • ₹35,100 crore of cost borne by banks.
   • ₹15,000 crore in foregone wages during the period.
• LIQUIDITY: Liquidity is badly hit, as there is no cash for making purchases or sales. As per the research report of Credit Suisse, only ₹1.5 lakh crore of new banknotes were back in circulation, compared to nearly ₹14.18 lakh crore of old banknotes withdrawn, as on 26 November 2016.

FIGURE 2: VALUE OF CURRENCY IN CIRCULATION AS ON 26 NOV 2016

(Source: Credit Suisse Research Report)
• BANK DEPOSITS AND CASH EXCHANGE: Banking activity showed a slump in the later days of demonetisation, as shown below:

Table 3: Banking Transactions (₹ in crore)
Period           Exchange   Deposits   Withdrawals
Nov 10 - Nov 18  33,006     511,565    103,316
Nov 19 - Nov 27  942        299,468    113,301
(Source: Reserve Bank of India)

• MUNICIPAL AND LOCAL TAXES: As demonetisation was announced, a hefty increase was seen in the payment of dues and advance taxes, in order to use the old ₹500 and ₹1,000 notes.

To conclude, demonetisation can be seen as a step of currency revolution in Indian history. It is an initiative by the Union Govt towards Digital India. In this regard the Govt announced a number of new incentives to accelerate digital transactions through various modes such as debit/credit cards, bank transfers and e-wallets, and to make digital payment user-friendly; e.g., on dialling "*99#", an individual can perform banking transactions even without having a smartphone.

REFERENCES
1. Amid disruptions & uproar, Bill to tax deposits passed in Lok Sabha (2016, November 29). Retrieved from http://www.thehindu.com/news/national/Amid-disruptions-uproar-Bill-to-tax-deposit-passed-in-Lok Sabha/article16720245.ece

2. Constitution of Task Force for enabling dispensation of Mahatma Gandhi (New) Series Banknotes - Recalibration and Reactivation of ATMs. (2016, November 14). Retrieved from http://www.rbi.org.in
3. Demonetisation will bring economy to a new equilibrium, says NITI Ayog CEO. (2016, November 26). The Hindu. Retrieved from http://www.thehindu.com/business/demonetisation-will-bring-economy-to-a-new-equilibrium-says-NITI-Ayog-CEO/article16707566.ece
4. Naina Khedekar. (2016, November 16). Demonetisation: 4 Apps & Services to find ATMs, banks with cash near you. Retrieved from http://www.tech.firstpost.com/news-analysis/demonetisation-4-apps-services-to-find-atms-banks-with-cash-near-you347662.html
5. Omotunde M., Sunday T., John-Dewole A.T. (2013, April). Impact of Cashless Economy in Nigeria. Greener Journal of Internet, Information and Communication Systems, 040-043. Retrieved from http://www.gjournals.org
6. Raj, Kumari (2016, November 10). 8 New payments where you may use old 500-1000 notes. Simple Tax India. Retrieved from http://www.simpletaxindia.net/2016/11/8-new-payments-where-you-may-use-old.html
7. Rohit, Venkataramakrishnan. (2016, November 27). This is how much rating agencies are expecting GDP to drop because of demonetisation. Retrieved from http://www.scroll.in/article/822654/this-is-how-much-rating-agencies-are-expecting-gdp-to-drop-because-of-demonetisation
8. Samarth, Bansal. (2016, November 29). RBI data shows reduced cash exchange and bank deposit following demonetisation. The Hindu. Retrieved from http://www.thehindu.com/economy/RBI-data-shows-reduced-cash-exchange-and-bank-deposit-following-demonetisation/article16721039.ece
9. Tinesh, Bhasin. (2016, November 23). Demonetisation: Stricter norms for NRIs. Business Standard Times. Retrieved from http://www.business-standard.com/article/special/demonetisation-stricter-norms-for-nris-11611...


Journal of Mathematics & Systems Sciences (JMASS) Vol. 12, No. 1-2, 2016

ISSN : 0975-5454

CONFIDENCE INTERVAL OF ANNUAL EXTREMUM OF AMBIENT AIR TEMPERATURE AT GUWAHATI Rinamani Sarmah Bordoloi* , Dhritikesh Chakrabarty** *Research Scholar, Department of Statistics, Assam Down Town University, Panikhaiti, Guwahati, Assam, India. **Department of Statistics, Handique Girls' College, Guwahati, Assam & Research Guide, Assam Down Town University, Panikhaiti, Guwahati, Assam, India. e-mail : [email protected], [email protected] , [email protected]

ABSTRACT
Confidence intervals (of 95%, 99% & 99.73% degrees of confidence) have been determined for each of the annual maximum & annual minimum of ambient air temperature at Guwahati. The determination is based on data from 1969 onwards collected from the Regional Meteorological Centre at Guwahati. This paper describes the method of their determination and the numerical findings.
Keywords: Ambient air temperature at Guwahati, annual extremum, confidence interval.

I. INTRODUCTION
Observations or data collected from experiments or surveys suffer from chance error (which is unavoidable or uncontrollable) even if all the assignable (or intentional) causes or sources of error are controlled or eliminated; consequently the findings obtained by analyzing observations or data which are free from assignable errors are also subject to error due to the presence of chance error in the observations (D. Chakrabarty, 2014). Determination of parameters, in different situations, based on the observations is subject to error for the same reason. Searching for mathematical models describing the association of chance error with the observations is necessary for analyzing the errors. There are innumerable situations/forms corresponding to scientific experiments; the simplest one is that where observations are composed of some parameter and chance errors (D. Chakrabarty, 2014, 2015, 2008). The existing methods of estimation, namely the least squares method, maximum likelihood method, minimum variance unbiased method, method of moments and method of minimum chi-square (Ivory 1825, G. A. Barnard 1949, Lucien Le Cam, Birnbaum Allan 1962, Erich L. Lehmann 1990, Anders Hald 1999), provide estimators of the parameter which suffer from some error. In other words, none of these methods can provide the true value of the parameter. However, an analytical method has been developed by Chakrabarty (D. Chakrabarty, 2014) for determining the true value of the parameter from observed data in the situation where the observations consist of a single parameter and chance error but no assignable error. The method has already been successfully applied in determining the central tendency of each of the annual maximum and annual minimum of the ambient air temperature at Guwahati (D. Chakrabarty, 2014). This paper deals with the determination of confidence intervals of each of the annual maximum and annual minimum of the ambient air temperature at Guwahati. The study has been carried out using data from 1969 onwards.

2. GAUSSIAN DISCOVERY
In the year 1809, the German mathematician Carl Friedrich Gauss discovered the most significant probability distribution in the theory of statistics, popularly known as the normal distribution, the credit for which discovery is also given by some authors to the French mathematician Abraham De Moivre, who published a paper in 1738 showing the normal distribution as an approximation to the binomial distribution discovered by James Bernoulli in 1713 (Bernoulli 1713, Chakrabarty 2005b, 2008, De Moivre 1711, 1718, Kendall and Stuart 1977, 1979, Walker and Lev 1965, Walker 1985, Brye 1995, Hazewinkel 2001, Marsagilia 2004, Stigler 1982). The normal probability distribution plays the key role in the theory of statistics as well as in the application of statistics, and there are innumerable situations where one can think of applying it. The normal distribution discovered by Gauss is described by the probability density function

f(x ; µ, σ) = {σ √(2π)}⁻¹ exp[ −½ {(x − µ)/σ}² ],  −∞ < x < ∞,  (2.1)
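The confidence intervals the paper computes for a normal mean can be sketched numerically. The temperature values below are hypothetical, purely illustrative stand-ins for the Guwahati annual-maximum series, not the actual data; the z-multipliers 1.96, 2.576 and 3 correspond to the 95%, 99% and 99.73% confidence levels of the normal distribution (2.1):

```python
import math

# Hypothetical annual-maximum temperatures (deg C) -- illustrative only,
# NOT the actual Guwahati series used in the paper.
temps = [37.2, 38.0, 36.8, 39.1, 37.5, 38.4, 36.9, 38.8]

n = len(temps)
mean = sum(temps) / n
# sample standard deviation
s = math.sqrt(sum((t - mean) ** 2 for t in temps) / (n - 1))

intervals = {}
# z-multipliers for the three confidence levels used in the paper
for level, z in [("95%", 1.96), ("99%", 2.576), ("99.73%", 3.0)]:
    half = z * s / math.sqrt(n)             # half-width of the interval
    intervals[level] = (mean - half, mean + half)
    print(level, intervals[level])
```

As expected, the interval widens as the degree of confidence increases from 95% to 99.73%.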
