Chapter 6

Extended Generalized Gamma, Beta Functions and Its Distributions Applications

by Dr Bachioua Lahcene

First Edition 2012


Table of Contents

Dedication .......... 3
Acknowledgments .......... 5
Abstract .......... 6
List of Figures .......... 8
List of Tables .......... 15
Chapter One: On Gamma and Beta Functions .......... 16
  1-1-History of the Gamma Function .......... 16
  1-2-The Euler Gamma Function .......... 18
  1-3-Gamma Function Relationships .......... 27
  1-4-Properties of the Gamma Function .......... 35
  1-5-Incomplete Gamma Function .......... 36
  1-6-Derivatives .......... 41
  1-7-Relation between Beta and Gamma Functions .......... 42
  1-8-Applications of Gamma, Beta and Zeta Functions .......... 49
  1-9-Exercises .......... 58
Chapter Two: On Gamma and Beta Distributions .......... 62
  2-Introduction .......... 62
  2-1-Continuous Random Variables and Probability Distributions .......... 64
  2-2-The Gamma Distribution .......... 67
  2-3-Exponential Distribution .......... 77
    2-3-1-Properties of the Exponential Density Function .......... 79
    2-3-2-Applications .......... 82
    2-3-3-The Chi-Squared Distribution .......... 84
  2-4-Linear Combinations and Transformation of Random Variables .......... 86
    2-4-1-Applications: Mean and Variance of the Chi-Square Random Variable .......... 88
    2-4-2-Results .......... 89
  2-5-The Beta Distribution .......... 89
  2-6-Related Distributions .......... 91
  2-7-Examples .......... 94
  2-8-Exercises .......... 102
Chapter Three: On Generalized Gamma and Beta Functions .......... 110
  3-Introduction .......... 110
  3-1-Generalized Gamma Function .......... 115
  3-2-Incomplete Generalized Gamma Functions .......... 121
  3-3-Generalized Digamma Functions .......... 124
  3-4-The Generalized Beta-Function .......... 131
  3-5-Relation of Beta-Function to Gamma-Function .......... 137
  3-6-The Error-Function .......... 140
  3-7-Generalized Error Function .......... 142
  3-8-Examples .......... 144
  3-9-Exercises .......... 154
Chapter Four: On Generalized Gamma and Beta Distributions .......... 158
  4-Introduction .......... 158
  4-1-Generalized Gamma Distribution .......... 163
  4-2-The Generalized Beta Distribution .......... 178
  4-3-The PERT Beta Distribution .......... 183
  4-4-Weibull Analysis .......... 192
  4-5-Application and Weibull Analysis .......... 195
  4-6-Mixtures of Gamma Distributions with Applications .......... 199
  4-7-Exercises .......... 204
Chapter Five: On Extended Generalized Gamma and Beta Functions .......... 212
  5-Introduction .......... 212
  5-1-The Kobayashi and Agarwal-Kalla Gamma-Type Functions .......... 221
  5-2-The Extended Generalized Gamma Function .......... 227
  5-3-Extended Generalized Beta Function .......... 241
  5-4-Examples with Applications .......... 249
  5-5-Exercises .......... 262
Chapter Six: On Extended Generalized Gamma and Beta Distributions .......... 266
  6-Introduction .......... 266
  6-1-Extended Generalized Gamma Distribution .......... 272
  6-2-Extended Generalized Function of Gamma Distribution .......... 275
  6-3-Some Cases of the Extended Generalized Gamma Distribution .......... 277
  6-4-Extended Generalized Beta Distribution .......... 280
  6-5-Cases of Extended Generalized Gamma and Beta Distributions .......... 282
    6-5-1-Weibull Distribution .......... 282
    6-5-2-Family of Normal Distributions .......... 285
    6-5-3-Exponential Distribution .......... 288
    6-5-4-Rayleigh Distribution .......... 289
    6-5-5-Gamma Distribution .......... 290
    6-5-6-Truncated Normal Distribution .......... 291
    6-5-7-A Derivation of the Standardized SGT .......... 292
    6-5-8-Superimposing of Two Exponential Distributions .......... 295
    6-5-9-Combination of Exponential and Truncated Normal Distribution .......... 296
    6-5-10-Partially Linear Distribution .......... 297
  6-6-Inapplicability of the Constant Failure Rate Assumption .......... 298
  6-7-Examples .......... 300
  6-8-Applications of Extended Generalized Gamma in Mixture Models .......... 317
  6-9-Open Problems for Extended Distributions .......... 322
  6-10-Exercises .......... 324
References .......... 328

Dedication:

The book discusses the centrality of extended generalized gamma and beta functions and distributions, their applications, and some applications of the Euler gamma and beta integrals. However, if the actual probability density or quantiles are needed, the Pearson system is not convenient; the inconvenience stems from the number of families included in the system and the logic needed to select a specific family and match distribution parameters. In a later chapter we consider a form of the extended generalized gamma distribution function characterized by six parameters, defined in its original form as first introduced by Bachioua in 2004.

The book contains six chapters. The first chapter (On Gamma and Beta Functions) discusses general background issues: it provides the philosophical and historical origins, particularly from gamma and beta function theory and applications, and the argument that such models merely transfer physics ideas, to be discussed below, with examples and exercises in every chapter. The second chapter (On Gamma and Beta Distributions) briefly discusses some philosophical and historical origins, particularly from gamma and beta distribution theory and applications. The third chapter (On Generalized Gamma and Beta Functions) presents the standard mathematics of the generalized gamma and beta functions; for particular examples and exercises this ultimately involves deriving a master equation that describes the evolution of probability distributions. The fourth chapter (On Generalized Gamma and Beta Distributions) focuses on the relations between the different distributions. Chapter five (On Extended Generalized Gamma and Beta Functions) focuses on the relation between the extended generalized gamma and beta functions and introduces some speculations about the relations between them, discussed more fully later; a first-pass analysis shows several broadly possible outcomes, including homogeneous integration of the family's functions. This is one of the few places in the book where such empirical work is carried out, and the relative paucity of such work is an arguable weakness of the book. Chapter six (On Extended Generalized Gamma and Beta Distributions) examines the rise and fall of interacting groups of particular cases, in which a large number of parameters come into play; three scenarios are studied: cycles, chaos, and "delusive long-term stability". A problem here is that so many variables are involved that it is hard to pin down what is really crucial and what is not. Thus, in the quality competition model, supply is made a slaved variable, while changes in demand are an order parameter that drives the system in conjunction with quality.

This book on extended generalized gamma and beta functions and their distribution applications has been adapted by the author to be student-friendly and to maximize comprehension by emphasizing computational skills, ideas, and problem solving rather than mathematical theory.

Dr Bachioua Lahcene
Department of Mathematics, Deanship of Preparatory Year, Hail University, KSA


ACKNOWLEDGEMENTS

I would first like to thank Prof. Dr. Shawki Shaker Hussain, who gave me the freedom to explore many diverging areas but drew my focus back when necessary. He gave me sympathy and encouragement to get me through the lows, yet stopped me from wallowing in self-pity, and he identified the areas on which I needed to work without making me feel criticized. It is impossible to overstate how much I appreciate the time, consideration and support that he has given me over the past four years (1999-2003). I also thank my colleagues for their assistance, and in particular Dr. M. Aichouni, who, most importantly, gave me constant, unconditional and uncomplaining support, both materially and emotionally. Thanks must also go to Dr. Ghenadie Braghis, Acquisition Editor, who has been both comforting and encouraging, for advice concerning stable isotope work and the preparation of the manuscripts and the book; he has always been there for me when I needed him: understanding, affectionate and altruistic. Finally, thanks must go to family and friends for their support throughout the last three and a half years, in particular my father, mother, and brother for support in the later stages, and my future wife for help with some aspects of the final version of this book.


Abstract

Samples from continuous distributions, their historical development, and the positive results they have achieved in several practical applications in various production and planning settings are very important. It is notable that the shaping of the forms of those continuous distributions has been neglected. The wide variety of aspects studied has produced such samples, and some of their mixtures have proved themselves over the last decade in many production studies. The common assumption of recent years, that the samples under study may be considered pure, rests on conditions that are insufficient, practically impossible to achieve, and very expensive; it has lately drifted far from reality, as most of the studied samples confirm. The researcher therefore suggested a form that produces various shapes and mixtures of continuous distributions across several information values, the idea being to find the most general formula for the various forms used for continuous distributions. The generalized gamma distribution model is considered one such distribution that has established this role, since it has been shown to contain many familiar continuous distribution models, particularly the families of the exponential and Weibull distribution models. These two families are characterized by distinguished statistical properties, especially in the last century, through the use of reliability approaches for production phenomena, particularly in industry. This is because there is wide latitude in estimating statistical parameters, which allows statistical researchers to choose the most suitable distribution model when verification of the assumed theoretical conditions is difficult and many impure, sensitive samples are present.

The researcher has tried to formulate a distribution model that contains, as special cases, many of the familiar distribution models and some of their mixtures, whether sums or products, after generalizing the general gamma function and extending its parameter ranges; its properties and results were then studied. The results are considered important especially in the study of the behavior and reliability of production machine systems, in addition to the influence of natural conditions on their components, e.g., iron corrosion, which takes place during production operations at work or in inventory. The suggested distribution model is formulated with six parameters and is considered a model function which, in turn, is formulated in terms of its familiar parameters. The researcher applied this model to the subjects of reliability and mixtures of continuous distribution models, and found remarkable roles for the suggested model's parameters and their benefits in interpreting some statistical phenomena that had been unsolved and unclear. While the topic is complicated, the researcher attended both to the theoretical features and to the practical applications that show the characteristics of the suggested model, by finding uses and names for each model parameter. Through the estimation procedures, the researcher showed how to increase the accuracy of the parameter estimates; these results are considered dependable in the interpretation of sensitive phenomena. Through the study of the proposed distribution model, called the extended generalized gamma distribution, it was found that this model does not cover some aspects of important distribution models, e.g., the generalized beta distribution model, except after generalization of the generalized gamma function to the generalized Omega, which may receive further investigation as a future open problem. The reader of this book will quickly find satisfaction in the first five chapters with the suggested forms, their use in applied fields of life, and the development of the time conditions in the possible areas.


List of Figures

Figure 1.1   Factorial function .......... 20
Figure 1.2   Plot of Γ(x) .......... 21
Figure 1.3   Plot of ln Γ(x) .......... 23
Figure 1.4   Plot of Γ(x) and 1/Γ(x) .......... 30
Figure 1.5   Gamma function for 0 ≤ x ≤ 5 .......... 30
Figure 1.6   Gamma function for 0 ≤ x ≤ 3 .......... 37
Figure 1.7   Beta function B(x, y) and log B(a, z) .......... 43
Figure 2.1   Diagram of distribution relationships .......... 62
Figure 2.2   Probability density function of the random variable X .......... 66
Figure 2.3   f_X(x) gamma distributions for different values of α, β .......... 68
Figure 2.4   f_X(x) gamma distribution for different values of α and β .......... 68
Figure 2.5   f_X(x) gamma distributions for different values of α and β .......... 69
Figure 2.6   f(x, λ, n) gamma distribution for different values of λ = 1, n .......... 72
Figure 2.7   f(x, λ, n) gamma distributions for different values of λ = 1, n = 16 .......... 72
Figure 2.8   Densities of the gamma distributions G(0.5, 2), G(1, 2) and G(2, 2) .......... 74
Figure 2.9   Illustrative densities for gamma and exponential models .......... 76
Figure 2.10  Exponential density functions for some values of λ .......... 79
Figure 2.11  Exponential density functions for one value of λ .......... 80
Figure 2.12  The p.d.f. and c.d.f. graphs of the exponential distribution with λ = 0.5 .......... 80
Figure 2.13  Chi-square distributions for 2, 4, 8 .......... 85
Figure 2.14  Chi-square distributions for 1, 2, 4, 6, 8 .......... 87
Figure 2.15  Beta density functions for the values B(0.1, 0.8), B(0.8, 0.2), and B(0.8, 0.8) .......... 91
Figure 2.16  Beta density functions for different values .......... 91
Figure 2.17  Beta's pdf and cdf .......... 92
Figure 2.18  Beta density functions for the values B(1, 2), B(1, 1), and B(2, 1) .......... 92
Figure 2.19  Beta density functions for the values B(2, 8), B(5, 5), and B(8, 2) .......... 93
Figure 3.1   Plot of the gamma function f(n) = Γ(n + 1) .......... 112
Figure 3.2   Plot of the exponential function f(x) = e^(−x) .......... 113
Figure 3.3   Plot of the function f(x) = x^3.5 .......... 113
Figure 3.4   Plot of the function f(x) = x^3.5 e^(−x) .......... 114
Figure 3.5   Plot of the general function f(x) = x^3.5 e^(−x) .......... 115
Figure 3.6   Plot of E_p(x), 0 ≤ x ≤ 3, 0 ≤ p ≤ 8 .......... 117
Figure 3.7   Plot of E_p(x + iy), −4 ≤ x ≤ 4, −4 ≤ y ≤ 4, p = 1/2 .......... 118
Figure 3.8   Plot of E_p(x + iy), −4 ≤ x ≤ 4, −4 ≤ y ≤ 4, p = 1 .......... 118
Figure 3.9   Plot of E_p(x + iy), −3 ≤ x ≤ 3, −3 ≤ y ≤ 3, p = 3/2 .......... 119
Figure 3.10  Plot of E_p(x + iy), −3 ≤ x ≤ 3, −3 ≤ y ≤ 3, p = 2 .......... 119
Figure 3.11  Amplitude and phase of the factorial of a complex argument .......... 120
Figure 3.12  Plot of some cases of Γ(α, x) .......... 122
Figure 3.13  Plot of ψ₀(x) = ψ(x) .......... 126
Figure 3.14  Plot of log |Γ(x)| .......... 126
Figure 3.15  Plot of the Riemann zeta function .......... 127
Figure 3.16  Plot of B₀,₁(α, β) .......... 131
Figure 3.17  Plot of erf(x) .......... 141
Figure 3.18  Plot of E_n(x) .......... 143
Figure 4.1   Diagram of some continuous distribution relationships .......... 159
Figure 4.2   Diagram of continuous and discrete distribution relationships .......... 162
Figure 4.3   Plot of f(k, ·, ·; x) .......... 166
Figure 4.4   Weibull pdf with 0 < β < 1, β = 1, β > 1 and a fixed η .......... 169
Figure 4.5   Weibull cdf, or unreliability vs. time, on Weibull probability plotting paper with 0 < β < 1, β = 1, β > 1 and a fixed η .......... 170
Figure 4.6   Weibull 1 − cdf, or reliability vs. time, on linear scales with 0 < β < 1, β = 1, β > 1 and a fixed η .......... 171
Figure 4.7   Weibull failure rate vs. time with 0 < β < 1, β = 1, β > 1 .......... 171
Figure 4.8   Weibull pdf with η = 50, 100, 200 .......... 173
Figure 4.9   The Weibull p.d.f. for different β .......... 174
Figure 4.10  Plot of gamma density function and cumulative distribution function .......... 177
Figure 4.11  Plot of gamma density function and cumulative distribution function .......... 177
Figure 4.12  Plot of beta density function and cumulative distribution function .......... 179
Figure 4.13  Plot of the density function of the PERT distribution for some values .......... 184
Figure 4.14  Plot of the standard deviation of the PERT distribution for some values .......... 186
Figure 4.15  Plot of the extreme tail, making the Max value estimate rather meaningless .......... 187
Figure 4.16  Plot of the density function of the uniform distribution for some values .......... 188
Figure 4.17  Plot of the density function of (α, β, min, max) for some values .......... 189
Figure 4.18  Plot of the density function of PERT for minimum and maximum values .......... 189
Figure 4.19  Plot of the density function of PERT for minimum and maximum values .......... 190
Figure 4.20  Plot of the density function of PERT for minimum and maximum values .......... 191
Figure 4.21  Plot of the rate functions for the Weibull family of distributions .......... 193
Figure 4.22  Plot of the expected number of failures .......... 197
Figure 4.23  Plot of the widget as a function of time .......... 198
Figure 5.1   Bernoulli's gamma function, interpolating z!, in the complex plane .......... 212
Figure 5.2   Plot of the inverse function and the factorial x! .......... 213
Figure 5.3   Hadamard's gamma function, interpolating n!, n integer .......... 214
Figure 5.4   Q(x), the 'oscillating factor' in Hadamard's gamma function .......... 215
Figure 5.5   Arc factorial in the complex plane .......... 215
Figure 5.6   Luschny's factorial function L(x) .......... 216
Figure 5.7   Factorial x! and the L-factorial L(x) .......... 216
Figure 5.8   Plot of Gamma(x) − H(x) and x! − L(x) .......... 217
Figure 5.9   Power function .......... 218
Figure 5.10  Exponential and logarithmic functions .......... 219
Figure 5.11  Fitting the function y = α + (0.49 − α) exp(−β(x − 8)) to the data set .......... 220
Figure 5.12  Plot of the absolute value, real part and imaginary part of the gamma function .......... 223
Figure 5.13  Plot of the generalized gamma function for some values of r, k, n, · .......... 224
Figure 5.14  Plot of the absolute value of the gamma function on the complex plane .......... 225
Figure 5.15  Plot of the principal-branch logarithmic gamma function .......... 225
Figure 5.16  Plot of the extended generalized gamma function for some values of the parameters .......... 225
Figure 5.17  Plot of the phase of the Barnes generalized gamma .......... 226
Figure 5.18  Plot of the phase of the generalized gamma .......... 226
Figure 5.19  Plot of one case of the extended generalized gamma .......... 227
Figure 5.20  Representation of the extended generalized gamma set of functions .......... 234
Figure 5.21  Graph of the complex extended generalized gamma function .......... 237
Figure 5.22  Graphs of the extended generalized gamma function for some cases .......... 238
Figure 5.23  Graphs of the simple extended generalized gamma function cases .......... 239
Figure 5.24  Graphs of the extended generalized gamma function .......... 239
Figure 5.25  Diagram of some extended gamma and beta function relationships .......... 240
Figure 5.26  A graph of B₀,₁(α, β) on the space 0 ≤ α ≤ 10, 0 ≤ β ≤ 10 .......... 242
Figure 5.27  A graph of B₀,₁(α, β) on the square 0 ≤ α ≤ 10, 0 ≤ β ≤ 10 .......... 243
Figure 5.28  Real beta function plot .......... 245
Figure 5.29  A graph of B₀,₁(α, β) on the square 0 ≤ α ≤ 100, 0 ≤ β ≤ 100 .......... 247
Figure 5.30  A graph of B₀,₁(α) on the real plane (0 ≤ α ≤ 10) .......... 247
Figure 5.31  A graph of B₀,₁(·, β) on the real plane (−6 ≤ β ≤ 6) .......... 248
Figure 5.32  A graph of B₀,₁(α, β) in the complex plane .......... 248
Figure 5.33  A graph of B_{a,b}(x, y) for some values of (a, b) .......... 251
Figure 5.34  A graph of EGGF functions in the hierarchy of diffraction catastrophes .......... 261
Figure 6.1   Some family distributions which can be used in various areas .......... 266
Figure 6.2   Relationship between some family distributions .......... 269
Figure 6.3   The Pearson system family of distributions .......... 269
Figure 6.4   Johnson distribution .......... 271
Figure 6.5   Illustration of the gamma pdf .......... 272
Figure 6.6   Illustration of the gamma pdf for parameter values over k = 10, θ = 10, 9, 16, 25, 36, 48, 64, 81 .......... 274
Figure 6.7   Illustration of the gamma pdf for some parameter values .......... 275
Figure 6.8   Illustration of the generalized 3-parameter exponential distribution .......... 279
Figure 6.9   Illustration of the family of uniform distributions .......... 281
Figure 6.10  Illustration of the Weibull pdf .......... 282
Figure 6.11  Illustration of the Weibull cdf .......... 283
Figure 6.12  The two-parameter Weibull distribution .......... 284
Figure 6.13  The family of normal distributions (pdf) .......... 285
Figure 6.14  The lognormal distribution pdf .......... 286
Figure 6.15  The lognormal distribution (pdf, cdf) .......... 286
Figure 6.16  Normal distribution parameters (pdf, cdf) .......... 287
Figure 6.17  Plot of density and cumulative Gaussian q-distribution (pdf, cdf) .......... 288
Figure 6.18  Plot of density and cumulative exponential distribution (pdf, cdf) .......... 288
Figure 6.19  Plot of density and cumulative Rayleigh distribution (pdf, cdf) .......... 289
Figure 6.20  Plot of density and cumulative gamma distribution (pdf, cdf) .......... 290
Figure 6.21  Plot of density and cumulative truncated normal distribution (pdf, cdf) .......... 292
Figure 6.22  The superimposing of two exponential distributions (pdf, cdf) .......... 295
Figure 6.23  The combination of exponential and truncated normal distribution (pdf) .......... 296
Figure 6.24  The partially linear distribution .......... 297
Figure 6.25  Human mortality rate analyzed with the exponential and Weibull distributions .......... 299
Figure 6.26  Empirical distribution of maternity LOS and fitted two-component gamma mixture model .......... 319
Figure 6.27  Empirical distribution of a component beta mixture model .......... 321

List of Tables

Table (1.1)  Basic Stirling Values .......... 23
Table (1.2)  Basic Gamma Values .......... 24
Table (1.3)  Higher Gamma Values .......... 30
Table (2.1)  Comparison of results with the bounds from Chebyshev's inequality .......... 96
Table (4.1)  Times at which widget failures occurred .......... 196
Table (6.1)  Two-component Beta mixture regression model .......... 320
Table (6.2)  Fundamental constant distributions .......... 323

Chapter One: On Gamma and Beta Functions

1-1-History of the Gamma Function

Over the past 300 years there has been a substantial increase in the use of special functions in the formulation of solutions for scientific and engineering problems. Special functions are used as mathematical models for many and varied physical situations, and they also occur as reformulations of other mathematical problems. In recent years, the theory of gamma and beta functions has been used in many different areas of mathematics, the applied sciences, and engineering. Many problems in the fields of ordinary and partial differential equations, scattering theory in quantum mechanics, the viscodynamics of biological fluids, contact problems in the theory of elasticity, mixed boundary problems in mathematical physics, physical chemistry, and engineering can be formulated in terms of special functions.

The history of the gamma function is primarily due to Euler (1707-1783), who developed the analytical formulation of this function, following the famous work of J. Stirling (1730), who first used a series for log(n!) to derive the asymptotic formula for n!. As a matter of fact, it was Daniel Bernoulli who in 1729 gave the first representation of an interpolating function of the factorials in the form of an infinite product, later known as the gamma function Γ(x). The correspondence between Goldbach, Daniel Bernoulli and Euler which undoubtedly gave birth to the gamma function is well documented in Paul Heinrich Fuss's "Correspondance mathématique et physique de quelques célèbres géomètres du XVIIIème siècle" (St. Pétersbourg, 1843).

The gamma function Γ(x) is the most important function not found on a calculator. It comes up constantly in mathematics; in some areas, such as probability and statistics, you will see the gamma function more often than functions that are on a typical calculator, such as the trigonometric functions. The gamma function is the solution to a specific integral. Though the gamma function is useful for physical applications, it is of no little theoretical interest to mathematicians, and it is occasionally related to the "error functions". Its simplest expression is at positive integer values, where it agrees with the factorial function: the factorial of an integer is the product of that integer with all positive integers smaller than it, and many of its other forms are recursive as well.

The gamma function extends the factorial function to real numbers. Since factorial is only defined on non-negative integers, there are many ways one could define factorial that would agree on the integers and disagree elsewhere; but everyone agrees that the gamma function is "the" way to extend factorial. Strictly, the gamma function Γ(x) does not extend factorial, but Γ(x + 1) does: shifting the definition over by one makes some equations simpler, and that is the definition that has caught on. In a sense, Γ(x + 1) is the unique way to generalize factorial: Harald Bohr and Johannes Mollerup (1922) proved that it is the only log-convex function that agrees with factorial on the non-negative integers. That is somewhat satisfying, except why should we look for log-convex functions? Log-convexity is a very useful property to have, and a natural one for a function generalizing the factorial.

Leonhard Euler is considered one of the top ten mathematicians in human history; he was an extremely prolific and very ingenious mathematician. In 1729 Euler proposed a generalization of the factorial function n! = n(n − 1)(n − 2)···3·2·1 from integers to any real number. His generalization is called the gamma function Γ(x), which is defined as:

    Γ(x) = lim_{m→∞} [ mˣ · m! / ( x(x + 1)(x + 2)···(x + m) ) ]

Notable investigators include C. Siegel, A. M. Legendre, K. F. Gauss, C. J. Malmstén, O. Schlömilch, J. P. M. Binet (1843), E. E. Kummer (1847), and G. Plana (1847). M. A. Stern (1847) proved convergence of the Stirling series for the derivative of log(Γ(x)). C. Hermite (1900) proved convergence of the Stirling series for log(Γ(x + 1)) when x is a complex number.
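Euler's limit definition above is easy to check numerically. The sketch below (plain standard-library Python; the function name and the cutoff m are our choices, not from the book) evaluates the finite-m product in log space to avoid overflow, and converges to familiar values such as Γ(5) = 4! = 24:

```python
import math

def gamma_euler_limit(x, m=100000):
    """Approximate Gamma(x) by Euler's limit product
    Gamma(x) = lim_{m->oo} m**x * m! / (x*(x+1)*...*(x+m)),
    computed in log space so that large m does not overflow."""
    log_num = x * math.log(m) + math.lgamma(m + 1)       # log(m**x * m!)
    log_den = sum(math.log(x + k) for k in range(m + 1))
    return math.exp(log_num - log_den)

print(gamma_euler_limit(5.0))   # tends to 4! = 24 as m grows
print(gamma_euler_limit(0.5))   # tends to sqrt(pi) = 1.77245...
```

The finite-m error decays only like 1/m, which is one reason closed-form representations such as the Euler integral of the next section are preferred in practice.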

During the twentieth century, the function log(Γ(x)) was used in many research works where the gamma function was applied or investigated. The appearance of computer systems at the end of the twentieth century demanded more careful attention to the structure of branch cuts for basic mathematical functions, to support the validity of mathematical relations everywhere in the complex plane. This led to the appearance of a special log-gamma function logΓ(x), which is equivalent to the logarithm of the gamma function log(Γ(x)) as a multivalued analytic function, except that it is conventionally defined with a different branch cut structure and principal sheet. The log-gamma function logΓ(x) was introduced by J. Keiper (1990) for Mathematica.

The importance of the gamma function and its Euler integral stimulated some mathematicians to study the incomplete Euler integrals, which are actually equal to the indefinite integral of the expression tⁿe⁻ᵗ. They were introduced in an article by A. M. Legendre (1811). Later, P. Schlömilch (1871) introduced the name "incomplete gamma function" for such an integral. These functions were investigated by J. Tannery (1882), F. E. Prym (1877), and M. Lerch (1905), who gave a series representation for the incomplete gamma function. N. Nielsen (1906) and other mathematicians also had special interest in these functions, which are included in handbooks of special functions and in current computer systems such as Mathematica.

1-2-The Euler Gamma Function

The Euler gamma function is the only function described here that is available whether or not you set --enable-specfun; if that flag is not set, the gamma function can be used only for real arguments, and to use it for complex arguments you will need to set the flag. The gamma function is initially known as an extension of the factorial function; however, it goes beyond this definition, as its use extends to various disciplines such as combinatorics, physics, and statistics.


Despite its uniqueness and ubiquity in mathematics, the gamma function unfortunately has singularities at zero and the negative integers. It is also used to compute non-integral values of the factorial. The Euler gamma function is an integral relationship defined as follows:

    Γ(n) = ∫₀^∞ e⁻ˣ xⁿ⁻¹ dx

This integral is convergent for n > 0. The definition works for most negative values of n and even for complex values, but here we only care about positive real values. Note that this function is called implicitly whenever you evaluate fact(arg) where arg is not an integer; the Euler gamma function should not be evaluated for negative integer arguments.

The gamma function (represented by the capital Greek letter Γ) is an extension of the factorial function, with its argument shifted down by 1, to real and complex numbers. It can be seen as a solution to the following interpolation problem: "Find a smooth curve that connects the points (x, y) given by y = x! at the positive integer values of x." A plot of the first few factorials makes clear that such a curve can be drawn (Figure 1.1), but it would be preferable to have a formula that precisely describes the curve, in which the number of operations does not depend on the size of n. The simple formula for the factorial,

    n! = 1 · 2 · 3 · ... · n,

cannot be used directly for fractional values of n, since it is only valid when n is a natural number (i.e., a positive integer). There is, in fact, no such simple solution for factorials; no combination of sums, products, powers, exponentials, or logarithms with a fixed number of terms will suffice to express n!. However, it is possible to find a general formula for factorials using tools such as integrals and limits from calculus, and a good solution is the gamma function. It is easy graphically to interpolate the factorial function to non-integer values, but is there a formula that describes the resulting curve?
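The Euler integral introduced above can also be checked directly against factorials. The following is a minimal sketch (standard-library Python; the truncation point and step count are our choices): a trapezoidal rule on a truncated range, exploiting the fact that the integrand decays exponentially:

```python
import math

def gamma_integral(x, upper=60.0, steps=200000):
    """Trapezoidal evaluation of Gamma(x) = integral_0^oo t**(x-1) e**(-t) dt.
    Truncating at t = upper loses almost nothing for moderate x, since the
    tail decays like e**(-t)."""
    eps = 1e-9                      # skip t = 0, where t**(x-1) blows up for x < 1
    h = (upper - eps) / steps
    total = 0.0
    for i in range(steps + 1):
        t = eps + i * h
        w = 0.5 if i in (0, steps) else 1.0   # trapezoid end-point weights
        total += w * t ** (x - 1) * math.exp(-t)
    return h * total

# Gamma interpolates the shifted factorial: Gamma(n) = (n-1)! at integers.
print(gamma_integral(5.0), math.factorial(4))
print(gamma_integral(6.0), math.factorial(5))
```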


The points nearly fall on a straight line, but if you look closely, they bend upward slightly. If you have trouble seeing this, imagine a line connecting the first and last dot: it would lie above the dots in between. This suggests that a function whose graph passes through all the dots should be convex.

Figure 1.1: Factorial function

The gamma function is defined by the improper integral

    Γ(x) = ∫₀^∞ e⁻ᵗ t^(x−1) dt

Integration by parts readily reveals that Γ(n) = (n − 1)Γ(n − 1); we may write this as Γ(x + 1) = xΓ(x), where x is any real number other than a non-positive integer. The gamma function is finite except at the non-positive integers, where it has poles: approaching from the right, it goes to +∞ at zero and the negative even integers and to −∞ at the negative odd integers (Figure 1.2). The gamma function can also be uniquely extended to an analytic function on the complex plane, whose only singularities are the poles on the real axis. In particular, if n is a positive integer, Γ(n) = (n − 1)!.
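The pole behaviour just described can be observed with Python's standard-library math.gamma, which implements Γ for real arguments; the sign alternates between consecutive negative integers, and the magnitude blows up near each pole:

```python
import math

# Gamma is positive on (0, oo), negative on (-1, 0),
# positive on (-2, -1), negative on (-3, -2), and so on.
for x in (0.5, -0.5, -1.5, -2.5):
    print(f"Gamma({x}) = {math.gamma(x):.6f}")

# Approaching the pole at -1, the magnitude grows without bound:
print(abs(math.gamma(-1.000001)))   # very large
```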


Figure 1.2: Plot of Γ(x)

Figure 1.2 represents a plot of the gamma function; a plot of the absolute value of the gamma function over the complex plane shows the same poles. Let us consider the following case:




For x = 1/2:

Γ(1/2) = ∫₀^∞ e^(−t) t^(1/2−1) dt = ∫₀^∞ e^(−t) t^(−1/2) dt

(Let u = t^(1/2); then du = (1/2) t^(−1/2) dt, so)

Γ(1/2) = 2 ∫₀^∞ e^(−u²) du = √π

The value of the last integral requires multivariable calculus; we may use this value to evaluate further values of gamma:

Γ(3/2) = (1/2) Γ(1/2) = √π / 2
Γ(5/2) = (3/2) Γ(3/2) = 3√π / 4
Γ(7/2) = (5/2) Γ(5/2) = 15√π / 8.

The following is a possible definition of the gamma function.

Definition: The gamma function Γ is a function Γ : IR⁺ → IR⁺ satisfying the following equation:

Γ(x) = ∫₀^∞ t^(x−1) e^(−t) dt

While the domain of definition of the gamma function can be extended beyond the set IR⁺ of strictly positive real numbers (for example, to complex numbers), the somewhat restrictive definition given above is more than sufficient for all the problems involving the gamma function that are found in these lectures. If we have a basic table of gamma values, we can use it to calculate other gamma values with the same increment or step value. I chose a step of 0.05, but using a spreadsheet it is easy to use a much smaller step, if we had need of it. The function ln Γ(x) is convex on (0, ∞). As the value of the gamma function increases rapidly with increasing argument values, it is often convenient to work with a plot of the log of the function, as in the graph presented in Figure 1.3:

Table (1.1): Basic Stirling values

n      x = 20 + n   √(2πx)      Γ(x + 1) = x!           Γ(n)
0.05   20.05        11.223986   2817967749445040000     19.470091784
0.10   20.10        11.237972   3278005144267250000     9.513510838
0.15   20.15        11.251941   3813606825253560000     6.220274911
0.20   20.20        11.265893   4437258676401800000     4.590845205
0.25   20.25        11.279827   5163521590693230000     3.625611078
0.30   20.30        11.293744   6009378240916120000     2.991569946
0.35   20.35        11.307644   6994638116543560000     2.546147787
0.40   20.40        11.321527   8142410668407310000     2.218160244
0.45   20.45        11.335393   9479658073805080000     1.968137017
0.50   20.50        11.349242   11037841090658000000    1.772454402
0.55   20.55        11.363074   12853673759298200000    1.616124768
0.60   20.60        11.376890   14970005391687600000    1.489192705
0.65   20.65        11.390688   17436851427548500000    1.384795523
0.70   20.70        11.404470   20312598413771700000    1.298055725
0.75   20.75        11.418235   23665412669932000000    1.225417070
0.80   20.80        11.431984   27574887247081000000    1.164230060
0.85   20.85        11.445716   32133967696265600000    1.112484066
0.90   20.90        11.459432   37451204086402800000    1.068629016
0.95   20.95        11.473131   43653384823073700000    1.031453618
1.00   21.00        11.486814   50888617325509700000    1.000000289

Figure 1.3: Plot of ln Γ(x)

Table (1.2): Basic gamma values, Γ(x) for x from 1.00 to 2.00 in steps of 0.01 (rows give x to one decimal place; the columns headed 0–9 give the second decimal digit):

x     0        1        2        3        4        5        6        7        8        9
1.0   1.00000  .994326  .988844  .983550  .978438  .973504  .968744  .964152  .959725  .955459
1.1   .951351  .947396  .943590  .939931  .936416  .933041  .929803  .926700  .923728  .920885
1.2   .918169  .915576  .913106  .910755  .908521  .906402  .904397  .902503  .900718  .899042
1.3   .897471  .896004  .894640  .893378  .892216  .891151  .890185  .889314  .888537  .887854
1.4   .887264  .886765  .886356  .886036  .885805  .885661  .885604  .885633  .885747  .885945
1.5   .886227  .886592  .887039  .887568  .888178  .888868  .889639  .890490  .891420  .892428
1.6   .893515  .894681  .895924  .897244  .898642  .900117  .901668  .903296  .905001  .906782
1.7   .908639  .910572  .912581  .914665  .916826  .919063  .921375  .923763  .926227  .928767
1.8   .931384  .934076  .936845  .939690  .942612  .945611  .948687  .951840  .955071  .958379
1.9   .961766  .965231  .968774  .972397  .976099  .979881  .983743  .987685  .991708  .995813
2.0   1.00000

The minimum of Γ(x) on this range occurs near x ≈ 1.4616, where Γ(x) ≈ .885603.

Figure 1.4: Plot of Γ(x) and 1/Γ(x)


1-3-Gamma Function Relationships: The remainder of this subsection formally develops three relationships and gives a simple application of their use. The gamma function is a generalization of the factorial function to any positive argument.

Theorem (1): For any positive x,

Γ(x + 1) = xΓ(x),

and for a positive integer n,

Γ(n + 1) = n!

For example, Γ(6) = 5! = 120.

Proof: For any positive n, we have from equation (1) that:

(n  1)   e x x n 1dx 0

Now

integrating

by

;

with  udv  uv   vdu ,

parts,

we

let

u  x n , dv  e x dx, v  e x , therefore;



(n  1)   x n e x



 x 0





0

0

  (nx n 1 )e x dx  (0  0)  n  x n 1e x dx  n(n)

Proof that, for a positive integer, Γ(n + 1) = n!: If n is a positive integer, then

Γ(1) = ∫₀^∞ x^(1−1) e^(−x) dx = [−e^(−x)]₀^∞ = (0 − (−1)) = 1
Γ(2) = Γ(1 + 1) = 1 · Γ(1) = 1
Γ(3) = Γ(2 + 1) = 2 · Γ(2) = 2
Γ(4) = Γ(3 + 1) = 3 · Γ(3) = 3 · 2 · 1 = 3!

or, in general, Γ(n + 1) = n! [where Γ(n + 1) is sometimes referred to as the generalized factorial function].

Remark: There are a number of notational conventions in common use for indicating a power of a gamma function. While authors such as Watson (1939) use Γ^n(x) (i.e., a trigonometric-function-like convention), it is also common to write (Γ(x))^n.


Result: The gamma function can be defined as a definite integral for x > 0 (Euler's integral form):

Γ(x) = ∫₀^∞ e^(−t) t^(x−1) dt = 2 ∫₀^∞ e^(−t²) t^(2x−1) dt = ∫₀^1 [ln(1/t)]^(x−1) dt.

Example (1): The gamma function is a generalized factorial function.
a) Show that Γ(1) = 1.
b) Show that Γ(x + 1) = xΓ(x). (Hint: use integration by parts.)
c) Conclude that Γ(n) = (n − 1)! for n = 1, 2, 3, …
d) Use the fact that Γ(1/2) = √π to show that, if n is an odd integer,

Γ(n/2) = (n − 1)! √π / (2^(n−1) ((n − 1)/2)!)

Solution: The gamma function is Γ(t) = ∫₀^∞ e^(−y) y^(t−1) dy.

(a) Γ(1) = ∫₀^∞ e^(−y) dy = [−e^(−y)]₀^∞ = 1.

(b) Γ(x + 1) = ∫₀^∞ e^(−y) y^x dy = [−e^(−y) y^x]₀^∞ + ∫₀^∞ e^(−y) x y^(x−1) dy = x ∫₀^∞ e^(−y) y^(x−1) dy = xΓ(x).

(c) Γ(n) = (n − 1)Γ(n − 1) = (n − 1)(n − 2)Γ(n − 2) = (n − 1)(n − 2) ⋯ 3 · 2 · 1 · Γ(1) = (n − 1)!.

(d) Γ(1/2) = √π; n is an odd integer.

Claim: Γ(n/2) = (n − 1)! √π / (2^(n−1) ((n − 1)/2)!)

Proof (by induction on odd n):
(1) For n = 1, Γ(1/2) = √π.
(2) Suppose for n = k (k an odd integer):

Γ(k/2) = (k − 1)! √π / (2^(k−1) ((k − 1)/2)!)

(3) For n = k + 2, by (b),

Γ((k + 2)/2) = Γ(k/2 + 1) = (k/2) Γ(k/2) = (k/2) · (k − 1)! √π / (2^(k−1) ((k − 1)/2)!) = (k + 1)! √π / (2^(k+1) ((k + 1)/2)!).

Example (2): a) Show that Γ(2) = 1. b) Show that Γ(3) = 2. c) Graph Γ(x) (roughly) for 0 < x ≤ 3. (Hint: Γ(x) has a unique minimum for x > 0.)

Solution: The gamma function is Γ(t) = ∫₀^∞ e^(−y) y^(t−1) dy, so

Γ(1) = ∫₀^∞ e^(−y) dy = [−e^(−y)]₀^∞ = 1.

a) By the recurrence formula Γ(n) = (n − 1)Γ(n − 1),

Γ(2) = Γ(1 + 1) = 1 · Γ(1) = 1,

or by the integration formula (by parts),

Γ(2) = ∫₀^∞ y^(2−1) e^(−y) dy = ∫₀^∞ y e^(−y) dy = [−y e^(−y)]₀^∞ + ∫₀^∞ e^(−y) dy = 1.

b) By the recurrence formula, Γ(3) = Γ(2 + 1) = 2Γ(2) = 2 · 1 = 2, or by the integration formula (by parts),

Γ(3) = ∫₀^∞ y^(3−1) e^(−y) dy = ∫₀^∞ y² e^(−y) dy = [−y² e^(−y)]₀^∞ + ∫₀^∞ 2y e^(−y) dy = 2 ∫₀^∞ y e^(−y) dy = 2.

c) The graph of Γ(x) for 0 < x ≤ 5 follows. As the value of the gamma function increases rapidly with increasing argument values, it is often convenient to work with a plot of the log of the function, as in the graph below:


Table (1.3): Higher gamma values

x     Γ(x)       x     Γ(x)           x     Γ(x)           x     Γ(x)
0.00  ∞          1.00  1              2.00  1              3.00  2
0.05  19.4701    1.05  0.973504589    2.05  1.022179819    3.05  2.0954686
0.10  9.5135     1.10  0.951351084    2.10  1.046486192    3.10  2.197621
0.15  6.2203     1.15  0.933041237    2.15  1.072997422    3.15  2.3069445
0.20  4.5908     1.20  0.918169041    2.20  1.101802849    3.20  2.4239663
0.25  3.6256     1.25  0.90640277     2.25  1.133003462    3.25  2.5492578
0.30  2.9916     1.30  0.897470984    2.30  1.166712279    3.30  2.6834382
0.35  2.5461     1.35  0.891151725    2.35  1.203054829    3.35  2.8271788
0.40  2.2182     1.40  0.887264098    2.40  1.242169737    3.40  2.9812074
0.45  1.9681     1.45  0.885661658    2.45  1.284209404    3.45  3.146313
0.50  1.7725     1.50  0.886227201    2.50  1.329340802    3.50  3.323352
0.55  1.6161     1.55  0.888868622    2.55  1.377746365    3.55  3.5132532
0.60  1.4892     1.60  0.893515623    2.60  1.429624997    3.60  3.717025
0.65  1.3848     1.65  0.90011709     2.65  1.485193199    3.65  3.935762
0.70  1.2981     1.70  0.908639007    2.70  1.544686313    3.70  4.170653
0.75  1.2254     1.75  0.919062803    2.75  1.608359904    3.75  4.4229897
0.80  1.1642     1.80  0.931384048    2.80  1.676491287    3.80  4.6941756
0.85  1.1125     1.85  0.945611456    2.85  1.749381194    3.85  4.9857364
0.90  1.0686     1.90  0.961766114    2.90  1.827355617    3.90  5.2993313
0.95  1.0315     1.95  0.979880937    2.95  1.910767827    3.95  5.6367651
1.00  1          2.00  1.000000289    3.00  2.000000579    4.00  6.0000017

Tables (1.2) and (1.3) and Figure 1.5 represent the gamma function for 0 < x ≤ 5.

Figure 1.5: The gamma function for 0 < x ≤ 5

Example (3): As a simple example of the use of the gamma function, consider the following integral:




I = ∫₀^∞ √y e^(−y³) dy

Letting x = y³ and dx = 3y² dy = 3x^(2/3) dy, this becomes

I = ∫₀^∞ x^(1/6) e^(−x) (1/(3x^(2/3))) dx = (1/3) ∫₀^∞ x^(−1/2) e^(−x) dx = (1/3) Γ(1/2) = √π / 3
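The value √π/3 can be confirmed by direct numerical quadrature; a minimal sketch using a composite Simpson rule (the truncation point 3 and the helper names are our choices, not from the text):

```python
import math

def integrand(y):
    # sqrt(y) * exp(-y^3): the integrand of Example (3)
    return math.sqrt(y) * math.exp(-y ** 3)

# Composite Simpson's rule on [0, 3]; the tail beyond y = 3 is below exp(-27).
n, a, b = 3000, 0.0, 3.0
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i * h)
approx = s * h / 3

exact = math.sqrt(math.pi) / 3   # (1/3) * Gamma(1/2)
assert abs(approx - exact) < 1e-4
```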

Thus, with the use of the gamma function, evaluating this integral is quite straightforward.

Recall that the gamma function is defined as

Γ(x) = ∫₀^∞ e^(−t) t^(x−1) dt,

where x > 0 and x ∈ IR; Γ(x) cannot be expressed explicitly in terms of elementary functions. Integrating by parts,

Γ(x + 1) = ∫₀^∞ e^(−t) t^x dt = [−e^(−t) t^x]₀^∞ + x ∫₀^∞ e^(−t) t^(x−1) dt = x ∫₀^∞ e^(−t) t^(x−1) dt = xΓ(x).

when x  1 , Equation can be written as

( x)  ( x  1)( x  1)  ( x  1)( x  2)( x  2)  ( x  1)( x  2)    ( x  n)( x  n). When 0  x  1 and x is not a negative integer, ( x ) 

( x ) 

( x ) 

( x  1) , ( 1  x  0 ) x

( x  2) x( x  1)

( 2  x  1)

( x  n) (n  x  n  1) x( x  1)    ( x  n  1)

The gamma function is not defined when x  0 or x is a negative integer. Theorem (2): For any positive real number 0  x  1 , then the reflection formula; ( x )(1  x )   x( x )(  x ) 

 x 1

t

 1  t dt; 0

Proof: Left as exercises or see the reference book. Theorem (3): Use the following result from calculus: If x  0 and y  0 , then; ( x ) ( y )  t x 1 (1  t ) y 1 dt ( x  y ) 0 1
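Both identities lend themselves to a quick numerical sanity check; a minimal sketch with Python's standard library (the sample points are arbitrary, and the x = y = 2 case of Theorem (3) is evaluated from the known value ∫₀^1 t(1 − t) dt = 1/6):

```python
import math

# Reflection formula: Gamma(x) * Gamma(1 - x) = pi / sin(pi * x), 0 < x < 1
for x in (0.1, 0.25, 0.5, 0.8):
    lhs = math.gamma(x) * math.gamma(1 - x)
    rhs = math.pi / math.sin(math.pi * x)
    assert abs(lhs - rhs) < 1e-9 * rhs

# Theorem (3) at x = y = 2: Gamma(2)Gamma(2)/Gamma(4) should equal
# the integral of t(1-t) over [0, 1], which is 1/2 - 1/3 = 1/6.
b = math.gamma(2) * math.gamma(2) / math.gamma(4)
assert abs(b - 1 / 6) < 1e-12
```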


Proof: Finally, applying the definition and Theorem 1, we obtain the following result.

Theorem (4): For any x > 0 one has

√x ≤ Γ(x + 1)/Γ(x + 1/2) ≤ √(x + 1/2).

Proof: Left as an exercise, or see the reference book.

Theorem (5): For any x > 0 and integer n ≥ 1 the following inequality holds:

x + n ≤ Γ²(x + n + 1) / (Γ²(x + 1/2) ∏_{k=1}^{n} (x + k − 1/2)²) ≤ x + n + 1/2.

Proof: Applying Theorem (4) at x, x + 1, …, x + n, we have

√x ≤ Γ(x + 1)/Γ(x + 1/2) ≤ √(x + 1/2)
√(x + 1) ≤ Γ(x + 2)/Γ(x + 3/2) ≤ √(x + 3/2)
………
√(x + n) ≤ Γ(x + n + 1)/Γ(x + n + 1/2) ≤ √(x + n + 1/2).

By the recurrence Γ(x + 1) = xΓ(x), applied repeatedly,

Γ(x + n + 1/2) = (x + n − 1/2)(x + n − 3/2) ⋯ (x + 1/2) Γ(x + 1/2) = Γ(x + 1/2) ∏_{k=1}^{n} (x + k − 1/2),

so the last inequality can be rewritten as

√(x + n) ≤ Γ(x + n + 1) / (Γ(x + 1/2) ∏_{k=1}^{n} (x + k − 1/2)) ≤ √(x + n + 1/2),

and squaring completes the proof.
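The bounds in Theorem (4) can be tested numerically on a grid; the sketch below assumes they read √x ≤ Γ(x + 1)/Γ(x + 1/2) ≤ √(x + 1/2), which is how we have stated the theorem:

```python
import math

# Check sqrt(x) <= Gamma(x+1)/Gamma(x+1/2) <= sqrt(x+1/2) on a grid of x > 0.
x = 0.05
while x < 20.0:
    ratio = math.gamma(x + 1) / math.gamma(x + 0.5)
    assert math.sqrt(x) <= ratio <= math.sqrt(x + 0.5)
    x += 0.05
```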

Example (4): Define (using Example (5a)) n! = Γ(n + 1) and compute (1/2)!.

Solution: As explained in the remark above, the natural definition is

(1/2)! = ∫₀^∞ e^(−x) √x dx.

It is quite difficult to believe that there is an elementary function whose derivative is e^(−x) √x, so we need a trick. Substitute K = M = 1/2 in formula (5b):

∫₀^1 √(t(1 − t)) dt = ((1/2)!)² / 2!.

So it is enough to compute ∫₀^1 √(t(1 − t)) dt. This integral is related to a circle. Really, let t = 1/2 + u. Then

t(1 − t) = (1/2 + u)(1/2 − u) = 1/4 − u²,

and

∫₀^1 √(t(1 − t)) dt = ∫_{−1/2}^{1/2} √(1/4 − u²) du = π/8.

This is precisely the area between a diameter of a circle of radius 1/2 and its arc, which is half a disc of radius 1/2, so it is π/8. So we have ((1/2)!)² = 2! · π/8 = π/4, and

(1/2)! = √(2! · π/8) = √(π/4) = √π / 2.

Example (5):

(a) Prove that ∫₀^∞ e^(−x) x^N dx = N! for any natural N.

(b) Prove that ∫₀^1 x^K (1 − x)^M dx = K! M! / (K + M + 1)! for natural K, M.

Solution: (a) Induction. For N = 0 the integral equals ∫₀^∞ e^(−x) dx = 1 = 0!. The step of induction: proving that

∫₀^∞ e^(−x) x^N dx = N ∫₀^∞ e^(−x) x^(N−1) dx.


Apply integration by parts: differentiate the first factor x^N and integrate the second factor e^(−x). The minus sign (from integrating e^(−x)) times the minus sign (from the integration-by-parts formula) gives a plus.

(b) I think that the first solution (I saw it in Euler's book) was integration by parts (integrate x^K, differentiate (1 − x)^M). However, there is another classical solution I like better. Consider the double integral over the quadrant (the 1/4 plane):

I(K, M) = ∫₀^∞ ∫₀^∞ x^K y^M e^(−x−y) dx dy

We shall compute it in two ways. First way: split it into a product of two integrals:

I(K, M) = ∫₀^∞ ∫₀^∞ x^K y^M e^(−x−y) dx dy = (∫₀^∞ x^K e^(−x) dx)(∫₀^∞ y^M e^(−y) dy) = K! M!

Second way: substitute z = x + y, t = x/(x + y), which is x = zt, y = z − zt; then

I(K, M) = ∫₀^∞ ∫₀^∞ (zt)^K (z − zt)^M e^(−x−y) dx dy = ∫∫ t^K (1 − t)^M z^(K+M) e^(−z) dx dy

To continue this computation we need to convert dx dy into dz dt, which is done by the Jacobian:

J = det [ ∂x/∂z  ∂x/∂t ; ∂y/∂z  ∂y/∂t ] = det [ t  z ; 1 − t  −z ] = t(−z) − z(1 − t) = −z

So dx dy = |J| dz dt = z dz dt; hence

I(K, M) = ∫₀^∞ ∫₀^1 t^K (1 − t)^M z^(K+M) e^(−z) · z dz dt = (∫₀^∞ z^(K+M+1) e^(−z) dz)(∫₀^1 t^K (1 − t)^M dt) = (K + M + 1)! ∫₀^1 t^K (1 − t)^M dt

The two answers for the same quantity, computed in two ways, should be equal; hence

(K + M + 1)! ∫₀^1 t^K (1 − t)^M dt = K! M!.
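The identity just derived can be verified by direct quadrature; a minimal sketch (Simpson's rule and the helper `beta_int` are our choices, not from the text):

```python
import math

def beta_int(K, M, n=2000):
    # Composite Simpson's rule for the integral of t^K (1-t)^M over [0, 1].
    # The integrand is a polynomial, so this is very accurate.
    h = 1.0 / n
    f = lambda t: t ** K * (1 - t) ** M
    s = f(0.0) + f(1.0)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

# Check (K+M+1)! * integral = K! * M! for a few natural K, M
for K, M in [(1, 1), (2, 3), (4, 2)]:
    lhs = math.factorial(K + M + 1) * beta_int(K, M)
    rhs = math.factorial(K) * math.factorial(M)
    assert abs(lhs - rhs) < 1e-8 * rhs
```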

1-4-Properties of the Gamma Function: From the fundamental recurrence Γ(x + 1) = xΓ(x):

Γ(1) = 1
Γ(2) = 1 · Γ(1) = 1
Γ(3) = 2 · Γ(2) = 2 · 1 = 2
Γ(4) = 3 · Γ(3) = 3 · 2 · 1 = 6
Γ(5) = 4 · Γ(4) = 4 · 3 · 2 · 1 = 24
………………………………...
Γ(n + 1) = n · Γ(n) = n!

The Euler definition (definite integral): with the change of variable t → t², this becomes

Γ(x) = 2 ∫₀^∞ e^(−t²) t^(2x−1) dt,  x > 0

Γ(1/2) = 2 ∫₀^∞ e^(−t²) dt = √π erf(∞) = √π,

where erf(x) is known as the error function.

The Euler definition (infinite limit) is

Γ(x) = lim_{n→∞} (1 · 2 · 3 ⋯ n) n^x / (x(x + 1)(x + 2) ⋯ (x + n)),  x ≠ 0, −1, −2, −3, …

so that

Γ(x + 1) = lim_{n→∞} (1 · 2 · 3 ⋯ n) n^(x+1) / ((x + 1)(x + 2)(x + 3) ⋯ (x + 1 + n)),  x ≠ −1, −2, −3, …

The positive minimum of the gamma function satisfies Γ′(x) = 0; this can be solved numerically to give x₀ ≈ 1.46163 (Sloane; Wrench 1968), which has continued fraction [1, 2, 6, 63, 135, 1, 1, 1, 1, 4, 1, 38, ...]; at x₀, Γ(x₀) achieves the value 0.8856031944..., which has continued fraction [0, 1, 7, 1, 2, 1, 6, 1, 1, ...].

The Euler limit (product) form is

Γ(x) = (1/x) ∏_{n=1}^{∞} (1 + 1/n)^x (1 + x/n)^(−1)

so

Γ(x) = lim_{n→∞} (n + 1)^x n! / (x(x + 1)(x + 2) ⋯ (x + n)) = lim_{n→∞} n^x n! / (x(x + 1)(x + 2) ⋯ (x + n)),

since (n + 1)^x / n^x → 1 as n → ∞.
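The limit form converges slowly but can be checked directly; a minimal sketch working in logarithms to avoid overflow of n! (the helper `gamma_limit` is ours):

```python
import math

def gamma_limit(x, n=100000):
    # Euler's limit form: Gamma(x) ~ n^x * n! / (x (x+1) ... (x+n)),
    # evaluated in logs so that n! does not overflow.
    log_val = x * math.log(n) + math.lgamma(n + 1)
    for k in range(n + 1):
        log_val -= math.log(x + k)
    return math.exp(log_val)

for x in (0.5, 1.4616, 3.2):
    assert abs(gamma_limit(x) - math.gamma(x)) < 1e-3 * math.gamma(x)
```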

1-5-Incomplete gamma function: The incomplete gamma functions are defined by generalising the Euler definite-integral definition of the gamma function to

γ(z, x) = ∫₀^x e^(−t) t^(z−1) dt,  Re(z) > 0

Γ(z, x) = ∫_x^∞ e^(−t) t^(z−1) dt,  Re(z) > 0

Γ(z) = γ(z, x) + Γ(z, x),  γ(z, x)/Γ(z) + Γ(z, x)/Γ(z) = 1.

The utility of the incomplete gamma functions lies in the fact that multiplying an expression by (γ(z, x) + Γ(z, x))/Γ(z) leaves it unchanged, and the two parts generated in this way may be rapidly convergent even when the original expression is not.

Functions such as the gamma functions are generally computed numerically using either asymptotic series (which may converge rapidly for large or small arguments of the function) or recursion relations. By expanding the exponential in the incomplete gamma function and integrating term by term we obtain, for integer z,

γ(z, x) = ∫₀^x e^(−t) t^(z−1) dt = (z − 1)! (1 − e^(−x) ∑_{s=0}^{z−1} x^s / s!)

Γ(z, x) = ∫_x^∞ e^(−t) t^(z−1) dt = (z − 1)! e^(−x) ∑_{s=0}^{z−1} x^s / s!


For the particular case z = 1/2,

γ(1/2, x²) = 2 ∫₀^x e^(−t²) dt

erf(x) = (2/√π) ∫₀^x e^(−t²) dt = (1/√π) γ(1/2, x²)

erfc(x) = (2/√π) ∫_x^∞ e^(−t²) dt = (1/√π) Γ(1/2, x²)
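The relation γ(1/2, x²) = √π erf(x) can be verified against the integral definition; a minimal sketch comparing a midpoint-rule quadrature with Python's built-in `math.erf` (the helper name is ours):

```python
import math

def lower_half_gamma(x, n=20000):
    # gamma(1/2, x^2) = 2 * integral of exp(-t^2) over [0, x], by midpoint rule
    h = x / n
    return 2 * h * sum(math.exp(-((k + 0.5) * h) ** 2) for k in range(n))

for x in (0.5, 1.0, 2.0):
    assert abs(lower_half_gamma(x) - math.sqrt(math.pi) * math.erf(x)) < 1e-8

# erf + erfc = 1 for all arguments
for x in (-2.0, 0.0, 0.7, 3.0):
    assert abs(math.erf(x) + math.erfc(x) - 1) < 1e-12
```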

erfc(x) is known as the complementary error function, and for all arguments erf(x) + erfc(x) = 1.

Figure 1.6: The incomplete gamma function for 0 ≤ x ≤ 3

We can say that Euler's gamma function is a generalization of a factorial: it has the property that Γ(x + 1) = xΓ(x) for x > 0, and for a positive integer k, Γ(k + 1) = k!.

x 1 ( ) 1 2 Theorem (1): The function F ( x )  is a bounded function. It  x x ( ) 2

has a limit 0 as x tends to 0 and a limit 1 when x tends to infinity. Proof:

Of course this will be done, if we prove that P( x)  log( S ( x))

P(x)=log(S(x)) is an increasing function for x  0 . We'll need the following formula:

d/dx (log Γ(x)) = −γ + ∑_{p=1}^{∞} (1/p − 1/(x + p − 1)).

Using this formula we compute the derivative of

P(x) = log(S(x)) = −(1/k) log x + log Γ(x + 1/k) − log Γ(x).

We have

P′(x) = −1/(kx) + ∑_{p=1}^{∞} (1/(x + p − 1) − 1/(x + p − 1 + 1/k)).

Writing 1/(kx) = (1/k) ∑_{p=1}^{∞} (1/(x + p − 1) − 1/(x + p)) (a telescoping sum) and combining the two series term by term, each term of the resulting summation is positive. To see this, set y = x + p − 1 and a = 1/k; then 0 < a ≤ 1 and the p-th term is

(1 − a)/y + a/(y + 1) − 1/(y + a),

which is positive because of the convexity of the function 1/y, since y + a = (1 − a)y + a(y + 1). We have shown that the derivative of the function P(x) is positive, which means that P(x) is increasing, which means that the function S(x) is increasing.

Theorem (2): Let 0 < a < 1 be a positive real number and

S(x) = Γ(x + a) / (x^a Γ(x));

then S(x) is increasing for x > 0.

Theorem (3): Let 0 < a < 1 be a positive real number and

S(x) = Γ(x + a) / (x^a Γ(x));

then

lim_{x→∞} S(x) = 1.


Sketch of proof: We assume that a = 1/k; the general form is easily deduced. Since S(x) is increasing, it must have a limit at infinity; denote it by h. Hence S(x) ≈ h (S(x) is close to h) for large x, i.e.

Γ(x + 1/k) ≈ h x^(1/k) Γ(x).

Writing Γ(x + 1)/Γ(x) as a product of k such ratios,

Γ(x + 1)/Γ(x) = [Γ(x + 1/k)/Γ(x)] · [Γ(x + 2/k)/Γ(x + 1/k)] ⋯ [Γ(x + 1)/Γ(x + (k − 1)/k)] ≈ h x^(1/k) · h (x + 1/k)^(1/k) ⋯ h (x + (k − 1)/k)^(1/k) ≈ h^k x.

Since Γ(x + 1)/Γ(x) = x exactly, for large numbers x this can hold only if h = 1. Use this to prove the theorem for the case where a is rational: just split the corresponding ratio into a product of a finite number of ratios of the form written previously.

A plot of erf(x) (rising from −1 to 1), erfc(x) (falling from 2 to zero) and erf(x) + erfc(x) (constant value of 1) illustrates these functions. The series expansions for erf(x) and erfc(x) are

erf(x) = (2/√π) (x − x³/3 + x⁵/(5 · 2!) − ⋯ + (−1)^n x^(2n+1)/((2n + 1) n!) + ⋯)

erfc(x) = (e^(−x²)/(x√π)) (1 − 1/(2x²) + (1·3)/(2x²)² − (1·3·5)/(2x²)³ + ⋯ + (−1)^n (2n − 1)!!/(2x²)^n + ⋯)

The first expansion is obtained simply by expanding e^(−t²) and integrating term by term. The second is obtained by writing

e^(−t²) = (1/t) · t e^(−t²) = −(1/t) d/dt (e^(−t²)/2)

and integrating by parts; the first expansion is good for small x and the second for large x. The incomplete gamma functions satisfy the following upward and downward recursion relations:
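The convergent (small-x) series can be checked against the built-in error function; a minimal sketch in which each term is advanced by the ratio −x²/(n + 1), so that no factorials are computed explicitly (the helper name is ours):

```python
import math

def erf_series(x, terms=40):
    # erf(x) = (2/sqrt(pi)) * sum (-1)^n x^(2n+1) / ((2n+1) n!)
    s, term = 0.0, x          # term holds (-1)^n x^(2n+1) / n!
    for n in range(terms):
        s += term / (2 * n + 1)
        term *= -x * x / (n + 1)   # advance to the next term
    return 2 / math.sqrt(math.pi) * s

for x in (0.1, 0.5, 1.0, 2.0):
    assert abs(erf_series(x) - math.erf(x)) < 1e-12
```

For large x the alternating factorials in this series cause cancellation, which is exactly why the asymptotic erfc series is preferred there.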


(z, x)  (z  1)(z  1, x)  e  x x z 1

 (z, x)  (z  1) (z  1, x)  e  x x z 1 Γ(z  1, x)  e  x x z Γ(z, x)  z  (z  1, x)  e  x x z  (z, x)  z The first two are upwards and the last two downwards and the relations can be obtained using integration by parts. The starting points for the downwards recurrences are; e  x , for z  0  γ(1  z, x)    ( 1) m x1 z  m   m! (1  z  m) , for 0  z  1 m  0 1  e  x , for z  0 (1  z, x)   (1  z, x) -  (1  z, x), for 0  z  1

See the paper by Guseinov and Mamedov for further details about this section.

Theorem (4): Using the identity

(1 − z)^(−n) = ∑_{k=0}^{∞} C(n + k − 1, k) z^k,  |z| < 1,

and the relation

∫₀^∞ e^(−at) t^(b−1) dt = Γ(b) a^(−b),  a, b > 0, t > 0,

where

Γ(b) = ∫₀^∞ e^(−t) t^(b−1) dt.

Proof: Expanding (1 − e^(−ζ))^(−n) by the identity above with z = e^(−ζ) and integrating term by term using the stated relation, we obtain

∫₀^∞ e^(−yζ) ζ^(b−1) (1 − e^(−ζ))^(−n) dζ = ∑_{k=0}^{∞} C(n + k − 1, k) ∫₀^∞ e^(−(y+k)ζ) ζ^(b−1) dζ = Γ(b) ∑_{k=0}^{∞} C(n + k − 1, k) (y + k)^(−b)

Result: and similarly,

∫₀^∞ e^(−yζ) ζ^(b−1) (1 − e^(−ζ))^(−n−1) dζ = Γ(b) ∑_{k=0}^{∞} C(n + k, k) (y + k)^(−b)

Substituting into the equation of Theorem (4), and using the relations Γ(b + 1) = bΓ(b) and

 k  1  k  1  k  1   n   n , k  n  1  n   n  1 1-6-Derivatives: The derivative of the upper incomplete gamma function ( s, x ) with respect to x is well known. It is simply given by the integrand of its integral definition:

( s, x ) x s 1  x x e The derivative with respect to its first argument s is given by; ( s, x )  ln x( s, x )  xT (3, s, x ) s

and the second derivative by;

 2 ( s, x )  ln 2 x( s, x )  2 xln xT (3, s, x )  T (4, s, x ) 2 s Where the function T(m ,s , x) is a special case of the Meijer G-function;

 0,0,0,.....,0 T (m, s, x )  Gmm,01,m  x  s  1,1,....,1



This particular special case has internal closure properties of its own because it can be used to express all successive derivatives. In general, m 1  m ( s, x ) m  ln x  ( s , x )  mx Pnm1 ln mn 1 xT (3  n, s, x )  m s n 0

n n! Where Pjn    j!  (n  j )!  j All such derivatives can be generated in succession from: 41

T ( s, x )  ln xT (m, s, x )  (m  1)T (m  1, s, x ) s

and T ( s, x ) 1   T (m  1, s, x )  T (m, s, x ) x x

These derivatives and the function T(m,s,x) provide exact solutions to a number of integrals by repeated differentiation of the integral definition of the upper incomplete gamma function.

1-7-Relation between Beta and Gamma functions: In mathematics, the beta function, also called the Euler integral of the first kind, was studied by Euler (1771) and Legendre (1809) and was given its name by Jacques Binet (1839); its symbol Β is a Greek capital beta rather than the similar Latin capital B. Just as the gamma function is defined as an integral, the beta function can also be written in integral form. Just as the gamma function for integers describes factorials, the beta function can define a binomial coefficient after adjusting indices. The beta function was the first known scattering amplitude in string theory, first conjectured by Gabriele Veneziano; it also occurs in the theory of the preferential attachment process, a type of stochastic process. The incomplete beta function is a generalization of the beta function that replaces the definite integral of the beta function with an indefinite integral. The beta function also comes into the picture when calculating a total density by performing convolutions of classical state densities which have a power-law form. The (complete) beta function B(α, β) is the special function defined by the integral:


B(α, β) = ∫₀¹ t^(α−1) (1 − t)^(β−1) dt, for α > 0, β > 0.

In the definitions of both the beta function and the gamma function the exponents are always α − 1 and not α. The aesthetic reason for this convention is that it produces the formula with Γ(α + β), and not Γ(α + β − 1), below. There is also a deeper reason: these integrals are Mellin transforms, the analogue of the Fourier transform for the multiplicative group of positive numbers, although this point of view appeared much later. Figure 1.7 represents the beta function and the log-beta function.

Figure 1.7: The beta function B(x, y) and log B(x, y).

Theorem (1): The beta function is well defined; that is, B(α, β) < ∞ for any α > 0, β > 0.

Proof: Break the integral into two parts, from 0 to 1/2 and from 1/2 to 1. If 0 < α < 1, the integral is improper at u = 0, but (1 − u)^(β−1) is bounded on (0, 1/2]. If 0 < β < 1, the integral is improper at u = 1, but u^(α−1) is bounded on [1/2, 1).

Result: Recall that the gamma function is a generalization of the factorial function. The following results follow easily from basic properties of the gamma function. The beta function satisfies the following properties:

1- B(α, β) = B(β, α).

2- B(α, 1) = 1/α.

3- For j ∈ IN* and k ∈ IN*,

B(j, k) = (j − 1)! (k − 1)!/(j + k − 1)!.

4- If α > 0, β > 0, and j ∈ IN, k ∈ IN, then

B(α + j, β + k) = B(α, β) · α^(1, j) β^(1, k)/(α + β)^(1, j+k),

where the function a^(s, j) = a(a + s)(a + 2s)·····(a + (j − 1)s).
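These properties are easy to check numerically. The sketch below is our own (plain Python, standard library only; the helper names `beta` and `beta_integral` are assumptions, not part of the text); it compares property 3 with a direct Simpson-rule evaluation of the defining integral:

```python
import math

def beta(a, b):
    """B(a, b) via the gamma-function identity B(a, b) = Γ(a)Γ(b)/Γ(a+b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def beta_integral(a, b, n=20000):
    """Simpson-rule approximation of the defining integral
    B(a, b) = ∫₀¹ t^(a−1)(1−t)^(b−1) dt (assumes a, b ≥ 1 so the integrand is bounded)."""
    h = 1.0 / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * t ** (a - 1) * (1 - t) ** (b - 1)
    return s * h / 3

# Property 1 (symmetry) and property 3: B(j, k) = (j-1)!(k-1)!/(j+k-1)!
for j, k in [(2, 3), (4, 5), (1, 7)]:
    factorial_form = math.factorial(j - 1) * math.factorial(k - 1) / math.factorial(j + k - 1)
    assert abs(beta(j, k) - factorial_form) < 1e-12
    assert abs(beta(j, k) - beta(k, j)) < 1e-12

# The gamma-function form agrees with the integral definition:
assert abs(beta(2, 3) - beta_integral(2, 3)) < 1e-9  # both ≈ 1/12
```

The integrand for integer arguments is a polynomial, so Simpson's rule is essentially exact here.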

Example: The beta function is the integral ∫₀¹ x^m (1 − x)^n dx, which is the classical Euler example of integration by parts (and the answer is essentially the reciprocal of a binomial coefficient, just as in our problem the answer is essentially the reciprocal of a trinomial coefficient). Both ideas below can be applied to the beta function; the second solution is even neater than the first, but works only for integer arguments (while the first generalizes readily to complex numbers).

Solution: We can deduce this integral from another famous Euler integral,

∫₀^∞ xⁿ e^(−x) dx = n!

(the gamma function, up to a shift by one). The proof is a simple induction using integration by parts. Now consider the three-dimensional integral:

k! m! n! = (∫₀^∞ x^k e^(−x) dx)(∫₀^∞ y^m e^(−y) dy)(∫₀^∞ z^n e^(−z) dz) = ∫₀^∞∫₀^∞∫₀^∞ x^k y^m z^n e^(−x−y−z) dx dy dz.

Now switch to the variables t = x + y + z, u = x/t, v = y/t, so that x = ut, y = vt, z = (1 − u − v)t. The Jacobian matrix is

[∂x/∂u ∂x/∂v ∂x/∂t; ∂y/∂u ∂y/∂v ∂y/∂t; ∂z/∂u ∂z/∂v ∂z/∂t] = [t 0 u; 0 t v; −t −t 1−u−v].

Adding the first two rows to the last brings us to the matrix [t 0 u; 0 t v; 0 0 1], so the determinant is t². Hence the integral we computed is

k! m! n! = ∫₀^∞ ∫∫_{u,v ≥ 0, u+v ≤ 1} (ut)^k (vt)^m ((1 − u − v)t)^n e^(−t) t² du dv dt
= (∫∫_{u,v ≥ 0, u+v ≤ 1} u^k v^m (1 − u − v)^n du dv)(∫₀^∞ t^(k+m+n+2) e^(−t) dt) = X · (k + m + n + 2)!,

where X denotes the integral we wanted to know from the beginning. Therefore,

X = k! m! n!/(k + m + n + 2)!.

The relationship between the gamma and beta functions is:

B(n, m) = Γ(n)Γ(m)/Γ(n + m).
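The simplex identity X = k! m! n!/(k + m + n + 2)! can be checked by brute force. The following sketch is ours (standard library only; the helper name and the grid resolution are our own choices) and approximates the double integral with a midpoint Riemann sum:

```python
import math

def simplex_integral(k, m, n, N=800):
    """Midpoint Riemann sum of ∫∫ u^k v^m (1-u-v)^n du dv over u, v ≥ 0, u+v ≤ 1."""
    h = 1.0 / N
    total = 0.0
    for i in range(N):
        u = (i + 0.5) * h
        for j in range(N):
            v = (j + 0.5) * h
            w = 1.0 - u - v
            if w > 0:
                total += u ** k * v ** m * w ** n
    return total * h * h

k, m, n = 2, 3, 1
exact = (math.factorial(k) * math.factorial(m) * math.factorial(n)
         / math.factorial(k + m + n + 2))
approx = simplex_integral(k, m, n)
assert abs(approx - exact) < 2e-5  # exact = 2!·3!·1!/8! = 12/40320 ≈ 0.000298
```

Cells straddling the diagonal u + v = 1 are simply cut off; since the integrand vanishes there, the resulting error is small.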

Theorem (2): If p > 0 and q > 0, then

B(p, q) = Γ(p)Γ(q)/Γ(p + q).

Proof: Applying the definition, and substituting x = 1 − t,

B(p, q) = ∫₀¹ x^(p−1)(1 − x)^(q−1) dx = ∫₁⁰ (1 − t)^(p−1) t^(q−1) (−dt) = ∫₀¹ t^(q−1)(1 − t)^(p−1) dt = B(q, p),

so the beta function is symmetric. Next, integrating by parts,

B(p, q) = ∫₀¹ x^(p−1)(1 − x)^(q−1) dx = ∫₀¹ (1 − x)^(q−1) d(x^p/p)
= [(x^p/p)(1 − x)^(q−1)]₀¹ + ((q − 1)/p) ∫₀¹ x^p (1 − x)^(q−2) dx.

Since x^p = x^(p−1) − x^(p−1)(1 − x), this becomes

B(p, q) = ((q − 1)/p) ∫₀¹ x^(p−1)(1 − x)^(q−2) dx − ((q − 1)/p) ∫₀¹ x^(p−1)(1 − x)^(q−1) dx
= ((q − 1)/p) B(p, q − 1) − ((q − 1)/p) B(p, q).

Rearranging the terms, we find that

((p + q − 1)/p) B(p, q) = ((q − 1)/p) B(p, q − 1), that is, B(p, q) = ((q − 1)/(p + q − 1)) B(p, q − 1).

Note now that by putting x = t/(1 + t), so that 1 − x = 1/(1 + t) and dx = dt/(1 + t)², and substituting in the defining integral, we have

B(p, q) = ∫₀¹ x^(p−1)(1 − x)^(q−1) dx = ∫₀^∞ (t/(1 + t))^(p−1) (1/(1 + t))^(q−1) dt/(1 + t)² = ∫₀^∞ t^(p−1)/(1 + t)^(p+q) dt.

Using Γ(p + q)/(1 + t)^(p+q) = ∫₀^∞ y^(p+q−1) e^(−(1+t)y) dy, we obtain

Γ(p + q) B(p, q) = ∫₀^∞ t^(p−1) (∫₀^∞ y^(p+q−1) e^(−(1+t)y) dy) dt
= ∫₀^∞ y^(q−1) e^(−y) (∫₀^∞ (ty)^(p−1) e^(−ty) y dt) dy
= (∫₀^∞ u^(p−1) e^(−u) du)(∫₀^∞ y^(q−1) e^(−y) dy) = Γ(p)Γ(q).

Hence Γ(p + q) B(p, q) = Γ(p)Γ(q), and therefore

B(p, q) = Γ(p)Γ(q)/Γ(p + q).

In particular, this computation also establishes the representation

B(x, y) = ∫₀^∞ t^(x−1) (1 + t)^(−(x+y)) dt, x > 0, y > 0.
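Both the gamma-function identity and the infinite-range representation are easy to confirm numerically. The following sketch is ours (standard library only; the truncation point T and the step count are our own choices, valid because the tail decays like t^(−q−1)):

```python
import math

def beta_infinite(p, q, N=200000, T=200.0):
    """Simpson approximation of B(p, q) = ∫₀^∞ t^(p−1)(1+t)^(−(p+q)) dt,
    truncated at t = T. Assumes p > 1 so the integrand vanishes at t = 0."""
    h = T / N
    s = 0.0
    for i in range(N + 1):
        t = i * h
        w = 1 if i in (0, N) else (4 if i % 2 else 2)
        s += w * t ** (p - 1) / (1 + t) ** (p + q)
    return s * h / 3

p, q = 2.5, 3.5
gamma_form = math.gamma(p) * math.gamma(q) / math.gamma(p + q)
assert abs(beta_infinite(p, q) - gamma_form) < 1e-6
```

With p = 2.5 and q = 3.5 both sides come out near 0.0368, in agreement with Theorem (2).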

Theorem (3): The beta function can also be defined by the complementary formula

B(x, y) = ∫₀¹ t^(x−1) (1 − t)^(y−1) dt, x > 0, y > 0,

and one can easily check that the beta function is a symmetric function.

Proof: Applying the definition and the previous theorems,

B(x, y) = Γ(x)Γ(y)/Γ(x + y) = Γ(y)Γ(x)/Γ(y + x) = B(y, x); x > 0, y > 0.

One can use the extension of the gamma function to negative non-integer values x < 0 to define the beta function for negative values y < 0 through the following mathematical formula:

B(x, y) = Γ(x)Γ(y)/Γ(x + y) = ((x + y)/(x·y)) · Γ(x + 1)Γ(y + 1)/Γ(x + y + 1).

Theorem (4): Using the beta function to evaluate integrals:

∫₀^(π/2) sin^(2m−1)x cos^(2n−1)x dx = B(m, n)/2, m > 0, n > 0.

Proof: We have B(m, n) = ∫₀¹ u^(m−1)(1 − u)^(n−1) du, m > 0, n > 0. Let u = sin²x. Then du = 2 sin x cos x dx and

u^(m−1)(1 − u)^(n−1) du = (sin²x)^(m−1)(cos²x)^(n−1) · 2 sin x cos x dx = 2 sin^(2m−1)x cos^(2n−1)x dx.

When u = 0 we have x = 0; when u = 1 we have x = π/2. Thus

B(m, n) = 2∫₀^(π/2) sin^(2m−1)x cos^(2n−1)x dx,

and therefore

∫₀^(π/2) sin^(2m−1)x cos^(2n−1)x dx = B(m, n)/2, m > 0, n > 0.
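Theorem (4) can be spot-checked numerically, including for non-integer n. A minimal sketch of ours (standard library only; helper names are our own), here with m = 2, n = 3/2, i.e. sin³x cos²x:

```python
import math

def trig_integral(m, n, N=100000):
    """Simpson approximation of ∫₀^{π/2} sin^(2m−1)x cos^(2n−1)x dx."""
    a, b = 0.0, math.pi / 2
    h = (b - a) / N
    s = 0.0
    for i in range(N + 1):
        x = a + i * h
        f = math.sin(x) ** (2 * m - 1) * math.cos(x) ** (2 * n - 1)
        w = 1 if i in (0, N) else (4 if i % 2 else 2)
        s += w * f
    return s * h / 3

def beta(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# the quadrature result matches B(m, n)/2:
assert abs(trig_integral(2, 1.5) - beta(2, 1.5) / 2) < 1e-9  # both ≈ 2/15
```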

Theorem (5): Using the beta function to evaluate integrals:

∫₀^∞ x^(q−1)(1 + x)^(−1) dx = Γ(q)Γ(1 − q) = π/sin(πq), 0 < q < 1.

Proof: We have I = ∫₀^∞ x^(q−1)(1 + x)^(−1) dx. Let y = x/(1 + x). Hence x = y/(1 − y), 1 + x = 1/(1 − y), and dx = dy/(1 − y)². When x = 0 we have y = 0, and lim_{x→∞} x/(1 + x) = 1. Thus

I = ∫₀¹ (y/(1 − y))^(q−1) (1 − y) dy/(1 − y)² = ∫₀¹ y^(q−1)(1 − y)^(−q) dy = B(q, 1 − q) = Γ(q)Γ(1 − q),

which equals π/sin(πq) by the reflection formula.

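The reflection formula Γ(q)Γ(1 − q) = π/sin(πq) used in Theorem (5) is easy to spot-check with `math.gamma` (the choice of sample points below is ours):

```python
import math

# Euler's reflection formula: Γ(q)Γ(1 − q) = π/sin(πq) for 0 < q < 1.
for q in (0.1, 0.25, 1/3, 0.5, 0.8):
    lhs = math.gamma(q) * math.gamma(1 - q)
    rhs = math.pi / math.sin(math.pi * q)
    assert abs(lhs - rhs) < 1e-10 * rhs

# The special case q = 1/2 gives Γ(1/2)² = π, i.e. Γ(1/2) = √π.
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12
```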
Example (1): Divide the surface of the unit ball in IR^N into small (or infinitesimal) regions. Divide the volume into conic parts whose vertices are at the centre of the ball and whose bases are the regions on the surface. Each N-dimensional cone has volume Sh/N; in our case h = 1, so each volume part is N times smaller than the corresponding area part. This constant ratio is preserved after integration. As a second explanation, to compute the volume of the ball we can split it into thin concentric spherical layers (like a cabbage). At radius R we have a layer of width dR and area S_N R^(N−1), and integration gives

V_N = ∫₀¹ S_N R^(N−1) dR = S_N/N.

So it is sufficient to compute either the area or the volume, not both.

Solution: Consider the N-dimensional integral

G_N = ∫…∫ e^(−x₁² − x₂² − … − x_N²) dx₁ dx₂ … dx_N.

We shall compute it in two ways. The first way is to split it into a product:

G_N = (∫ e^(−x₁²) dx₁)(∫ e^(−x₂²) dx₂)…(∫ e^(−x_N²) dx_N) = G₁^N.

The second way is to integrate over thin cabbage-like spherical shells of radius R centred at zero. The volume of each shell is S_N R^(N−1) dR, so we get

G_N = ∫₀^∞ e^(−R²) S_N R^(N−1) dR.

Now substitute u = R², du = 2R dR:

G_N = (S_N/2) ∫₀^∞ e^(−u) u^(N/2 − 1) du = (S_N/2) (N/2 − 1)! = (N/2)! S_N/N = (N/2)! V_N.

So G₁^N = (N/2)! V_N. To conclude this solution, we have yet to find G₁. Instead of computing it directly, use the fact that we know the area of the unit circle in the plane: G₁² = G₂ = 1! · V₂ = π. The answer:

V_N = π^(N/2)/(N/2)!, S_N = N π^(N/2)/(N/2)!.

Remark: If someone asks what (−1/2)! or (3/2)! means, we say that (−1/2)! = √π and use (N − 1)! · N = N!; hence

(1/2)! = √π/2, (3/2)! = (1·3/(2·2))√π, (5/2)! = (1·3·5/(2·2·2))√π.

Of course, for N = 2k + 1 we can write the formula in the usual notation

V_N = 2(2π)^((N−1)/2)/(1·3·…·N), S_N = 2(2π)^((N−1)/2)/(1·3·…·(N − 2)),

where the denominators contain products of the first odd numbers.
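The closed forms for V_N and S_N can be compared against the familiar low-dimensional values. A small sketch of ours (function names are our own; the half-integer factorial (N/2)! is written as Γ(N/2 + 1)):

```python
import math

def ball_volume(N):
    """V_N = π^(N/2)/(N/2)!, with (N/2)! written as Γ(N/2 + 1)."""
    return math.pi ** (N / 2) / math.gamma(N / 2 + 1)

def sphere_area(N):
    """S_N = N · V_N."""
    return N * ball_volume(N)

assert abs(ball_volume(2) - math.pi) < 1e-12           # unit disc: π
assert abs(ball_volume(3) - 4 * math.pi / 3) < 1e-12   # unit ball: 4π/3
assert abs(sphere_area(2) - 2 * math.pi) < 1e-12       # circle length: 2π
assert abs(sphere_area(3) - 4 * math.pi) < 1e-12       # sphere area: 4π
# odd-dimension closed form: V_3 = 2(2π)^1/(1·3)
assert abs(ball_volume(3) - 2 * (2 * math.pi) / 3) < 1e-12
```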

1-8-Applications of Gamma, Beta and Zeta Functions

Example (1): Compute the following ratio:

Γ(16/3)/Γ(10/3).

Solution: We need to repeatedly apply the recursive formula Γ(x) = (x − 1)Γ(x − 1) to the numerator of the ratio:

Γ(16/3)/Γ(10/3) = (16/3 − 1)Γ(16/3 − 1)/Γ(10/3) = (13/3)Γ(13/3)/Γ(10/3)
= (13/3)(13/3 − 1)Γ(13/3 − 1)/Γ(10/3) = (13/3)(10/3)Γ(10/3)/Γ(10/3) = 130/9.
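The same ratio can be confirmed directly with `math.gamma`:

```python
import math

# Γ(16/3)/Γ(10/3) = (13/3)(10/3) = 130/9, by Γ(x) = (x−1)Γ(x−1) applied twice.
ratio = math.gamma(16 / 3) / math.gamma(10 / 3)
assert abs(ratio - 130 / 9) < 1e-10  # ≈ 14.444
```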

Example (2): Compute Γ(5).

Solution: We need to use the relation of the gamma function to the factorial function, Γ(n) = (n − 1)!, which for n = 5 becomes

Γ(5) = (5 − 1)! = 4! = 4 × 3 × 2 × 1 = 24.

Example (3): Express the following integral in terms of the gamma function:

∫₀^∞ x^(9/2) e^(−x/2) dx.

Solution: This is accomplished by changing variables t = (1/2)x:

∫₀^∞ x^(9/2) e^(−x/2) dx = ∫₀^∞ (2t)^(9/2) e^(−t) · 2 dt = 2^(11/2) ∫₀^∞ t^(11/2 − 1) e^(−t) dt = 2^(11/2) Γ(11/2),

where in the last step we have just used the definition of the gamma function.

Example (4): Given the above definition, it is straightforward to prove that the gamma function satisfies the following recursion:

Γ(x) = (x − 1)Γ(x − 1).

Solution: Using integration by parts:

Γ(x) = ∫₀^∞ t^(x−1) e^(−t) dt = [−t^(x−1) e^(−t)]₀^∞ + (x − 1)∫₀^∞ t^(x−2) e^(−t) dt
= (0 − 0) + (x − 1)∫₀^∞ t^((x−1)−1) e^(−t) dt = (x − 1)Γ(x − 1).

Example (5): When the argument of the gamma function is a positive integer n ∈ IN, its value is equal to a factorial: Γ(n) = (n − 1)!.

Solution: First of all,

Γ(1) = ∫₀^∞ t^(1−1) e^(−t) dt = [−e^(−t)]₀^∞ = 1.

Using the recursion Γ(x) = (x − 1)Γ(x − 1) we obtain:

Γ(1) = 1 = 0!
Γ(2) = (2 − 1)Γ(2 − 1) = 1·Γ(1) = 1 = 1!
Γ(3) = (3 − 1)Γ(3 − 1) = 2·Γ(2) = 1·2 = 2!
Γ(4) = (4 − 1)Γ(4 − 1) = 3·Γ(3) = 1·2·3 = 3!
...
Γ(n) = (n − 1)Γ(n − 1) = 1·2·…·(n − 1) = (n − 1)!

Example (6): Compute the following product:

Γ(5/2) B(3/2, 1),

where Γ(·) is the gamma function and B(·, ·) is the beta function.

Solution: We need to write the beta function in terms of gamma functions:

Γ(5/2) B(3/2, 1) = Γ(5/2) · Γ(3/2)Γ(1)/Γ(3/2 + 1) = Γ(5/2) · Γ(3/2)Γ(1)/Γ(5/2)
= Γ(3/2)Γ(1) = Γ(3/2) (because Γ(1) = 1)
= (3/2 − 1)Γ(3/2 − 1) (recursive formula for the gamma function)
= (1/2)Γ(1/2) = √π/2 (because Γ(1/2) = √π).

Therefore Γ(5/2) B(3/2, 1) = √π/2, where we have used several elementary facts about the gamma function that are explained in the section on the gamma function.

Example (7): Compute the following ratio:

B(7/2, 9/2)/B(5/2, 11/2),

where B(·, ·) is the beta function.

Solution: This is achieved by rewriting both beta functions in terms of gamma functions and using the recursive formula for the gamma function. Since 7/2 + 9/2 = 5/2 + 11/2 = 8, the factor Γ(8) cancels:

B(7/2, 9/2)/B(5/2, 11/2) = [Γ(7/2)Γ(9/2)/Γ(8)] / [Γ(5/2)Γ(11/2)/Γ(8)] = Γ(7/2)Γ(9/2)/(Γ(5/2)Γ(11/2)).

Using Γ(7/2) = (5/2)Γ(5/2) and Γ(11/2) = (9/2)Γ(9/2),

B(7/2, 9/2)/B(5/2, 11/2) = (5/2)Γ(5/2)Γ(9/2)/(Γ(5/2)·(9/2)Γ(9/2)) = (5/2)/(9/2) = 5/9.

Example (8): Compute the following integral:

∫₀^(1/2) x^(3/2) (1 − 2x)^(3/2) dx.

Solution: We need to use the integral representation of the beta function. Substituting t = 2x:

∫₀^(1/2) x^(3/2)(1 − 2x)^(3/2) dx = ∫₀¹ (t/2)^(3/2)(1 − t)^(3/2) (dt/2) = (1/2)^(5/2) ∫₀¹ t^(5/2 − 1)(1 − t)^(5/2 − 1) dt = (1/2)^(5/2) B(5/2, 5/2).

Now write the beta function in terms of gamma functions:

B(5/2, 5/2) = Γ(5/2)Γ(5/2)/Γ(5) = [(3/2)(1/2)√π]²/(4·3·2·1) = (9π/16)/24 = 9π/384.

Substituting this number into the previous expression for the integral, we obtain:

∫₀^(1/2) x^(3/2)(1 − 2x)^(3/2) dx = (1/2)^(5/2) · 9π/384 = 9√2 π/3072 = 3√2 π/1024.
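Assuming the integrand reads as above (the exponents 3/2 are reconstructed), the value 3√2·π/1024 can be confirmed by direct quadrature; the Simpson helper below is our own:

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# ∫₀^{1/2} x^{3/2} (1−2x)^{3/2} dx = (1/2)^{5/2} B(5/2, 5/2) = 3√2π/1024
val = simpson(lambda x: x ** 1.5 * (1 - 2 * x) ** 1.5, 0.0, 0.5)
exact = 3 * math.sqrt(2) * math.pi / 1024  # ≈ 0.013016
assert abs(val - exact) < 1e-8
```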

Example (9): Evaluate

∫₀^∞ x⁴ e^(−x) dx.

Solution: ∫₀^∞ x⁴ e^(−x) dx = ∫₀^∞ x^(5−1) e^(−x) dx = Γ(5), and Γ(5) = 4! = 4 × 3 × 2 × 1 = 24.

Example (10): Evaluate

∫₀^∞ x^(1/2) e^(−x) dx.

Solution: ∫₀^∞ x^(1/2) e^(−x) dx = ∫₀^∞ x^(3/2 − 1) e^(−x) dx = Γ(3/2), and since 3/2 = 1 + 1/2,

Γ(3/2) = Γ(1 + 1/2) = (1/2)Γ(1/2) = √π/2.

Example (11): Evaluate

∫₀^∞ x^(3/2) e^(−x) dx.

Solution: ∫₀^∞ x^(3/2) e^(−x) dx = ∫₀^∞ x^(5/2 − 1) e^(−x) dx = Γ(5/2), and

Γ(5/2) = Γ(1 + 3/2) = (3/2)Γ(3/2) = (3/2)(1/2)Γ(1/2) = (3/4)√π.

Example (12): Find Γ(−1/2).

Solution: Let −1/2 = 1/2 − 1; then, from Γ(x + 1) = xΓ(x),

Γ(1/2) = Γ(−1/2 + 1) = (−1/2)Γ(−1/2), so Γ(−1/2) = −2Γ(1/2) = −2√π.

Example (13): Find Γ(−3/2).

Solution: Let −3/2 = −1/2 − 1; then

Γ(−1/2) = Γ(−3/2 + 1) = (−3/2)Γ(−3/2), so Γ(−3/2) = Γ(−1/2)/(−3/2) = (−2√π)/(−3/2) = (4/3)√π = 4√π/3.

Example (14): Evaluate

∫₀¹ x⁴(1 − x)³ dx.

Solution:

∫₀¹ x⁴(1 − x)³ dx = ∫₀¹ x^(5−1)(1 − x)^(4−1) dx = B(5, 4) = Γ(5)Γ(4)/Γ(9) = 4!·3!/8! = 1/280.

Example (15): Evaluate

∫₀¹ dx/∛(x²(1 − x)).

Solution: The integral is ∫₀¹ x^(−2/3)(1 − x)^(−1/3) dx = B(1/3, 2/3), and

B(1/3, 2/3) = Γ(1/3)Γ(2/3)/Γ(1/3 + 2/3) = Γ(1/3)Γ(1 − 1/3) = π/sin(π/3) = π/(√3/2) = 2π/√3.

Example (16): Evaluate

∫₀¹ √x (1 − x) dx.

Solution:

∫₀¹ x^(1/2)(1 − x) dx = ∫₀¹ x^(3/2 − 1)(1 − x)^(2 − 1) dx = B(3/2, 2) = Γ(3/2)Γ(2)/Γ(7/2).

Here Γ(3/2) = (1/2)Γ(1/2) = √π/2 and Γ(7/2) = (5/2)(3/2)(1/2)Γ(1/2) = (15/8)√π. Thus

∫₀¹ x^(1/2)(1 − x) dx = Γ(3/2)Γ(2)/Γ(7/2) = (√π/2)·1/((15/8)√π) = 4/15.

Example (17): Evaluate

∫₀^∞ x⁶ e^(−2x) dx.

Solution: Letting y = 2x, we get

∫₀^∞ x⁶ e^(−2x) dx = (1/128)∫₀^∞ y⁶ e^(−y) dy = (1/128)Γ(7) = 6!/128 = 45/8.

Example (18): Evaluate

∫₀^∞ √x e^(−x³) dx.

Solution: Letting y = x³, so that x = y^(1/3) and dx = (1/3)y^(−2/3) dy, we get

∫₀^∞ √x e^(−x³) dx = (1/3)∫₀^∞ y^(1/2 − 1) e^(−y) dy = (1/3)Γ(1/2) = √π/3.

Example (19): Evaluate

∫₀^∞ x^m e^(−k xⁿ) dx.

Solution: Letting y = k xⁿ, we get

∫₀^∞ x^m e^(−k xⁿ) dx = (1/(n k^((m+1)/n)))∫₀^∞ y^((m+1)/n − 1) e^(−y) dy = Γ((m + 1)/n)/(n k^((m+1)/n)).
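The general formula of Example (19) can be checked numerically. The sketch below is ours (standard library only; the truncation point T is an assumption chosen so the tail is negligible), and the last line re-derives the answer of Example (17) as the special case m = 6, k = 2, n = 1:

```python
import math

def simpson(f, a, b, n=200000):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def gamma_integral(m, k, n, T=10.0):
    """Numerical value of ∫₀^∞ x^m e^(−k xⁿ) dx, truncated at x = T."""
    return simpson(lambda x: x ** m * math.exp(-k * x ** n), 0.0, T)

# Example (19): the integral equals Γ((m+1)/n) / (n · k^((m+1)/n)).
m, k, n = 2, 3.0, 2
exact = math.gamma((m + 1) / n) / (n * k ** ((m + 1) / n))
assert abs(gamma_integral(m, k, n) - exact) < 1e-9

# Special case of Example (17): m = 6, k = 2, n = 1 gives Γ(7)/2⁷ = 45/8.
assert abs(math.gamma(7) / (1 * 2.0 ** 7) - 45 / 8) < 1e-12
```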

Example (20): Evaluate

∫₀² x²/√(2 − x) dx.

Solution: Letting x = 2y, we get

∫₀² x²/√(2 − x) dx = (8/√2)∫₀¹ y²(1 − y)^(−1/2) dy = (8/√2) B(3, 1/2) = (8/√2)(16/15) = 64√2/15.

Example (21): Evaluate

∫₀^a x⁴√(a² − x²) dx.

Solution: Letting x² = a²y, we get

∫₀^a x⁴√(a² − x²) dx = (a⁶/2)∫₀¹ y^(3/2)(1 − y)^(1/2) dy = (a⁶/2) B(5/2, 3/2) = (a⁶/2)(π/16) = πa⁶/32.

Example (22): Evaluate

∫₀^∞ dx/(1 + x⁴).

Solution: Letting x⁴ = y, we get

∫₀^∞ dx/(1 + x⁴) = (1/4)∫₀^∞ y^(1/4 − 1)(1 + y)^(−1) dy = (1/4)Γ(1/4)Γ(3/4) = (1/4)·π/sin(π/4) = π√2/4.

Example (23): Evaluate
a. ∫₀^(π/2) sin³x cos²x dx,
b. ∫₀^(π/2) sin⁴x cos⁵x dx.

Solution: a. Notice that 2m − 1 = 3 → m = 2 and 2n − 1 = 2 → n = 3/2, so

∫₀^(π/2) sin³x cos²x dx = (1/2) B(2, 3/2) = 2/15.

b. Notice that 2m − 1 = 4 → m = 5/2 and 2n − 1 = 5 → n = 3, so

∫₀^(π/2) sin⁴x cos⁵x dx = (1/2) B(5/2, 3) = 8/315.

Example (24): Evaluate
a. ∫₀^(π/2) sin⁶x dx,
b. ∫₀^(π/2) cos⁶x dx.

Solution: a. Notice that 2m − 1 = 6 → m = 7/2 and 2n − 1 = 0 → n = 1/2, so

∫₀^(π/2) sin⁶x dx = (1/2) B(7/2, 1/2) = 5π/32.

b. By symmetry, ∫₀^(π/2) cos⁶x dx = (1/2) B(1/2, 7/2) = 5π/32.

Example (25): Evaluate
a. ∫₀^π cos⁴x dx,
b. ∫₀^(2π) sin⁸x dx.

Solution:
a. ∫₀^π cos⁴x dx = 2∫₀^(π/2) cos⁴x dx = 2·(1/2) B(1/2, 5/2) = 3π/8.
b. ∫₀^(2π) sin⁸x dx = 4∫₀^(π/2) sin⁸x dx = 4·(1/2) B(9/2, 1/2) = 35π/64.

Example (26): Evaluate

∫₀^∞ x⁶ e^(−2x) dx.

Solution: Let x = y/2, so that x⁶ = y⁶/64 and dx = (1/2)dy. Then x⁶ e^(−2x) dx = (y⁶/64) e^(−y) (1/2) dy, and

∫₀^∞ x⁶ e^(−2x) dx = (1/128)∫₀^∞ y⁶ e^(−y) dy = (1/128)Γ(7) = 720/128 = 45/8.

Example (27): Evaluate

∫₀^∞ √x e^(−x³) dx.

Solution: Let y = x³, so that √x = y^(1/6) and dx = (1/3)y^(−2/3) dy. Then √x e^(−x³) dx = y^(1/6) e^(−y) (1/3)y^(−2/3) dy, and

∫₀^∞ √x e^(−x³) dx = (1/3)∫₀^∞ y^(1/6 − 2/3) e^(−y) dy = (1/3)∫₀^∞ y^(1/2 − 1) e^(−y) dy = (1/3)Γ(1/2) = √π/3.

1-9-Exercises

Exercise (1): Explain why the definition of the factorial dictates when the combinatorial function C(n, k) = (n choose k) is defined. State when C(n, k) is defined, without any reference to the factorial function. All further exercises assume that all uses of C(n, k) are defined.

Exercise (2): 1- The binomial theorem (for real numbers p, q) can be expressed as

(p + q)ⁿ = Σ_{k=0}^{n} C(n, k) p^k q^(n−k).

2- Following the same approach, derive the factorial definition of C(n, k) from the binomial theorem.

Exercise (3): For a real variable x > 0 (see the reference book), prove throughout this subsection:
a) 1 ≤ Γ(x)/((2π)^(1/2) x^(x − 1/2) e^(−x)) ≤ e^(1/(12x))
b) 1/Γ(x) + 1/Γ(1/x) ≤ 2
c) (1/Γ(x))² + (1/Γ(1/x))² ≤ 2

Exercise (04): Prove, equivalently, for non-negative integer values of n:
a) Γ(1/2 + n) = ((2n − 1)!!/2ⁿ)√π = ((2n)!/(n! 4ⁿ))√π
b) Γ(1/2 − n) = ((−2)ⁿ/(2n − 1)!!)√π = ((−4)ⁿ n!/(2n)!)√π

Exercise (05): Prove, equivalently, for odd positive integer values of n:

Γ(n/2) = ((n − 2)!!/2^((n−1)/2))√π

Exercise (06): In analogy with the half-integer formula, prove (with m!^(p) denoting the p-fold multifactorial):
a) Γ(n + 1/3) = Γ(1/3) (3n − 2)!^(3)/3ⁿ
b) Γ(n + 1/4) = Γ(1/4) (4n − 3)!^(4)/4ⁿ
c) Γ(n + 1/p) = Γ(1/p) (pn − (p − 1))!^(p)/pⁿ

Exercise (07): In both cases s is a complex parameter with a positive real part. By integration by parts we find the recurrence relations

Γ(s, x) = (s − 1)Γ(s − 1, x) + x^(s−1) e^(−x),

and conversely

γ(s, x) = (s − 1)γ(s − 1, x) − x^(s−1) e^(−x).

Since the ordinary gamma function is defined as Γ(s) = ∫₀^∞ e^(−t) t^(s−1) dt, show that

γ(s, x) + Γ(s, x) = Γ(s).

Exercise (08): Prove the important properties:
1) Γ(x + 1) = xΓ(x)
2) Γ(x)Γ(1 − x) = π/sin(πx)
3) 2^(2x−1) Γ(x)Γ(x + 1/2) = √π Γ(2x)
4) Γ(x)Γ(x + 1/m)Γ(x + 2/m)·····Γ(x + (m − 1)/m) = m^(1/2 − mx) (2π)^((m−1)/2) Γ(mx), m = 1, 2, 3, …

Exercise (9): Show that B(a, b) is finite for a > 0 and b > 0, using these steps:
a. Break the integral into two parts, from 0 to 1/2 and from 1/2 to 1.
b. If 0 < a < 1, the integral is improper at u = 0, but (1 − u)^(b−1) is bounded on (0, 1/2].
c. If 0 < b < 1, the integral is improper at u = 1, but u^(a−1) is bounded on [1/2, 1).

Exercise (10): Show that B(a, b) = B(b, a) for a > 0, b > 0, and that B(a, 1) = 1/a.

Exercise (11): Show that the beta function can be written in terms of the gamma function as follows:

B(a, b) = Γ(a)Γ(b)/Γ(a + b).

Hint: Express Γ(a + b) B(a, b) as a double integral with respect to x and y, where x > 0 and 0 < y < 1. Use the transformation w = xy, z = x − xy and the change-of-variables theorem for multiple integrals. The transformation maps the (x, y) region one-to-one onto the region z > 0, w > 0; the Jacobian of the inverse transformation has magnitude 1/(z + w). Show that the transformed integral is Γ(a)Γ(b).

Exercise (11): Show that if j and k are positive integers, then

B(j, k) = (j − 1)!(k − 1)!/(j + k − 1)!.

Exercise (12): a)- Show that

B(a + 1, b) = [a/(a + b)] B(a, b).

b)- Using the recurrence relations, prove:
1)- B(a, b) = [(a + b)/b] B(a, b + 1)
2)- B(a, b) = [(a + b)(a + b + 1)/(ab)] B(a + 1, b + 1)

Exercise (13): Show that

(1 / 2,1 / 2)   Exercise (14): Evaluate 

x

5

e  x dx

0

Exercise (15): Evaluate 

x

3/ 2

e  x dx

0

Exercise (16): Evaluate

(5 / 2) Exercise (17):Evaluate 

x

5/ 2

e  x dx

0

Exercise (18): Evaluate 1

x 0

60

2

(1  x ) 2 dx

Exercise (19): Evaluate 1

 [1 /

x 3 (1  x ) ]dx

4

0

Exercise (20): Evaluate 1

[

x 5 (1  x ) ]dx

0

Exercise (21): Evaluate 2

 [x

(8  x 3 ) ]dx

0

(Hint Let x 3  8 y , Answer, 6 /( 9 3 ) . Exercise (22): Evaluate 2

 [x

5

(32  x 5 ) ]dx

0

Exercise (23): Evaluate 1

 [x

(1  x ) ]dx

1/ 2

0

Exercise (24): Evaluate 1

x

1/ 2

(1  x ) 8 / 3 dx

0

Exercise (25): Evaluate 1

x 0

61

1 / 2

(1  x )8 / 3 dx

Chapter Two: On Gamma and Beta Distributions

2-Introduction: For the evaluation of many integrals, Euler's gamma and beta functions and the complete elliptic integrals are among the most useful functions in engineering, physics and probability. The gamma and beta functions are also used for the generalization of many integrals and in the definition of other special functions, such as the Bessel, Legendre, and hypergeometric functions. Probability distributions have a surprising number of inter-connections, as shown in Figure 2.1: a dashed line in the chart indicates an approximate (limit) relationship between two distribution families, while a solid line indicates an exact relationship: special case, sum, or transformation. In statistics there are many distributions which can be used in various areas; such distributions include the gamma, chi-squared, exponential, Rayleigh, Weibull, Erlang, beta, extreme-value, normal, lognormal and others. In Figure 2.1 the direction of each arrow represents going from the general to a special case (Normal, Lognormal, Beta, Gamma, …).

Figure 2.1: Diagram of distribution relationships

In all statements below about two random variables (RV), the variables are implicitly independent. For example, if X is a gamma(α, β) random variable and Y is a normal random variable with the same mean and variance as X, then F_X ≈ F_Y if the shape parameter α is large relative to the scale parameter β.

If X₁ is a gamma(α₁, 1) random variable and X₂ is a gamma(α₂, 1) random variable, then X₁/(X₁ + X₂) is a beta(α₁, α₂) random variable. More generally, if X₁ is a gamma(α₁, β₁) random variable and X₂ is a gamma(α₂, β₂) random variable, then β₂X₁/(β₂X₁ + β₁X₂) is a beta(α₁, α₂) random variable. (Is it then true that X₁/Y₁ follows a beta distribution?)

In 1986, Lawrence Leemis published a paper containing a diagram of 43 probability distribution families. The diagram summarizes connections between the distributions with arrows: chi-squared is a special case of gamma, Poisson is a limiting case of binomial, the ratio of two standard normals is a Cauchy, etc. It is a very handy reference, a sort of periodic table for statisticians. His diagram and variations of it have appeared in several textbooks over the last 20 years. Continuous distributions arise since everything in our world that can be measured varies to some degree. Measurements are like snowflakes and fingerprints: no two are exactly alike. The degree of variation detected will depend on the precision of the measuring instrument used; the more precise the instrument, the more variation will be detected. A distribution, when displayed graphically, shows the variation with respect to a central value. The distributions we have used so far are called empirical distributions because they are based on empirical observations, which are necessarily finite samples. The alternative is a continuous distribution, which is characterized by a cumulative distribution function (CDF) that is a continuous function (as opposed to a step function). Many real-world phenomena can be approximated by continuous distributions.

2-1-Continuous Random Variables and Probability Distributions:

Definition (1): If the distribution of a continuous random variable has a probability density function f(x), then for any interval (a, b) we have

P(a ≤ X ≤ b) = ∫_a^b f(x) dx.

The probability density function (p.d.f.) has the following properties, which follow from Kolmogorov's axioms. We say X : S → IR is a continuous random variable if there exists a function f : IR → [0, ∞) with the following properties:

(a) f(x) ≥ 0 for all x ∈ IR.
(b) ∫_{−∞}^{∞} f(x) dx = 1.
(c) P(X ∈ A) = ∫_{x∈A} f(x) dx for all intervals A ⊆ IR.

Remarks:
1- If X is a continuous random variable, then P(X = x) = 0 for any x (think about this).
2- As a result, we have P(a ≤ X ≤ b) = P(a < X ≤ b) = P(a ≤ X < b) = P(a < X < b).

Definition (2): The cumulative distribution function (or c.d.f.) for a continuous random variable X is given by

F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt, for all x ∈ IR.

If the distribution does not have a p.d.f., we may still define the c.d.f. for any x as the probability that X takes on a value no greater than x.

Remark: The c.d.f. of the distribution of a random variable is unique, and completely describes the distribution.

Definition (3): If X is a continuous random variable with probability density function f, the expected value of X is defined by

E(X) = ∫_{−∞}^{∞} x f(x) dx,

provided that the integral converges absolutely.

Definition (4): Let X be a continuous random variable with p.d.f. f(x) and mean μ. The variance of X, or the variance of the distribution of X, is given by

σ² = V(X) = E[(X − μ)²] = ∫_{−∞}^{∞} (x − μ)² f(x) dx.

The standard deviation of X is just the square root of the variance.

Remark: In practice, it is easier to use the computational formula for the variance, rather than the defining formula:

σ² = E(X²) − μ² = ∫_{−∞}^{∞} x² f(x) dx − μ².

Definition (5): The k-th moment of the distribution of X is

E(X^k) = ∫_{−∞}^{∞} x^k f(x) dx; k = 1, 2, 3, 4, …

Theorem (1): Suppose the moment generating function M(t) = E[e^(tX)] is finite on some open interval containing the origin. Then:
(a) E(X) = M′(0).
(b) For all k ≥ 1, E(X^k) = M^(k)(0).

Definition (6): Let X denote any random variable. The cumulative distribution function (CDF) of X, denoted by F(x), is given by F(x) = P(X ≤ x), for −∞ < x < ∞.

Theorem (2): If F(x) is a CDF, then:
(1) F(−∞) = lim_{x→−∞} F(x) = 0.
(2) F(∞) = lim_{x→∞} F(x) = 1.
(3) F(x) is a nondecreasing function of x.

Definition (7): Let X denote a random variable with CDF F(x). X is said to be continuous if the distribution function F(x) is continuous for −∞ < x < ∞.

Definition (8): Let F(x) be the CDF for a continuous random variable X. The function f(x) given by

f(x) = dF(x)/dx = F′(x),

wherever the derivative exists, is called the probability density function (PDF) of the random variable X. Therefore, F(x) can be written as

F(x) = ∫_{−∞}^{x} f(t) dt.

Graphically, we have:

Figure 2.2: Probability density function (PDF) of the random variable X.

Definition (9) (Expectation): We define the expectation of a continuous random variable X with density f by E(X) = ∫_{−∞}^{∞} x f(x) dx, whenever this integral makes sense.

Theorem (3): Let X be a continuous random variable with density function f_X, and let g : IR → IR. Then

E[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx,

whenever the integral makes sense.

Lemma: For a non-negative random variable X, E(X) = ∫₀^∞ P(X > x) dx.

Corollary: For all a, b ∈ IR, E(aX + b) = aE(X) + b.

Theorem (4): Let X be a continuous random variable with density f_X. Let g be a continuous, strictly monotone function. Then the random variable Y = g(X) has the following density:

f_Y(y) = |d g^(−1)(y)/dy| · f_X(g^(−1)(y)) if y = g(x) for some x, and f_Y(y) = 0 otherwise.

2-2-The Gamma Distribution: This distribution is appropriate for variables with positive scale values that are skewed toward larger positive values. If a data value is less than or equal to 0 or is missing, then the corresponding case is not used in the analysis. The continuous random variable X has a gamma distribution, with parameters α and β, if its density function is given by

f(x) = (1/(β^α Γ(α))) x^(α−1) e^(−x/β) for x > 0, and f(x) = 0 elsewhere,

where Γ(·) is the gamma function, α > 0 and β > 0, and α and β are called the shape and scale parameters respectively. The special gamma distribution for which α = 1 is called the exponential distribution: the continuous random variable X has an exponential distribution, with parameter β, if its density function is given by

f(x) = (1/β) e^(−x/β) for x > 0, and f(x) = 0 elsewhere,

where β > 0. The mean μ and variance σ² of the gamma distribution are μ = αβ and σ² = αβ². The gamma distribution can assume different shapes for different values of α and β, as shown in Figures 2.3, 2.4 and 2.5 below. For α ≤ 1, f_X(x) is a strictly decreasing function of x, asymptotically approaching 0 as x → ∞. For α > 1, f_X(x) has a unique mode at the location x = (α − 1)β, of value

(α − 1)^(α−1) e^(−(α−1))/(β Γ(α)),

and the gamma distribution is important because many practical data can be fitted with it.

Figure 2.3: Gamma densities f_X(x) for different values of α, with β fixed.

Figure 2.4: Gamma densities f_X(x) for different values of α and β.

Figure 2.5: Gamma densities f_X(x) for further values of α and β.

Many distributions may be considered as special cases of the gamma distribution.

- α = 1 gives the exponential distribution.
- β = 2 and α = k/2, for a positive integer k, gives the chi-square distribution.
- The Erlang distribution is the special case of the gamma distribution in which the shape parameter α is an integer. With α = k and β = 1/λ, λ > 0, the Erlang pdf is given by

f_X(x) = λ^k x^(k−1) e^(−λx)/(k − 1)! for x > 0, and f_X(x) = 0 for x ≤ 0.

- If X₁, X₂, …, X_k are iid exp(λ) random variables, then Y = X₁ + X₂ + … + X_k has an Erlang distribution with α = k.
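The Erlang special case can be checked by integrating its density numerically. In this sketch of ours (helper names and the truncation point are our own choices), the total mass, the mean αβ = k/λ and the variance αβ² = k/λ² come out as stated:

```python
import math

def erlang_pdf(x, k, lam):
    """Erlang density: gamma with integer shape k and rate λ (scale β = 1/λ)."""
    return lam ** k * x ** (k - 1) * math.exp(-lam * x) / math.factorial(k - 1)

def simpson(f, a, b, steps=100000):
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

k, lam = 4, 2.0
total = simpson(lambda x: erlang_pdf(x, k, lam), 0.0, 40.0)
mean = simpson(lambda x: x * erlang_pdf(x, k, lam), 0.0, 40.0)
var = simpson(lambda x: (x - mean) ** 2 * erlang_pdf(x, k, lam), 0.0, 40.0)
assert abs(total - 1.0) < 1e-9          # density integrates to 1
assert abs(mean - k / lam) < 1e-9       # mean αβ = k/λ = 2
assert abs(var - k / lam ** 2) < 1e-9   # variance αβ² = k/λ² = 1
```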

It is good to have some familiarity with the gamma distribution, since the sum of independent exponential random variables is a gamma random variable. The exponential distribution is tested very heavily on the exam, and there has been at least one recent exam question where it would have been helpful to know that the sum of two independent exponentials is a gamma. That is about all you will need to know, but you might get tested on the gamma outright, so some relevant formulas for the gamma are given below.

Example (1): Let X and Y be iid (independent and identically distributed) random quantities with the exponential distribution

f(x; λ) = λ e^(−λx), x ≥ 0, λ > 0.

Then the p.d.f. of the random quantity T = X + Y is given by the convolution

f_T(t) = (f ∗ f)(t) = ∫₀^t λe^(−λx) · λe^(−λ(t−x)) dx = λ² ∫₀^t e^(−λ(x + t − x)) dx = λ² e^(−λt) ∫₀^t 1 dx = λ² t e^(−λt), (t ≥ 0).

Therefore the p.d.f. of T = X + Y is the gamma distribution with parameters α = 2 and β = 1/λ:

f(t) = λ² t e^(−λt), t ≥ 0.
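The convolution computed in Example (1) can be reproduced numerically by convolving the exponential density with itself and comparing against λ²te^(−λt). The helper names below are ours:

```python
import math

def simpson(f, a, b, steps=20000):
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

lam = 1.5

def exp_pdf(x):
    return lam * math.exp(-lam * x)

def convolution(t):
    """(f ∗ f)(t) = ∫₀^t f(x) f(t−x) dx for the exponential density f."""
    return simpson(lambda x: exp_pdf(x) * exp_pdf(t - x), 0.0, t)

# the convolution equals the gamma(α = 2, rate λ) density λ²te^(−λt):
for t in (0.5, 1.0, 3.0):
    assert abs(convolution(t) - lam ** 2 * t * math.exp(-lam * t)) < 1e-10
```

Since the integrand λe^(−λx)·λe^(−λ(t−x)) = λ²e^(−λt) is constant in x, the quadrature here is essentially exact.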

Example (2): Let X follow a gamma distribution with parameters α = m and rate λ, and let Y follow a gamma distribution with parameters α = n and the same rate λ, where m and n are natural numbers. Then the p.d.f. of the random quantity T = X + Y is given by the convolution

f_T(t) = (f_X ∗ f_Y)(t) = ∫₀^t [λ^m x^(m−1) e^(−λx)/(m − 1)!] · [λ^n (t − x)^(n−1) e^(−λ(t−x))/(n − 1)!] dx
= (λ^(m+n) e^(−λt)/((m − 1)!(n − 1)!)) ∫₀^t x^(m−1) (t − x)^(n−1) dx, (t ≥ 0).

The integral is a standard beta integral which, through repeated integrations by parts (or the substitution x = tu), reduces to

∫₀^t x^(m−1)(t − x)^(n−1) dx = (m − 1)!(n − 1)! t^(m+n−1)/(m + n − 1)!,

so that

f_T(t) = λ^(m+n) t^(m+n−1) e^(−λt)/(m + n − 1)!,

which is the gamma distribution with α = m + n and the same rate λ.

Example (3): Demonstration of the Central Limit Theorem (exponential distribution). If the random quantity X follows an exponential distribution, with probability density function

f(x; λ) = λ e^(−λx), x ≥ 0, λ > 0, E(X) = 1/λ, V(X) = 1/λ²,

then the sum T of n independent and identically distributed (iid) such random quantities follows the Erlang (gamma) distribution, with p.d.f.

f(t; λ, n) = λ(λt)^(n−1) e^(−λt)/(n − 1)!, (t ≥ 0, λ > 0, n ∈ IN), E[T] = n/λ, V[T] = n/λ².

These n values of X form a random sample of size n. Using f_X̄(x) = n f_T(nx), the p.d.f. of the sample mean X̄ is the related function

f(x; λ, n) = nλ(nλx)^(n−1) e^(−nλx)/(n − 1)!, (x ≥ 0, λ > 0, n ∈ IN), E[X̄] = 1/λ, V[X̄] = 1/(nλ²).

For illustration, setting λ = 1, the p.d.f.s of the sample mean for sample sizes n = 1, 2, 4 and 8 are:

n = 1: f_X̄(x) = e^(−x)
n = 2: f_X̄(x) = 4x e^(−2x)
n = 4: f_X̄(x) = 4(4x)³ e^(−4x)/3!
n = 8: f_X̄(x) = 8(8x)⁷ e^(−8x)/7!

The density of the sample mean changes shape with n, as shown in Figure 2.6.

Figure 2.6: Gamma densities $f(x;\lambda,n)$ of the sample mean for $\lambda=1$ and several values of $n$.

The population mean $\mu=E[X]=1$ for all sample sizes. The variance and the positive skew both diminish with increasing sample size, and the mode and the median approach the mean from the left. For a sample of size $n=16$, the sample mean $\bar X$ has the p.d.f.
$$f_{\bar X}(x)=\frac{16(16x)^{15}e^{-16x}}{15!},$$
with parameters $\mu=E[\bar X]=1$ and $\sigma^2=V[\bar X]=\tfrac{1}{16}$. The resulting density for $\lambda=1$, $n=16$ is shown in Figure 2.7.

Figure 2.7: Gamma density $f(x;\lambda,n)$ for $\lambda=1$, $n=16$.

A plot of the exact p.d.f. is drawn in Figure 2.7, together with the normal distribution that has the same mean and variance; the approach to normality is clear. Beyond $n=40$ or so, the difference between the exact p.d.f. and the normal approximation is negligible. It is generally the case (for distributions with well-defined mean and variance) that, whatever the probability distribution of a random quantity may be, the probability distribution of the sample mean $\bar X$ approaches normality as the sample size $n$ increases. For most probability distributions of practical interest, the normal approximation becomes very good beyond a sample size of $n=30$.

Remark: three different shapes arise at the origin from the three cases of the shape parameter:
$$\lim_{x\to 0^{+}} f_X(x;\alpha,\beta)=\begin{cases}\infty, & \alpha<1,\\ 1/\beta, & \alpha=1,\\ 0, & \alpha>1.\end{cases}$$
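The approach of the sample mean to normality can also be illustrated by a short simulation. The sketch below (standard-library Python only; the seed and sample sizes are illustrative assumptions, not from the text) draws sample means of exponential(1) variables for several $n$: the mean stays near 1 while the variance shrinks like $1/n$.

```python
import random
import statistics

random.seed(2)

def sample_mean(n, lam=1.0):
    """One draw of the mean of n iid exponential(lam) variables."""
    return sum(random.expovariate(lam) for _ in range(n)) / n

results = {}
for n in (1, 2, 8, 30):
    draws = [sample_mean(n) for _ in range(40_000)]
    # The mean of the sample mean stays at 1/lam; its variance is 1/(n*lam**2).
    results[n] = (statistics.fmean(draws), statistics.pvariance(draws))

for n, (m, v) in results.items():
    print(n, round(m, 3), round(v, 4))
```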

The mean of the Gamma random variable $X$ is given by
$$E[X]=\int_0^{\infty}x\,\frac{x^{\alpha-1}e^{-x/\beta}}{\beta^{\alpha}\Gamma(\alpha)}\,dx
=\frac{\beta}{\Gamma(\alpha)}\int_0^{\infty}u^{\alpha}e^{-u}\,du\quad(\text{substituting } u=x/\beta)
=\frac{\beta\,\Gamma(\alpha+1)}{\Gamma(\alpha)}=\alpha\beta.$$
Similarly,
$$E[X^2]=\int_0^{\infty}x^2\,\frac{x^{\alpha-1}e^{-x/\beta}}{\beta^{\alpha}\Gamma(\alpha)}\,dx
=\frac{\beta^2\,\Gamma(\alpha+2)}{\Gamma(\alpha)}=\beta^2\alpha(\alpha+1),$$
so
$$\sigma_X^2=E[X^2]-(E[X])^2=\alpha\beta^2.$$

Figure 2.8: Gamma densities of G(0.5, 2), G(1, 2) and G(2, 2).

Example (4): Suppose $X$ has a Gamma distribution with parameters $\alpha$ and $\beta$, so that its density function is
$$f(x)=\begin{cases}\dfrac{1}{\beta^{\alpha}\Gamma(\alpha)}x^{\alpha-1}e^{-x/\beta}, & x>0,\\ 0, & \text{elsewhere.}\end{cases}$$
Some special Gamma($\alpha,\beta$) distributions:
1-Exponential($\beta$) distribution: a random variable $X$ has an exponential distribution with parameter $\beta>0$ if and only if $X$ has a gamma distribution with $\alpha=1$ and $\beta$ arbitrary.
2-Chi-square distribution: let $v$ be a positive integer. A random variable $X$ has a chi-square distribution with $v$ degrees of freedom if and only if $X$ has a gamma distribution with $\alpha=v/2$ and $\beta=2$. It is easy to see that if $X$ is a chi-square random variable with $v$ degrees of freedom, then $E(X)=v$ and $V(X)=2v$.

Example (5): In addition, suppose the response $Y$ has a distribution in the exponential family, taking the form
$$f(y;\theta,\phi)=\exp\left\{\frac{y\theta-b(\theta)}{a(\phi)}+c(y,\phi)\right\}.$$
Intuitively, the generalized linear model is the "extension" of the linear model: when the distribution is normal and the link function is the identity function, the generalized linear model reduces to the linear model. If $Y\sim G(\mu,v)$ (Gamma distribution), then
$$\theta=-\frac{1}{\mu},\qquad \phi=\frac{1}{v},\qquad a(\phi)=\phi,\qquad b(\theta)=-\log(-\theta)=\log\mu,$$
$$E[Y]=b'(\theta)=-\frac{1}{\theta}=\mu,\qquad \operatorname{Var}[Y]=b''(\theta)\,a(\phi)=\frac{1}{\theta^2}\cdot\frac{1}{v}=\frac{\mu^2}{v}.$$

Example (6) (Determining the parameters): How can we determine the parameters $(\alpha,\beta)$ to satisfy the requirements above? As we have two parameters we need two equations, and we will try the following two:
$$\mu=\alpha\beta\quad(*)\qquad\text{and}\qquad P(X\le x)=\int_0^x \frac{1}{\beta^{\alpha}\Gamma(\alpha)}t^{\alpha-1}e^{-t/\beta}\,dt.$$
The left expression is the expected value $\mu$, and the right one is the cumulative distribution function (CDF), exactly as in the diagram of the %Gpdfcdf macro. (This is the common expression; Minitab reports the mean as the product of the two parameters $\alpha$ and $\beta$.) From the requirements above we know that $\mu=5$ and that $P(X\le 8)=0.98$ at $x=8$. We start by rewriting the first expression as
$$\beta=\frac{5}{\alpha},$$
and then use everything in our second expression:
$$0.98=\int_0^{8}\frac{1}{(5/\alpha)^{\alpha}\Gamma(\alpha)}t^{\alpha-1}e^{-t\alpha/5}\,dt,$$
a single equation that can be solved numerically for $\alpha$.
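The last equation has no closed form, but it is easy to solve numerically. The sketch below is illustrative only: it assumes the requirements $\mu=5$ and $P(X\le 8)=0.98$ read off above, approximates the gamma CDF by Simpson's rule, and bisects on $\alpha$ while keeping $\beta=5/\alpha$ (standard-library Python; step counts and brackets are arbitrary choices).

```python
import math

def gamma_cdf(x, alpha, beta, steps=2000):
    """P(X <= x) for X ~ Gamma(shape=alpha, scale=beta), via Simpson's rule."""
    if x <= 0:
        return 0.0
    def pdf(t):
        if t <= 0:
            return 0.0
        return t ** (alpha - 1) * math.exp(-t / beta) / (beta ** alpha * math.gamma(alpha))
    h = x / steps
    total = pdf(0.0) + pdf(x)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * pdf(i * h)
    return total * h / 3

def solve_alpha(mean=5.0, x=8.0, target=0.98, lo=1.0, hi=50.0):
    """Bisect on alpha with beta = mean/alpha, so E[X] = mean is preserved."""
    for _ in range(50):
        mid = (lo + hi) / 2
        if gamma_cdf(x, mid, mean / mid) < target:
            lo = mid  # CDF too small: need a more concentrated shape
        else:
            hi = mid
    return (lo + hi) / 2

alpha = solve_alpha()
beta = 5.0 / alpha
print(alpha, beta)
```

The bracket [1, 50] works here because the CDF at $x=8$ increases with $\alpha$ once the mean is fixed at 5.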

Example (7) (The likelihood for the Gamma distribution):

pdf: $f(y;\alpha,\beta)=\dfrac{y^{\alpha-1}e^{-y/\beta}}{\beta^{\alpha}\Gamma(\alpha)},\quad 0<y<\infty$

cdf: no simple explicit expression. Mean $=\alpha\beta$; Variance $=\alpha\beta^2$.

likelihood $=\displaystyle\prod_{i=1}^{n}\frac{y_i^{\alpha-1}e^{-y_i/\beta}}{\beta^{\alpha}\Gamma(\alpha)}$

log-likelihood $=\displaystyle\sum_{i=1}^{n}\left\{-\alpha\log\beta-\log\Gamma(\alpha)+(\alpha-1)\log y_i-y_i/\beta\right\}
=-n\alpha\log\beta-n\log\Gamma(\alpha)+(\alpha-1)\sum_{i=1}^{n}\log y_i-\frac{1}{\beta}\sum_{i=1}^{n}y_i.$

Figure 2.9: Illustrative densities for Gamma and exponential models.

Example (8): Let $X_1$ and $X_2$ denote a random sample from an exponential distribution with mean $\theta$. Let $Y_1=X_1+X_2$ and $Y_2=X_1-X_2$.

a)- $f_{X_1}(x_1)=\frac{1}{\theta}e^{-x_1/\theta}$ and $f_{X_2}(x_2)=\frac{1}{\theta}e^{-x_2/\theta}$, so
$$f_{X_1,X_2}(x_1,x_2)=\frac{1}{\theta}e^{-x_1/\theta}\cdot\frac{1}{\theta}e^{-x_2/\theta}
=\begin{cases}\dfrac{1}{\theta^2}e^{-(x_1+x_2)/\theta}, & 0<x_i<\infty,\\ 0, & \text{elsewhere.}\end{cases}$$
Let
$$x_1=\tfrac12(y_1+y_2),\qquad x_2=\tfrac12(y_1-y_2);$$
then
$$J=\begin{vmatrix}\tfrac12 & \tfrac12\\ \tfrac12 & -\tfrac12\end{vmatrix}=-\frac12,\qquad |J|=\frac12,$$
and the support maps to $T=\{(y_1,y_2):\,-y_1<y_2<y_1,\ y_1>0\}$. Hence
$$f_{Y_1,Y_2}(y_1,y_2)=\begin{cases}\dfrac{1}{2\theta^2}e^{-y_1/\theta}, & (y_1,y_2)\in T,\\ 0, & \text{elsewhere.}\end{cases}$$

b)- Integrating out $y_1$,
$$f_{Y_2}(y_2)=\int_{|y_2|}^{\infty}\frac{1}{2\theta^2}e^{-y_1/\theta}\,dy_1=\frac{1}{2\theta}e^{-|y_2|/\theta},\quad -\infty<y_2<\infty,$$
a Laplace (double-exponential) density.

Example (9): Let $X_1$ and $X_2$ denote a random sample from a Gamma distribution with parameters $\alpha$ and $\beta$. Let $Y_1=X_1+X_2$ and let $Y_2=X_1/(X_1+X_2)$.

$$f_{X_1}(x_1)=\frac{1}{\Gamma(\alpha)\beta^{\alpha}}x_1^{\alpha-1}e^{-x_1/\beta}\qquad\text{and}\qquad f_{X_2}(x_2)=\frac{1}{\Gamma(\alpha)\beta^{\alpha}}x_2^{\alpha-1}e^{-x_2/\beta},$$
so
$$f_{X_1,X_2}(x_1,x_2)=\begin{cases}\left(\dfrac{1}{\Gamma(\alpha)\beta^{\alpha}}\right)^{2}(x_1x_2)^{\alpha-1}e^{-(x_1+x_2)/\beta}, & 0<x_i<\infty,\\ 0, & \text{elsewhere.}\end{cases}$$
With
$$x_1=y_1y_2,\qquad x_2=y_1-x_1=y_1(1-y_2),$$
the Jacobian is $|J|=y_1$, and the support maps to $T=\{0<y_1<\infty,\ 0<y_2<1\}$. Hence
$$f_{Y_1,Y_2}(y_1,y_2)=\begin{cases}\left(\dfrac{1}{\Gamma(\alpha)\beta^{\alpha}}\right)^{2}y_1^{2\alpha-1}\,[\,y_2(1-y_2)\,]^{\alpha-1}e^{-y_1/\beta}, & (y_1,y_2)\in T,\\ 0, & \text{elsewhere,}\end{cases}$$
so $Y_1$ has a Gamma($2\alpha,\beta$) distribution, $Y_2$ has a Beta($\alpha,\alpha$) distribution, and the two are independent.

2-3-Exponential Distribution: The exponential random variable can be used to describe the lifetime of a machine, an industrial product, or a human being; it can also describe the waiting time of a customer for some service. This continuous probability distribution often arises in the consideration of lifetimes or waiting times and is a close relative of the discrete Poisson probability distribution. Consider again using the Poisson distribution to characterize the probability distribution of a random variable $X(t)$ representing the number of failures occurring in the time interval $(0,t)$. Denote by $T_1$ the time of the first failure, by $T_2$ the time between the first and second failures, …, and by $T_n$ the time between failure $n-1$ and failure $n$. For example, if $T_1=3$ years and $T_2=4$ years, then the first event would occur at year 3 and the second event at year 7. What is the distribution of any one of the $T_i$, $i=1,\dots,n$? This is another aspect of the essential distributions; the exponential distribution is used to measure the waiting time until failure of machines, among other applications. A continuous random variable $X$ is said to have an exponential distribution with parameter $\lambda>0$ if its probability density function is given by
$$f(x)=\begin{cases}\lambda e^{-\lambda x}, & x\ge 0,\\ 0, & x<0,\end{cases}$$
or, equivalently, if its cumulative distribution function is given by
$$F(x)=\int_{-\infty}^{x}f(y)\,dy=\begin{cases}1-e^{-\lambda x}, & x\ge 0,\\ 0, & x<0.\end{cases}$$
This is the probability density function of an exponentially distributed random variable; its graph is illustrated in Figure 2.10.

Figure 2.10: Exponential density functions for several values of $\lambda$.

2-3-1-Properties of the Exponential Density Function: Let $X$ be a random variable with the exponential density function $f(x)$ and parameter $\lambda$:
$$f(x)=\begin{cases}\lambda e^{-\lambda x}, & x\ge 0,\\ 0, & \text{otherwise.}\end{cases}$$
A look at the graph of the pdf is informative. Integrating the p.d.f. gives the cumulative distribution function:
$$F(x)=\begin{cases}\displaystyle\int_0^{x}\lambda e^{-\lambda t}\,dt=1-e^{-\lambda x}, & x\ge 0,\\ 0, & x<0.\end{cases}$$
Figure 2.11 shows a graph produced by Maple for $\lambda=0.5$. Note that it has the same shape as every exponential graph with a negative exponent (exponential decay); the tail shrinks to 0 quickly enough to make the area under the curve equal 1. Later we will see that the expected value of an exponential random variable is $1/\lambda$ (in this case 2); that is the balance point of the lamina whose shape is defined by the pdf.

Figure 2.11: Exponential density function for $\lambda=0.5$.

A simple integration shows that the total area under $f$ is 1:
$$\int_0^{\infty}\lambda e^{-\lambda x}\,dx=\lim_{t\to\infty}\int_0^{t}\lambda e^{-\lambda x}\,dx
=\lim_{t\to\infty}\left[-e^{-\lambda x}\right]_0^{t}=\lim_{t\to\infty}\left(-e^{-\lambda t}\right)+1=0+1=1.$$

Essentially the same computation as above shows that $F(x)=1-e^{-\lambda x}$. The p.d.f. and c.d.f. for $X\sim$ exponential(0.5) are illustrated in Figure 2.12.

Figure 2.12: The p.d.f. and c.d.f. graphs of the exponential distribution with $\lambda=0.5$.

As the coming examples will show, this formula greatly facilitates finding exponential probabilities. Finding the expected value of $X\sim$ exponential($\lambda$) is not hard as long as we remember how to integrate by parts. Recall that the integration by parts formula is derived by differentiating a product of functions by the product rule, integrating both sides of the equation, and then rearranging algebraically; thus integration by parts is the product rule used backwards and sideways. Given functions $u$ and $v$, $D_x(uv)=u\,D_xv+v\,D_xu$. Integrating both sides with respect to $x$ gives $uv=\int u\,dv+\int v\,du$, and rearranging gives $\int u\,dv=uv-\int v\,du$. In practice we look for $u$ and $dv$ in our original integral and convert to the right-hand side of the equation to see if that is easier.

So, if $X\sim$ exponential($\lambda$), then
$$E(X)=\int_0^{\infty}x\lambda e^{-\lambda x}\,dx=\lim_{t\to\infty}\int_0^{t}x\lambda e^{-\lambda x}\,dx.$$
Let $u=x$ and $dv=\lambda e^{-\lambda x}dx$; then $du=dx$ and $v=-e^{-\lambda x}$. So
$$E(X)=\lim_{t\to\infty}\left(\left[uv\right]_0^{t}-\int_0^{t}v\,du\right)
=\lim_{t\to\infty}\left(\left[-xe^{-\lambda x}\right]_0^{t}+\int_0^{t}e^{-\lambda x}\,dx\right)$$
$$=\lim_{t\to\infty}\left(-te^{-\lambda t}-\frac{1}{\lambda}e^{-\lambda t}+\frac{1}{\lambda}\right)=0-0+\frac{1}{\lambda}=\frac{1}{\lambda}.$$
Similarly, if $X\sim$ exponential($\lambda$), then $\operatorname{Var}(X)=\dfrac{1}{\lambda^2}$.

The memoryless property of the exponential random variable $X$ with parameter $\lambda$ states that
$$P(X>t+t_0\mid X>t_0)=P(X>t)\qquad\text{for } t>0,\ t_0>0.$$
Proof:
$$P(X>t+t_0\mid X>t_0)=\frac{P[(X>t+t_0)\cap(X>t_0)]}{P(X>t_0)}=\frac{P(X>t+t_0)}{P(X>t_0)}
=\frac{1-F_X(t+t_0)}{1-F_X(t_0)}=\frac{e^{-\lambda(t+t_0)}}{e^{-\lambda t_0}}=e^{-\lambda t}=P(X>t).$$
Hence if $X$ represents the life of a component in hours, the probability that the component will last more than $t+t_0$ hours, given that it has lasted $t_0$ hours, is the same as the probability that a new component will last more than $t$ hours. The information that the component has already lasted $t_0$ hours is not used; thus the life expectancy of a used component is the same as that of a new component. Such a model rarely represents a real-world situation exactly, but it is used for its simplicity.
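The memoryless property can be confirmed both analytically and by simulation. The sketch below (standard-library Python; $\lambda$, $t$, $t_0$, the seed and the sample size are illustrative choices) compares the conditional survival probability with the unconditional one.

```python
import math
import random

random.seed(3)
lam, t, t0 = 0.5, 2.0, 3.0  # illustrative rate and times

# Analytic check: P(X > t + t0 | X > t0) equals P(X > t).
lhs = math.exp(-lam * (t + t0)) / math.exp(-lam * t0)
rhs = math.exp(-lam * t)

# Empirical check on simulated lifetimes.
xs = [random.expovariate(lam) for _ in range(200_000)]
survivors = [x for x in xs if x > t0]
cond = sum(x > t + t0 for x in survivors) / len(survivors)
print(lhs, rhs, cond)
```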

2-3-2-Applications: Example (1): Exponential distributions are sometimes used to model waiting times or lifetimes; that is, they model the time until some event happens or something quits working. Of course, mathematics cannot tell us that exponentials are right for describing such situations: that conclusion depends on collecting data from real-world situations and fitting them to an exponential distribution.

Example (2): Suppose the wait time $X$ for service at the post office has an exponential distribution with mean 3 minutes. If you enter the post office immediately behind another customer, what is the probability that you wait over 5 minutes? Since $E(X)=1/\lambda=3$ minutes, $\lambda=1/3$, so $X\sim$ exponential(1/3). We want

P( X  5)  1  P( X  5)  1  F (5) 1 5  5     1  1  e 3   e 3  0.189  

Under the same conditions, what is the probability of waiting between 2 and 4 minutes?. Here we calculate; 4 2       P(2  X  4)  F (4)  F (2)  1  e 3   1  e 3     

e The

trick



2 3

e



4 3

in

 0.250 the

previous

example

of

calculating

P(a  X  b)  F (b)  F (a) quite common, it is the reason the cdf is so useful in computing probabilities of continuous random variables. Example (3): Let X represent the life time of a washing machine. Suppose the average lifetime for this type of washing machine is 15 years. What is the probability that this washing machine can be used for less than 6 years? Also, what is the probability that this washing machine can be used for more than 18 years?. Solution: X has the exponential density function with   15 (years) , then;

P( X  6)  1  e

6 15

 0.3297 and P( X  18)  e

18 15

 0.3012

Thus, for this washing machine, there is about a 30% chance that it lasts only a short time (under 6 years) and about a 30% chance that it lasts a long time (over 18 years).

Example (4): Suppose the average number of car accidents on the highway in two days is 8. What is the probability of no accident for more than 3 days?

Solution: The average number of car accidents on the highway in one day is $8/2=4$, so the mean time between occurrences is $1/4$ day. Let $X$ be the exponential random variable with mean $1/4$ day representing the waiting time until the next accident. Then
$$P(\text{no accident for more than 3 days})=P(X>3)=e^{-3/(1/4)}=e^{-12}\approx 0.$$

Example (5): The average length of time that a flower will live after it is cut is 7 days. What is the probability that a randomly chosen cut flower will live more than 10 days? First find $m$, the multiplicative inverse of the mean: since the mean is 7, $m=1/7$. The survival probability of the exponential distribution is $P(X>x)=e^{-mx}$; replacing $x$ with 10 and $m$ with $1/7$ gives
$$P(X>10)=e^{-10/7}\approx 0.239,\ \text{or } 23.9\%\ \text{(use a calculator)}.$$

There is a 23.9% chance that a randomly selected cut flower lives for more than 10 days. This appears to be a reasonable answer, because few cut flowers live either a very short or a very long time.

Example (6): The waiting time $X$ of packets in a computer network is an exponential random variable with
$$f_X(x)=0.5e^{-0.5x},\quad x\ge 0.$$
Then
$$P(0.1\le X\le 0.5)=\int_{0.1}^{0.5}0.5e^{-0.5x}\,dx=e^{-0.5\times 0.1}-e^{-0.5\times 0.5}\approx 0.1724.$$

2-3-3-The Chi-Squared Distribution: While the sum of independent normal variates with zero mean is itself normally distributed, the sum of the squares of these variates has the chi-square distribution; when the variates are deviations from a sample mean, the degrees of freedom are $v=n-1$. The continuous random variable $X$ has a chi-squared distribution, with $v$ degrees of freedom, if its density function is given by

1  x v / 21e  x / 2 ,  f ( x )   2 v / 2 ( v / 2 ) 0, 84

x  0, elsewhere ,

Where v is a positive integer, where v  n  1 shows the degrees of freedom, Y is the function of these degrees of freedom. Since the degrees of freedom are generally exogenously given, there is no need to estimate this as the distributional parameter. This makes the χ2 distribution parametric free. The shape of the distribution obviously depends upon the degrees of freedom, which is the only parameter of the distribution. The mean (  ) and variance ( 2 ) of the chi-squared distribution are

  v and  2  2v . these curves show that; 1-for each value of n, there exists one χ2 distribution. Thus, like t distribution, Chi-square is a family of the distributions. One distribution corresponds to each given value of the degrees of freedom, ranging from 1 to infinity; 2-for one degree of freedom, χ2 is approximated by hyperbola; iii) as the degrees of freedom change, the shape of the curve changes; iv) as the degrees of freedom increase, there occurs a peak; v  n  1 ) larger the n, more towards the left the peak shifts. The following Figures shows the Chi square curves for three different values of n: n=2, n=4, and n=8, ; its graph is illustrated in Figure 2.13.

Figure 2. 13: Chi Square distributions for 2, 4, 8 85

Properties: The $\chi^2$ distribution has the following characteristic features. Chi-square is a sum of squares, and therefore its value can never be negative; the distribution ranges from zero to infinity. The mode of the chi-square distribution is located at the point where chi-square equals $v-2$. The distribution is unimodal and positively skewed to the right; the curve falls gradually and ultimately approaches zero as chi-square tends to infinity. The $\chi^2$ curve is a Pearsonian Type III curve. As the size of the sample increases, the distribution approaches normality: Fisher showed that, for $v>30$, the quantity $(2\chi^2)^{1/2}-(2v-1)^{1/2}$ is distributed normally with zero mean and unit standard deviation. It is therefore a normal deviate, with values 1.96 and 2.58 at the 5 and 1 per cent probability levels; since this quantity can be both positive and negative, the 5 and 1 per cent significance points of $\chi^2$, like those of other test statistics, refer to both tails of the distribution. A sum of several chi-squares is also distributed as chi-square, with degrees of freedom equal to the sum of the degrees of freedom of the component chi-squares (for proof, see Kenney and Keeping). Chi-square is a continuous distribution even though the actual frequencies of occurrence may be discontinuous; this introduces an error, especially if a frequency is small, and a frequency of less than 5 is considered small.

2-4-Linear combinations and transformations of random variables: Let the density functions of the random variables $X$ and $Y$ be $f(x)$ and $g(y)$ respectively. To find the density function of $Z=X+Y$, we argue that a value $z$ is attained if $x$ and $y$ are chosen such that $z=x+y$. The required density is
$$f_Z(z)=\int_{-\infty}^{\infty}f(x)\,g(z-x)\,dx.$$
For discrete random variables, the chance of $Z$ attaining the value $N$ is
$$\sum_{m+n=N}f(m)\,g(n).$$
Consider the gamma distribution (for $x>0$):
$$f(x)=\frac{\lambda e^{-\lambda x}(\lambda x)^{s-1}}{\Gamma(s)}.$$

Suppose $X$ is a gamma random variable with parameters $(s,\lambda)$ and $Y$ is a gamma random variable with parameters $(t,\lambda)$. Show that the variable $Z=X+Y$ has a gamma distribution with parameters $(s+t,\lambda)$. If $X$ and $Y$ are independent Poisson random variables with parameters $\lambda_1$ and $\lambda_2$ respectively, show that $Z=X+Y$ is a Poisson variable with parameter $\lambda_1+\lambda_2$. The p.d.f. of $\chi_n^2$ random variables with different degrees of freedom is shown in Figure 2.14.

Figure 2.14: Chi-square distributions for $v=1,2,4,6,8$.

2-4-1-Applications: Mean and variance of the chi-square random variable. Let $X$ be a chi-square random variable built from $n$ zero-mean Gaussian variables of variance $\sigma^2$, with density
$$f_X(x)=\frac{x^{n/2-1}e^{-x/2\sigma^2}}{(2\sigma^2)^{n/2}\Gamma(n/2)},\quad x>0.$$
Then
$$\mu_X=\int_{-\infty}^{\infty}x f_X(x)\,dx=\int_0^{\infty}\frac{x^{n/2}e^{-x/2\sigma^2}}{(2\sigma^2)^{n/2}\Gamma(n/2)}\,dx
=\frac{(2\sigma^2)^{n/2}}{(2\sigma^2)^{n/2}\Gamma(n/2)}\int_0^{\infty}u^{n/2}e^{-u}\,(2\sigma^2)\,du\quad(\text{substituting } u=x/2\sigma^2)$$
$$=\frac{2\sigma^2\,\Gamma[(n+2)/2]}{\Gamma(n/2)}=\frac{2\sigma^2\,(n/2)\,\Gamma(n/2)}{\Gamma(n/2)}=n\sigma^2.$$
Similarly,
$$E[X^2]=\int_{-\infty}^{\infty}x^2 f_X(x)\,dx=\int_0^{\infty}\frac{x^{(n+2)/2}e^{-x/2\sigma^2}}{(2\sigma^2)^{n/2}\Gamma(n/2)}\,dx
=\frac{4\sigma^4\,\Gamma[(n+4)/2]}{\Gamma(n/2)}=\frac{4\sigma^4\,[(n+2)/2]\,(n/2)\,\Gamma(n/2)}{\Gamma(n/2)}=n(n+2)\sigma^4,$$
so
$$\sigma_X^2=E[X^2]-\mu_X^2=n(n+2)\sigma^4-n^2\sigma^4=2n\sigma^4.$$

2-4-2-Result: Let $X_1,X_2,\dots,X_n$ be independent zero-mean Gaussian variables, each with variance $\sigma_X^2$. Then $Y=X_1^2+X_2^2+\cdots+X_n^2$ has a $\chi_n^2$ distribution with mean $n\sigma_X^2$ and variance $2n\sigma_X^4$. This result gives an application of the chi-square random variable.

2-5-The Beta distribution: The Beta distribution in its standard form ranges from zero to one and takes a wide range of shapes. The continuous random variable $X$ has a beta distribution with parameters $\alpha$, $\beta$ if its density function is given by
$$f(x)=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{\alpha-1}(1-x)^{\beta-1},\quad 0<x<1,\ \alpha>0,\ \beta>0,$$
where $\Gamma(t)=(t-1)!$ for integer $t>0$. Its moments are
$$E(X)=\frac{\alpha}{\alpha+\beta},\qquad V(X)=\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}.$$
If $\alpha=\beta=1$, then
$$f(x)=\frac{\Gamma(2)}{\Gamma(1)\Gamma(1)}x^{0}(1-x)^{0}=1,\qquad E(X)=\frac12,\qquad V(X)=\frac{1\cdot 1}{(2)^2(3)}=\frac{1}{12}.$$
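The mean and variance formulas, and the uniform special case $\alpha=\beta=1$, can be checked directly. A minimal sketch (standard-library Python):

```python
import math

def beta_mean_var(a, b):
    """Mean and variance of the Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

def beta_pdf(x, a, b):
    """Beta(a, b) density on (0, 1)."""
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return const * x ** (a - 1) * (1 - x) ** (b - 1)

# The alpha = beta = 1 case reduces to the uniform distribution on [0, 1].
m, v = beta_mean_var(1, 1)
print(m, v, beta_pdf(0.3, 1, 1))
```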

The Uniform is a special case of the Beta, so the posterior has the same parametric form as the prior, and the prior is naturally conjugate with the likelihood. The Beta is symmetric when $\alpha=\beta$; if $\alpha=\beta=1$ we have the uniform distribution on the unit interval [0, 1] (not drawn above). The beta density can take on different shapes depending on the values of the two parameters:
1. $\alpha=\beta=1$ is the uniform distribution (a horizontal line, $f(x)=1$);
2. $\alpha=\beta$ is symmetric about $1/2$ (red & purple plots);
3. $\alpha<1$, $\beta<1$ is U-shaped (red plot);
4. $\alpha>1$, $\beta\le 1$ is strictly increasing (green plot);
5. $\alpha\le 1$, $\beta>1$ is strictly decreasing (blue plot);
6. $\alpha>1$, $\beta>1$ is unimodal (purple & black plots).

Example (1): Consider the standard Beta distribution
$$f(x)=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{\alpha-1}(1-x)^{\beta-1},\quad 0<x<1,$$
with $E(X)=\alpha/(\alpha+\beta)$, so that $U(0,1)\sim$ Beta(1, 1). The add-in provides the Beta distribution as an option for the probability distribution. Beta distributions are useful because they are defined between a minimum and a maximum value, and many different shapes are available depending on the parameters chosen; for some parameter choices the distribution has a well-defined mode (most likely point). The distribution has four parameters: alpha, beta, minimum, and maximum. The figures below show three examples of symmetric Beta distributions. For symmetric distributions alpha and beta are the same, and as their values increase the distributions become more peaked. The uniform distribution has alpha and beta both equal to 1; it does not have a well-defined mode because every point has the same density. Distributions with alpha and beta less than 1 are bathtub-shaped curves and are not useful for modeling activity times; the graphs are illustrated in Figure 2.15.

Figure 2.15: Beta density functions for B(0.1, 0.8), B(0.8, 0.2), and B(0.8, 0.8).

Asymmetric distributions are obtained by choosing alpha and beta to be different. Figure 2.16 shows several cases with alpha held constant at 2. The greater the difference between alpha and beta, the greater the asymmetry; the asymmetric examples are all skewed to the right because alpha is less than beta. To obtain distributions skewed to the left, choose alpha greater than beta.

Figure 2.16: Beta density functions for different parameter values.

2-6-Related distributions: If $X$ has a beta distribution, then $T=X/(1-X)$ has a "beta distribution of the second kind", also called the beta prime distribution. The Beta(1, 1) distribution is identical to the standard uniform distribution. If $X$ has the Beta(3/2, 3/2) distribution and $R>0$ is a real parameter, then $2RX-R$ has the Wigner semicircle distribution. If $X$ and $Y$ are independently distributed Gamma($\alpha,\theta$) and Gamma($\beta,\theta$) respectively, then $T=X/(X+Y)$ is distributed Beta($\alpha,\beta$). The beta distribution is a special case of the Dirichlet distribution with only two parameters. The Beta pdf and cdf are illustrated in Figure 2.17.

Figure 2.17: Beta pdf and cdf.

The Kumaraswamy distribution resembles the Beta distribution. If $X$ has a uniform distribution U(0, 1), then $X^2$ has a special beta distribution, Beta(1/2, 1), called the power-function distribution. The Beta distribution in its standard form ranges from zero to one and takes a wide range of shapes; the curves for several parameter values are illustrated in Figures 2.18 and 2.19.

Figure 2.18: Beta density functions for B(1, 2), B(1, 1), and B(2, 1).

Figure 2.19: Beta density functions for B(2, 8), B(5, 5), and B(8, 2).

2-6-Applications: Beta distributions are used extensively in Bayesian statistics, since they provide a family of conjugate prior distributions for binomial (including Bernoulli) and geometric distributions. The beta distribution can also be used to model events which are constrained to take place within an interval defined by a minimum and maximum value. For this reason, the beta distribution, along with the triangular distribution, is used extensively in PERT, the critical path method (CPM), and other project management and control systems to describe the time to completion of a task.


2-7-Examples

Example (1): If $X$ is a nonnegative continuous random variable, show that
$$E(X)=\int_0^{\infty}[1-F(x)]\,dx,$$
and apply this result to find the mean of the exponential distribution.

Solution: (i) Since $X$ is nonnegative and continuous,
$$E(X)=\int_0^{\infty}x f(x)\,dx=\lim_{b\to\infty}\int_0^{b}x f(x)\,dx
=\lim_{b\to\infty}\left(\big[x(F(x)-1)\big]_0^{b}-\int_0^{b}(F(x)-1)\,dx\right)
=\lim_{b\to\infty}b(F(b)-1)+\int_0^{\infty}(1-F(x))\,dx.$$
Claim: $\lim_{b\to\infty}b(F(b)-1)=0$. Indeed,
$$0\le b(1-F(b))=b\int_b^{\infty}f(x)\,dx\le\int_b^{\infty}x f(x)\,dx\longrightarrow 0,$$
so $\lim_{b\to\infty}b(1-F(b))=0$ and therefore
$$E(X)=\int_0^{\infty}(1-F(x))\,dx.$$
(ii) If $X\sim\exp(\lambda)$, then $F(x)=1-e^{-\lambda x}$ and
$$E(X)=\int_0^{\infty}e^{-\lambda x}\,dx=\frac{1}{\lambda}.$$
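The identity $E(X)=\int_0^{\infty}[1-F(x)]\,dx$ can be verified numerically for the exponential case. The sketch below (standard-library Python; the rate $\lambda=0.5$, the truncation point and the step size are illustrative choices) integrates the survival function by the trapezoid rule.

```python
import math

lam = 0.5  # rate of the exponential distribution

def survival(x):
    """1 - F(x) for the exponential(lam) distribution."""
    return math.exp(-lam * x)

# Trapezoid rule on [0, T]; T is large enough that the tail is negligible.
T, steps = 60.0, 60_000
h = T / steps
integral = h * (0.5 * survival(0.0)
                + sum(survival(i * h) for i in range(1, steps))
                + 0.5 * survival(T))
print(integral)  # should approximate E(X) = 1/lam = 2
```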

Example (2): If $U_1,\dots,U_n$ are independent uniform random variables, find $E(U_{(n)}-U_{(1)})$.

Solution: $U_1,\dots,U_n\sim$ uniform(0, 1). The joint density of the maximum and minimum is
$$f_{U_{(n)},U_{(1)}}(u_n,u_1)=\frac{n!}{0!\,(n-2)!\,0!}\,(u_n-u_1)^{n-2}=n(n-1)(u_n-u_1)^{n-2},\quad 0<u_1<u_n<1.$$
Let $R=U_{(n)}-U_{(1)}$ and $Y=U_{(1)}$, so $U_{(1)}=Y$ and $U_{(n)}=R+Y$, with Jacobian
$$J=\begin{vmatrix}0 & 1\\ 1 & 1\end{vmatrix}=-1,\qquad |J|=1.$$
Hence
$$f_{R,Y}(r,y)=n(n-1)r^{n-2},\quad 0<y<r+y<1,$$
and
$$f_R(r)=\int_0^{1-r}n(n-1)r^{n-2}\,dy=n(n-1)(1-r)r^{n-2},\quad 0<r<1.$$
Therefore
$$E(R)=\int_0^1 r\,n(n-1)(1-r)r^{n-2}\,dr=n(n-1)\int_0^1 (1-r)r^{n-1}\,dr=n(n-1)\,B(n,2)$$
$$=\frac{n(n-1)\,\Gamma(n)\Gamma(2)}{\Gamma(n+2)}=\frac{n(n-1)\,(n-1)!}{(n+1)!}=\frac{n-1}{n+1}.$$

Example (3): If $X_1$ and $X_2$ are independent random variables following a gamma distribution with parameters $\alpha$ and $\lambda$, find $E(R^2)$, where $R^2=X_1^2+X_2^2$.

Solution: $X_1,X_2\sim$ Gamma($\alpha,\lambda$), independent, so
$$f_{X_1,X_2}(x_1,x_2)=\frac{\lambda^{\alpha}}{\Gamma(\alpha)}x_1^{\alpha-1}e^{-\lambda x_1}\cdot\frac{\lambda^{\alpha}}{\Gamma(\alpha)}x_2^{\alpha-1}e^{-\lambda x_2},\quad x_1,x_2>0.$$
Then
$$E(R^2)=E(X_1^2+X_2^2)=\int_0^{\infty}\!\!\int_0^{\infty}(x_1^2+x_2^2)\,f_{X_1,X_2}(x_1,x_2)\,dx_1\,dx_2.$$
By symmetry the two terms contribute equally, and each single integral reduces to a gamma integral:
$$\int_0^{\infty}x^2\,\frac{\lambda^{\alpha}}{\Gamma(\alpha)}x^{\alpha-1}e^{-\lambda x}\,dx=\frac{\Gamma(\alpha+2)}{\lambda^2\Gamma(\alpha)},$$
so
$$E(R^2)=\frac{2\,\Gamma(\alpha+2)}{\lambda^2\Gamma(\alpha)}=\frac{2\alpha(\alpha+1)}{\lambda^2}.$$

Example (4): Let $X$ be uniform on [0, 1], and let $Y=\sqrt{X}$. Find $E(Y)$ by finding the density of $Y$ and then computing the expectation.

Solution: Since $X\sim U[0,1]$, $f_X(x)=1$ for $x\in[0,1]$. With $Y=\sqrt{X}$ we have $X=Y^2$ and $dx/dy=2y$, so
$$f_Y(y)=f_X(y^2)\,|J|=1\cdot 2y=2y,\quad 0\le y\le 1,$$
and
$$E(Y)=\int_0^1 y\cdot 2y\,dy=\int_0^1 2y^2\,dy=\frac{2}{3}.$$

Example (5): Let $X$ be an exponential random variable with standard deviation $\sigma$. Find $P(|X-E(X)|>k\sigma)$ for $k=2,3,4$, and compare the results with the bounds from Chebyshev's inequality.

Solution: If $X\sim\exp(\lambda)$, then $\mu=E(X)=1/\lambda$ and $\sigma=\sqrt{\operatorname{Var}(X)}=1/\lambda$. Hence
$$P(|X-\mu|>k\sigma)=P\!\left(X>\frac{1+k}{\lambda}\right)+P\!\left(X<\frac{1-k}{\lambda}\right)=P\!\left(X>\frac{1+k}{\lambda}\right),$$
since for $k\ge 2$ we have $(1-k)/\lambda<0$ and therefore $P(X<(1-k)/\lambda)=0$. Thus
$$P(|X-\mu|>k\sigma)=\int_{(1+k)/\lambda}^{\infty}\lambda e^{-\lambda x}\,dx=e^{-(k+1)},\quad k=2,3,\dots$$
The results are shown in Table 2.1 below:

k     exact: P(|X − μ| > kσ) = e^{−(k+1)}     Chebyshev bound: 1/k²
2     e^{−3} ≈ 0.04979                         1/4 = 0.25
3     e^{−4} ≈ 0.01832                         1/9 ≈ 0.1111
4     e^{−5} ≈ 0.006738                        1/16 = 0.0625

Table 2.1: Exact tail probabilities compared with Chebyshev's bounds; the exact probabilities are far smaller than the bounds.

Example (6): Show that $\operatorname{Var}(X-Y)=\operatorname{Var}(X)+\operatorname{Var}(Y)-2\operatorname{Cov}(X,Y)$.

Solution:
$$\operatorname{Var}(X-Y)=E[(X-Y)^2]-[E(X-Y)]^2=E(X^2-2XY+Y^2)-[E(X)-E(Y)]^2$$
$$=E(X^2)-2E(XY)+E(Y^2)-E^2(X)+2E(X)E(Y)-E^2(Y)$$
$$=\big[E(X^2)-E^2(X)\big]+\big[E(Y^2)-E^2(Y)\big]-2\big[E(XY)-E(X)E(Y)\big]
=\operatorname{Var}(X)+\operatorname{Var}(Y)-2\operatorname{Cov}(X,Y).$$

Example (7): Let $T=\sum_{k=1}^{n}kX_k$, where the $X_k$ are independent random variables with means $\mu$ and variances $\sigma^2$. Find $E(T)$ and $\operatorname{Var}(T)$.

Solution:
$$E(T)=E\left(\sum_{k=1}^{n}kX_k\right)=\sum_{k=1}^{n}kE(X_k)=\frac{n(n+1)}{2}\,\mu.$$
Since $X_1,X_2,\dots,X_n$ are independent,
$$\operatorname{Var}(T)=\operatorname{Var}\left(\sum_{k=1}^{n}kX_k\right)=\sum_{k=1}^{n}k^2\operatorname{Var}(X_k)=\frac{n(n+1)(2n+1)}{6}\,\sigma^2.$$

Example (8): If $X$ and $Y$ are independent random variables, find $\operatorname{Var}(XY)$ in terms of the means and variances of $X$ and $Y$.

Solution: Since $X$ and $Y$ are independent, $E(XY)=E(X)E(Y)$; moreover $X^2$ and $Y^2$ are also independent, so $E(X^2Y^2)=E(X^2)E(Y^2)$. Hence
$$\operatorname{Var}(XY)=E[(XY)^2]-E^2(XY)=E(X^2)E(Y^2)-E^2(X)E^2(Y)$$
$$=(\operatorname{Var}X+E^2X)(\operatorname{Var}Y+E^2Y)-E^2X\,E^2Y
=\operatorname{Var}X\operatorname{Var}Y+\operatorname{Var}X\,E^2Y+E^2X\,\operatorname{Var}Y
=\sigma_X^2\sigma_Y^2+\sigma_X^2\mu_Y^2+\mu_X^2\sigma_Y^2.$$

Example (9): Let $(X,Y)$ be a random point uniformly distributed on the unit disk. Show that $\operatorname{Cov}(X,Y)=0$, but that $X$ and $Y$ are not independent.

Solution: The joint density is
$$f_{X,Y}(x,y)=\frac{1}{\pi},\qquad x^2+y^2\le 1.$$

The marginal densities are
$$f_X(x)=\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}}\frac{1}{\pi}\,dy=\frac{2}{\pi}\sqrt{1-x^2},\quad x\in[-1,1],$$
and, by symmetry,
$$f_Y(y)=\frac{2}{\pi}\sqrt{1-y^2},\quad y\in[-1,1].$$
Since $f_{X,Y}(x,y)\ne f_X(x)f_Y(y)$ on $x^2+y^2\le 1$, $X$ and $Y$ are not independent. On the other hand,
$$E(X)=\int_{-1}^{1}x\cdot\frac{2}{\pi}\sqrt{1-x^2}\,dx=0,$$
because $x\sqrt{1-x^2}$ is an odd function, and similarly $E(Y)=0$. Also
$$E(XY)=\int_{-1}^{1}\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}}\frac{xy}{\pi}\,dy\,dx=\int_{-1}^{1}x\cdot 0\,dx=0.$$
Therefore
$$\operatorname{Cov}(X,Y)=E(XY)-E(X)E(Y)=0-0=0,$$
so $\operatorname{Cov}(X,Y)=0$ even though $X$ and $Y$ are not independent.

Example (10): Let $Y$ have a density that is symmetric about zero, and let $X=SY$, where $S$ is an independent random variable taking on the values $+1$ and $-1$ with probability $\tfrac12$ each. Show that $\operatorname{Cov}(X,Y)=0$, but that $X$ and $Y$ are not independent.

Solution:
$$E(S)=1\cdot P(S=1)+(-1)\cdot P(S=-1)=\frac12-\frac12=0.$$
Since $S$ and $Y$ are independent,
$$\operatorname{Cov}(X,Y)=\operatorname{Cov}(SY,Y)=E(SY\cdot Y)-E(SY)E(Y)=E(S)E(Y^2)-E(S)E(Y)E(Y)=0.$$
However, given $Y=y$ the variable $X$ takes only the two values $\pm y$:
$$f_{X\mid Y}(x\mid y)=\begin{cases}\tfrac12, & x=y,\\ \tfrac12, & x=-y,\end{cases}$$
whereas the unconditional distribution of $X$ has the density $f_X(x)=\tfrac12 f_Y(x)+\tfrac12 f_Y(-x)=f_Y(x)$ (by symmetry). Since $f_{X\mid Y}\ne f_X$, $X$ and $Y$ are not independent.

Example (11): The joint density of the minimum and maximum of n independent uniform random variables was found earlier. In the case n = 2, this amounts to X and Y, the minimum and maximum, respectively, of two independent random variables uniform on [0, 1], having the joint density
f(x, y) = 2, 0 ≤ x ≤ y ≤ 1.
a. Find the covariance and the correlation of X and Y. Does the sign of the correlation make sense intuitively?
b. Find E(X | Y = y) and E(Y | X = x). Do these results make sense intuitively?
c. Find the probability density functions of the random variables E(X | Y) and E(Y | X).
d. What is the linear predictor of Y in terms of X (denoted by Ŷ = a + bX) that has minimal mean squared error? What is the mean square prediction error?
e. What is the predictor of Y in terms of X [Ŷ = h(X)] that has minimal mean squared error? What is the mean square prediction error?
Solution: f_{X,Y}(x, y) = 2 on 0 ≤ x ≤ y ≤ 1, so the marginals are
f_Y(y) = ∫₀^y 2 dx = 2y, y ∈ (0, 1), and f_X(x) = ∫_x^1 2 dy = 2(1 − x), x ∈ (0, 1).

a. E(X) = ∫₀^1 2x(1 − x) dx = 1/3, E(X²) = ∫₀^1 2x²(1 − x) dx = 1/6,
E(Y) = ∫₀^1 2y² dy = 2/3, E(Y²) = ∫₀^1 2y³ dy = 1/2,
E(XY) = ∫₀^1 ∫₀^y 2xy dx dy = ∫₀^1 y³ dy = 1/4.
Hence
Var(X) = 1/6 − (1/3)² = 1/18, Var(Y) = 1/2 − (2/3)² = 1/18,
Cov(X, Y) = 1/4 − (1/3)(2/3) = 1/36,
ρ = Cov(X, Y)/√(Var(X)·Var(Y)) = (1/36)/(1/18) = 1/2.
The positive sign makes sense intuitively: when the maximum is large, the minimum tends to be large as well.

b. f_{X|Y}(x | y) = f_{X,Y}(x, y)/f_Y(y) = 1/y for x ∈ (0, y), so X | Y = y ~ U(0, y) and E(X | Y = y) = y/2, y ∈ (0, 1).
f_{Y|X}(y | x) = f_{X,Y}(x, y)/f_X(x) = 1/(1 − x) for y ∈ (x, 1), so Y | X = x ~ U(x, 1) and E(Y | X = x) = (x + 1)/2, x ∈ (0, 1).
Both results make sense: given the maximum, the minimum is uniform below it, and vice versa.

c. Let W₁ = E(X | Y) = Y/2. Then f_{W₁}(w) = 2f_Y(2w) = 8w for w ∈ (0, 1/2).
Let W₂ = E(Y | X) = (X + 1)/2. Then f_{W₂}(w) = 2f_X(2w − 1) = 8(1 − w) for w ∈ (1/2, 1).

d. Ŷ = a + bX with b = ρ·σ_Y/σ_X = 1/2 and a = E(Y) − bE(X) = 2/3 − (1/2)(1/3) = 1/2, so
Ŷ = 1/2 + X/2, x ∈ (0, 1).
M.S.E.: E(Y − 1/2 − X/2)² = σ_Y²(1 − ρ²) = (1/18)(1 − 1/4) = 1/24.

e. Ŷ = E(Y | X) = (X + 1)/2, x ∈ (0, 1).
M.S.E.: E(Y − E(Y | X))² = E(Y²) − E[E²(Y | X)]
= 1/2 − E[(X + 1)²/4] = 1/2 − (1/4)∫₀^1 (x + 1)²·2(1 − x) dx = 1/2 − 11/24 = 1/24.
Here the best predictor is itself linear, so the two mean square errors coincide.

Example (12): Let X and Y be jointly distributed random variables with correlation ρ_XY; define the standardized random variables X̃ and Ỹ as
X̃ = (X − E(X))/√Var(X) and Ỹ = (Y − E(Y))/√Var(Y).
Show that Cov(X̃, Ỹ) = ρ_XY.
Solution: Since E(X̃) = E(Ỹ) = 0,
Cov(X̃, Ỹ) = E(X̃Ỹ) = E[(X − EX)(Y − EY)]/√(Var(X)·Var(Y)) = Cov(X, Y)/√(Var(X)·Var(Y)) = ρ_XY.

Example (13): Let a > 0 and b > 0. A random variable Y has a gamma distribution with parameters a and b if it has the density function
f(y) = [constant]·y^{a−1}e^{−by}, for y > 0 only.
(Note that if a = 1, this is just an exponential distribution with rate b, that is, with mean 1/b.)
Solution: Show that the constant has to be b^a/Γ(a). Let us integrate the function without the constant. First, substitute u = by, so that du = b dy:
∫₀^∞ y^{a−1}e^{−by} dy = ∫₀^∞ (u/b)^{a−1}e^{−u}(1/b) du = (1/b^a)∫₀^∞ u^{a−1}e^{−u} du = Γ(a)/b^a,
since that last integral is just Γ(a) by definition. So the constant has to be the inverse of this last expression, namely b^a/Γ(a).

What is E(Y)?
E(Y) = ∫₀^∞ y·(b^a/Γ(a))·y^{a−1}e^{−by} dy = (b^a/Γ(a))∫₀^∞ y^{(a+1)−1}e^{−by} dy
= (b^a/Γ(a))·Γ(a + 1)/b^{a+1} = a/b,
by the first part with a + 1 in place of a, and since Γ(a + 1)/Γ(a) = a.
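Both claims of Example (13) can be verified numerically. The sketch below (standard library only; the parameter values a = 2.5, b = 1.5 and the truncation point 60 are arbitrary choices) integrates the unnormalized density by the trapezoid rule and compares against Γ(a)/b^a and E(Y) = a/b.

```python
import math

a, b = 2.5, 1.5

def trapezoid(f, lo, hi, n=200_000):
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

# The tail beyond y = 60 is negligible for these parameters.
norm = trapezoid(lambda y: y ** (a - 1) * math.exp(-b * y), 0.0, 60.0)
mean = trapezoid(lambda y: y ** a * math.exp(-b * y), 0.0, 60.0) / norm

print(norm, math.gamma(a) / b ** a)  # should agree
print(mean, a / b)                   # should agree
```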

2-8-Exercises
Exercise (1): Show that the following function is a probability density function for any k > 0:
f(x) = (1/gam(k))·x^{k−1}e^{−x} for x > 0, and f(x) = 0 elsewhere.

A random variable X with this density is said to have the gamma distribution with shape parameter k. The following exercise shows that the family of densities has a rich variety of shapes, and shows why k is called the shape parameter.
Exercise (2): Draw a careful sketch of the gamma probability density functions in each of the following cases:
a. 0 < k < 1.
b. k = 1.
c. k > 1. Show that the mode occurs at k − 1.

Exercise (3): In the simulation of the random variable experiment, select the gamma distribution. Vary the shape parameter and note the shape of the density function. Now with k = 3, run the simulation 1000 times with an update frequency of 10 and watch the apparent convergence of the empirical density function to the true density function.
Exercise (4): Suppose that the lifetime of a device (in 100 hour units) has the gamma distribution with shape parameter k = 3. Find the probability that the device will last more than 300 hours. The distribution function and the quantile function do not have simple, closed representations. Approximate values of these functions can be obtained from the quantile applet.
Exercise (5): Using the quantile applet, find the median, the first and third quartiles, and the interquartile range in each of the following cases:
a. k = 1
b. k = 2
c. k = 3


The following exercise gives the mean and variance of the gamma distribution.
Exercise (6): Suppose that X has the gamma distribution with shape parameter k. Show that
a. E(X) = k
b. V(X) = k.
More generally, the moments can be expressed easily in terms of the gamma function.
Exercise (7): Suppose that X has the gamma distribution with shape parameter k. Show that
a. E(Xⁿ) = gam(n + k)/gam(k), for n > 0.
b. E(Xⁿ) = k(k + 1)····(k + n − 1) if n is a positive integer.

Exercise (8): Suppose that X has the gamma distribution with shape parameter k. Show that
E[exp(tX)] = 1/(1 − t)^k for t < 1.

Exercise (9): In the simulation of the random variable experiment, select the gamma distribution. Vary the shape parameter and note the size and location of the mean/standard deviation bar. Now with k = 4, run the simulation 1000 times with an update frequency of 10 and note the apparent convergence of the empirical moments to the distribution moments.
Exercise (10): Suppose that X₁ has the gamma distribution with shape parameter k₁ and scale parameter b; that X₂ has the gamma distribution with shape parameter k₂ and scale parameter b; and that X₁ and X₂ are independent. Show that X₁ + X₂ has the gamma distribution with shape parameter k₁ + k₂ and scale parameter b. Hint: Use moment generating functions.
Exercise (11): Suppose that X has the gamma distribution with shape parameter k > 0 and scale parameter b > 0. Show that the distribution is a two-parameter exponential family with natural parameters k − 1 and −1/b, and natural statistics X and ln(X).
Exercise (12): Suppose that X has the gamma distribution with shape parameter k and scale parameter b. Show that if c > 0 then cX has the gamma distribution with shape parameter k and scale parameter bc. More importantly, if the scale parameter is fixed, the gamma family is closed with respect to sums of independent variables.
Exercise (13): Suppose that X has the gamma distribution with shape parameter k and scale parameter b. Show that
a. E(Xⁿ) = bⁿ·gam(n + k)/gam(k), for n > 0.
b. E(Xⁿ) = bⁿ·k(k + 1)····(k + n − 1) if n is a positive integer.

Exercise (14): 1-Suppose that X has the gamma distribution with shape parameter k and scale parameter b. Show that if k ≥ 1, the mode occurs at (k − 1)b.
2-Suppose that X has the gamma distribution with shape parameter k and scale parameter b. Show that:
a)- E(X) = kb.
b)- V(X) = kb².
Exercise (15): 1-Suppose that X has the gamma distribution with shape parameter k and scale parameter b. Show that
E[exp(tX)] = 1/(1 − bt)^k for t < 1/b.
2-Suppose that X has the gamma distribution with shape parameter k and scale parameter b. Show that X has density function
f(x) = x^{k−1}exp(−x/b)/[gam(k)·b^k] for x > 0.
Recall that the inclusion of a scale parameter does not change the shape of the density function, but simply scales the graph horizontally and vertically.


Exercise (16): For n  0 , the gamma distribution with shape parameter k  n / 2 and scale parameter 2 is called the chi-square distribution with n

degree of freedom. 1-Show that the chi-square distribution with n degrees of freedom has density function; f ( x)  x ( n / 2)1 exp(  x / 2) /[2n / 2 gam(n / 2)]

for x  0

2-In the random variable experiment, select the chi-square distribution. Vary n with the scroll bar and note the shape of the density function. With

n  5 , run the simulation 1000 times with an update frequency of 10 and note the apparent convergence of the empirical density function to the true density function. Exercise (17): 1-Show that the chi-square distribution with 2 degrees of freedom is the exponential distribution with scale parameter 2. 2-Draw a careful sketch of the chi-square density functions in each of the following cases: a.

0  n  2.

b.

n  2 (The exponential distribution).

c.

n  2 . Show that the mode occurs at n  2 .

Exercise (18): 1-In the quantile applet, select the chi-square distribution. Vary the parameter and note the shape of the density function and the distribution function. In each of the following cases, find the median, the first and third quartiles, and the interquartile range.
a. n = 1
b. n = 2
c. n = 5
d. n = 10
2-Show that
a. E(X) = n
b. V(X) = 2n.
3-Show that
E(X^k) = 2^k·gam(n/2 + k)/gam(n/2), for k > 0.
4-Show that
E[exp(tX)] = (1 − 2t)^{−n/2} for t < 1/2.

Exercise (19): 1-Suppose that Z has the standard normal distribution. Use change of variable techniques to show that U = Z² has the chi-square distribution with 1 degree of freedom.
2-Use moment generating functions or properties of the gamma distribution to show that if X has the chi-square distribution with m degrees of freedom, Y has the chi-square distribution with n degrees of freedom, and X and Y are independent, then X + Y has the chi-square distribution with n + m degrees of freedom.
Exercise (20): 1-Suppose that Z₁, Z₂, ..., Z_{n−1}, Z_n are independent standard normal variables (that is, a random sample of size n from the standard normal distribution). Use the results of the previous two exercises to show that V = Z₁² + Z₂² + ... + Z²_{n−1} + Z_n² has the chi-square distribution with n degrees of freedom. The result of the last exercise is the reason that the chi-square distribution deserves a name of its own. Sums of squares of independent normal variables occur frequently in statistics. On the other hand, the following exercise shows that any gamma distributed variable can be rescaled into a variable with a chi-square distribution.
2-Suppose that X has the gamma distribution with shape parameter k and scale parameter b. Show that Y = 2X/b has the chi-square distribution with 2k degrees of freedom.
3-Suppose that a missile is fired at a target at the origin of a plane coordinate system, with units in meters. The missile lands at (X, Y) where X and Y are independent and each has the normal distribution with mean 0 and variance 100. The missile will destroy the target if it lands within 20 meters of the target. Find the probability of this event.
Exercise (21): The beta function B(a, b) is defined for a > 0 and b > 0 by;

B(a, b) = ∫₀¹ u^{a−1}(1 − u)^{b−1} du.
1-Show that B(a, b) is finite for a > 0 and b > 0, using these steps:
a-Break the integral into two parts, from 0 to 1/2 and from 1/2 to 1.
b-If 0 < a < 1, the integral is improper at u = 0, but (1 − u)^{b−1} is bounded on (0, 1/2).
c-If 0 < b < 1, the integral is improper at u = 1, but u^{a−1} is bounded on (1/2, 1).

2-Show that; a- B(a, b)  B(b, a) , for a  0 , b  0 . b- B(a,1)  1 / a . 3- Show that the beta function can be written in terms of the gamma function as follows:

B(a, b)  gam(a).gam(b) / gam(a  b) Hint: Express gam(a  b) B(a, b) as a double integral with respect to x and

y

x  0 and

where

0  y  1.

Use

the

transformation

w  xy , z  x  xy and the change of variables theorem for multiple integrals. The transformation maps the ( x, y ) region one-to-one and onto the region z  0 , w  0 ; the Jacobian of the inverse transformation has magnitude 1( z  w) . Show that the transformed integral is gam(a). gam(b) . Exercise (22): 1-Show that if j and k are positive integers, then;

B(j, k) = (j − 1)!·(k − 1)!/(j + k − 1)!
2-Show that
B(a + 1, b) = [a/(a + b)]·B(a, b).
3-Show that
B(1/2, 1/2) = π.
Exercise (23): 1-Show that f given below is a probability density function for any a > 0 and b > 0:

f(u) = u^{a−1}(1 − u)^{b−1}/B(a, b), 0 < u < 1.
The distribution with the density in Exercise (23) is called the beta distribution with parameters a and b. The beta distribution is useful for modeling random probabilities and proportions, particularly in the context of Bayesian analysis. The distribution has two parameters and yet a rich variety of shapes:
2-Sketch the graph of the beta density function. Note the qualitative differences in the shape of the density for the following parameter ranges:
a. 0 < a < 1, 0 < b < 1
b. a = 1, b = 1 (the uniform distribution)
c. a = 1, 0 < b < 1
d. 0 < a < 1, b = 1
e. 0 < a < 1, b > 1
f. a > 1, 0 < b < 1
g. a = 1, b > 1
h. a > 1, b = 1
i. a > 1, b > 1. Show that the mode occurs at (a − 1)/(a + b − 2).

Exercise (24): 1- For a  1 and b  1 , show that; a.

F ( x )  x a , for 0  x  1 .

b.

F 1 ( p)  p1/ a , for 0  p  1 .

2- For a  1 and b  0 , show that; a.

F ( x)  1  (1  x) b , for 0  x  1 .

b.

F 1 ( p)  1  (1  p)1/ b , for 0  p  1 .

In general, there is an interesting relationship between the distribution functions of the beta distribution and the binomial distribution. 3- Fix n . Let Fp

denote the binomial distribution function with

parameters n and p and let Gk denote the beta distribution function with parameters n  k  1 and k . Show that Fk (k  1)  Gk (1  p)


Hint: Express Gk (1  p) as an integral of the beta density, and then integrate by parts. Exercise (25): 1-In the quantile applet, select the beta distribution. Vary the parameters and note the shape of the density function and the distribution function. In each of the following cases, find the median, the first and third quartiles, and the interquartile range. Sketch the boxplot; a.

a  1, b  1

b.

a  1, b  3

c.

a  3, b  1

d.

a  2, b  4

e.

a  4, b  2

2-Suppose that U has the beta distribution with parameters a and b. Show that E(U^k) = B(a + k, b)/B(a, b).
3-Suppose that U has the beta distribution with parameters a and b. Show that
a. E(U) = a/(a + b)
b. V(U) = ab/[(a + b)²(a + b + 1)]
Exercise (26): 1-Suppose that X has the gamma distribution with parameters a and r, that Y has the gamma distribution with parameters b and r, and that X and Y are independent. Show that U = X/(X + Y) has the beta distribution with parameters a and b.
2-Suppose that U has the beta distribution with parameters a and b. Show that 1 − U has the beta distribution with parameters b and a.
3-Suppose that X has the F-distribution with m degrees of freedom in the numerator and n degrees of freedom in the denominator. Show that U = (m/n)X/[1 + (m/n)X] has the beta distribution with parameters a = m/2 and b = n/2.
4-Suppose that X has the beta distribution with parameters a > 0 and b > 0. Show that the distribution is a two-parameter exponential family with natural parameters a − 1 and b − 1, and natural statistics ln(X) and ln(1 − X).

Chapter Three: On Generalized Gamma and Beta Functions

3-Introduction: The subject of generalized functions is rich and expanding continuously with the emergence of new problems encountered in engineering and applied science applications. The development of computational techniques and the rapid growth in computing power have increased the importance of special functions and of formulae for their analytic representation. However, problems remain, particularly in heat conduction, astrophysics, and probability theory, whose solutions seem to defy even the most general classes of special functions.
A class of generalized gamma functions, developed by the author and useful in the analytic study of several heat conduction problems, is presented here together with its basic properties, including recurrence relations, special cases, asymptotic representations, and integral transform relationships. Applications of these generalized functions include transient heat conduction, special cases of laser sources, heat transfer in human tissues, astrophysics, probability theory, and other problems in the theory of functions, including a fundamental solution for time-dependent laser sources with convective-type boundary conditions. Filled with tabular and graphical representations for applications, this material offers a new and useful class of special functions for the mathematical toolbox.
The generalized gamma function is an extension of the concept of the gamma function. We can input (almost) any real or complex number into the gamma function and find its value; such values are related to factorial values. The gamma function Γ is applied in the exact sciences almost as often as the well-known factorial. It was introduced by the famous mathematician L. Euler (1729) as a natural extension of the factorial operation. It was shown earlier how to use the gamma function to obtain the general solutions to Bessel's equation, and a few useful properties of Bessel functions were discussed.
In what follows we call a fundamental region, or fundamental domain, of an analytic function f a domain which is mapped conformally by f onto the whole plane, except for one or more cuts (or slits). It has been proved that every neighborhood of an isolated essential singularity of an analytic function contains infinitely many non-overlapping fundamental domains of f. In fact this is true as well for essential singularities which are limits of poles or of isolated essential singularities.
The function given by the integral below is called the gamma function. Strictly, the integral evaluated at n defines Γ(n + 1), and Γ(n + 1) = n! for natural numbers n. Gamma is defined for all positive real numbers; furthermore, Γ is differentiable, so one could define the derivative of n! with respect to n to be the derivative of Γ(n + 1) with respect to n. Gamma can be extended to a function defined for all complex numbers except 0 and the negative integers, at which points it has poles. This extended function is also called the gamma function, and it is infinitely differentiable wherever it is defined.
The Euler gamma function and the beta function have ∞ as their unique essential singularity. For the gamma function, ∞ is a limit of poles, while for the beta function it is an isolated essential singularity. It follows that for each of these functions the complex plane can be written as a disjoint union of sets whose interiors are fundamental domains, that is, domains mapped conformally by the respective function onto the complex plane with a slit. By analogy with the well-known case of elementary functions, we use the preimage of the real axis in order to find such a disjoint union of sets. As we will see next, for the gamma function there is a great similarity with that case, while for the beta function a supplementary construction is needed.
The (complete) gamma function Γ is defined by the integral
Γ(x) = ∫₀^∞ t^{x−1}e^{−t} dt, x > 0,
with Γ(x + 1) = x·Γ(x) for x > 0 and Γ(1) = 1.

Returning to the graph of the values of factorial numbers, we can use the above integral to calculate values of the gamma function for any real value of n. This time, a smooth curve has been drawn through the factorial values; this curve is f(n) = Γ(n + 1). Also added is the new point (3.5, 11.6317), the ordered pair representing Γ(4.5) = 11.6317 found below. This new point lies on the smooth curve joining the other factorial values; the relation shown in the figure below will be used.

Figure 3.1: Plot of the gamma function f(n) = Γ(n + 1).
What does this integral mean? The function under the integral sign is very interesting: it is the product of an ever-decreasing function with an ever-increasing one, for example
f(x) = x^{3.5}e^{−x}.


Let's look at the graphs involved in this expression. Firstly, f(x) = e^{−x}: the value of the function (its height) decreases as x increases. We will use the relation shown in the figure below;

Figure 3.2: Plot of the exponential function f(x) = e^{−x}.
Secondly, f(x) = x^{3.5} increases as x increases. This function is not defined (over the reals) for negative x. We will use the relation shown in the figure below;

Figure 3.3: Plot of the power function f(x) = x^{3.5}.


Finally, we look at the product of the two functions. This is the graph of f(x) = x^{3.5}e^{−x}. We will use the relation shown in the figure below;

Figure 3.4: Plot of the function f(x) = x^{3.5}e^{−x}.
The area under this graph (the shaded portion) from 0 to ∞ (infinity) gives the value of Γ(4.5) = 3.5!. We use computer mathematics software (Scientific Notebook) to find the value of the integral. We only need to choose some "large number" (here 10000) for the upper bound of the integral, since, as the graph shows, the height of the curve is very small when x becomes very large. Here is what we get:
Γ(4.5) = ∫₀^∞ x^{3.5}e^{−x} dx ≈ ∫₀^{10000} x^{3.5}e^{−x} dx ≈ 11.6317.
This means the shaded area above is 11.6317 square units, in the range we estimated earlier. The gamma function gives us values that are analogous to factorials of non-integer numbers. To finish, let's look at the graph of f(n) = Γ(n + 1) for a greater range of real values of n. We observe there are some "holes" (discontinuities) in the graph at the negative integers. We will use the relation shown in the figure below;

Figure 3.5: Plot of f(n) = Γ(n + 1) over a greater range of n.
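The value Γ(4.5) ≈ 11.6317 quoted above is easy to reproduce without specialized software. The sketch below (standard library only; the truncation point 60 and the step count are arbitrary choices) compares a trapezoid-rule estimate of the integral with math.gamma.

```python
import math

def trapezoid(f, lo, hi, n=200_000):
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

# The integrand x^3.5 e^{-x} is negligible beyond x = 60.
approx = trapezoid(lambda x: x ** 3.5 * math.exp(-x), 0.0, 60.0)

print(approx, math.gamma(4.5))  # both are about 11.6317
```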

3-1-Generalized Gamma Function: The generalized gamma function is initially known as an extension of the gamma function; however, it goes beyond this definition, as its use extends to various disciplines such as combinatorics, physics, statistics, etc. Despite its uniqueness and ubiquity in mathematics, the generalized gamma function unfortunately has the characteristic of possessing singularities at zero and the negative integer values. The gamma function was generalized by the author to Γ_{a,b}(α; β, δ), and the generalized gamma function (GGF) is defined by the integral
Γ_{a,b}(α; β, δ) = ∫₀^∞ (t + a)^{α−1} e^{−δ(t+b)^β} dt;  α > 0, a ≥ 0, b ≥ 0, β, δ ∈ IR⁺.
One can verify that this integral converges, which makes the definition meaningful; this will be proved in full for the generalized and extended formulas of the gamma and beta functions in later chapters.
Important case: For β = 1 the parameters do not produce a genuinely new family, because the integral can be reduced to the classical form. Consider
Γ_{a,b}(k; 1, δ) = ∫₀^∞ (t + a)^{k−1} e^{−δ(t+b)} dt;  k > 0, a ≥ 0, b ≥ 0, δ ∈ IR⁺.
Change the variable to y = t + a, so that t = y − a and dy = dt; the limits t = 0 and t → ∞ become y = a and y → ∞, and e^{−δ(t+b)} = e^{−δ(y−a+b)} = e^{−δ(b−a)}·e^{−δy}. The integral becomes
Γ_{a,b}(k; 1, δ) = ∫_a^∞ y^{k−1} e^{−δ(b−a)} e^{−δy} dy = e^{−δ(b−a)} ∫_a^∞ y^{k−1} e^{−δy} dy.
Now change the variable to z = δy, so that y = z/δ and dz = δ dy; the limits y = a and y → ∞ become z = δa and z → ∞:
Γ_{a,b}(k; 1, δ) = e^{−δ(b−a)} ∫_{δa}^∞ (z/δ)^{k−1} e^{−z} (dz/δ) = δ^{−k} e^{−δ(b−a)} ∫_{δa}^∞ z^{k−1} e^{−z} dz.
Writing ∫_{δa}^∞ = ∫₀^∞ − ∫₀^{δa}, the last integral is the upper incomplete gamma function, so
Γ_{a,b}(k; 1, δ) = δ^{−k} e^{−δ(b−a)} Γ(k, δa),
and in particular, when a = 0,
Γ_{0,b}(k; 1, δ) = δ^{−k} e^{−δb} Γ(k).
Remark: The four-parameter generalized gamma function with β = 1 therefore reduces to the classical (incomplete) gamma function, up to the factor δ^{−k} e^{−δ(b−a)}.
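The β = 1 reduction above can be checked numerically. The sketch below (standard library only; the parameter values are arbitrary choices, and the closed form being tested is the reconstruction derived above) compares a direct trapezoid-rule evaluation of Γ_{a,b}(k; 1, δ) with δ^{−k} e^{−δ(b−a)} Γ(k, δa), where the upper incomplete gamma function Γ(k, δa) is itself computed by quadrature since it is not in the Python standard library.

```python
import math

def trapezoid(f, lo, hi, n=200_000):
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

a, b, k, d = 0.7, 1.2, 2.3, 1.5   # arbitrary test parameters (d stands for delta)

# Left side: direct evaluation of the defining integral with beta = 1
lhs = trapezoid(lambda t: (t + a) ** (k - 1) * math.exp(-d * (t + b)), 0.0, 60.0)

# Right side: closed form, with Gamma(k, d*a) computed by quadrature
upper_inc = trapezoid(lambda z: z ** (k - 1) * math.exp(-z), d * a, 80.0)
rhs = d ** (-k) * math.exp(-d * (b - a)) * upper_inc

print(lhs, rhs)  # should agree
```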

The definition and integral representations: for p > 0, most properties of E_p(x) follow straightforwardly from those of Γ(p, x). The generalized exponential integral is defined by the integral
E_p(x) = x^{p−1} ∫_x^∞ t^{−p} e^{−t} dt.
An alternative form follows directly from the upper incomplete gamma function:
E_p(x) = x^{p−1} Γ(1 − p, x).
For the generalized exponential integral [see Temme (1996b, p. 180)]: when the path of integration excludes the origin and does not cross the negative real axis, this defines the principal value of E_p(x), and unless indicated otherwise in the DLMF principal values are assumed. Other integral representations are given by
E_p(x) = ∫₁^∞ t^{−p} e^{−xt} dt,  |arg x| < π/2,
and the alternative form
E_p(x) = (x^{p−1} e^{−x}/Γ(p)) ∫₀^∞ e^{−xt} t^{p−1}/(1 + t) dt,  |arg x| < π/2, p > 0.

Integral representations of Mellin–Barnes type for E_p(x) are also available. The behavior of the function is shown in the graphics below; we will use the relation shown in Figure 3.6.

Figure 3.6: Plot of E_p(x), 0 ≤ x ≤ 3, 0 ≤ p ≤ 8.


In Figures 3.7-3.10, height corresponds to the absolute value of the function and color to the phase.

Figure 3.7: Plot E p ( x  iy ) ,  4  x  4,  4  y  4, p  1 / 2 . Principal value, there is a branch cut along the negative real axis.

Figure 3.8: Plot E p ( x  iy ) ,  4  x  4,  4  y  4, p  1 . Principal value, there is a branch cut along the negative real axis.

118

Figure 3.9: Plot E p ( x  iy ) ,  3  x  3,  3  y  3, p  3 / 2 . Principal value, there is a branch cut along the negative real axis.

Figure 3.10: Plot E p ( x  iy ) ,  3  x  3,  3  y  3, p  2 .


Example (Applications of the generalized gamma function): The volume of an n-dimensional hypersphere of radius R is
V_n = π^{n/2} R^n / Γ((n/2) + 1),
and the gamma function also gives the factorial on the complex plane. We will use the relation shown in the figure below (Figure 3.11).

Figure 3.11: Plot of the amplitude and phase of the factorial of a complex argument.
Representation through the gamma function allows evaluation of the factorial of a complex argument. Equilines of the amplitude and phase of the factorial are shown in Figure 3.11. Let f = ρ·exp(iφ) = (x + iy)! = Γ(x + iy + 1). Several levels of constant modulus (amplitude) ρ = const and constant phase φ = const are shown. The grid covers the range −3 ≤ x ≤ 3, −2 ≤ y ≤ 2, with unit step. The scratched line shows the level φ = ±π.

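The hypersphere formula is easy to exercise numerically. The sketch below (standard library only) checks it against the familiar low-dimensional cases V₁ = 2R, V₂ = πR², V₃ = (4/3)πR³.

```python
import math

def hypersphere_volume(n, r):
    """Volume of the n-dimensional ball of radius r: pi^(n/2) r^n / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

r = 2.0
print(hypersphere_volume(1, r), 2 * r)                     # length of an interval
print(hypersphere_volume(2, r), math.pi * r ** 2)          # area of a disk
print(hypersphere_volume(3, r), 4 / 3 * math.pi * r ** 3)  # volume of a ball
```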

3-2-Incomplete Generalized Gamma Functions: In mathematics, the gamma function is defined by a definite integral. The incomplete gamma function is defined as an integral of the same integrand with a variable limit. There are two varieties of the incomplete gamma function: the upper incomplete gamma function, for the case in which the lower limit of integration is variable (i.e. the "upper" limit is fixed at ∞), and the lower incomplete gamma function, in which the upper limit of integration varies. Generalizing the integral definition of the gamma function, we define the incomplete generalized gamma functions by the variable-limit integrals
γ_{a,b}(α; β, δ, x) = ∫₀^x (t + a)^{α−1} e^{−δ(t+b)^β} dt;  α > 0, a ≥ 0, b ≥ 0, β, δ ∈ IR⁺,
and
Γ_{a,b}(α; β, δ, x) = ∫_x^∞ (t + a)^{α−1} e^{−δ(t+b)^β} dt;  α > 0, a ≥ 0, b ≥ 0, β, δ ∈ IR⁺.
Clearly, the two functions are related, for
γ_{a,b}(α; β, δ, x) + Γ_{a,b}(α; β, δ, x) = Γ_{a,b}(α; β, δ).
The simple case of the incomplete generalized gamma functions is
γ(α, x) = ∫₀^x e^{−t} t^{α−1} dt, and Γ(α, x) = ∫_x^∞ e^{−t} t^{α−1} dt.
Clearly, the two functions are related, for
γ(α, x) + Γ(α, x) = Γ(α).
This incomplete gamma function relation is important. The lower incomplete gamma function,
γ(α, x) = ∫₀^x e^{−t} t^{α−1} dt, x ≥ 0,
is defined only for α > 0.
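The complement relation γ(α, x) + Γ(α, x) = Γ(α) can be verified directly by quadrature (a standard-library sketch; α, x and the truncation point are arbitrary choices):

```python
import math

def trapezoid(f, lo, hi, n=200_000):
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

alpha, x = 2.7, 1.8
f = lambda t: t ** (alpha - 1) * math.exp(-t)

lower = trapezoid(f, 0.0, x)    # gamma(alpha, x), lower incomplete
upper = trapezoid(f, x, 80.0)   # Gamma(alpha, x), tail truncated at 80

print(lower + upper, math.gamma(alpha))  # should agree
```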


Figure 3.12: Plot of some cases of γ(α, x).
Although γ(α, x) is well defined for x > 0, typical algorithms do not calculate γ(α, x) for negative x. For large α and sufficiently large x, γ(α, x) may overflow; γ(α, x) is bounded by Γ(α), and users may find this bound a useful guide in determining legal values of α. In software libraries one commonly finds the following quantities. Γ(x) returns the value of the Euler gamma function of x,
Γ(x) = ∫₀^∞ t^{x−1} e^{−t} dt.
Γ(α, x) returns the value of the (upper) incomplete gamma function of x with parameter α, with Γ(α, 0) = Γ(α), where
Γ(α, x) = ∫_x^∞ t^{α−1} e^{−t} dt.
ln(Γ(x)) returns the natural log of the Euler gamma function, evaluated at x. Usually the computation of the gamma function Γ(α) is divided into


two components, the incomplete gamma function and the complement of the incomplete gamma function:
γ(α, x) = ∫₀^x t^{α−1} e^{−t} dt, and Γ(α, x) = ∫_x^∞ t^{α−1} e^{−t} dt,
and the recurrence formula is
Γ(α) = γ(α, x) + Γ(α, x).
Accurate computation of the complement of the incomplete gamma function, Γ(α, x), is not an easy task. Computer scientists, mathematicians, and statisticians have worked for more than half a century approximating Γ(α, x). However, the results are still not satisfactory. In the early 1970's, Dingle [1973] and Olver [1974] developed an asymptotic expansion for the upper incomplete gamma function:
Γ(α, x) ~ e^{−x} x^{α−1} [1 + (α − 1)/x + (α − 1)(α − 2)/x² + ...].

The choice of employing γ(a, x) or Γ(a, x) is purely a matter of convenience. If the parameter a is a positive integer n, the integrals may be evaluated completely to yield
γ(n, x) = (n − 1)! [1 − e^{−x} Σ_{s=0}^{n−1} x^s/s!],
Γ(n, x) = (n − 1)! e^{−x} Σ_{s=0}^{n−1} x^s/s!,  n = 1, 2, ...

These approximations are o(x) only. For large values of x and α, Temme [1975] obtained a better approximation in terms of the complementary error function:
P(α, x) = γ(α, x)/Γ(α) = (1/2) erfc(−η√(α/2)) − R(α, x), and
Q(α, x) = Γ(α, x)/Γ(α) = (1/2) erfc(η√(α/2)) + R(α, x),
where η is a function of x/α determined by (1/2)η² = λ(x/α), with λ(t) = t − 1 − ln t and η taking the sign of x/α − 1, and the remainder admits an expansion of the form
R(α, x) ≈ (e^{−(1/2)αη²}/√(2πα)) Σ_{k≥0} c_k(η) α^{−k};
the coefficients c_k(η) can be found in [Temme 1975]. Moreover, Temme [1979] extended his uniform asymptotic expansions to new results that included complex variables and estimations for the remainder. Fettis [1979] used an asymptotic expansion to estimate the upper percentage points of the chi-square distribution. These four mathematicians characterized important mathematical properties and made significant contributions to the study of the gamma function. In 1979, Gautschi [1979] proposed a computational procedure for the incomplete gamma function γ(α, x) and its complementary function, Γ(α, x). Gautschi evaluated
γ*(α, x) = x^{−α} γ(α, x)/Γ(α)
by a power series to obtain
γ*(α, x) = M(α, α + 1; −x)/Γ(α + 1),
where M(α, α + 1; −x) is the confluent hypergeometric power series, and then Γ(α, x) = Γ(α)[1 − x^α γ*(α, x)].

3-3-Generalized Digamma Function: In mathematics, the digamma function is a special function given by the logarithmic derivative of the gamma function (or, depending on the definition, the logarithmic derivative of the factorial). Because of this ambiguity, two different notations are sometimes (but not always) used, with
ψ(x) = (d/dx) ln Γ(x) = Γ′(x)/Γ(x)
defined as the logarithmic derivative of the gamma function Γ(x), and
F(x) = (d/dx) ln(x!)
defined as the logarithmic derivative of the factorial function; the two are connected by the relationship
F(x) = ψ(x + 1).

124

The digamma function is defined as the logarithmic derivative of the gamma function;

( x ) 

d ( x ) ln( x )  dx ( x )

It is the first of the polygamma functions. As may be noted from the three definitions in the previous section, it is inconvenient to deal with the derivatives of the gamma or factorial function directly. Instead, it is customary to take the natural logarithm of the factorial function, convert the product to a sum, and then differentiate, that is;

n! nz ( z  1)( z  2) ( z  n) ln( z!)  limln(n!)  z ln n  ln( z  1)  ln( z  2)    ln( z  n), z! z( z )  lim

n 

n

Differentiating with respect to z, we obtain; d 1 1 1 ln( z! )  F ( z )  lim (ln n    ), n dz z 1 z  2 zn F  z 

1 1 1    lim  ln n    ......   n  z 1 z  2 zn 

(1)

d ln  z ! dz

  1 z  1            n n 1  z  n n 1 n(n  z )

(8)

Which defines F (z ) , the digamma function. From the definition of the Euler constant, the above equation may be rewritten as; 

F ( z )     ( n 1

 1 1 z  )     . zn n n 1 n( n  z )

The n th derivative of (x ) is called the polygamma function, denoted

 n (x ) . The notation  0 ( x)  ( x) . We will use the relation shown in the figure below in figure3.13.
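The series form of $F(z)$ gives a direct, if slow, way to compute the digamma function. The sketch below (our own helper, standard library only) compares $\psi(x) = F(x-1)$ with a central-difference derivative of $\ln\Gamma(x)$ via `math.lgamma`:

```python
import math

EULER_GAMMA = 0.5772156649015329

def digamma(x, terms=1_000_000):
    """psi(x) = F(x-1) = -gamma + sum_{n>=1} (x-1)/(n(n+x-1)), for x > 0."""
    z = x - 1.0
    return -EULER_GAMMA + sum(z / (n * (n + z)) for n in range(1, terms))

x, h = 3.7, 1e-5
numeric = (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)  # d/dx ln Gamma(x)
print(digamma(x), numeric)  # agree to roughly 5 decimal places
```

The truncated series has an $O(z/N)$ tail, so in practice one accelerates it; the direct sum is kept here only to mirror the formula in the text.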

Figure 3.13: Plot of $\psi_0(x) = \psi(x)$.

The polygamma functions are defined by

$$F^{(m)}(z) = \frac{d^m}{dz^m}F(z) = \frac{d^{m+1}}{dz^{m+1}}\ln(z!) = (-1)^{m+1}\,m!\sum_{n=1}^{\infty}\frac{1}{(z+n)^{m+1}}, \qquad m = 1, 2, 3, \ldots$$

The logarithm of the absolute value of the gamma function, $\log|\Gamma(x)|$, is also frequently computed. We will use the relation shown in Figure 3.14.

Figure 3.14: Plot of $\log|\Gamma(x)|$.

We shall first derive an important formula for the gamma function $\Gamma(x)$ that we will need later. For $0 < x < 1$ we have (Kummer's Fourier expansion)

$$\ln\Gamma(x) = \frac{1}{2}\ln 2\pi + \sum_{n=1}^{\infty}\frac{\cos 2\pi nx}{2n} + \sum_{n=1}^{\infty}\frac{\gamma + \ln 2\pi n}{\pi n}\sin 2\pi nx.$$

Replacing $x$ by $1-x$, we have

$$\ln\Gamma(1-x) = \frac{1}{2}\ln 2\pi + \sum_{n=1}^{\infty}\frac{\cos 2\pi nx}{2n} - \sum_{n=1}^{\infty}\frac{\gamma + \ln 2\pi n}{\pi n}\sin 2\pi nx.$$

Subtracting the two formulas leads to

$$\ln\frac{\Gamma(x)}{\Gamma(1-x)} = \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{\gamma+\ln 2\pi n}{n}\sin 2\pi nx = \frac{2}{\pi}\left[(\gamma+\ln 2\pi)\sum_{n=1}^{\infty}\frac{\sin 2\pi nx}{n} + \sum_{n=1}^{\infty}\frac{\ln n}{n}\sin 2\pi nx\right].$$

So, using $\sum_{n=1}^{\infty}\frac{\sin 2\pi nx}{n} = \frac{\pi}{2}(1-2x)$,

$$\ln\frac{\Gamma(x)}{\Gamma(1-x)} = (\gamma+\ln 2\pi)(1-2x) + \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{\ln n}{n}\sin 2\pi nx.$$

The polygamma functions at unit argument are related to the Riemann zeta function, shown in Figure 3.15:

$$F^{(m)}(0) = \psi^{(m)}(1) = (-1)^{m+1}\,m!\,\zeta(m+1), \qquad \zeta(m) = \sum_{n=1}^{\infty}\frac{1}{n^m},$$

and more generally

$$\frac{d^{n+1}}{dz^{n+1}}\ln\Gamma(z) = F^{(n)}(z-1) = (-1)^{n+1}\,n!\sum_{k=1}^{\infty}\frac{1}{(z-1+k)^{n+1}}.$$

Figure 3.15: Plot of the Riemann zeta function.
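The case $m = 1$ of the zeta relation says that the trigamma value $\psi'(1) = F'(0)$ equals $\zeta(2) = \pi^2/6$; a small numerical sketch:

```python
import math

# Partial sum of zeta(2) = sum 1/n^2.
zeta2 = sum(1.0 / (n * n) for n in range(1, 200000))

# psi'(1) as a second-order central difference of ln Gamma at x = 1.
h = 1e-3
trigamma_1 = (math.lgamma(1 + h) - 2 * math.lgamma(1.0) + math.lgamma(1 - h)) / (h * h)

print(zeta2, math.pi ** 2 / 6, trigamma_1)  # all close to 1.64493
```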

Results:

1-Alternative forms of the Gamma function: Let $x = \beta^2$, so $dx = 2\beta\,d\beta$; then

$$\Gamma(\alpha) = \int_0^{\infty} e^{-x}x^{\alpha-1}\,dx = \int_0^{\infty} e^{-\beta^2}\beta^{2\alpha-2}\cdot 2\beta\,d\beta = 2\int_0^{\infty} e^{-\beta^2}\beta^{2\alpha-1}\,d\beta.$$

2-An alternative definition of $\Gamma(\alpha)$, due to Euler, is

$$\Gamma(\alpha) = \lim_{n\to\infty}\frac{n!\,n^{\alpha}}{\alpha(\alpha+1)\cdots(\alpha+n)}.$$

This form is valid for positive and negative $\alpha$ and shows clearly the singularities of $\Gamma(\alpha)$ at $\alpha = 0, -1, -2, \ldots$ and so on.

Example (1): Evaluation of the Gamma integral. Consider the integral

$$I_n = \int_0^{\infty} x^n e^{-x}\,dx.$$

For $n = 0$ the evaluation is trivial, giving

$$I_0 = \int_0^{\infty} e^{-x}\,dx = \left[-e^{-x}\right]_0^{\infty} = 1.$$

For positive integer $n$, integration by parts gives

$$I_n = -\int_0^{\infty} x^n\,de^{-x} = \left[-x^n e^{-x}\right]_0^{\infty} + n\int_0^{\infty} x^{n-1}e^{-x}\,dx = n\,I_{n-1},$$

where, by repeated application of L'Hospital's rule, we have

$$\lim_{x\to\infty} x^n e^{-x} = \lim_{x\to\infty}\frac{x^n}{e^x} = \lim_{x\to\infty}\frac{n x^{n-1}}{e^x} = \cdots = \lim_{x\to\infty}\frac{n!}{e^x} = 0.$$

Repeated application of the recurrence gives

$$I_n = n(n-1)I_{n-2} = n(n-1)\cdots(n-m)I_{n-m-1} = \cdots = n(n-1)\cdots 1\cdot I_0 = n!\,I_0,$$

so that

$$I_n = \int_0^{\infty} x^n e^{-x}\,dx = n!$$
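The recurrence $I_n = n\,I_{n-1}$ with $I_0 = 1$ is easy to exercise numerically; a minimal sketch (function names are ours):

```python
import math

def I_recursive(n):
    """I_n = n * I_{n-1}, starting from I_0 = 1."""
    value = 1.0
    for k in range(1, n + 1):
        value *= k
    return value

def I_numeric(n, upper=80.0, steps=400000):
    """Midpoint quadrature of x^n e^{-x} on [0, upper]; the tail beyond is negligible."""
    h = upper / steps
    return h * sum(((k + 0.5) * h) ** n * math.exp(-(k + 0.5) * h) for k in range(steps))

print(I_recursive(6), I_numeric(6), math.factorial(6))  # all approximately 720
```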

Example (3): Evaluate the integral

$$\alpha = \int_0^{\infty} t^{5.2} e^{-t^2}\,dt.$$

Solution: Put $t^2 = x$; then $t = x^{1/2}$, $dt = \frac{1}{2}x^{-1/2}\,dx$, and

$$\alpha = \frac{1}{2}\int_0^{\infty} x^{2.1} e^{-x}\,dx = \frac{1}{2}\Gamma(3.1) = \frac{1}{2}(2.1)(1.1)(0.1)\,\Gamma(0.1).$$

From tables, $\Gamma(0.1) = 9.5135$; then $\alpha \approx 1.10$.

Example (4): Evaluate

$$\alpha = \int_0^1 \sqrt{\ln(1/t)}\,dt.$$

Solution: Let $t = e^{-x}$; then

$$\alpha = \int_0^{\infty} x^{1/2} e^{-x}\,dx = \Gamma\!\left(\frac{3}{2}\right) = \frac{\sqrt{\pi}}{2}.$$

 /2

 tan

Example (5): If  

3

  tan5  e tan  d Evaluate  . 2

0

tan2   x , then;

Solution: Let



1 1 1    xe  x dx  2  20 2 2 1

Example (6): If $\alpha = \int_0^1 (x\ln x)^3\,dx$, evaluate $\alpha$.

Solution: Let $-t = \ln x$; then $x = e^{-t}$, $dx = -e^{-t}\,dt$, and

$$\int_0^1 (x\ln x)^3\,dx = \int_0^{\infty} e^{-3t}(-t)^3 e^{-t}\,dt = -\int_0^{\infty} t^3 e^{-4t}\,dt.$$

Let $u = 4t$, so $dt = \frac{1}{4}\,du$; then

$$\alpha = -\frac{1}{256}\int_0^{\infty} u^3 e^{-u}\,du = -\frac{\Gamma(4)}{256} = -\frac{3\cdot2\cdot1}{256} = -\frac{3}{128}.$$

Problems:
1) If $\Gamma(1.1) = 0.951$, find $\Gamma(4.1)$ and $\Gamma(-3.9)$.
2) Evaluate $\int_1^{\infty} x^2 e^{-2x}\,dx$.
3) Evaluate $\int_0^1 x^2\left[\ln(1/x)\right]^3 dx$.
4) Evaluate $\int_0^{\infty} t^4 e^{-2t^3}\,dt$.

Example (7): Evaluate

$$\alpha = \int_0^{\infty}\frac{u^2}{(1+u)^5}\,du. \qquad \text{Hint: let } v = \frac{u}{1+u}.$$

Solution: Let $v = \dfrac{u}{1+u}$; then $1-v = \dfrac{1}{1+u}$ and $dv = \dfrac{du}{(1+u)^2}$, so

$$\alpha = \int_0^1 v^2(1-v)\,dv.$$

Let $v = \sin^2\theta$; then $dv = 2\sin\theta\cos\theta\,d\theta$, and

$$\alpha = 2\int_0^{\pi/2}\sin^4\theta\,(1-\sin^2\theta)\sin\theta\cos\theta\,d\theta = 2\int_0^{\pi/2}\sin^5\theta\cos^3\theta\,d\theta = \beta(3,2) = \frac{\Gamma(3)\Gamma(2)}{\Gamma(5)} = \frac{2}{24} = \frac{1}{12}.$$

3-4-The Generalized Beta-Function: This section introduces a generalized four-parameter Beta function; in reliability applications the failure and repair rates are taken to be generalized beta variables. We define the Beta function $\beta_{a,b}(\lambda,\mu)$ by the integral

$$\beta_{a,b}(\lambda,\mu) = \int_0^1 (x+a)^{\lambda-1}(b-x)^{\mu-1}\,dx.$$

The beta function requires that $\lambda > 0$ and $\mu > 0$. It underflows for large arguments.

Figure 3.16: Plot of $\beta_{0,1}(\lambda,\mu)$.

Case one: Change variable $y = x+a$, so that $x = y-a$ and $dy = dx$; the limits transform as $x = 0 \Rightarrow y = a$ and $x = 1 \Rightarrow y = 1+a$. Then

$$(x+a)^{\lambda-1}(b-x)^{\mu-1} = y^{\lambda-1}\bigl[(b+a)-y\bigr]^{\mu-1},$$

and the integral becomes

$$\beta_{a,b}(\lambda,\mu) = \int_0^1 (x+a)^{\lambda-1}(b-x)^{\mu-1}\,dx = \int_a^{1+a} y^{\lambda-1}\bigl[(b+a)-y\bigr]^{\mu-1}\,dy.$$

If moreover $b+a = 1$, then $a = 1-b$ and $1+a = 2-b$, and the integral reduces to

$$\beta_{a,b}(\lambda,\mu) = \int_{1-b}^{2-b} y^{\lambda-1}(1-y)^{\mu-1}\,dy.$$

Case two: Change variable $y = b-x$, so that $x = b-y$ and $dy = -dx$; the limits transform as $x = 0 \Rightarrow y = b$ and $x = 1 \Rightarrow y = b-1$. Then

$$(x+a)^{\lambda-1}(b-x)^{\mu-1} = \bigl[(b+a)-y\bigr]^{\lambda-1}y^{\mu-1},$$

and

$$\beta_{a,b}(\lambda,\mu) = \int_{b-1}^{b}\bigl[(b+a)-y\bigr]^{\lambda-1}y^{\mu-1}\,dy.$$

Remark: The case $a = 0$, $b = 1$ of the four-parameter generalized beta is

$$\beta_{0,1}(p,q) = \int_0^1 x^{p-1}(1-x)^{q-1}\,dx = \beta(p,q) = \beta(q,p).$$

Suppose $u = 1-x$; then $x = 1-u$ and $dx = -du$, so

$$\beta_{0,1}(p,q) = \int_0^1 (1-u)^{p-1}u^{q-1}\,du = \beta(q,p),$$

hence $\beta(p,q) = \beta(q,p)$. Going back to the definition

$$\beta_{0,1}(p,q) = \int_0^1 x^{p-1}(1-x)^{q-1}\,dx,$$

and integrating by parts with $u = (1-x)^{q-1}$, $dv = x^{p-1}\,dx$, we get

$$\beta_{0,1}(p,q) = \left[\frac{x^p}{p}(1-x)^{q-1}\right]_0^1 + \frac{q-1}{p}\int_0^1 x^p(1-x)^{q-2}\,dx = \frac{q-1}{p}\int_0^1 x^p(1-x)^{q-2}\,dx.$$

Then, writing $x^p = x^{p-1}\left[1-(1-x)\right]$,

$$\int_0^1 x^p(1-x)^{q-2}\,dx = \int_0^1 x^{p-1}(1-x)^{q-2}\,dx - \int_0^1 x^{p-1}(1-x)^{q-1}\,dx = \beta_{0,1}(p,q-1) - \beta_{0,1}(p,q),$$

so that

$$\beta_{0,1}(p,q) = \frac{q-1}{p}\left[\beta_{0,1}(p,q-1) - \beta_{0,1}(p,q)\right],$$

and therefore

$$(p+q-1)\,\beta_{0,1}(p,q) = (q-1)\,\beta_{0,1}(p,q-1).$$

By the symmetry relation $\beta_{0,1}(p,q) = \beta_{0,1}(q,p)$ it also follows that

$$(p+q-1)\,\beta_{0,1}(p,q) = (p-1)\,\beta_{0,1}(p-1,q),$$

and, applying both recurrences in turn,

$$(p+q-1)(p+q-2)\,\beta_{0,1}(p,q) = (p-1)(q-1)\,\beta_{0,1}(p-1,q-1).$$

An alternative form of the Beta function, obtained from (1) by putting $x = \sin^2\theta$, is

$$\beta_{0,1}(p,q) = \int_0^{\pi/2}\sin^{2p-2}\theta\,\cos^{2q-2}\theta\cdot 2\sin\theta\cos\theta\,d\theta = 2\int_0^{\pi/2}\sin^{2p-1}\theta\,\cos^{2q-1}\theta\,d\theta. \qquad (2)$$

The beta function $\beta(p,q)$ is the name used by Legendre and by Whittaker and Watson (1990) for the beta integral (also called the Eulerian integral of the first kind). In terms of the gamma function it is defined by

$$\beta_{0,1}(p,q) = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)} = \frac{(p-1)!\,(q-1)!}{(p+q-1)!}.$$

The beta function $\beta(p,q)$ is implemented in Mathematica as Beta[a, b]. To derive the integral representation of the beta function, write the product of two factorials as

$$m!\,n! = \int_0^{\infty} e^{-u}u^m\,du\int_0^{\infty} e^{-v}v^n\,dv.$$

Now let $u = x^2$, $v = y^2$, so

$$m!\,n! = 4\int_0^{\infty} e^{-x^2}x^{2m+1}\,dx\int_0^{\infty} e^{-y^2}y^{2n+1}\,dy = 4\int_0^{\infty}\!\!\int_0^{\infty} e^{-(x^2+y^2)}\,x^{2m+1}y^{2n+1}\,dx\,dy.$$

Transforming to polar coordinates with $x = r\cos\theta$, $y = r\sin\theta$,

$$m!\,n! = 4\int_0^{\pi/2}\!\!\int_0^{\infty} e^{-r^2}(r\cos\theta)^{2m+1}(r\sin\theta)^{2n+1}\,r\,dr\,d\theta = 4\int_0^{\infty} e^{-r^2}r^{2m+2n+3}\,dr\int_0^{\pi/2}\cos^{2m+1}\theta\,\sin^{2n+1}\theta\,d\theta$$

$$= 2(m+n+1)!\int_0^{\pi/2}\cos^{2m+1}\theta\,\sin^{2n+1}\theta\,d\theta.$$

The beta function is then defined by

$$\beta_{0,1}(m+1, n+1) = 2\int_0^{\pi/2}\cos^{2m+1}\theta\,\sin^{2n+1}\theta\,d\theta = \frac{m!\,n!}{(m+n+1)!}.$$

Rewriting the arguments then gives the usual form for the beta function,

$$\beta_{0,1}(p,q) = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)} = \frac{(p-1)!\,(q-1)!}{(p+q-1)!}.$$

By symmetry,

$$\beta_{0,1}(p,q) = \beta_{0,1}(q,p).$$

The general trigonometric form is

$$\int_0^{\pi/2}\cos^m x\,\sin^n x\,dx = \frac{1}{2}\,\beta_{0,1}\!\left(\frac{n+1}{2},\,\frac{m+1}{2}\right).$$

This equation can be transformed to an integral over polynomials by letting $u = \cos^2\theta$:

$$\beta_{0,1}(m+1,n+1) = \frac{m!\,n!}{(m+n+1)!} = \int_0^1 u^m(1-u)^n\,du,$$

$$\beta_{0,1}(m,n) = \frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)} = \int_0^1 u^{m-1}(1-u)^{n-1}\,du.$$

For any $p, q$ with $p, q > 0$, $\beta_{0,1}(p,q) = \beta_{0,1}(q,p)$. To put it in a form which can be used to derive the Legendre duplication formula, let $u = x^2$, so $du = 2x\,dx$, and

$$\beta_{0,1}(m,n) = \int_0^1 x^{2(m-1)}(1-x^2)^{n-1}\,2x\,dx = 2\int_0^1 x^{2m-1}(1-x^2)^{n-1}\,dx.$$

To put it in a form which can be used to develop integral representations of the Bessel and hypergeometric functions, substitute $x = u/(1+u)$, so that

$$\beta_{0,1}(m+1,n+1) = \int_0^{\infty}\frac{u^m}{(1+u)^{m+n+2}}\,du.$$

Derivatives of the beta function are given by

$$\frac{d}{dy}\beta_{0,1}(x,y) = \beta_{0,1}(x,y)\left[\psi_0(y) - \psi_0(x+y)\right],$$

$$\frac{d^2}{dx^2}\beta_{0,1}(x,y) = \beta_{0,1}(x,y)\left[\bigl(\psi_0(x)-\psi_0(x+y)\bigr)^2 + \psi_1(x) - \psi_1(x+y)\right],$$

$$\frac{d^2}{dy^2}\beta_{0,1}(x,y) = \beta_{0,1}(x,y)\left[\bigl(\psi_0(y)-\psi_0(x+y)\bigr)^2 + \psi_1(y) - \psi_1(x+y)\right],$$

$$\frac{d^2}{dx\,dy}\beta_{0,1}(x,y) = \beta_{0,1}(x,y)\left[\bigl(\psi_0(x)-\psi_0(x+y)\bigr)\bigl(\psi_0(y)-\psi_0(x+y)\bigr) - \psi_1(x+y)\right],$$

where $\psi_n(x)$ is the polygamma function. Various identities can be derived using the Gauss multiplication formula:

$$\beta_{0,1}(np, nq) = \frac{\Gamma(np)\Gamma(nq)}{\Gamma(n(p+q))} = n^{-nq}\,\frac{\beta_{0,1}(p,q)\,\beta_{0,1}\!\left(p+\frac1n,q\right)\cdots\beta_{0,1}\!\left(p+\frac{n-1}{n},q\right)}{\beta_{0,1}(q,q)\,\beta_{0,1}(2q,q)\,\beta_{0,1}(3q,q)\cdots\beta_{0,1}\bigl((n-1)q,q\bigr)}.$$

Additional identities include

$$\beta_{0,1}(p,q+1) = \frac{q}{p}\,\beta_{0,1}(p+1,q), \qquad \beta_{0,1}(p,q) = \beta_{0,1}(p+1,q) + \beta_{0,1}(p,q+1), \qquad \beta_{0,1}(p,q+1) = \frac{q}{p+q}\,\beta_{0,1}(p,q).$$

If $n$ is a positive integer, then

$$\beta_{0,1}(p,n+1) = \frac{1\cdot2\cdot3\cdots n}{p(p+1)\cdots(p+n)}.$$

Also

$$\beta_{0,1}(p,p)\,\beta_{0,1}\!\left(p+\tfrac12,\,p+\tfrac12\right) = \frac{\pi}{2^{4p-1}\,p}, \qquad \beta_{0,1}(p,q)\,\beta_{0,1}(p+q,r) = \beta_{0,1}(q,r)\,\beta_{0,1}(q+r,p).$$

The beta function is also given by the infinite product

$$\beta_{0,1}(x,y) = \frac{x+y}{xy}\prod_{k=1}^{\infty}\frac{1+(x+y)/k}{(1+x/k)(1+y/k)}.$$

Gosper gave general formulas for the products $\prod_{k}\beta_{0,1}\!\left(\frac{k}{2n+1}+a,\,\frac{k}{2n+1}+b\right)$ (odd order) and $\prod_{k}\beta_{0,1}\!\left(\frac{k}{2n}+a,\,\frac{k}{2n}+b\right)$ (even order), expressing them through beta functions of the multiplied arguments; these are immediate consequences of the analogous multiplication identities for gamma functions. Plugging $n = 1$ and $n = 2$ into the general formulas gives special cases that express the triple product $\beta_{0,1}(a,b)\,\beta_{0,1}\!\left(a+\tfrac13,b+\tfrac13\right)\beta_{0,1}\!\left(a+\tfrac23,b+\tfrac23\right)$ in terms of $\beta_{0,1}(3a,3b)$, and the corresponding quadruple product $\beta_{0,1}(a,b)\,\beta_{0,1}\!\left(a+\tfrac14,b+\tfrac14\right)\beta_{0,1}\!\left(a+\tfrac24,b+\tfrac24\right)\beta_{0,1}\!\left(a+\tfrac34,b+\tfrac34\right)$ in terms of $\beta_{0,1}(4a,4b)$.

3-5-Relation of Beta-Function to Gamma-Function: The beta function requires that $a > 0$ and $b > 0$. It underflows for large arguments;

$$\beta(a,b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)} = \int_0^1 t^{a-1}(1-t)^{b-1}\,dt.$$

The incomplete beta function is defined by

$$I_x(a,b) = \frac{\beta_x(a,b)}{\beta(a,b)} = \frac{1}{\beta(a,b)}\int_0^x t^{a-1}(1-t)^{b-1}\,dt.$$

The incomplete beta function requires that $0 \le x \le 1$, $a > 0$, and $b > 0$. It underflows for sufficiently small $x$ and large $a$; this underflow is not reported as an error. Instead, the value zero is returned.

Now, with the substitution $x = t^2$,

$$\Gamma(\alpha) = \int_0^{\infty} e^{-x}x^{\alpha-1}\,dx = 2\int_0^{\infty} e^{-t^2}t^{2\alpha-1}\,dt,$$

so that

$$\Gamma(p)\Gamma(q) = 4\int_0^{\infty} e^{-x^2}x^{2p-1}\,dx\int_0^{\infty} e^{-y^2}y^{2q-1}\,dy = 4\int_0^{\infty}\!\!\int_0^{\infty} e^{-(x^2+y^2)}x^{2p-1}y^{2q-1}\,dx\,dy.$$

Transform the integral to polar coordinates, $x = r\cos\theta$, $y = r\sin\theta$, for $0 \le r < \infty$ and $0 \le \theta \le \pi/2$. Using the Jacobian

$$J = \frac{\partial(x,y)}{\partial(r,\theta)} = r, \qquad dx\,dy = r\,dr\,d\theta,$$

we obtain

$$\Gamma(p)\Gamma(q) = 4\int_{\theta=0}^{\pi/2}\int_{r=0}^{\infty} e^{-r^2}r^{2p+2q-1}\cos^{2p-1}\theta\,\sin^{2q-1}\theta\,dr\,d\theta = \Gamma(p+q)\,\beta(q,p) = \Gamma(p+q)\,\beta(p,q),$$

so that

$$\beta(p,q) = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}.$$

Then

$$\beta(1/2,1/2) = \frac{\Gamma(1/2)^2}{\Gamma(1)} = \Gamma(1/2)^2,$$

while from the trigonometric form

$$\beta(1/2,1/2) = 2\int_0^{\pi/2}\sin^{1-1}\theta\,\cos^{1-1}\theta\,d\theta = 2\int_0^{\pi/2}d\theta = \pi.$$

But $\Gamma(1/2) = \int_0^{\infty} e^{-x}x^{-1/2}\,dx > 0$, so

$$\Gamma(1/2) = \sqrt{\pi}.$$

Useful alternative forms of the Beta-Function:

$$\beta(p,q) = \int_0^1 x^{p-1}(1-x)^{q-1}\,dx.$$

Change variables $x = \dfrac{\theta}{1+\theta}$, i.e. $\theta = \dfrac{x}{1-x}$. The limits become $x = 0 \Rightarrow \theta = 0$ and $x = 1 \Rightarrow \theta = \infty$, and

$$1-x = \frac{1}{1+\theta}, \qquad dx = \frac{d\theta}{(1+\theta)^2},$$

so

$$\beta(p,q) = \int_0^{\infty}\left(\frac{\theta}{1+\theta}\right)^{p-1}\left(\frac{1}{1+\theta}\right)^{q-1}\frac{d\theta}{(1+\theta)^2} = \int_0^{\infty}\frac{\theta^{p-1}}{(1+\theta)^{p+q}}\,d\theta.$$

We may split the integral as

$$\beta(p,q) = \int_0^{1}\frac{\theta^{p-1}}{(1+\theta)^{p+q}}\,d\theta + \int_1^{\infty}\frac{\theta^{p-1}}{(1+\theta)^{p+q}}\,d\theta.$$

In the second integral put $\theta = 1/w$, so $d\theta = -dw/w^2$:

$$\int_1^{\infty}\frac{\theta^{p-1}}{(1+\theta)^{p+q}}\,d\theta = \int_0^1\frac{w^{1-p}\,w^{p+q}}{(1+w)^{p+q}}\,\frac{dw}{w^2} = \int_0^1\frac{w^{q-1}}{(1+w)^{p+q}}\,dw.$$

Then

$$\beta(p,q) = \int_0^1\frac{\theta^{p-1}+\theta^{q-1}}{(1+\theta)^{p+q}}\,d\theta.$$
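The finite-range form lends itself to a quick quadrature sketch; here for $p = 5/2$, $q = 1$, where $\beta(5/2,1) = 1/(5/2) = 0.4$:

```python
# Midpoint quadrature of (t^{p-1} + t^{q-1}) / (1+t)^{p+q} on [0, 1].
p, q, steps = 2.5, 1.0, 200000
h = 1.0 / steps
integral = h * sum((t ** (p - 1) + t ** (q - 1)) / (1 + t) ** (p + q)
                   for t in ((k + 0.5) * h for k in range(steps)))
print(integral)  # approximately 0.4
```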

Example (1): Evaluate

$$\int_0^1\frac{dx}{\sqrt{1-x^4}}.$$

Solution: Let $x^4 = u$, so $x = u^{1/4}$ and $dx = \frac14 u^{-3/4}\,du$:

$$\int_0^1\frac{dx}{\sqrt{1-x^4}} = \frac14\int_0^1 u^{-3/4}(1-u)^{-1/2}\,du = \frac14\,\beta\!\left(\frac14,\frac12\right) = \frac{\Gamma(1/4)\,\Gamma(1/2)}{4\,\Gamma(3/4)}.$$

Example (2): Evaluate

$$\int_0^{\pi/2}\sqrt{\sin\theta}\,d\theta.$$

Solution: Let $u = \sin^2\theta$; then

$$\int_0^{\pi/2}\sqrt{\sin\theta}\,d\theta = \frac12\int_0^1 u^{-1/4}(1-u)^{-1/2}\,du = \frac12\,\beta\!\left(\frac34,\frac12\right) = \frac{\Gamma(3/4)\,\Gamma(1/2)}{2\,\Gamma(5/4)}.$$

Since $\Gamma(5/4) = \frac14\Gamma(1/4)$ and $\Gamma(1/2) = \sqrt{\pi}$,

$$\int_0^{\pi/2}\sqrt{\sin\theta}\,d\theta = \frac{2\sqrt{\pi}\,\Gamma(3/4)}{\Gamma(1/4)} \approx 1.198.$$

Example (3): Evaluate

$$\alpha = \int_0^{\infty}\frac{x^{3/2}}{(1+x)^{7/2}}\,dx.$$

Solution: Using $\beta(p,q) = \int_0^{\infty}\theta^{p-1}/(1+\theta)^{p+q}\,d\theta$ with $p-1 = 3/2$ and $p+q = 7/2$, so $p = 5/2$ and $q = 1$:

$$\alpha = \beta\!\left(\frac52, 1\right) = \frac{\Gamma(5/2)\,\Gamma(1)}{\Gamma(7/2)} = \frac{\Gamma(5/2)}{(5/2)\,\Gamma(5/2)} = \frac{2}{5} = 0.4.$$

Example (4): Evaluate

$$\alpha = \int_0^{1/4}\frac{(1-4x)^{5/2}}{\sqrt{x}}\,dx.$$

Solution: Let $u = 4x$, so $x = u/4$ and $dx = \frac14\,du$; then

$$\alpha = \int_0^1\left(\frac{u}{4}\right)^{-1/2}(1-u)^{5/2}\,\frac{du}{4} = \frac12\int_0^1 u^{-1/2}(1-u)^{5/2}\,du = \frac12\,\beta\!\left(\frac12,\frac72\right).$$

Now

$$\beta\!\left(\frac12,\frac72\right) = \frac{\Gamma(1/2)\,\Gamma(7/2)}{\Gamma(4)} = \frac{\sqrt{\pi}\cdot\frac52\cdot\frac32\cdot\frac12\,\sqrt{\pi}}{3\cdot2\cdot1} = \frac{15\pi/8}{6} = \frac{5\pi}{16},$$

so $\alpha = \dfrac{5\pi}{32} \approx 0.491$.

3-6-The Error-Function: The error function $\mathrm{erf}(x)$ is defined as

$$\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-u^2}\,du \qquad (1)$$

and clearly represents (apart from the factor $2/\sqrt{\pi}$) the area under the curve $e^{-u^2}$ from $u = 0$ to $u = x$. We see immediately that $\mathrm{erf}(0) = 0$ and that $\mathrm{erf}(\infty) = 1$. Indeed, with $x = u^2$,

$$\int_0^{\infty} e^{-u^2}\,du = \frac{1}{2}\int_0^{\infty} e^{-x}x^{-1/2}\,dx = \frac{1}{2}\Gamma\!\left(\frac12\right) = \frac{\sqrt{\pi}}{2},$$

so that

$$\mathrm{erf}(\infty) = \frac{2}{\sqrt{\pi}}\cdot\frac{\sqrt{\pi}}{2} = 1.$$

The graph of the error function is as shown below.

Figure 3.17: Plot of $\mathrm{erf}(x)$.

The error function is essentially identical to the standard normal cumulative distribution function, denoted $\Phi$ (also named norm(x) in software languages), as they differ only by scaling and translation. Indeed,

$$\Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-t^2/2}\,dt = \frac12\left[1+\mathrm{erf}\!\left(\frac{x}{\sqrt2}\right)\right] = \frac12\,\mathrm{erfc}\!\left(-\frac{x}{\sqrt2}\right),$$

or, rearranged for $\mathrm{erf}$ and $\mathrm{erfc}$,

$$\mathrm{erf}(x) = 2\Phi(x\sqrt2) - 1 = 1 - 2\Phi(-x\sqrt2).$$

Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as

$$Q(x) = \frac12 - \frac12\,\mathrm{erf}\!\left(\frac{x}{\sqrt2}\right) = \frac12\,\mathrm{erfc}\!\left(\frac{x}{\sqrt2}\right).$$

The inverse of $\Phi$ is known as the normal quantile function, or probit function, and may be expressed in terms of the inverse error function as

$$\mathrm{probit}(p) = \Phi^{-1}(p) = \sqrt{2}\,\mathrm{erf}^{-1}(2p-1) = -\sqrt{2}\,\mathrm{erfc}^{-1}(2p).$$

The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics. The error function is a special case of the Mittag-Leffler function, and can also be expressed as a confluent hypergeometric function (Kummer's function):

$$\mathrm{erf}(x) = \frac{2x}{\sqrt{\pi}}\;{}_1F_1\!\left(\frac12,\frac32,-x^2\right).$$

It has a simple expression in terms of the Fresnel integral. In terms of the regularized gamma function $P$ and the incomplete gamma function,

$$\mathrm{erf}(x) = \mathrm{sgn}(x)\,P\!\left(\frac12,x^2\right) = \frac{\mathrm{sgn}(x)}{\sqrt{\pi}}\,\gamma\!\left(\frac12,x^2\right),$$

where $\mathrm{sgn}(x)$ is the sign function.
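Two of these relations are easy to check with Python's built-in `math.erf`; a sketch, where $\gamma(1/2,x^2) = 2\int_0^x e^{-v^2}\,dv$ after the substitution $t = v^2$:

```python
import math

def phi(x):
    """Standard normal CDF expressed through erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def lower_gamma_half(x, steps=200000):
    """gamma(1/2, x^2) = 2 * integral of exp(-v^2) on [0, x], by midpoint quadrature."""
    h = x / steps
    return 2.0 * h * sum(math.exp(-((k + 0.5) * h) ** 2) for k in range(steps))

x = 0.9
print(math.erf(x), 2 * phi(x * math.sqrt(2.0)) - 1)          # erf(x) = 2*Phi(x*sqrt(2)) - 1
print(math.erf(x), lower_gamma_half(x) / math.sqrt(math.pi))  # erf(x) = gamma(1/2, x^2)/sqrt(pi)
```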

3-7-Generalized error function: These generalized functions can equivalently be expressed for $x > 0$ using the error function. The graph of the generalized error functions $E_n(x)$ is shown in Figure 3.18.

Figure 3.18: Plot of $E_n(x)$. Particular cases of the generalized error functions $E_n(x)$: the first curve is $E_1(x) = (1-e^{-x})/\sqrt{\pi}$; the red curve is $E_2(x) = \mathrm{erf}(x)$; the green curve is $E_3(x)$; the blue curve is $E_4(x)$; the gold curve is $E_5(x)$.

Some authors discuss the more general functions

$$E_n(x) = \frac{n!}{\sqrt{\pi}}\int_0^x e^{-t^n}\,dt = \frac{n!}{\sqrt{\pi}}\sum_{p=0}^{\infty}(-1)^p\frac{x^{np+1}}{(np+1)\,p!}.$$

Notable cases are: $E_0(x)$ is a straight line through the origin, $E_0(x) = x/(e\sqrt{\pi})$; $E_2(x)$ is the error function, $\mathrm{erf}(x)$. After division by $n!$, all the $E_n$ for odd $n$ look similar (but not identical) to each other; similarly, the $E_n$ for even $n$ look similar (but not identical) to each other after a simple division by $n!$. All generalized error functions for $n > 0$ look similar on the positive $x$ side of the graph. These generalized functions can equivalently be expressed for $x > 0$ using the gamma function and incomplete gamma function:

$$E_n(x) = \frac{\Gamma(n)\left[\Gamma\!\left(\frac1n\right) - \Gamma\!\left(\frac1n, x^n\right)\right]}{\sqrt{\pi}}, \qquad x > 0.$$

Therefore, we can define the error function in terms of the incomplete gamma function:

$$\mathrm{erf}(x) = 1 - \frac{\Gamma\!\left(\frac12, x^2\right)}{\sqrt{\pi}}, \qquad x > 0.$$
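The series and integral forms of $E_n$ can be compared directly; a sketch for $n = 3$, plus the $n = 2$ case, which recovers the ordinary error function:

```python
import math

def E_series(n, x, terms=60):
    """E_n(x) = (n!/sqrt(pi)) * sum_p (-1)^p x^{np+1} / ((np+1) p!)."""
    s = sum((-1) ** p * x ** (n * p + 1) / ((n * p + 1) * math.factorial(p))
            for p in range(terms))
    return math.factorial(n) / math.sqrt(math.pi) * s

def E_quad(n, x, steps=100000):
    """E_n(x) = (n!/sqrt(pi)) * integral of exp(-t^n) on [0, x], midpoint rule."""
    h = x / steps
    s = sum(math.exp(-((k + 0.5) * h) ** n) for k in range(steps))
    return math.factorial(n) / math.sqrt(math.pi) * h * s

print(E_series(3, 1.2), E_quad(3, 1.2))   # agree
print(E_series(2, 0.7), math.erf(0.7))    # E_2 is erf
```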

3-8-Examples

Example (1): For the infinite-limit form of the first definition, named after Euler, prove

$$\Gamma(z) = \lim_{n\to\infty}\frac{1\cdot2\cdot3\cdots n}{z(z+1)(z+2)\cdots(z+n)}\,n^z, \qquad z \neq 0, -1, -2, -3, \ldots$$

Solution: This definition of $\Gamma(z)$ is useful in developing the infinite-product form of $\Gamma(z)$. Here $z$ may be either real or complex. Replacing $z$ with $z+1$, we have

$$\Gamma(z+1) = \lim_{n\to\infty}\frac{1\cdot2\cdot3\cdots n}{(z+1)(z+2)(z+3)\cdots(z+n+1)}\,n^{z+1} = \lim_{n\to\infty}\frac{n}{z+n+1}\cdot z\cdot\frac{1\cdot2\cdot3\cdots n}{z(z+1)(z+2)\cdots(z+n)}\,n^z = z\,\Gamma(z).$$

This is the basic functional relation for $\Gamma(z)$. Also from the definition,

$$\Gamma(1) = \lim_{n\to\infty}\frac{1\cdot2\cdot3\cdots n}{1\cdot2\cdot3\cdots n\,(n+1)}\,n = \lim_{n\to\infty}\frac{n}{n+1} = 1,$$

so $\Gamma(2) = 1$, $\Gamma(3) = 2\Gamma(2) = 2$, and in general $\Gamma(n) = 1\cdot2\cdot3\cdots(n-1) = (n-1)!$.

Example (2): Prove that

$$\lim_{n\to\infty}F(z,n) = F(z,\infty) = \Gamma(z), \qquad\text{where}\quad F(z,n) = \int_0^n\left(1-\frac{t}{n}\right)^n t^{z-1}\,dt,$$

with $n$ a positive integer.

Solution: To show the equivalence of the two definitions, consider the function of two variables $F(z,n)$ above. Since $\lim_{n\to\infty}(1-t/n)^n = e^{-t}$,

$$\lim_{n\to\infty}F(z,n) = F(z,\infty) = \int_0^{\infty}e^{-t}t^{z-1}\,dt = \Gamma(z).$$

Returning to $F(z,n)$, we evaluate it by successive integrations by parts. For convenience let $u = t/n$; then

$$F(z,n) = n^z\int_0^1(1-u)^n u^{z-1}\,du.$$

Integrating by parts, we obtain

$$\frac{F(z,n)}{n^z} = \left[(1-u)^n\,\frac{u^z}{z}\right]_0^1 + \frac{n}{z}\int_0^1(1-u)^{n-1}u^z\,du = \frac{n}{z}\int_0^1(1-u)^{n-1}u^z\,du.$$

Repeating this, we finally get

$$F(z,n) = n^z\,\frac{n(n-1)\cdots1}{z(z+1)\cdots(z+n-1)}\int_0^1 u^{z+n-1}\,du = \frac{1\cdot2\cdot3\cdots n}{z(z+1)(z+2)\cdots(z+n)}\,n^z.$$

This is identical with the expression on the right side of the Euler limit. Hence $\lim_{n\to\infty}F(z,n) = F(z,\infty) = \Gamma(z)$, completing the proof.

Example (3): Prove the infinite product

$$\frac{1}{\Gamma(z)} = z\,e^{\gamma z}\prod_{n=1}^{\infty}\left(1+\frac{z}{n}\right)e^{-z/n}.$$

Solution: In this third definition $\gamma$ is the Euler-Mascheroni constant,

$$\gamma = \lim_{n\to\infty}\left(1+\frac12+\frac13+\cdots+\frac1n-\ln n\right) = 0.577216\ldots$$

This form can be derived from the original definition by writing it as

$$\Gamma(z) = \lim_{n\to\infty}\frac{1\cdot2\cdots n}{z(z+1)\cdots(z+n)}\,n^z = \lim_{n\to\infty}\frac{1}{z}\,n^z\prod_{m=1}^{n}\left(1+\frac{z}{m}\right)^{-1}.$$

Inverting and using $n^{-z} = e^{(-\ln n)z}$, we obtain

$$\frac{1}{\Gamma(z)} = z\lim_{n\to\infty}e^{(-\ln n)z}\prod_{m=1}^{n}\left(1+\frac{z}{m}\right).$$

Multiplying and dividing by

$$\exp\left[\left(1+\frac12+\frac13+\cdots+\frac1n\right)z\right] = \prod_{m=1}^{n}e^{z/m},$$

we get

$$\frac{1}{\Gamma(z)} = z\,\lim_{n\to\infty}\exp\left[\left(1+\frac12+\frac13+\cdots+\frac1n-\ln n\right)z\right]\cdot\lim_{n\to\infty}\prod_{m=1}^{n}\left(1+\frac{z}{m}\right)e^{-z/m}.$$

Then

$$\frac{1}{\Gamma(z)} = z\,e^{\gamma z}\prod_{n=1}^{\infty}\left(1+\frac{z}{n}\right)e^{-z/n}.$$

Example (4): Prove that

$$B(p,q) = \frac{\Gamma(p)\,\Gamma(q)}{\Gamma(p+q)}.$$

Solution: Using the integral definition of the gamma function related to the factorial, we write the product of two factorials as the product of two integrals. To facilitate a change of variables, we take the integrals over a finite range:

$$m!\,n! = \lim_{a^2\to\infty}\int_0^{a^2}e^{-u}u^m\,du\int_0^{a^2}e^{-v}v^n\,dv, \qquad \Re(m) > -1,\ \Re(n) > -1.$$

Replacing $u$ with $x^2$ and $v$ with $y^2$, we obtain

$$m!\,n! = \lim_{a\to\infty}4\int_0^{a}e^{-x^2}x^{2m+1}\,dx\int_0^{a}e^{-y^2}y^{2n+1}\,dy.$$

Transforming to polar coordinates gives us

$$m!\,n! = \lim_{a\to\infty}4\int_0^{a}e^{-r^2}r^{2m+2n+3}\,dr\int_0^{\pi/2}\cos^{2m+1}\theta\,\sin^{2n+1}\theta\,d\theta = (m+n+1)!\cdot2\int_0^{\pi/2}\cos^{2m+1}\theta\,\sin^{2n+1}\theta\,d\theta.$$

The definite integral, together with the factor 2, has been named the beta function:

$$B(m+1,n+1) = 2\int_0^{\pi/2}\cos^{2m+1}\theta\,\sin^{2n+1}\theta\,d\theta = \frac{m!\,n!}{(m+n+1)!} = B(n+1,m+1).$$

Equivalently, in terms of the gamma function,

$$B(p,q) = \frac{\Gamma(p)\,\Gamma(q)}{\Gamma(p+q)}.$$

Example (5): Prove that

$$\int_0^{\infty}\frac{u^a}{(1+u)^2}\,du = \frac{\pi a}{\sin\pi a}, \qquad 0 < a < 1.$$

Solution: Definite integrals, alternative forms. The beta function is useful in the evaluation of a wide variety of definite integrals. The substitution $t = \cos^2\theta$ converts the trigonometric form to

$$B(m+1,n+1) = \frac{m!\,n!}{(m+n+1)!} = \int_0^1 t^m(1-t)^n\,dt.$$

Replacing $t$ by $x^2$, we obtain

$$\frac{m!\,n!}{2(m+n+1)!} = \int_0^1 x^{2m+1}(1-x^2)^n\,dx.$$

The substitution $t = u/(1+u)$ yields still another useful form,

$$\frac{m!\,n!}{(m+n+1)!} = \int_0^{\infty}\frac{u^m}{(1+u)^{m+n+2}}\,du.$$

Verification of the relation $\pi a/\sin(\pi a) = a!\,(-a)!$: if we take $m = a$, $n = -a$, $0 < a < 1$, then

$$\int_0^{\infty}\frac{u^a}{(1+u)^2}\,du = a!\,(-a)!$$

On the other hand, integrating by parts and then substituting $u = e^x$,

$$\int_0^{\infty}\frac{u^a}{(1+u)^2}\,du = \left[-\frac{u^a}{1+u}\right]_0^{\infty} + a\int_0^{\infty}\frac{u^{a-1}}{1+u}\,du = a\int_{-\infty}^{\infty}\frac{e^{ax}}{1+e^{x}}\,dx = \frac{\pi a}{\sin\pi a},$$

using the classical integral $\int_{-\infty}^{\infty}e^{ax}/(1+e^x)\,dx = \pi/\sin\pi a$ for $0 < a < 1$.

Example (6): Prove that

$$\frac{B\!\left(\frac92,\frac72\right)}{5\,B\!\left(\frac{11}{2},\frac52\right)} = \frac{1}{9},$$

where $B(\cdot,\cdot)$ is the beta function.

Solution: This is achieved by rewriting the ratio in terms of gamma functions and using the recursive formula $\Gamma(x+1) = x\,\Gamma(x)$:

$$\frac{B\!\left(\frac92,\frac72\right)}{5\,B\!\left(\frac{11}2,\frac52\right)} = \frac{\Gamma\!\left(\frac92\right)\Gamma\!\left(\frac72\right)}{\Gamma(8)}\cdot\frac{\Gamma(8)}{5\,\Gamma\!\left(\frac{11}2\right)\Gamma\!\left(\frac52\right)} = \frac{\Gamma\!\left(\frac92\right)\cdot\frac52\,\Gamma\!\left(\frac52\right)}{5\cdot\frac92\,\Gamma\!\left(\frac92\right)\Gamma\!\left(\frac52\right)} = \frac{5/2}{45/2} = \frac{1}{9}.$$

Example (7):
a) Prove that

$$\int_0^{1/2}x^{3/2}(1-2x)^{3/2}\,dx = \left(\frac12\right)^{5/2}\int_0^1 t^{3/2}(1-t)^{3/2}\,dt.$$

b) Compute the integral.

Solution: We need to use the integral representation of the beta function. With $t = 2x$, i.e. $x = t/2$ and $dx = \frac12\,dt$:

$$\int_0^{1/2}x^{3/2}(1-2x)^{3/2}\,dx = \int_0^1\left(\frac{t}{2}\right)^{3/2}(1-t)^{3/2}\,\frac{dt}{2} = \left(\frac12\right)^{5/2}\int_0^1 t^{3/2}(1-t)^{3/2}\,dt = \left(\frac12\right)^{5/2}B\!\left(\frac52,\frac52\right).$$

Now write the beta function in terms of gamma functions:

$$B\!\left(\frac52,\frac52\right) = \frac{\Gamma\!\left(\frac52\right)^2}{\Gamma(5)} = \frac{\left(\frac32\cdot\frac12\sqrt{\pi}\right)^2}{4\cdot3\cdot2\cdot1} = \frac{9\pi/16}{24} = \frac{9\pi}{384}.$$

Substituting this into the previous expression for the integral, we obtain

$$\int_0^{1/2}x^{3/2}(1-2x)^{3/2}\,dx = \left(\frac12\right)^{5/2}\cdot\frac{9\pi}{384} = \frac{3\sqrt{2}\,\pi}{1024} \approx 0.0130.$$

Example (8): Evaluate

$$\int_0^{\infty}\left(x^4 + 3x^{1/2}\right)e^{-x}\,dx.$$

Solution:

$$\int_0^{\infty}\left(x^4+3x^{1/2}\right)e^{-x}\,dx = \int_0^{\infty}x^4e^{-x}\,dx + 3\int_0^{\infty}x^{1/2}e^{-x}\,dx.$$

Now

$$\int_0^{\infty}x^4e^{-x}\,dx = \int_0^{\infty}x^{5-1}e^{-x}\,dx = \Gamma(5) = 4! = 24,$$

and

$$\int_0^{\infty}x^{1/2}e^{-x}\,dx = \int_0^{\infty}x^{3/2-1}e^{-x}\,dx = \Gamma\!\left(\frac32\right) = \frac12\,\Gamma\!\left(\frac12\right) = \frac{\sqrt{\pi}}{2}.$$

Then

$$\int_0^{\infty}\left(x^4+3x^{1/2}\right)e^{-x}\,dx = 24 + \frac{3\sqrt{\pi}}{2}.$$

Example (09): Evaluate $\Gamma(5/2)\cdot\Gamma(-3/2)$.

Solution:

$$\Gamma\!\left(\frac52\right) = \int_0^{\infty}x^{5/2-1}e^{-x}\,dx = \frac32\,\Gamma\!\left(\frac32\right) = \frac32\cdot\frac12\,\Gamma\!\left(\frac12\right) = \frac34\sqrt{\pi}.$$

For the negative argument use $\Gamma(x) = \Gamma(x+1)/x$:

$$\Gamma\!\left(-\frac12\right) = \frac{\Gamma(1/2)}{-1/2} = -2\sqrt{\pi}, \qquad \Gamma\!\left(-\frac32\right) = \frac{\Gamma(-1/2)}{-3/2} = \frac43\sqrt{\pi}.$$

Then

$$\Gamma\!\left(\frac52\right)\cdot\Gamma\!\left(-\frac32\right) = \frac34\sqrt{\pi}\cdot\frac43\sqrt{\pi} = \pi.$$

Example (10): Evaluate

$$\int_0^1\left[x^4(1-x)^3 + \frac{1}{\sqrt[3]{x^2(1-x)}}\right]dx.$$

Solution: For the first part,

$$\int_0^1 x^4(1-x)^3\,dx = \int_0^1 x^{5-1}(1-x)^{4-1}\,dx = \frac{\Gamma(5)\,\Gamma(4)}{\Gamma(9)} = \frac{4!\,3!}{8!} = \frac{1}{280},$$

and for the second part,

$$\int_0^1\frac{dx}{\sqrt[3]{x^2(1-x)}} = \int_0^1 x^{1/3-1}(1-x)^{2/3-1}\,dx = B\!\left(\frac13,\frac23\right) = \frac{\Gamma(1/3)\,\Gamma(2/3)}{\Gamma(1)} = \frac{\pi}{\sin(\pi/3)} = \frac{2\pi}{\sqrt3}.$$

Then

$$\int_0^1\left[x^4(1-x)^3+\frac{1}{\sqrt[3]{x^2(1-x)}}\right]dx = \frac{1}{280} + \frac{2\pi}{\sqrt3}.$$

Example (11): Evaluate

$$\int_0^1\sqrt{x}\,(1-x)\,dx + \int_0^{\infty}x^6e^{-2x}\,dx.$$

Solution: For the first integral,

$$\int_0^1 x^{1/2}(1-x)\,dx = \int_0^1 x^{3/2-1}(1-x)^{2-1}\,dx = B\!\left(\frac32,2\right) = \frac{\Gamma(3/2)\,\Gamma(2)}{\Gamma(7/2)}.$$

Now $\Gamma(3/2) = \frac12\Gamma(1/2) = \frac{\sqrt{\pi}}{2}$ and $\Gamma(7/2) = \frac52\cdot\frac32\cdot\frac12\,\Gamma(1/2) = \frac{15}{8}\sqrt{\pi}$, so

$$\int_0^1 x^{1/2}(1-x)\,dx = \frac{(\sqrt{\pi}/2)\cdot1}{(15/8)\sqrt{\pi}} = \frac{4}{15}.$$

For the second integral, letting $y = 2x$ we get

$$\int_0^{\infty}x^6e^{-2x}\,dx = \frac{1}{128}\int_0^{\infty}y^6e^{-y}\,dy = \frac{\Gamma(7)}{128} = \frac{6!}{128} = \frac{45}{8}.$$

Hence

$$\int_0^1\sqrt{x}\,(1-x)\,dx + \int_0^{\infty}x^6e^{-2x}\,dx = \frac{4}{15} + \frac{45}{8} = \frac{707}{120}.$$

Example (12): Evaluate

$$\int_0^{\infty}\left[\sqrt{x}\,e^{-x^3} + \left(x^3e^{-x}\right)^2\right]dx.$$

Solution:

$$\int_0^{\infty}\left[\sqrt{x}\,e^{-x^3} + (x^3e^{-x})^2\right]dx = \int_0^{\infty}\sqrt{x}\,e^{-x^3}\,dx + \int_0^{\infty}x^6e^{-2x}\,dx.$$

Letting $y = x^3$, so $x = y^{1/3}$ and $dx = \frac13y^{-2/3}\,dy$, we get

$$\int_0^{\infty}\sqrt{x}\,e^{-x^3}\,dx = \frac13\int_0^{\infty}y^{1/6-2/3}e^{-y}\,dy = \frac13\int_0^{\infty}y^{-1/2}e^{-y}\,dy = \frac13\,\Gamma\!\left(\frac12\right) = \frac{\sqrt{\pi}}{3}.$$

Letting $y = 2x$, we get

$$\int_0^{\infty}x^6e^{-2x}\,dx = \frac{1}{128}\int_0^{\infty}y^6e^{-y}\,dy = \frac{\Gamma(7)}{128} = \frac{45}{8}.$$

Then

$$\int_0^{\infty}\left[\sqrt{x}\,e^{-x^3} + (x^3e^{-x})^2\right]dx = \frac{\sqrt{\pi}}{3} + \frac{45}{8}.$$

Example (13): Evaluate

$$\int_0^{\infty}\frac{dx}{1+x^4} + \int_0^{\infty}x^6e^{-2x}\,dx.$$

Solution: Letting $x^4 = y$, so $dx = \frac14y^{-3/4}\,dy$, we get

$$\int_0^{\infty}\frac{dx}{1+x^4} = \frac14\int_0^{\infty}\frac{y^{-3/4}}{1+y}\,dy = \frac14\,B\!\left(\frac14,\frac34\right) = \frac14\cdot\frac{\pi}{\sin(\pi/4)} = \frac{\pi\sqrt2}{4}.$$

Hence

$$\int_0^{\infty}\frac{dx}{1+x^4} + \int_0^{\infty}x^6e^{-2x}\,dx = \frac{\pi\sqrt2}{4} + \frac{45}{8}.$$

Example (14): Evaluate

$$\int_0^{\pi/2}\sin^3x\,\cos^2x\,dx + \int_0^{\pi/2}\sin^4x\,\cos^5x\,dx.$$

Solution:
a. Notice that $2m-1 = 3 \Rightarrow m = 2$ and $2n-1 = 2 \Rightarrow n = 3/2$, so

$$\int_0^{\pi/2}\sin^3x\,\cos^2x\,dx = \frac12\,B\!\left(2,\frac32\right) = \frac12\cdot\frac{\Gamma(2)\,\Gamma(3/2)}{\Gamma(7/2)} = \frac12\cdot\frac{4}{15} = \frac{2}{15}.$$

b. Similarly $2m-1 = 4 \Rightarrow m = 5/2$ and $2n-1 = 5 \Rightarrow n = 3$, so

$$\int_0^{\pi/2}\sin^4x\,\cos^5x\,dx = \frac12\,B\!\left(\frac52,3\right) = \frac{8}{315}.$$

Then

$$\int_0^{\pi/2}\sin^3x\,\cos^2x\,dx + \int_0^{\pi/2}\sin^4x\,\cos^5x\,dx = \frac{2}{15} + \frac{8}{315} = \frac{10}{63}.$$
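Both values can be confirmed by direct quadrature; a small sketch (the helper name is ours):

```python
import math

def trig_int(p, q, steps=200000):
    """Midpoint quadrature of sin^p(x) cos^q(x) on [0, pi/2]."""
    h = (math.pi / 2) / steps
    return h * sum(math.sin(t) ** p * math.cos(t) ** q
                   for t in ((k + 0.5) * h for k in range(steps)))

print(trig_int(3, 2), 2 / 15)    # approximately 0.133333
print(trig_int(4, 5), 8 / 315)   # approximately 0.025397
```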

Example (15): Evaluate

$$\int_0^{\pi/2}\left[\sin^6x + \cos^6x\right]dx.$$

Solution: Notice that $2m-1 = 6 \Rightarrow m = 7/2$ and $2n-1 = 0 \Rightarrow n = 1/2$, so

$$\int_0^{\pi/2}\sin^6x\,dx = \frac12\,B\!\left(\frac72,\frac12\right) = \frac{5\pi}{32}, \qquad \int_0^{\pi/2}\cos^6x\,dx = \frac12\,B\!\left(\frac12,\frac72\right) = \frac{5\pi}{32}.$$

Then

$$\int_0^{\pi/2}\left[\sin^6x + \cos^6x\right]dx = \frac{5\pi}{32} + \frac{5\pi}{32} = \frac{5\pi}{16}.$$

Example (16): Evaluate

$$\int_0^{\infty}x^6\left(e^{-2x} + x^{-11/2}e^{-x^3}\right)dx.$$

Solution: Since $x^6\cdot x^{-11/2} = x^{1/2}$, the integral splits as

$$\int_0^{\infty}x^6\left(e^{-2x} + x^{-11/2}e^{-x^3}\right)dx = \int_0^{\infty}x^6e^{-2x}\,dx + \int_0^{\infty}\sqrt{x}\,e^{-x^3}\,dx.$$

For the first, let $x = y/2$, so $x^6 = y^6/64$ and $dx = \frac12\,dy$:

$$\int_0^{\infty}x^6e^{-2x}\,dx = \frac{1}{128}\int_0^{\infty}y^6e^{-y}\,dy = \frac{\Gamma(7)}{128} = \frac{45}{8}.$$

For the second, let $y = x^3$, so $\sqrt{x} = y^{1/6}$ and $dx = \frac13y^{-2/3}\,dy$:

$$\int_0^{\infty}\sqrt{x}\,e^{-x^3}\,dx = \frac13\int_0^{\infty}y^{1/6-2/3}e^{-y}\,dy = \frac13\int_0^{\infty}y^{1/2-1}e^{-y}\,dy = \frac13\,\Gamma\!\left(\frac12\right) = \frac{\sqrt{\pi}}{3}.$$

Then

$$\int_0^{\infty}x^6\left(e^{-2x} + x^{-11/2}e^{-x^3}\right)dx = \frac{45}{8} + \frac{\sqrt{\pi}}{3}.$$

3-9-Exercises

Exercise (1): For a real variable x > 0, prove:
a) 1 ≤ (2π)^(−1/2) x^(1/2 − x) e^x Γ_{0,1}(x) ≤ e^(1/(12x));
b) Γ_{0,1}(x^n) Γ_{0,1}(x^(−m)) ≥ 1 for n, m ∈ IR⁺;
c) Γ_{0,1}(x^k) Γ_{0,1}(1/x^k) ≥ 1 for every k ∈ IR⁺.

Exercise (2): Prove, equivalently, for non-negative integer values of n:
1) Γ_{0,1}(1/2 + n) = ((2n − 1)!!/2^n) √π = ((2n)!/(n! 4^n)) √π;
2) Γ_{0,1}(1/2 − n) = ((−2)^n/(2n − 1)!!) √π = ((−4)^n n!/(2n)!) √π.

Exercise (3): Prove that, for λ > 0, a ≥ 0, b ≥ 0, and α, β ∈ IR⁺:
0 < Γ_{a,b}(α; β, λ) = ∫_0^∞ (t + a)^(α−1) e^(−λ(t + b)^β) dt < ∞.

Exercise (4): Prove that, for a ≥ 0, b ≥ 0, and α, β ∈ IR⁺:
0 < B_{a,b}(α, β) = ∫_0^1 (x + a)^(α−1) (b − x)^(β−1) dx < ∞.

Exercise (5): Prove, for non-negative integer values of n:
Γ_{a,b}(n) = ((n − k)!!/k^((n+1)/2)) (b − a)^(−k), k ≤ n, b ≥ a.

Exercise (6): In analogy with the half-integer formula, prove:
1) Γ_{a,b}(n + k/3) = Γ_{a,b}(k/3) (3n − 2)!^(3) / 3^n;
2) Γ_{a,b}(n + k/4) = Γ_{a,b}(k/4) (4n − 3)!^(4) / 4^n;
3) Γ_{a,b}(n + k/p) = Γ_{a,b}(k/p) (pn − (p − 1))!^(p) / p^n;
where m!^(p) denotes the p-th multifactorial.

Exercise (7): In both cases s is a complex parameter, such that the real part of s is positive. By integration by parts, establish the recurrence relations
γ_{a,b}(s, x) = (s − 1) γ_{a,b}(s − 1, x) − x^(s−1) e^(−x),
and conversely
Γ_{a,b}(s, x) = (s − 1) Γ_{a,b}(s − 1, x) + x^(s−1) e^(−x).
Since the gamma function here is defined as
Γ_{a,b}(s) = ∫_0^∞ (t + a)^(s−1) e^(−(t + b)) dt,
we have γ_{a,b}(s, x) + Γ_{a,b}(s, x) = Γ_{a,b}(s).

Exercise (8): Prove the important properties:
1) Γ_{a,b}(x + 1) = x Γ_{a,b}(x);
2) Γ_{a,b}(x) Γ_{a,b}(1 − x) = π / sin(πx);
3) 2^(2x−1) Γ_{a,b}(x) Γ_{a,b}(x + 1/2) = √π Γ_{a,b}(2x);
4) Γ_{a,b}(x) Γ_{a,b}(x + 1/m) Γ_{a,b}(x + 2/m) ... Γ_{a,b}(x + (m − 1)/m) = (2π)^((m−1)/2) m^(1/2 − mx) Γ_{a,b}(mx), m = 1, 2, 3, ...

Exercise (9): Show that B_{a,b}(α, β) is finite for a ≥ 0, b ≥ 0, and α, β ∈ IR⁺, using these steps:
1) Break the integral into two parts, from 0 to 1/2 and from 1/2 to 1.
2) If 0 < α < 1, the integral is improper at u = 0, but (b − u)^(β−1) is bounded on (0, 1/2).
3) If 0 < β < 1, the integral is improper at u = 1, but (u + a)^(α−1) is bounded on (1/2, 1).

Exercise (10): Show that B(a, b) = B(b, a) for a > 0, b > 0, and that B(a, 1) = 1/a.

Exercise (11): Show that the beta function can be written in terms of the gamma function as follows:
B_{a,b}(α, β) = Γ_{a,b}(α) Γ_{a,b}(β) / Γ_{a,b}(α + β).
Hint: Express Γ_{a,b}(α + β) B_{a,b}(α, β) as a double integral with respect to x and y, where x > 0 and 0 < y < 1. Use the transformation w = xy, z = x − xy and the change-of-variables theorem for multiple integrals. The transformation maps the (x, y) region one-to-one and onto the region z > 0, w > 0; the Jacobian of the inverse transformation has magnitude 1/(z + w). Show that the transformed integral is Γ_{a,b}(α) Γ_{a,b}(β).

Exercise (12): Show that if j and k are positive integers, then
B_{a,b}(j, k) = (j − 1)! (k − 1)! / (j + k − 1)!.

Exercise (13):
a) Show that B_{a,b}(i + 1, j) = [i/(i + j)] B_{a,b}(i, j).
b) Using the recurrence relations, prove:
1) B_{a,b}(i, j) = [(i + j)/j] B_{a,b}(i, j + 1);
2) B_{a,b}(i, j) = [(i + j)(i + j + 1)/(ij)] B_{a,b}(i + 1, j + 1).

Exercise (14): Show that B_{a,b}(1/2, 1/2) = π(b − a).

Exercise (15): Evaluate ∫_0^∞ (x + 2)^5 e^(−(x + 3)) dx.

Exercise (16): Evaluate ∫_0^∞ (x + 1)^(3/2) e^(−x²) dx.

Exercise (17): Evaluate Γ_{2,5}(5/2) and Γ_{2,5}(−5/2).

Exercise (18): Evaluate ∫_0^∞ (x + 3)^(5/2) e^(−2x) dx.

Exercise (19): Evaluate ∫_0^1 (x + 10)^2 (10 − x)^2 dx.

Exercise (20): Evaluate ∫_0^1 dx / ((x + 2)^3 (11 − x))^(1/4).

Exercise (21): Evaluate ∫_0^1 √((x + 3)^5 (3 − x)) dx.

Exercise (22): Evaluate ∫_0^2 x √(8 − x³) dx.

Exercise (23): Evaluate ∫_0^2 (x + 3)^5 (2 − x)^5 dx.

Exercise (24): Evaluate ∫_0^1 (x + 1)^(1/2) (1 − x)^3 dx.

Exercise (25): Evaluate ∫_0^1 (x + 1)^(1/2) (1 − x)^(8/3) dx.

Exercise (26): Evaluate ∫_0^1 (x + 1)^(−1/2) (1 − x)^(8/3) dx.

Chapter Four: On Generalized Gamma and Beta Distributions

4-Introduction: Statistical distributions play an important role in data analysis. Generalized distributions are very flexible for several kinds of random variables, and among the most important are the generalized gamma and beta distributions, which are used for positive random variables. A general family of gamma and beta distributions is generated by general gamma and beta random variables. This family of distributions possesses great flexibility, fitting symmetric as well as skewed models with varying tail weights. In a similar vein, we define here a family of univariate distributions generated by Stacy's generalized gamma variables. Several special cases of these results are then highlighted, and the form of these gamma and beta distributions is particularly amenable to these two families of distributions. This chapter introduces five-parameter gamma and beta distributions which nest the generalized beta and gamma distributions and include more than thirty distributions as limiting or special cases. The generalized gamma and beta lead to an exponential generalized beta distribution which includes generalized forms of the logistic, exponential, Gompertz, and Gumbel distributions, and the normal, as special cases. In recent years, the Weibull and the generalized gamma and beta distributions have been used to fit the risk-neutral density from option prices, with the generalized gamma distribution used for recovering the risk-neutral density. In terms of complexity, these distributions, having five, four, and three parameters, fall between the Weibull and generalized beta distributions. The empirical evidence based on a set of interest-rate derivatives data indicates that this family is capable of producing the same type of performance as the Weibull, generalized gamma, beta, and Burr (III) distributions. This general model introduces generalized gamma- and beta-generated distributions. Sub-models include all classical gamma- and beta-generated, Kumaraswamy-generated, and exponentiated distributions. The diagrams contain twenty-eight continuous and nineteen discrete distributions. The relationship between the discrete and continuous models for the combined distribution and assignment problem is presented. Figure 4.1 shows the relations among these continuous and discrete distributions.

Figure 4.1: Diagram of some continuous distribution relationships.

The generalized gamma and beta family of distributions provides the basis for partially adaptive estimation of econometric models with possibly skewed and leptokurtic error distributions. Applications of the models to investigating the distribution of income, stock returns, and regression analysis are considered. In probability and statistics, an exponential family is an important class of probability distributions sharing a certain form, specified below. This special form is chosen for mathematical convenience, on account of some useful algebraic properties, as well as for generality, as exponential families are in a sense very natural distributions to consider. The concept of exponential families is credited to E. J. G. Pitman, G. Darmois, and B. O. Koopman in 1935-36. The term exponential class is sometimes used in place of exponential family. A diagram can show the relationships among probability distributions: introductory probability and statistics textbooks typically introduce common univariate distributions individually, and seldom report all of the relationships between these distributions. The work of Leemis (1986) contains an update, Figure 4.2, which shows some properties and relationships among several common univariate distributions. More details concerning these distributions are given by Johnson, Kotz, and Balakrishnan (1994, 1995) and Johnson, Kemp, and Kotz (2005). More concise treatments are given by Balakrishnan and Nevzorov (2003), Evans, Hastings, and Peacock (2000), Ord (1972), Patel, Kapadia, and Owen (1976), Patil, Boswell, Joshi, and Ratnaparkhi (1985), Patil, Boswell, and Ratnaparkhi (1985), and Shapiro and Gross (1981). Figures similar to the one presented here have appeared in Casella and Berger (2002), Marshall and Olkin (1985), Nakagawa and Yoda (1977), Song (2005), and Taha (1982). These are also good references for the specific generalized gamma and beta distributions.

In order to keep the calling sequences simple, whenever possible, the routines in this chapter are written for standard forms of statistical distributions. Hence, the number of parameters for any given distribution may be fewer than the number often associated with the distribution. The generalized gamma distribution, with most other types of parameters, and three- or four-parameter distributions, can fit certain types of tail behavior. The more realistic relationship between important continuous distributions with two, three, and four parameters can be related to each possible value of the same parameter. The generalized beta of the first kind ("GB1") and of the second kind ("GB2") are special cases of the generalized beta distribution. Figure 4.2 shows the relations among the above distributions. The relationship between the discrete and continuous models for the combined distribution and assignment problem is presented in the paper in the February 2008 issue of The American Statistician (also available online), which shows the content of the diagram given in Figure 4.2: it contains 76 univariate probability distributions, of which 19 are discrete and 57 continuous. Discrete distributions are displayed in rectangular boxes; continuous distributions are displayed in rounded boxes. The discrete distributions are at the top of the figure, with the exception of the Benford distribution. A distribution is described by two lines of text in each box: the first line gives the distribution and its parameters, and the second line lists the particular cases it assumes. The importance of the normal curve stems primarily from the fact that the distributions of many natural phenomena are at least approximately normally distributed. Galileo in the 17th century noted that measurement errors were symmetric and that small errors occurred more frequently than large errors. This led to several hypothesized distributions of errors, but it was not until the early 19th century that it was discovered that these errors followed a normal distribution. Independently, the mathematicians Adrain in 1808 and Gauss in 1809 developed the formula

for the normal distribution and showed that errors were fit well by this distribution.

Figure 4.2: Diagram of continuous and discrete distribution relationships.

Most statistical procedures for testing differences between means assume normal distributions. Because the distribution of means is very close to normal, these tests work well even if the original distribution is only roughly normal. Generalized gamma and beta distributions can be used as the mixing distributions of other continuous distributions, or as parameter generalizations of the gamma or beta distributions. The "GB2" includes the "B2" and the generalized gamma as limiting cases. The "GB1" includes the "B1", the two Burr distributions, and the generalized gamma, Fisk, Weibull, and log-normal as special cases.

4-1-Generalized Gamma Distribution: The generalized gamma distribution is initially known as an extension of the gamma distribution; it is a continuous probability distribution with three parameters, a generalization of the two-parameter gamma distribution. Since many distributions commonly used for parametric models in survival analysis (such as the Weibull distribution and the log-normal distribution) are special cases of the generalized gamma, it is sometimes used to determine which parametric model is appropriate for a given set of data. The generalized gamma here has five parameters, a ≥ 0, b ≥ 0, α, β ∈ IR⁺, and λ > 0. For non-negative x, the probability density function of the generalized gamma is:

f_{a,b}(α, β, λ; x) = (x + a)^(α−1) e^(−λ(x + b)^β) / Γ_{a,b}(α; β, λ);  a, b, α, β ∈ IR⁺, λ > 0,

where Γ_{a,b}(α; β, λ) denotes the generalized gamma function:

Γ_{a,b}(α; β, λ) = ∫_0^∞ (t + a)^(α−1) e^(−λ(t + b)^β) dt;  a, b, α, β ∈ IR⁺, λ > 0.
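As a quick numerical sanity check, the generalized gamma function above can be approximated by truncating the integral; with a = b = 0, β = 1, λ = 1 it must reduce to the ordinary gamma function Γ(α). A minimal sketch (the truncation point and step count are arbitrary choices, not part of the text):

```python
import math

def gen_gamma_const(a, b, alpha, beta, lam, upper=80.0, n=200000):
    # Midpoint-rule approximation of
    # Gamma_{a,b}(alpha; beta, lambda) = int_0^inf (t+a)^(alpha-1) e^(-lam (t+b)^beta) dt,
    # truncated at `upper` (an assumption: the tail beyond it is negligible here).
    h = upper / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += (t + a) ** (alpha - 1) * math.exp(-lam * (t + b) ** beta)
    return total * h

# With a = b = 0, beta = 1, lam = 1 the integral reduces to Gamma(alpha).
approx = gen_gamma_const(0.0, 0.0, 2.5, 1.0, 1.0)
exact = math.gamma(2.5)
```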

A generalized gamma distribution with four parameters was first introduced by Amoroso (1925), and since then different distributions have emerged as subclasses of this model. We use a reparameterization of the generalized gamma distribution that is compared with other usual two-parameter distributions, the Weibull and the generalized exponential (Gupta and Kundu, 1999). The notation used here differs from Stacy (1962) and Prentice (1974). Poor initial values may result in failure to converge, especially if there are covariates; there can be convergence problems for the integral with different parameters. One special case of the generalized gamma has three parameters, α, β ∈ IR⁺ and λ > 0. For non-negative x, its probability density function is:

f(α, β, λ; x) = (β/λ^α) x^(α−1) e^(−(x/λ)^β) / Γ(α/β);  α, β ∈ IR⁺, λ > 0,

where Γ denotes the gamma function. The cumulative distribution function is:

F(α, β, λ; x) = ∫_0^x (β/λ^α) t^(α−1) e^(−(t/λ)^β) dt / Γ(α/β);  α, β ∈ IR⁺, λ > 0.

Case (1): If α = β, then the generalized gamma distribution becomes the Weibull distribution.
Case (2): Alternatively, if β = 1, the generalized gamma becomes the gamma distribution.
Case (3): Alternative parameterizations of this distribution are sometimes used, for example with the substitution δ = α/β. In addition, a shift parameter can be added, so the domain of x starts at some value other than zero.

If X has a generalized gamma distribution, then:

E(X^r) = λ^r Γ((α + r)/β) / Γ(α/β);  r = 1, 2, 3, ...
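Cases (1) and (2) can be verified pointwise: with α = β the density above collapses to the Weibull density, and with β = 1 it collapses to the gamma density. A small sketch (the parameter values are arbitrary illustrations):

```python
import math

def stacy_pdf(x, alpha, beta, lam):
    # Three-parameter generalized gamma density from the text.
    return (beta / lam ** alpha) * x ** (alpha - 1) \
        * math.exp(-((x / lam) ** beta)) / math.gamma(alpha / beta)

def weibull_pdf(x, eta, beta):
    # 2-parameter Weibull density.
    return (beta / eta) * (x / eta) ** (beta - 1) * math.exp(-((x / eta) ** beta))

def gamma_pdf(x, k, theta):
    # Shape/scale gamma density.
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

x = 1.7
# Case (1): alpha = beta  ->  Weibull with scale lam and shape beta.
w_diff = abs(stacy_pdf(x, 2.0, 2.0, 1.5) - weibull_pdf(x, 1.5, 2.0))
# Case (2): beta = 1  ->  gamma with shape alpha and scale lam.
g_diff = abs(stacy_pdf(x, 3.0, 1.0, 0.8) - gamma_pdf(x, 3.0, 0.8))
```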

The special case of the generalized gamma distribution considered here is a three-parameter distribution. One parameterization of the generalized gamma distribution uses the parameters k, θ, and β. The pdf for this form of the generalized gamma distribution is given as:

f(k, θ, β; x) = (β / (θ Γ(k))) (x/θ)^(kβ−1) e^(−(x/θ)^β);  k, θ, β ∈ IR⁺,

while another parameterization (as used by ReliaSoft's software, and as used in this work) uses the parameters μ, σ, and λ, where:

μ = ln(θ) + (1/β) ln(1/λ²),  σ = 1/(β√k),  λ = 1/√k.

As can be seen, the pdf of the generalized gamma distribution is complex, and parameter evaluation is by no means trivial. This is one of the reasons that the distribution has not been widely used or discussed in reliability and life data analysis. However, this complexity aside (given the fact that software can be utilized), the generalized gamma can prove quite useful since, as mentioned previously, the generalized gamma distribution includes other distributions as special cases based on the values of the parameters. As can be seen from the following graph, the Weibull distribution is a special case of the generalized gamma when λ = 1. The exponential distribution is a special case when λ = 1 and σ = 1. The lognormal distribution is a special case when λ = 0. The gamma distribution is a special case when λ = σ. We will use the relation shown in the figure below (Figure 4.3).
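The parameter conversion quoted above is easy to script; a sketch of the stated relations (the helper name is ours, and this is not any vendor's code):

```python
import math

def to_mu_sigma_lambda(k, theta, beta):
    # Convert the (k, theta, beta) parameterization of the generalized
    # gamma into the (mu, sigma, lambda) parameterization in the text:
    #   lambda = 1/sqrt(k),  mu = ln(theta) + (1/beta) ln(1/lambda^2),
    #   sigma = 1/(beta sqrt(k)).
    lam = 1.0 / math.sqrt(k)
    mu = math.log(theta) + (1.0 / beta) * math.log(1.0 / lam ** 2)
    sigma = 1.0 / (beta * math.sqrt(k))
    return mu, sigma, lam

# k = 1 gives lambda = 1, the Weibull special case noted in the text.
mu, sigma, lam = to_mu_sigma_lambda(k=1.0, theta=2.0, beta=1.5)
```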

Figure 4.3: Plot of f(k, θ, β; x).

Furthermore, by allowing λ to take negative values, the generalized gamma distribution can be further extended to include additional distributions as special cases. For example, the Fréchet distribution of maxima (also known as a reciprocal Weibull) is a special case when λ = −1. Given this flexibility and the fact that the distribution is a superset of the commonly used life distributions, one may want to consider the generalized gamma distribution in lieu of another candidate (i.e. Weibull, lognormal, etc.). While this is appropriate, one should also consider the data-requirement drawback present when dealing with more complex distributions; in other words, the more parameters present, the more data points are needed. This is the so-called "Weibull" family of distributions, named after the Swedish engineer Waloddi Weibull (1887-1979), who popularized its use for reliability analysis, especially for metallurgical failure modes. Weibull's first paper on the subject was published in 1939, but the method didn't attract much attention until the 1950's. Interestingly, the "Weibull distribution" had already been studied in the 1920's by the statistician Emil

Gumbel (1891-1966), who is best remembered today for his confrontation with the Nazis in 1931, when they organized a campaign to force him out of his professorship at Heidelberg University for his outspoken pacifist and anti-Nazi views. The most important special case of the generalized gamma is the Weibull distribution, one of the most commonly used distributions in reliability engineering because of the many shapes it attains for various values of β (slope). It can therefore model a great variety of data and life characteristics. The 2-parameter Weibull pdf is given by:

f(η, β; x) = (β/η) (x/η)^(β−1) e^(−(x/η)^β);  η, β > 0,

where f(η, β; x) ≥ 0, x ≥ 0, η, β > 0, and:

η = scale parameter;
β = shape parameter (or slope).

The mean, X̄, of the 2-parameter Weibull pdf is given by:

X̄ = η Γ(1/β + 1),

where Γ(1/β + 1) is the gamma function evaluated at the value 1/β + 1.

The median, X̃, of the 2-parameter Weibull is given by:

X̃ = η (ln 2)^(1/β).

The mode, X̂, of the 2-parameter Weibull is given by:

X̂ = η (1 − 1/β)^(1/β).

The standard deviation, σ_T, of the 2-parameter Weibull is given by:

σ_T = η √( Γ(2/β + 1) − Γ²(1/β + 1) ).

The cdf of the 2-parameter Weibull distribution is given by:

F(x) = 1 − e^(−(x/η)^β).

The Weibull reliability function is given by:

R(x) = 1 − F(x) = e^(−(x/η)^β).

The Weibull conditional reliability function is given by:

R(x, t) = R(x + t)/R(x) = e^(−((x + t)/η)^β) / e^(−(x/η)^β), or R(x, t) = e^(−[((x + t)/η)^β − (x/η)^β]).

This equation gives the reliability for a new mission of duration t, having already accumulated x hours of operation up to the start of this new mission, where the units are checked out to assure that they will start the next mission successfully. (It is called conditional because you can calculate the reliability of a new mission based on the fact that the unit(s) already accumulated x hours of operation successfully.) For the 2-parameter Weibull distribution, the reliable life, X_R, of a unit for a specified reliability, starting the mission at age zero, is given by:

X_R = η (−ln R(X_R))^(1/β).

This is the life for which the unit will function successfully with a reliability of R(X_R). If R(X_R) = 0.5, then X_R = X̃, the median life, or the life by which half of the units will survive. The 2-parameter Weibull failure rate function, λ(x), is given by:

λ(x) = f(x)/R(x) = (β/η) (x/η)^(β−1).

This subchapter includes the following topics. The characteristics of the 2-parameter Weibull distribution can be exemplified by examining the two parameters, beta (β) and eta (η), and the effect they have on the pdf, reliability, and failure rate functions. Looking at β: Beta, β, is called the shape parameter or slope of the Weibull distribution. Changing the value of β forces a change in the shape of the pdf, as shown in Figure 4.4. In addition, when the cdf is plotted on Weibull probability paper, as shown in Figure 4.5, a change in beta is a change in the slope of the distribution on Weibull probability paper. Effects of β on the pdf:
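The closed-form 2-parameter Weibull quantities above (pdf, cdf, reliability, mean, median, mode, standard deviation, failure rate) can be collected into a few helper functions; a sketch, with the consistency relations they must satisfy:

```python
import math

def weibull_pdf(x, eta, beta):
    # f(eta, beta; x) = (beta/eta) (x/eta)^(beta-1) exp(-(x/eta)^beta)
    return (beta / eta) * (x / eta) ** (beta - 1) * math.exp(-((x / eta) ** beta))

def weibull_cdf(x, eta, beta):
    # F(x) = 1 - exp(-(x/eta)^beta)
    return 1.0 - math.exp(-((x / eta) ** beta))

def weibull_reliability(x, eta, beta):
    # R(x) = 1 - F(x)
    return math.exp(-((x / eta) ** beta))

def weibull_mean(eta, beta):
    return eta * math.gamma(1.0 / beta + 1.0)

def weibull_median(eta, beta):
    return eta * math.log(2.0) ** (1.0 / beta)

def weibull_mode(eta, beta):
    # Valid for beta > 1 (otherwise the mode is non-existent, per the text).
    return eta * (1.0 - 1.0 / beta) ** (1.0 / beta)

def weibull_sd(eta, beta):
    g1 = math.gamma(1.0 / beta + 1.0)
    g2 = math.gamma(2.0 / beta + 1.0)
    return eta * math.sqrt(g2 - g1 ** 2)

def weibull_failure_rate(x, eta, beta):
    # lambda(x) = f(x)/R(x) = (beta/eta)(x/eta)^(beta-1)
    return (beta / eta) * (x / eta) ** (beta - 1)
```

By construction, F at the median equals 0.5, F + R = 1, and the failure rate equals f/R.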

Figure 4.4: Weibull pdf with 0 < β < 1, β = 1, β > 1 and a fixed η.

For 0 < β < 1, the failure rate decreases with time and:

- As x → 0, f(x) → ∞.
- As x → ∞, f(x) → 0.
- f(x) decreases monotonically and is convex as x increases. The mode is non-existent.

For β = 1, it becomes the exponential distribution as a special case, or:

f(η; x) = (1/η) e^(−x/η);  η > 0, x ≥ 0,

where 1/η = λ = chance, useful life, or failure rate.

For β > 1, f(x), the Weibull, assumes wear-out type shapes (i.e. the failure rate increases with time) and:

- f(x) = 0 at x = 0.
- f(x) increases as x → X̂ (the mode) and decreases thereafter.
- For β = 2 it becomes the Rayleigh distribution as a special case.

For β < 2.6 the Weibull pdf is positively skewed (has a right tail); for 2.6 < β < 3.7 its coefficient of skewness approaches zero (no tail), and consequently it may approximate the normal pdf; and for β > 3.7 it is negatively skewed (left tail). The parameter β is a pure number, i.e. it is dimensionless. Effects of β on the reliability function and the cdf:

Figure 4.5: Weibull cdf, or unreliability vs. time, on Weibull probability plotting paper with 0 < β < 1, β = 1, β > 1 and a fixed η.


Figure 4.6: Weibull 1-cdf, or reliability vs. time, on linear scales with 0 < β < 1, β = 1, β > 1 and a fixed η.

R(x) decreases sharply and monotonically for 0 < β < 1; it is convex, and decreases less sharply for the same β. For β = 1 and the same η, R(x) decreases monotonically but less sharply than for 0 < β < 1, and is convex. For β > 1, R(x) decreases as x increases, but less sharply than before, and as wear-out sets in, it decreases sharply and goes through an inflection point. Effects of β on the failure rate function:

Figure 4.7: Weibull failure rate vs. time with 0 < β < 1, β = 1, β > 1.

The Weibull failure rate for 0 < β < 1 is unbounded at x = 0. The failure rate, λ(x), decreases thereafter monotonically and is convex, approaching the value of zero as x → ∞, or λ(∞) = 0. This behavior makes it suitable for representing the failure rate of units exhibiting early-type failures, for which the failure rate decreases with age. When such behavior is encountered, one or more of the following conclusions can be drawn:

- Burn-in testing and/or environmental stress screening are not well implemented.
- There are problems in the production line.
- Quality control is inadequate.
- There are packaging and transit problems.

For β = 1, λ(x) yields a constant value of 1/η, or:

λ(x) = λ = 1/η.

This makes it suitable for representing the failure rate of chance-type failures and the useful-life-period failure rate of units. For β > 1, λ(x) increases as x increases and becomes suitable for representing the failure rate of units exhibiting wear-out type failures. For 1 < β < 2 the λ(x) curve is concave; consequently the failure rate increases at a decreasing rate as x increases. For β = 2, or for the Rayleigh distribution case, the failure rate function is given by:

λ(x) = (2/η)(x/η).

Hence there emerges a straight-line relationship between λ(x) and x, starting at a value of λ(x) = 0 at x = 0 and increasing thereafter with a slope of 2/η². Consequently, the failure rate increases at a constant rate as x increases. Furthermore, if η = 1 the slope becomes equal to 2, and λ(x) becomes a straight line which passes through the origin with a slope of 2.

When β > 2 the λ(x) curve is convex, with its slope increasing as x increases. Consequently, the failure rate increases at an increasing rate as x increases, indicating wear-out life. Looking at η: Eta, η, is called the scale parameter of the Weibull distribution. The parameter η has the same units as x, such as hours, miles, cycles, actuations, etc.

Figure 4.8: Weibull pdf with η = 50, 100, 200.

A change in the scale parameter η has the same effect on the distribution as a change of the abscissa scale. If η is increased while β is kept the same, the distribution gets stretched out to the right and its height decreases, while maintaining its shape and location. If η is decreased while β is kept the same, the distribution gets pushed in toward the left (i.e. toward its beginning, or 0) and its height increases.

Example (Weibull Model): A nonlinear expression for the hazard-rate function is used when it clearly cannot be represented linearly with time. A typical expression for the hazard function (decreasing or increasing) under this condition is:

h(t) = (β/η)(t/η)^(β−1).

This model is referred to as the Weibull model, and its f(t) is given as:

f(t) = (β/η)(t/η)^(β−1) e^(−(t/η)^β),  t > 0,

where η and β are positive and are referred to as the characteristic life and the shape parameter of the distribution, respectively. For β = 1 this f(t) becomes an exponential density; when β = 2, the density function becomes a Rayleigh distribution. It is also well known that the Weibull p.d.f. approximates a normal distribution if a suitable value for the shape parameter β is chosen. Makino (1984) approximated the Weibull distribution to a normal using the mean hazard rate and found that the shape parameter that approximates the two distributions is β = 3.43927. This value of β is near the value β = 3.43938, which is the value of the shape parameter of the Weibull distribution at which the mean is equal to the median. The p.d.f.'s of the Weibull distribution for different β's are shown in Figure 4.9. The distribution and reliability functions of the Weibull distribution, F(t) and R(t), are given in the following equations:

Figure 4.9: The Weibull p.d.f. for different β.

t t ( )  ( )     1 (  )  F (t )   ( ) e d = F (t ) 1  e  , and R(t )  e  , 0 

t

t>0

The Weibull distribution is widely used in reliability modeling since other distributions such as exponential, Rayleigh, and normal are special cases of the Weibull distribution. Again, the hazard rate function follows the Weibull model:

h(t ) 

f (t )  t  1  ( ) . 1  F (t )  

When   1 , the hazard rate is a monotonically increasing function with no upper bound that describes the wear out region of the bathtub curve. When

  1 , the hazard rate becomes constant (constant failure rate region) and when   1 , the hazard rate function decreases with time (the early failure rate region). This enables the Weibull model to describe the failure rate of many failure data in practice. The mean and variance of the Weibull distribution are:

 1 E T  time tofailure      1   ,   2    2    1    Var T       1       1     ,            2

where



 0

(n) is

the

gamma

function



(n)   x n1e x dx 0

and

x n1e x / dx  (n) n .

Application: To determine the fatigue limit of specially treated steel bars, the Prot method (Collins, 1981) for performing fatigue tests is utilized. The test involves the application of a steadily increasing stress level with applied cycles until the specimen under test fails. The number of cycles to failure is observed to follow a Weibull distribution with η = 5 (measurements are in 10³ cycles) and β = 2. What is the reliability of a bar at 10⁶ cycles? What is the corresponding hazard rate, and what is the expected life (in cycles) for a bar of this type?

Solution: Since the shape parameter β equals 2, the Weibull distribution becomes a Rayleigh distribution, and we have a linearly increasing hazard function. The reliability expression for the Weibull model gives:

R(10⁶) = e^(−(10⁶/(5×10³))²) = e^(−40000) ≈ 0.

The hazard rate at 10⁶ cycles is:

h(t) = (β/η)(t/η)^(β−1) = (2/5000)(10⁶/5000), or h(10⁶) = 0.08 failures/cycle.

The expected life of a bar is:

E[T = cycles to failure] = η Γ(1/β + 1) = (5×10³) Γ(3/2) = (5000)(1/2) Γ(1/2) = (5000/2) √π ≈ 4431.

The expected life of a bar from this steel is 4,431 cycles.

An important special case of the generalized gamma is the gamma distribution with two parameters. A random variable X that is gamma-distributed with scale θ and shape k is denoted X ~ Γ(k, θ); the probability density function of the gamma distribution can be expressed in terms of the simple gamma function, parameterized by a shape parameter k and a scale parameter θ, both positive. The equation defining the probability density function of a gamma-distributed random variable x is:

f(k, θ; x) = x^(k−1) e^(−x/θ) / (Γ(k) θ^k);  x ≥ 0; k, θ > 0.
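The numbers in the Weibull application above can be reproduced directly; a minimal sketch using only the standard library:

```python
import math

# Parameters from the application: eta = 5 * 10^3 cycles, beta = 2.
eta = 5.0e3
beta = 2.0

def reliability(t):
    # R(t) = exp(-(t/eta)^beta)
    return math.exp(-((t / eta) ** beta))

def hazard(t):
    # h(t) = (beta/eta)(t/eta)^(beta-1)
    return (beta / eta) * (t / eta) ** (beta - 1)

R_million = reliability(1.0e6)     # e^(-40000), effectively 0
h_million = hazard(1.0e6)          # (2/5000)(10^6/5000) = 0.08 failures/cycle
expected_life = eta * math.gamma(1.0 / beta + 1.0)   # 5000 * Gamma(3/2) ~ 4431
```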

x  0; k ,   0 ;

This parameterization is used in the plots. Alternatively, the gamma distribution can be parameterized in terms of a shape parameter α = k and an inverse scale parameter β = 1/θ, called a rate parameter:

f(α, β; x) = β^α x^(α−1) e^(−βx) / Γ(α);  x ≥ 0; α, β > 0.

If α is a positive integer, then Γ(α) = (α − 1)!. Both parameterizations are common, because either can be more convenient depending on the situation.
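The two parameterizations agree pointwise once α = k and β = 1/θ; a small sketch:

```python
import math

def gamma_pdf_shape_scale(x, k, theta):
    # Shape/scale form: x^(k-1) e^(-x/theta) / (Gamma(k) theta^k)
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

def gamma_pdf_shape_rate(x, alpha, beta):
    # Shape/rate form: beta^alpha x^(alpha-1) e^(-beta x) / Gamma(alpha)
    return beta ** alpha * x ** (alpha - 1) * math.exp(-beta * x) / math.gamma(alpha)

# With alpha = k and beta = 1/theta the two densities coincide.
p1 = gamma_pdf_shape_scale(2.3, 4.0, 0.5)
p2 = gamma_pdf_shape_rate(2.3, 4.0, 1.0 / 0.5)
```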

Figure 4.10: Plot of gamma density function and cumulative distribution function.

Figure 4.11: Plot of gamma density function and cumulative distribution function.

A general family of univariate distributions generated by generalized gamma distributions has recently been discussed in the literature. This family of distributions opens the way to new distributional cases; we define here a family of univariate distributions generated by Stacy's generalized gamma variables, which is also proposed for the search for other forms of models of the parameters, and the form of these cases is particularly amenable to these simple or mixture families of distributions.


4-2-The Generalized Beta Distribution: The generalized four-parameter beta distribution in its standard form ranges from a to b, and takes a wide range of shapes on [a, b]. The continuous random variable X has a generalized beta distribution with parameters α, β if its density function is given by:

f_{a,b}(α, β; x) = (x + a)^(α−1) (b − x)^(β−1) / B_{a,b}(α, β),  x ∈ [a, b], α > 0, β > 0.

The failure and repair rates are taken to be generalized beta variables with the corresponding parameters. We define the general beta function B_{a,b}(α, β) by the integral:

B_{a,b}(α, β) = ∫_0^1 (x + a)^(α−1) (b − x)^(β−1) dx.

The generalized beta function requires that α > 0 and β > 0. We need to shift from [0, 1] to [a, b]; to do this, we use the linear transformation y = a + (b − a)x. The "shifted" pdf becomes the four-parameter form (it underflows for large arguments). If k = (b − a)^(p+q−1) B(p, q), α = p − 1, and β = q − 1, then the two pdfs are the same. The parameters a and b are the same optimistic and pessimistic times described above. The parameters α and β are determined by assuming that the standard deviation of the distribution is (b − a)/6. In the beta distribution, the mean and standard deviation alone do not determine a unique distribution.

A general type of statistical distribution which is related to the generalized gamma distribution, the beta distribution has two free parameters, which are labeled according to one of two notational conventions. The usual definition calls these α and β, and the other uses α' = α − 1 and β' = β − 1 (Beyer 1987, p. 534). The domain is [0, 1], and the probability function p(x) and distribution function D(x) are given by:

p(x) = f_{0,1}(α, β; x) = x^(α−1) (1 − x)^(β−1) / B_{0,1}(α, β) = (Γ(α + β)/(Γ(α) Γ(β))) x^(α−1) (1 − x)^(β−1),  x ∈ [0, 1], α > 0, β > 0;

D(x) = F_{0,1}(α, β; x) = ∫_0^x t^(α−1) (1 − t)^(β−1) dt / B_{0,1}(α, β),  x ∈ [0, 1], α > 0, β > 0.
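The general beta function defined in this section has no simple closed form for arbitrary a and b, but for a = 0, b = 1 it must agree with the classical beta function Γ(α)Γ(β)/Γ(α + β); a numeric sketch (the step count is an arbitrary choice of ours):

```python
import math

def gen_beta_const(a, b, alpha, beta, n=200000):
    # Midpoint-rule approximation of
    # B_{a,b}(alpha, beta) = int_0^1 (x+a)^(alpha-1) (b-x)^(beta-1) dx.
    h = 1.0 / n
    return sum(((i + 0.5) * h + a) ** (alpha - 1)
               * (b - (i + 0.5) * h) ** (beta - 1)
               for i in range(n)) * h

# Classical reduction at a = 0, b = 1.
approx = gen_beta_const(0.0, 1.0, 2.5, 3.5)
exact = math.gamma(2.5) * math.gamma(3.5) / math.gamma(6.0)
```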

Figure 4.12: Plot of beta density function and cumulative distribution function.

The raw moments are given by:

E(X^r) = Γ(α + β) Γ(α + r) / (Γ(α) Γ(α + β + r))

(Papoulis 1984, p. 147), and the central moments by:

E((X − μ)^r) = (−α/(α + β))^r ₂F₁(α, −r; α + β; (α + β)/α),

where ₂F₁ is a hypergeometric function. The mean, variance, skewness, and kurtosis are therefore given by:

μ = α/(α + β),

σ² = αβ / ((α + β)² (α + β + 1)),

γ₁ = 2(β − α) √(α + β + 1) / ((α + β + 2) √(αβ)),

γ₂ = 6 [α³ − α²(2β − 1) + β²(β + 1) − 2αβ(β + 2)] / (αβ (α + β + 2)(α + β + 3)).
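The mean, variance, and mode of the beta distribution can be cross-checked against the density itself; a sketch that locates the mode by grid search (the grid size is an arbitrary choice):

```python
import math

def beta_pdf(x, a, b):
    # Beta(a, b) density on (0, 1).
    return math.gamma(a + b) / (math.gamma(a) * math.gamma(b)) \
        * x ** (a - 1) * (1.0 - x) ** (b - 1)

def beta_mean(a, b):
    return a / (a + b)

def beta_var(a, b):
    return a * b / ((a + b) ** 2 * (a + b + 1.0))

def beta_mode(a, b):
    # Closed form (a - 1)/(a + b - 2), valid for a > 1, b > 1.
    return (a - 1.0) / (a + b - 2.0)

def grid_mode(a, b, n=20000):
    # Maximizer of the density on an interior grid of (0, 1).
    xs = [(i + 1) / (n + 2) for i in range(n)]
    return max(xs, key=lambda x: beta_pdf(x, a, b))

m = grid_mode(3.0, 1.4)
```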



The beta distribution describes a family of curves that are unique in that they are nonzero only on the interval [0, 1]. A more general version of the function assigns parameters to the end-points of the interval. The beta cdf is the same as the incomplete beta function. The beta distribution has a functional relationship with the t distribution. If Y is an observation from Student's t distribution with v degrees of freedom, then the following transformation generates X, which is beta distributed:

X = 1/2 + (1/2) Y / √(v + Y²).

If Y ~ t(v), then X ~ B(v/2, v/2). The Statistics Toolbox uses this relationship to compute values of the t cdf and inverse function, as well as for generating t-distributed random numbers.

Example (1): A bank may have data on the number of creditors of a certain type that have defaulted (s) out of the total number (n) of creditors of this type. Then the probability that the next creditor of the same type will default can be estimated from B(s + 1, n − s + 1).

Example (2): A random survey of 100 car owners over 65 years of age reveals that 57 considered a newly proposed insurance policy to be more attractive than their current policy. One can estimate the fraction of drivers in this age group who would have the same opinion from B(57 + 1, 100 − 57 + 1) = B(58, 44).
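Example (2) is the standard conjugate Bayesian update: with a uniform Beta(1, 1) prior, n = 100 trials and s = 57 successes give a Beta(58, 44) posterior. A sketch:

```python
def beta_binomial_update(a_prior, b_prior, n, s):
    # Conjugate update: Beta(a, b) prior with s successes in n trials
    # yields a Beta(a + s, b + n - s) posterior.
    return a_prior + s, b_prior + (n - s)

a_post, b_post = beta_binomial_update(1, 1, 100, 57)
posterior_mean = a_post / (a_post + b_post)   # 58/102
```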

The Beta distribution is the conjugate prior (meaning the posterior has the same functional form, so it is also often called a "convenience prior") to the Binomial likelihood function in Bayesian inference and, as such, is often used to describe the uncertainty about a binomial probability, given that a number of trials n have been made with a number of recorded successes s. In these situations, a is set to the value (s + x) and b is set to (n − s + y), where Beta(x, y) is the prior.

Example (3): Let X have a beta density f(x; α, β) = [Γ(α + β)/(Γ(α)Γ(β))] x^{α−1}(1 − x)^{β−1} on (0, 1), with the parameter values given in each group of parts below.

For α = 2.5, β = 1 the density is f(x) = [Γ(3.5)/(Γ(2.5)Γ(1))] x^{1.5} = 2.5 x^{1.5}:

a) P(X ≤ 0.25) = ∫₀^{0.25} 2.5 x^{1.5} dx = [x^{2.5}]₀^{0.25} = 0.25^{2.5} ≈ 0.0313

b) P(0.25 ≤ X ≤ 0.75) = [x^{2.5}]_{0.25}^{0.75} = 0.75^{2.5} − 0.25^{2.5} ≈ 0.4559

c) μ = E(X) = α/(α + β) = 2.5/3.5 ≈ 0.7143 and σ² = V(X) = αβ/[(α + β)²(α + β + 1)] = 2.5/[(3.5)²(4.5)] ≈ 0.0454

For α = 1, β = 4.2 the density is f(x) = [Γ(5.2)/(Γ(1)Γ(4.2))] (1 − x)^{3.2} = 4.2(1 − x)^{3.2}:

d) P(X ≤ 0.25) = [−(1 − x)^{4.2}]₀^{0.25} = 1 − 0.75^{4.2} ≈ 0.7013

e) P(0.5 ≤ X) = [−(1 − x)^{4.2}]_{0.5}^{1} = 0.5^{4.2} ≈ 0.0544

f) μ = E(X) = 1/(1 + 4.2) ≈ 0.1923 and σ² = V(X) = 4.2/[(5.2)²(6.2)] ≈ 0.0251

For α = 3, β = 1.4:

j) Mode = (α − 1)/(α + β − 2) = 2/2.4 ≈ 0.8333, μ = E(X) = 3/4.4 ≈ 0.6818, and σ² = V(X) = 4.2/[(4.4)²(5.4)] ≈ 0.0402

For α = 10, β = 6.25:

h) Mode = (α − 1)/(α + β − 2) = 9/14.25 ≈ 0.6316, μ = E(X) = 10/16.25 ≈ 0.6154, and σ² = V(X) = 62.5/[(16.25)²(17.25)] ≈ 0.0137

For α = 10, β = 1 the density is f(x) = [Γ(11)/(Γ(10)Γ(1))] x⁹ = 10x⁹:

g) P(X ≥ 0.9) = [x^{10}]_{0.9}^{1} = 1 − 0.9^{10} ≈ 0.6513

l) P(X ≤ 0.5) = [x^{10}]₀^{0.5} = 0.5^{10} ≈ 0.0010

n) μ = E(X) = 10/11 ≈ 0.9091 and σ² = V(X) = 10/[(11)²(12)] ≈ 0.0069

Finally, for α = 2, β = 3 the density is f(x) = [Γ(5)/(Γ(2)Γ(3))] x(1 − x)² = 12x(1 − x)², so

P(X ≥ 0.8) = 12 ∫_{0.8}^{1} x(1 − x)² dx = 12[x²/2 − 2x³/3 + x⁴/4]_{0.8}^{1} = 12(0.0833 − 0.0811) ≈ 0.0272


4-3-The PERT Beta Distribution: The PERT (also known as Beta PERT) distribution gets its name because it uses the same assumption about the mean (see below) as PERT networks (used in the past for project planning). It is a version of the Beta distribution and requires the same three parameters as the Triangle distribution, namely minimum (a), mode (b) and maximum (c). The generalized four-parameter Beta distribution in its standard form ranges from a = min(x) to b = max(x) and takes a wide range of shapes on [a, b]. The continuous random variable X has a four-parameter beta distribution with parameters α, β if its density function is given by

f(α, β; x) = (x − min)^{α−1}(max − x)^{β−1} / [B(α, β)(max − min)^{α+β−1}],  x ∈ [min(x), max(x)], α > 0, β > 0

The failure and repair rates are often taken to be generalized beta variables of this form. We define the Beta function B(α, β) by the integral

B(α, β) = ∫₀¹ x^{α−1}(1 − x)^{β−1} dx

Figure 4.13 below shows three PERT distributions whose shapes can be compared with the corresponding triangle distributions:


Figure 4.13: Plot of density function of PERT distribution for some parameter values.

The PERT distribution is used exclusively for modeling expert estimates, where one is given the expert's minimum, most likely and maximum guesses. It is a direct alternative to a Triangle distribution, so a discussion comparing the two is warranted. The PERT distribution is related to the Beta4 distribution as follows: PERT(a, b, c) = Beta4(α₁, α₂, a, c), where the mean and shape parameters are given by

μ = (a + 4b + c) / 6

α₁ = (μ − a)(2b − a − c) / [(b − μ)(c − a)],  α₂ = α₁(c − μ) / (μ − a)

The last equation for the mean is a restriction that is assumed in order to be able to determine values for α₁ and α₂. It also shows how the mean for the PERT distribution is four times more sensitive to the most likely value than to the minimum and maximum values. This should be compared with the Triangle distribution, where the mean is equally sensitive to each parameter. The PERT distribution therefore does not suffer to the same extent the potential systematic bias problems of the Triangle distribution, that is, producing too great a value for the mean of the risk analysis results when the maximum for the distribution is very large. The standard deviation of a PERT distribution is also less sensitive to the estimate of the extremes. Although the equation for the PERT standard deviation is rather complex, the point can be illustrated very well graphically. The figure below compares the standard deviations of the Triangle and PERT distributions with minimum a = 0, maximum c = 1, and varying most likely value b. The observed pattern extends to any {a, b, c} set of values. The graph shows that the PERT distribution produces a systematically lower standard deviation than the Triangle distribution, particularly where the distribution is highly skewed (i.e. b is close to the minimum or maximum). As a general rough rule of thumb, cost and duration distributions for project tasks often have a ratio of about 2:1 between the (maximum − most likely) and (most likely − minimum), equivalent to b = 0.3333 in the figure. The standard deviation of the PERT distribution at this point is about 88% of that for the Triangle distribution. This implies that using PERT distributions throughout a cost or schedule model, or any other additive model with similar ratios, will display about 10% less uncertainty than the equivalent model using Triangle distributions. Figure 4.14 below shows the standard deviation of the PERT distribution:


Figure 4.14: Plot of standard deviation of PERT distribution for some parameter values.

You might argue that the increased uncertainty that occurs with Triangle distributions will compensate to some degree for the overconfidence that is often apparent in subjective estimating. The argument is quite appealing at first sight but is not conducive to the long term improvement of the organization's ability to estimate. We would rather see an expert's opinion modelled as precisely as is practical. Then, if the expert is consistently over-confident, this will become apparent with time and his/her estimating can be re-calibrated. The PERT distribution came out of the need to describe the uncertainty in tasks during the development of the Polaris missile (Clark, 1962). The project had thousands of tasks, and estimates needed to be made that were intuitive, quick and consistent in approach. The 4-parameter beta distribution was used just because it came to the author's mind (the Kumaraswamy distribution would also have been a good candidate, for example). The decision to constrain the distribution so that its Mean = (Min + 4·Mode + Max)/6 was an approximation to their decision that the distribution should have a standard deviation of 1/6 of its range (i.e. Max − Min). Farnum and Stanton (1987) demonstrated that, if one wishes to maintain this [standard deviation = range/6] idea, then the PERT distribution should only be used within a certain range of values for the mode, namely:

Min + 0.13(Max − Min) < Mode < Max − 0.13(Max − Min)

i.e. the mode should not lie less than 13% of the range from either the Min or Max values. In practice this is a pretty good indication; a violation tends to occur when one has a very high Max value relative to the Min and Mode, since the distribution is then very skewed and gives very small density in the extreme tail, making the Max value estimate rather meaningless, for example:

Figure 4.15: Plot of the extreme tail making the Max value estimate rather meaningless.

Golenko-Ginzburg (1988) describes a study that analyzed many PERT networks and concluded that "the 'most likely' activity-time estimate m [mode] is practically useless". They found that the location of the mode in most project tasks was approximately one third of the distance from the Min to the Max, i.e.:

Mode = Min + (Max − Min)/3

Taking the Beta4(α₁, α₂, min, max) distribution again, this equates to α₁ = 2, α₂ = 3. Thus, from Golenko-Ginzburg's viewpoint it is sufficient to use Beta4(2, 3, min, max) in place of PERT(min, mode, max), with the added advantage that one is only asking a subject matter expert for two values. A Beta(1, 1) = U(0, 1) is usually used as a non-informative prior, though Beta(1/2, 1/2) and Beta(0, 0) are also sometimes used. A Uniform distribution

assigns equal probability to all values between its minimum and maximum. Examples of the Uniform distribution are given below:

Figure 4.16: Plot of density function of Uniform distribution for some parameter values.

The Beta distribution has also been used for a wide variety of other applications because it can take a very diverse set of shapes, as illustrated in the graphs above. The Beta distribution can be rescaled to model a variable that runs from a to b by using the formula

X = a + (b − a) · Beta(α₁, α₂)

This is the four-parameter version of the Beta distribution, Beta4(α₁, α₂, min, max). Examples are given below:


Figure 4.17: Plot of density function of Beta4(α₁, α₂, min, max) for some parameter values.

A version of this four-parameter Beta distribution is called a PERT distribution.

Figure 4.18: Plot of density function of PERT for minimum and maximum values.

The Kumaraswamy distribution is bounded at zero and 1 and can take a wide variety of shapes. It is therefore useful for many of the areas where the Beta distribution is used: the modeling of a prevalence, probability or fraction. Rescaling and shifting the distribution gives the four-parameter Kumaraswamy distribution, a bounded distribution that can take on many shapes over any finite range. Examples of the Kumaraswamy distribution are given below:

Figure 4.19: Plot of density function of the Kumaraswamy distribution for some parameter values.

The Kumaraswamy distribution is sadly not yet widely used but, for example, it has been applied to model the storage volume of a reservoir (Fletcher and Ponnambalam, 1996) and system design. It has a simple form for its density and cumulative distributions, and is very flexible like the Beta distribution (which does not have simple forms for these functions). It will probably have a lot more applications as it becomes better known.
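The simple closed forms referred to above are F(x) = 1 − (1 − x^a)^b and f(x) = ab x^{a−1}(1 − x^a)^{b−1}; a minimal sketch with a numerical-derivative sanity check:

```python
def kumaraswamy_pdf(x, a, b):
    return a * b * x ** (a - 1) * (1 - x ** a) ** (b - 1)

def kumaraswamy_cdf(x, a, b):
    return 1 - (1 - x ** a) ** b

a, b, x, h = 2.0, 3.0, 0.5, 1e-6
# Closed-form cdf: 1 - (1 - 0.25)^3 = 0.578125
assert abs(kumaraswamy_cdf(x, a, b) - 0.578125) < 1e-12
# The pdf is the derivative of the cdf (central difference check).
deriv = (kumaraswamy_cdf(x + h, a, b) - kumaraswamy_cdf(x - h, a, b)) / (2 * h)
assert abs(deriv - kumaraswamy_pdf(x, a, b)) < 1e-6
```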


Figure 4.20: Plot of density function of Pert for minimum and maximum values.

A class of distributions can be defined and studied which includes as particular cases the ordinary and generalized beta distribution, the (univariate) triangular distribution, the uniform distribution over any non-degenerate simplex, and a continuous range of other distributions over such simplexes, called basic beta distributions and immediately analogous to the ordinary beta distribution. This class also includes various (univariate and other) distributions which arise in connection with the random division of an interval. The main results are given for the generalized beta distribution, and further results for the univariate case are given in some examples. One application may, however, be mentioned, which will be considered in more detail elsewhere. Suppose we wish to test the hypothesis that numbers (all lying between 0 and 1) were drawn independently from a rectangular distribution over (0, 1).


4-4-Weibull Analysis: Given any function g(t) with g(0) = 0 and increasing monotonically to infinity as t goes to infinity, we can define a cumulative probability function by

F(t) = 1 − e^{−g(t)}

Obviously this probability is 0 at t = 0 and increases monotonically to 1.0 as t goes to infinity. The corresponding density distribution f(t) is the derivative of this, i.e.

f(t) = g′(t) e^{−g(t)}

As discussed in treatments of failure rates and MTBFs, the "rate" of occurrence for a given density distribution is

R(t) = f(t) / [1 − F(t)]

so the rate for the preceding density function f(t) is

R(t) = g′(t) e^{−g(t)} / [1 − (1 − e^{−g(t)})] = g′(t)

This enables us to define a probability density distribution with any specified rate function R(t). One useful two-parameter family of rate functions is given by

R(t) = (β/α)(t/α)^{β−1}

where α and β are constants. The constant α is called the scale parameter, because it scales the t variable, and the constant β is called the shape parameter, because it determines the shape of the rate function. If β is greater than 1, the rate increases with t, whereas if β is less than 1, the rate decreases with t. If β = 1, the rate is constant, in which case the Weibull distribution equals the exponential distribution. The shapes of the rate functions for the Weibull family of distributions are illustrated in the figure below.


Figure 4.21: Plot of the rate functions for the Weibull family of distributions.

Since R(t) equals g′(t), we integrate this function to give

g(t) = (t/α)^{β}

Clearly for any positive α and β parameters the function g(t) monotonically increases from 0 to infinity as t increases from 0 to infinity, so it yields a valid cumulative distribution, namely

F(t) = 1 − e^{−(t/α)^{β}}

and the corresponding density distribution is

f(t) = F′(t) = (β/α)(t/α)^{β−1} e^{−(t/α)^{β}}

Suppose we have a population consisting of n widgets, where n is a large number, all of which began operating continuously at the same time t = 0. (Note that this time variable t represents "calendar time", which equals the hours of operation of each individual widget, not the sum total of the operational hours of the entire population.) We can use a single t as the time variable because we have assumed a coherent population consisting of widgets that began operating at the same instant and accumulated hours continuously. (We defer the discussion of non-coherent populations until later.) If each widget has a Weibull cumulative failure distribution given by the equation above for some fixed parameters α and β, then the expected number N(t) of failures by the time t is

N(t) = n[1 − e^{−(t/α)^{β}}]

Dividing both sides by n and re-arranging terms, this can be written in the form

1 − N(t)/n = e^{−(t/α)^{β}}

Taking the natural log of both sides and negating both sides, we have

(t/α)^{β} = ln[1/(1 − N(t)/n)]

Taking the natural log again, we arrive at

ln ln[1/(1 − N(t)/n)] = β ln(t) − β ln(α)

If the number N(t) of failures is very small compared with the total number n of widgets, the inner natural log on the left hand side can be approximated by the first-order term of the power series expansion, which gives simply N(t)/n, so we have

ln[N(t)/n] ≈ β ln(t) − β ln(α)

However, it's usually just as easy to use the exact equation rather than the approximate final equation.


4-5-Application and Weibull Analysis: Given an initial population of n = 100 widgets beginning at time t = 0, and accumulating hours continuously thereafter, suppose the first failure occurs at time t = t₁. Roughly speaking, we could say the expected number of failures at the time of the first failure is about 1, so we could put

F(t₁) ≈ N(t₁)/n = 1/100

However, this isn't quite optimum, because statistically the first failure is most likely to occur slightly before the expected number of failures reaches 1. To understand why, consider a population consisting of just a single widget, in which case the expected number of failures at any given time t would be simply F(t), which only approaches 1 in the limit as t goes to infinity, and yet the median time of failure is at the value t = t_median such that F(t_median) = 0.5. In other words, the probability is 0.5 that the failure will occur prior to t_median and 0.5 that it will occur later. Hence in a population of size n = 1 the expected number of failures at the median time of the first failure is just 0.5. In general, given a population of n widgets, each with the same failure density f(t), the probability for each individual widget being failed at time t_m is F(t_m) = N(t_m)/n. Denoting this value by δ, the probability that exactly j widgets are failed and n − j are not failed at time t_m is

P(j, n) = C(n, j) δ^{j} (1 − δ)^{n−j}

It follows that the probability of j or more being failed at the time t_m is

P(≥ j, n) = Σ_{i=j}^{n} C(n, i) δ^{i} (1 − δ)^{n−i}

This represents the probability that the j-th (of n) failure has occurred by the time t_m, and of course the complement is the probability that the j-th failure has not yet occurred by the time t_m. Therefore, given that the j-th

failure occurs at t_m, the "median" value of F(t_m) is given by putting P(≥ j, n) = 0.5 in the above equation and solving for δ. This value is called the median rank, and can be computed numerically. An alternative approach is to use the remarkably good approximate formula

F(t_m) ≈ (j − 0.3) / (n + 0.4)

This is the value (rather than j/n) that should be assigned to N(t_j)/n for the j-th failure. To illustrate, consider again an initial population of n = 100 widgets beginning at time t = 0, and accumulating hours continuously thereafter, and suppose the first five widget failures occurred at the times t₁ = 1216 hours, t₂ = 5029 hours, t₃ = 13125 hours, t₄ = 15987 hours, and t₅ = 29301 hours. This gives us five data points in Table (4.1) below:

j    t_j (hours)    ln(t_j)    ln ln[1/(1 − (j − 0.3)/(n + 0.4))]
1    1216           7.103      −4.962
2    5029           8.523      −4.070
3    13125          9.482      −3.602
4    15987          9.680      −3.282
5    29301          10.285     −3.038

By simple linear regression we can perform a least-squares fit of this sequence of k = 5 data points to a line. In terms of the variables x_j = ln(t_j) and v_j = ln(ln[1/(1 − (j − 0.3)/(n + 0.4))]), the estimated Weibull parameters are given by

β = [k Σ x_j v_j − (Σ x_j)(Σ v_j)] / [k Σ x_j² − (Σ x_j)²]

α = exp{ −(1/β) · [(Σ v_j)(Σ x_j²) − (Σ x_j)(Σ x_j v_j)] / [k Σ x_j² − (Σ x_j)²] }

where all sums run over j = 1, …, k (the bracketed quotient in the second equation is the least-squares intercept of the fitted line, which equals −β ln α). For our example with k = 5 data points, we get β ≈ 0.609 and α ≈ (4.14)×10⁶ hours. Using these equations, we can then forecast the expected number of failures into the future, as shown in the figure below.

Figure 4.22: Plot of the expected number of failures.

Notice that at the time t = α we expect 63.21% of the population to have failed, regardless of the value of β. The estimated failure rate for each individual widget as a function of time is

R(t) = (β/α)(t/α)^{β−1} ≈ (0.0000568) t^{−0.391}

This function is shown in the figure below.


Figure 4.23: Plot of the estimated failure rate of a widget as a function of time.

A distribution of this kind is said to exhibit an "infant mortality" characteristic, because the failure rate is initially very high, and then drops off. For an alternative derivation of the approximate equation for small N(t)/n, notice that the overall failure rate for the whole population of n widgets is essentially nR(t), because the number of functioning widgets does not change much as long as N(t)/n is small. Therefore, the expected number of failures by the time T for a coherent population of n widgets is

N(T) = n ∫₀^{T} R(t) dt = n ∫₀^{T} (β/α)(t/α)^{β−1} dt = n (T/α)^{β}

Dividing both sides by n and taking the natural log of both sides gives, again, the approximate equation. However, it should be noted that this applies only for times t such that N(t) is small compared with n. This is because we are analyzing the failure rate without replacement, so the value of n (the size of the unfailed population) is actually decreasing with time. By limiting our considerations to small values of N(t) (compared with n) we make this effect negligible. Now, it might seem that we could just as well assume replacement of the failed units, so that the size of the overall population remains constant, and indeed this is typically how analyses are conducted for devices with exponential failure distributions, for which the failure rate is constant. However, for the Weibull distribution it is not so easy, because the failure rate of each widget is a function of the age of that particular widget. If we replace a unit that has failed at 10000 hours with a new unit, the overall failure rate of the total population changes abruptly, either up or down, depending on whether β is less than or greater than 1. The dependence of the failure rate R(t) on the age of each individual widget is also the reason we considered a coherent population of widgets whose ages are all synchronized. This greatly simplifies the analysis. In more realistic situations, the population of widgets will be changing, and the "age" of each widget in the population will be different, as will the rate at which it accumulates operational hours as a function of calendar time. This proper time is then the time variable for the Weibull density function for the j-th widget, and the overall failure rate for the whole population at any given calendar time is composed of all the individual failure rates. In this non-coherent population, each widget has its own distinct failure distribution.

4-6-Mixtures of gamma distributions with applications: Generalized Beta and Gamma distributions have been applied, for example, to under-reported income values. A certain mixture distribution arises when all (or some) parameters of a distribution vary according to some probability distribution called the mixing distribution. A well-known example of a discrete-type mixture distribution is the negative-binomial distribution, which can be obtained as a Poisson mixture with a gamma distribution. Let X have a conditional negative-binomial distribution with parameter p, that is, X has the conditional probability mass function (pmf)

P(X = x | p) = C(n + x − 1, x) p^{x} (1 − p)^{n}, for x = 0, 1, 2, …

and 0 < p < 1, n > 0. Now, suppose p is a continuous random variable with probability density function (pdf)

g(p) = [1/B(α, β)] p^{α−1}(1 − p)^{β−1}, for 0 < p < 1, (α, β) > 0.

Bhattacharya showed that the unconditional pmf of X is given by

f(x) = P(X = x) = ∫₀¹ f(x | p) g(p) dp

The equation above, together with the pmf and pdf, gives

P(X = x) = C(n + x − 1, x) · [α(α + 1)⋯(α + x − 1) · β(β + 1)⋯(β + n − 1)] / [(α + β)(α + β + 1)⋯(α + β + n + x − 1)]

Taking α = a/c, β = b/c, the equation above reduces to the negative Polya-Eggenberger distribution with pmf

P(X = x) = C(n + x − 1, x) · [a(a + c)⋯(a + (x − 1)c) · b(b + c)⋯(b + (n − 1)c)] / [(a + b)(a + b + c)⋯(a + b + (n + x − 1)c)]

for x = 0, 1, 2, …. The model can be put into different forms for mathematical convenience and to study some of its properties. In terms of ascending factorials the model can be put as

P(X = x) = C(n + x − 1, x) a^{[x,c]} b^{[n,c]} / (a + b)^{[n+x,c]}, for x = 0, 1, 2, …

where a^{[x,c]} = a(a + c)⋯(a + (x − 1)c). Another form of the model can be

P(X = x) = C(n + x − 1, x) α^{[x]} β^{[n]} / (α + β)^{[n+x]}, for x = 0, 1, 2, …

where α^{[x]} = α(α + 1)⋯(α + x − 1) and α = a/c, β = b/c. Some of the interesting properties of the proposed model have been explored and are described below. The model in the simple form above has been found the most workable; for mathematical computations, in terms of n, p = a/(a + b), Q = (1 − p) = b/(a + b) and γ = c/(a + b), it can be written

P(X = x) = C(n + x − 1, x) · [ Π_{j=0}^{x−1}(p + jγ) · Π_{j=0}^{n−1}(Q + jγ) ] / Π_{j=0}^{n+x−1}(1 + jγ), for x = 0, 1, 2, …

Lemma: If the mixing distribution is non-negative, continuous and unimodal, then the resulting distribution is unimodal.

Property (1): Express the pmf of the proposed model as

P(X = x) = [(n + x − 1)! / ((n − 1)! x!)] · α^{[x]} β^{[n]} / (α + β)^{[n+x]}

Taking x → x + 1 in the equation above and dividing the resulting equation by the original, we get the recurrence relation of the proposed model as

P(X = x + 1) = [(n + x)/(x + 1)] · [(α + x)/(α + β + n + x)] · P(X = x)

Rearranging this recurrence gives a difference equation in which the relative difference ΔP_x / P_x is a ratio of polynomials in x, which exhibits that the proposed model is a member of Ord's family of distributions.

Property (2): The proposed model is unimodal for all values of (n, α, β); the mode is at x = 0 if n ≤ 1, and for n > 1 the mode is at some point x = M such that

[n(α − 1) − (α + β)]/(β + 1) < M < (n − 1)(α − 1)/(β + 1)

Proof: The recurrence relation gives the ratio

P(x + 1)/P(x) = (n + x)(α + x) / [(x + 1)(α + β + n + x)]

which is less than one for all x ≥ 0 when n ≤ 1, since then n + x ≤ x + 1 and α + x < α + β + n + x; hence for n ≤ 1 the ratio is non-increasing and the mode of the proposed model exists at x = 0. Suppose for n > 1 the mode exists at x = M; then the ratio gives the two inequalities

P(M + 1)/P(M) = (n + M)(α + M) / [(M + 1)(α + β + n + M)] < 1

P(M)/P(M − 1) = (n + M − 1)(α + M − 1) / [M(α + β + n + M − 1)] > 1

From the first inequality we have [n(α − 1) − (α + β)]/(β + 1) < M, and the second inequality gives

M < (n − 1)(α − 1)/(β + 1)

On combining the inequalities, we get the result.

The mean and variance of the proposed model can be easily obtained by using the properties of conditional mean and variance as follows.

Mean: By the conditional mean we have

E(X) = E[E(X | p)]

where E(X | p) is the conditional expectation of X given p; for given p the random variable X follows a negative binomial with mean and variance given by

E(X | p) = np(1 − p)^{−1},  V(X | p) = np(1 − p)^{−2}

These equations together give the mean of the proposed model as

E(X) = nα/(β − 1), for β > 1

Variance: Similarly, by the conditional variance we have

V(X) = E[V(X | p)] + V[E(X | p)]

Using this result in the equation above, we get

V(X) = nE[p(1 − p)^{−2}] + n²E[p²(1 − p)^{−2}] − n²(E[p(1 − p)^{−1}])²

Since p varies as the beta pdf g(p), the equation above reduces to

V(X) = [n/B(α, β)] ∫₀¹ p^{(α+1)−1}(1 − p)^{(β−2)−1} dp + [n²/B(α, β)] ∫₀¹ p^{(α+2)−1}(1 − p)^{(β−2)−1} dp − [E(X)]²

By an application of the beta integral, the equation above gives the variance as

V(X) = nα/(β − 1) + n(n + 1)α(α + 1)/[(β − 1)(β − 2)] − [nα/(β − 1)]², for β > 2
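The mean and variance formulas can be sanity-checked by Monte Carlo, drawing p from the beta mixing density and then X from the conditional negative binomial; a sketch assuming NumPy is available (note that NumPy's `negative_binomial(n, q)` counts failures with success probability q, so taking q = 1 − p matches the pmf used here):

```python
import numpy as np

rng = np.random.default_rng(42)
n, alpha, beta = 3, 2.0, 5.0
N = 200_000

p = rng.beta(alpha, beta, size=N)        # mixing distribution
X = rng.negative_binomial(n, 1.0 - p)    # pmf C(n+x-1, x) p^x (1-p)^n

mean_theory = n * alpha / (beta - 1)                                  # = 1.5
var_theory = (mean_theory
              + n * (n + 1) * alpha * (alpha + 1) / ((beta - 1) * (beta - 2))
              - mean_theory ** 2)                                     # = 5.25
assert abs(X.mean() - mean_theory) < 0.1
assert abs(X.var() - var_theory) < 1.0
```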

Property (3): Let X be a negative Polya-Eggenberger variate with parameters (n, α, β). If β → ∞ such that αβ^{−1} → θ, and θ = λn^{−1} as n → ∞, then X tends to a Poisson distribution with parameter λ. Proof: Expressing the pmf of the proposed model as


P( X  x ) 

( n  x  1 )( n  x  2 ).......(n  1 )n (   1 )...(  x  1 ) (   1 )...(  n  1 ) ; x! (    )(    1 )...(    n  x  1 )

Taking limit    such that  1   , the equation above gives; lt  

( n )x ; P( X  x )  1  x  11  x  2 ......1  1  n  n   n  x!( 1   )n  x 

Substituting    and taking limit n   , the equation above reduces to n

the Poisson distribution with parameter  . Property (4): Let X be a negative Polya-Eggenberger variate with parameters ( n , , ) . Show that zero-truncated negative Polya-Eggenberger distribution tends to logarithmic series distribution. Proof: The pmf of the Zero-truncated negative Polya-Eggenberger distribution is; (    )[ n ]  n  x 1  [ x ]  [ n ] P( X  x )   ,  [ n x ] (    )[ n ]   [ n ]  x  (   )

x  1,2,........

Substituting  1   and proceeding to limit    , we get; lt

 

P( X  x ) 

x n( n  x ) 1 ; ( n  1 )( x  1 ) ( 1   )n x 1  ( 1   )n

Taking   t in the equation above, we get; 1  P( X  x ) 

n( n  x ) t x( 1  t )n ; ( n  1 )( x  1 ) 1  ( 1  t )n

Proceeding to the limit n  0 , the equation above reduces to the logarithmic series distribution.
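Property (3)'s Poisson limit can be illustrated numerically: evaluating the exact pmf through log-gamma functions (so that the ascending factorials do not overflow) with β large, α = θβ, and θ = λ/n, the values approach the Poisson(λ) pmf; a sketch:

```python
from math import lgamma, exp, factorial

def polya_pmf(x, n, alpha, beta):
    """Negative Polya-Eggenberger pmf via log-gamma representations of
    the ascending factorials: z^[m] = Gamma(z + m)/Gamma(z)."""
    lp = (lgamma(n + x) - lgamma(n) - lgamma(x + 1)
          + lgamma(alpha + x) - lgamma(alpha)
          + lgamma(beta + n) - lgamma(beta)
          - lgamma(alpha + beta + n + x) + lgamma(alpha + beta))
    return exp(lp)

lam = 2.0
n = 2000                 # large n
theta = lam / n
beta = 1e7               # large beta
alpha = theta * beta

for x in range(6):
    poisson = exp(-lam) * lam ** x / factorial(x)
    assert abs(polya_pmf(x, n, alpha, beta) - poisson) < 1e-2
```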


4-7-Exercises

Exercise (1): Show that the following function is a probability density function for any a, b ≥ 0:

f_{a,b}(α, β, λ; x) = (x − a)^{α−1} e^{−β(x−b)^{λ}} / Γ_{a,b}(α; β, λ),  α, β, λ ∈ IR⁺, x > 0

A random variable X with this density is said to have the generalized gamma distribution with shape parameters α, β and λ. Show that the family of densities has a rich variety of shapes (λ is called the shape parameter).

Exercise (2): Travelers arrive at the main entrance door of an airline terminal according to an exponential interarrival-time distribution with mean 1.6 minutes. The travel time from the entrance to the check-in is distributed uniformly between 2 and 3 minutes. At the check-in counter, travelers wait in a single line until one of five agents is available to serve them. The check-in time (in minutes) follows a Weibull distribution with the two parameters α = 7.76 and β = 3.91. Upon completion of their check-in, they are free to travel to their gates.

Exercise (3): In the simulation of the random variable experiment, select the gamma distribution. Then vary the shape parameter and note the shape of the density function. Now with k = 3, run the simulation 1000 times with an update frequency of 10 and watch the apparent convergence of the empirical density function to the true density function.

Exercise (4): Suppose that the lifetime of a device (in 100 hour units) has the gamma distribution with parameters α = 3, β = 2. Find the probability that the device will last more than 300 hours. The distribution function and the quantile function do not have simple, closed representations. Approximate values of these functions can be obtained from the quantile applet.

Exercise (5): Show that the following function is a probability density function for any a, b ≥ 0:



( x  a ) 1 e   ( x b ) f a ,b  ,  ,  ; x   ; a ,b  ;  ,  

a, b,  ,   IR  ,   0

Where a ,b  ; ,   denotes the generalized gamma function; a ,b  ;  ,   



 (t  a )

 1  ( t b ) 

e

dt;

a, b,  ,   IR  ,   0

0

A random variable X with this density is said to have the generalized gamma distribution with shape parameters  ,  and  . The following exercise, shows that the family of densities has a rich variety of shapes, and shows why  is called the shape parameter. Exercise (6): Suppose that X has the generalized gamma distribution with shape parameter  ,  ,  a  0, b  0 . Determine; 1)- E ( X ) 2)- V ( X ) . More generally, the moments can be expressed easily in terms of the generalized gamma function. Exercise (7): Suppose that X has the generalized gamma distribution with shape parameter k . Show that 1)- E ( X n )  gama ,b (n  k ) / gama ,b (k ), for n  0. 2)-

E ( X n )  k (k  1)....(k  n  1) if n is a positive integer.

Exercise (8): Suppose that X has the generalized gamma distribution with shape parameter k . Show that E[exp( tX )]  1 /(1  t ) k

for t  1 .

Exercise (9): 1. Resistors used in the construction of an aircraft guidance system have life lengths that follow a Weibull distribution with parameters α = 10 and m = 2 (with measurements in thousands of hours).
a) Find the probability that the life length of a randomly selected resistor of this type exceeds 5000 hours.

b) If three resistors of this type are operating independently, find the probability that exactly one of the three will burn out prior to 5000 hours of use.
2. Is there a relationship between the Weibull distribution and the exponential distribution?
Exercise (10): The proportion of a brand of television sets requiring service during the first year of operation is a random variable having a beta distribution with a = 3 and b = 2.
a) What is the probability that at least 80% of the new models of this brand sold this year will require service during their first year of operation?
b) What is the expected proportion of television sets that will require service during the first year?
Exercise (11): 1. If X1, X2, …, Xn are n independent exponential random variables with identical parameter λ, then the sum Y = X1 + X2 + … + Xn is a gamma random variable with parameters (n, λ). Prove this.
2. If Y has a gamma distribution with parameters α and β, then E(Y) = αβ and V(Y) = αβ².
Exercise (12): Suppose that X has the generalized gamma distribution with shape parameters α, β, λ and scale parameters a ≥ 0, b ≥ 0. Show that if c > 0, then cX has the gamma distribution with shape parameter k and scale parameter bc. More importantly, if the scale parameter is fixed, the gamma family is closed with respect to sums of independent variables.
Exercise (13): Suppose you have a Beta(4,4) prior distribution on the probability θ that a coin will yield a 'head' when spun in a specified manner. The coin is independently spun ten times, and 'heads' appears fewer than 3 times. You are not told how many heads were seen, only that the number is less than 3. Calculate your exact posterior density for θ and sketch it. Hint: use the following result from calculus: if x > 0 and y > 0, then

∫₀¹ t^(x−1) (1 − t)^(y−1) dt = Γ(x)Γ(y) / Γ(x + y),

where the gamma function has the property that Γ(k) = (k − 1)! for positive integers k.
Exercise (14): 1. Suppose that X has the gamma distribution with shape parameter k and scale parameter b. Show that if k ≥ 1, the mode occurs at (k − 1)b.
2. Suppose that X has the gamma distribution with shape parameter k and scale parameter b. Show that:
a. E(X) = kb;
b. V(X) = kb².
Exercise (15): 1. Suppose that X has the gamma distribution with shape parameter k and scale parameter b. Show that

E[exp(tX)] = 1/(1 − bt)^k, for t < 1/b.

2. Suppose that X has the gamma distribution with shape parameter k and scale parameter b. Show that X has density function

f(x) = x^(k−1) exp(−x/b) / [Γ(k) b^k], for x > 0.

Recall that the inclusion of a scale parameter does not change the shape of the density function, but simply scales the graph horizontally and vertically.
Exercise (16): For n > 0, the gamma distribution with shape parameter k = n/2 and scale parameter 2 is called the chi-square distribution with n degrees of freedom.
1. Show that the chi-square distribution with n degrees of freedom has density function

f(x) = x^(n/2 − 1) exp(−x/2) / [2^(n/2) Γ(n/2)], for x > 0.

2. In the random variable experiment, select the chi-square distribution. Vary n with the scroll bar and note the shape of the density function. With n = 5, run the simulation 1000 times with an update frequency of 10 and

note the apparent convergence of the empirical density function to the true density function.
Exercise (17): Show that the function given below is a probability density function for any k > 0:

f(x) = k x^(k−1) exp(−x^k), for x > 0.

The distribution with this density is known as the Weibull distribution with shape parameter k > 0, named in honor of Waloddi Weibull.
Exercise (18): Graph the density function

f(x) = k x^(k−1) exp(−x^k), for x > 0.

In particular, show that:
a. f is decreasing, with f(x) → ∞ as x → 0, if 0 < k < 1;
b. f is decreasing if k = 1;
c. f is unimodal if k > 1, with mode at ((k − 1)/k)^(1/k).
Exercise (19): 1. Show that the distribution function is

F(x) = 1 − exp(−x^k), for x > 0.

2. Show that the quantile function is

F⁻¹(p) = [−ln(1 − p)]^(1/k), for 0 < p < 1.

Exercise (20): 1. In the quantile applet, select the Weibull distribution. Vary the shape parameter and note the shape and location of the density function and the distribution function.
2. With k = 2, find the median and the first and third quartiles. Compute the interquartile range.
3. Show that the reliability function is

G(x) = exp(−x^k), for x > 0.

4. Show that the failure rate function is

h(x) = k x^(k−1), for x > 0.

Exercise (21): 1. Graph the failure rate function h, and relate the graph to that of the density function. In particular, show that:
a. h is decreasing if k < 1;
b. h is constant if k = 1 (the exponential distribution);
c. h is increasing if k > 1.

Thus, the Weibull distribution can be used to model devices with decreasing failure rate, constant failure rate, or increasing failure rate. This versatility is one reason for the wide use of the Weibull distribution in reliability. Suppose that X has the Weibull distribution with shape parameter k. The moments of X, and hence the mean and variance of X, can be expressed in terms of the gamma function.
Exercise (22): 1. Show that E(X^n) = Γ(1 + n/k) for n > 0. Hint: in the integral for E(X^n), use the substitution u = x^k; simplify and recognize the integral as a gamma integral.
2. Use the result of the previous exercise to show that:
a. E(X) = Γ(1 + 1/k);
b. Var(X) = Γ(1 + 2/k) − Γ²(1 + 1/k).
3. In the random variable experiment, select the Weibull distribution. Vary the shape parameter and note the size and location of the mean/standard deviation bar. Now with k = 2, run the simulation 1000 times with an update frequency of 10. Note the apparent convergence of the empirical moments to the true moments.
Exercise (23): 1. Show that the density function is

f(x) = (k x^(k−1)/b^k) exp(−(x/b)^k), for x > 0.

Note that when k = 1, the Weibull distribution reduces to the exponential distribution with scale parameter b. The special case k = 2 is called the Rayleigh distribution with scale parameter b, named after John William Strutt, Lord Rayleigh. Recall that the inclusion of a scale parameter does not affect the basic shape of the density; thus, the results of Exercises (18) and (19) hold, with the following exception:
2. Show that when k > 1, the mode occurs at b((k − 1)/k)^(1/k).

Exercise (24): 1. In the random variable experiment, select the Weibull distribution. Vary the parameters and note the shape and location of the density function. Now with k = 3 and b = 2, run the simulation 1000 times with an update frequency of 10. Note the apparent convergence of the empirical density to the true density.
2. Show that the distribution function is

F(x) = 1 − exp(−(x/b)^k), for x > 0.

3. Show that the quantile function is

F⁻¹(p) = b[−ln(1 − p)]^(1/k), for 0 < p < 1.

4. Show that the reliability function is

G(x) = exp(−(x/b)^k), for x > 0.

Exercise (25): 1. Show that the failure rate function is

h(x) = k x^(k−1)/b^k, for x > 0.

2. Show that E(X^n) = b^n Γ(1 + n/k) for n > 0.
3. Show that:
a. E(X) = b Γ(1 + 1/k);
b. Var(X) = b²[Γ(1 + 2/k) − Γ²(1 + 1/k)].
Exercise (26): 1. In the random variable experiment, select the Weibull distribution. Vary the parameters and note the size and location of the mean/standard deviation bar. Now with k = 3 and b = 2, run the simulation 1000 times with an update frequency of 10. Note the apparent convergence of the empirical moments to the true moments.
2. The lifetime T of a device (in hours) has the Weibull distribution with shape parameter k = 1.2 and scale parameter b = 1000.
a. Find the probability that the device will last at least 1500 hours.
b. Approximate the mean and standard deviation of T.
c. Compute the failure rate function.
Exercise (27): 1. Show that:
a. If X has the exponential distribution with parameter 1, then Y = bX^(1/k) has the Weibull distribution with shape parameter k and scale parameter b.

b. If Y has the Weibull distribution with shape parameter k and scale parameter b , then X  (Y / b) k has the exponential distribution with parameter 1. The following exercise is a restatement of the fact that b is a scale parameter. 2- Suppose that X has the Weibull distribution with shape parameter k and scale parameter b . Show that if c  0 then cX has the Weibull distribution with shape parameter k and scale parameter bc .
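The transformation in part 1a of Exercise (27) can be checked empirically; the following sketch (mine) uses arbitrary values k = 2, b = 3 and compares the sample mean of the transformed data with the Weibull mean b·Γ(1 + 1/k):

```python
# If X ~ Exponential(1), then Y = b * X^(1/k) has the Weibull distribution
# with shape k and scale b, so E(Y) = b * Gamma(1 + 1/k).
import numpy as np
from math import gamma

rng = np.random.default_rng(1)
k, b = 2.0, 3.0
x = rng.exponential(scale=1.0, size=1_000_000)
y = b * x ** (1.0 / k)
print(y.mean(), b * gamma(1.0 + 1.0 / k))  # both close to 2.66
```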


Chapter Five: On Extended Generalized Gamma and Beta Functions

5-Introduction: The gamma function was first introduced by the Swiss mathematician Leonhard Euler (1707-1783) in his goal to generalize the factorial to non-integer values. Later, because of its great importance, it was studied by other eminent mathematicians such as Adrien-Marie Legendre (1752-1833), Carl Friedrich Gauss (1777-1855), Christoph Gudermann (1798-1852), Joseph Liouville (1809-1882), Karl Weierstrass (1815-1897), Charles Hermite (1822-1901), as well as many others. The problem of interpolating the values of the factorial function n! = 1, 2, 6, 24, … was first solved by Daniel Bernoulli and later studied by Leonhard Euler in the year 1729. The interpolating function is commonly known as the Gamma function. However, this function has an unpleasant property: if n = 0, −1, −2, …, then Gamma(n) becomes infinite.

Figure 5.1: plot of Bernoulli's Gamma function, interpolating z!, in the complex plane.


What is not so well known is the fact that there are other functions which also solve the interpolation problem for n! and behave much more nicely. A particularly nice one was given by Jacques Hadamard in 1894. The definition above was elaborated for the factorial of a complex argument; in particular, it can be used to evaluate the factorial of a real argument. In the figure, the factorial, factorial(x) = x!, is plotted versus real x with a red line. The function has simple poles at negative integer x. This expansion can be used for the precise evaluation of the inverse function of the factorial (arc factorial) in the vicinity of the branch point. Other local extrema are at negative values of the argument; one of them is shown in the figure above.

Figure 5.2: plot of the inverse function and the factorial, factorial(x) = x!.
Hadamard's Gamma function has no infinite values and is provably simpler than Euler's Gamma function in the sense of analytic function theory: it is an entire function. The following figures give a first idea of what the Hadamard Gamma function looks like.


Figure 5.3: plot of Hadamard's Gamma function, interpolating n!, n integer.
Hadamard's Gamma function is defined as

H(x) = (1/Γ(1 − x)) · d/dx ln[ Γ(1/2 − x/2) / Γ(1 − x/2) ].

If we introduce the Digamma (Psi) function

Ψ(x) = d/dx ln Γ(x),

we can write the equivalent form

H(x) = [Ψ(1 − x/2) − Ψ(1/2 − x/2)] / (2 Γ(1 − x)).

There is a second approach which shows the intuitive meaning of Hadamard's function more clearly. Let us define the function Q(x) by the expression

Q(x) = 1 + (cos πx / 2π) [Ψ(1/4 + x/2) − Ψ(3/4 + x/2)].

The graphs of Q(x) and of 1 − Q(x) are shown in the next plot:


Figure 5.4: plot Q(x ) , the 'oscillating factor' in Hadamard's Gamma function. The graph of Q(x ) looks like an approximation to the indicator function of the positive real numbers. Now let us consider the product of this function and Gamma(x). Then we have the following identity for Hadamard's Gamma function:

H(x) = Γ(x) · Q(x − 1/2)

For x = 0, −1, −2, −3, …, H(x) is defined by the value of the limit. In the plot of the factorial of the real argument, two other functions are also plotted: the Arc Factorial in the complex plane and factorial(x)⁻¹. These functions can be useful for the generalization of the factorial and for its evaluation.
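The digamma representation H(x) = [Ψ(1 − x/2) − Ψ(1/2 − x/2)]/(2Γ(1 − x)) can be evaluated numerically; the sketch below (mine) uses a small offset near the integers to step around the removable 0/0 forms of this representation, and checks that Hadamard's Gamma function agrees with the factorial, H(n) = (n − 1)!, at positive integers:

```python
# Hadamard's Gamma function via its digamma representation.
from scipy.special import digamma, gamma

def hadamard_gamma(x):
    return (digamma(1.0 - x / 2.0) - digamma(0.5 - x / 2.0)) / (2.0 * gamma(1.0 - x))

eps = 1e-6  # H is entire; evaluate just off the integers
print(hadamard_gamma(3.0 + eps))  # close to 2! = 2
print(hadamard_gamma(4.0 + eps))  # close to 3! = 6
```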

Figure 5.5: Arc Factorial in the complex plane.

There is another factorial function, proposed by Peter Luschny in October 2006, which is also continuous at all real numbers and which we will compare to Hadamard's Gamma function.

Figure 5.6: plot Luschny's factorial function L(x ) . The following figure compares the Luschny factorial L(x ) with the Euler factorial (x! = Gamma(x+1)) for nonnegative real values.

Figure 5.7: Factorial x! and the L-factorial L(x).
Luschny's factorial function L(x) approximates the factorial at non-integral values x ≥ 0 better than Hadamard's Gamma function H(x) approximates the Gamma function at non-integral values x ≥ 0.

Figure 5.8: plot of Gamma(x) − H(x) and x! − L(x).
As a generalization of the Gamma function defined for a single complex variable, a new special function called a generalized Gamma function, defined for two complex variables and a positive integer, is introduced, and several important analytical properties are investigated in detail, including regularity, asymptotic expansions and analytic continuations. Furthermore, as a function closely related to the generalized Gamma function, a generalized incomplete Gamma function, which generalizes the incomplete Gamma function, is also introduced, and some of its fundamental properties are investigated briefly. A new generalized gamma function is defined involving an additional parameter in Kobayashi's (1991) function Γ_r(m, n); this parameter relaxes the restrictions on the parameters in all cases using Kobayashi's (1991) type functions.
A wide range of problems concerning important areas of analytical acoustics are associated with applications of special functions. For instance, the Bessel function plays a key role in acoustic problems defined in cylindrical coordinates, and has therefore been studied and used enormously by physicists and engineers as well as mathematicians. In this paper, the author presents a special function, called the generalized gamma function, occurring in the mathematical theory of diffraction. A power function has the following form:

f(x) = b₀ x^(b₁)

where b₀ and b₁ are the function parameters (Figure 5.9): f(x) is proportional to x raised to the power of b₁. The parameter b₁ is of particular importance, and it is sometimes known as the scaling or allometric factor.

Figure 5.9: Power function (y = 3x^0.5).
Exponential functions look similar to the power functions discussed earlier, with one important difference: the variable x in exponential functions is now the power, rather than the base. Previously, a power function like f(x) = x³ meant that the base is x, raised to the power of 3. But in the case of exponential functions, they have the following general form:

f(x) = b₀ · a^(b₁x)

where b₀ and b₁ are the function parameters, and the base a is a positive constant other than 1. The exponential function is often expressed in terms of base e, where e is the natural logarithm base, with a value of about 2.71828. The exponential function to base e is alternatively written as "exp( )", such as f(x) = b₀ · exp(b₁x), where exp(b₁x) is equivalent to e^(b₁x) (Figure 5.10).

Figure 5.10: Exponential and logarithmic functions (f(x) = exp(x), g(x) = ln(x), h(x) = x).
When the parameter b₁ is positive, exponential functions in terms of base e may be used to describe growth over a short time, in particular the growth of a microbial culture. The product of a power function with an exponential function is the integrand of the familiar gamma function, and this product can be taken in a generalized, complementary form. As an example, a function of this kind can be represented by fitting the curve y = α + (0.49 − α)·exp(−β(x − 8)) to the following (x, y) data set: (10, 0.49), (20, 0.42), (30, 0.40) and (40, 0.39), as in the figure:
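A least-squares fit of this model to the four data points can be sketched as follows (the starting values p0 are my own guesses):

```python
# Fit y = a + (0.49 - a) * exp(-b * (x - 8)) to the data set given in the text.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a + (0.49 - a) * np.exp(-b * (x - 8.0))

xdata = np.array([10.0, 20.0, 30.0, 40.0])
ydata = np.array([0.49, 0.42, 0.40, 0.39])

(a_hat, b_hat), _ = curve_fit(model, xdata, ydata, p0=(0.40, 0.1))
print(a_hat, b_hat)
print(np.abs(model(xdata, a_hat, b_hat) - ydata).max())  # worst residual
```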

Figure 5.11: fitting the function y = α + (0.49 − α)·exp(−β(x − 8)) to the data set.
The generalized gamma function is defined in the original form first introduced by Kobayashi in 1991. Then the appearance of this special function in analytical acoustics is briefly explained by formulating the Wiener-Hopf integral equation for a famous diffraction problem involving a finite strip or a single slit. The behaviour of the present formula is graphically illustrated and numerically compared with two existing formulas. Firstly, the present formula is compared with Kobayashi's asymptotic formula (1991), with a discussion of the lower bound of the argument yielding a relative error less than 0.0001. Secondly, the present formula is compared with Srivastava's exact formula (2005) from the viewpoint of computational accuracy and efficiency for large arguments. Finally, the author discusses the limitations of the present formula and future work concerning further mathematical improvement as well as practical applications of the generalized gamma function.


5-1- The Kobayashi's and Agarwal, Kalla Gamma type functions: The generalized gamma function is defined in the original form first introduced by Kobayashi in 1991. Despite its uniqueness and ubiquity in mathematics, the gamma function unfortunately has the characteristic of possessing singularities at zero and the negative integers; it is also used, through the factorial function, to compute non-integral values of the factorial. To overcome these difficulties, Kobayashi (1991) introduced a new type of generalized gamma function as follows:

Γ_r(k, n) = ∫₀^∞ x^(k−1) (x + n)^(−r) e^(−x) dx;  r, k, n > 0 ……(1)

This function is useful in many problems of diffraction theory and corrosion problems in new machines. However, this function was not used in statistics until 1996, when Agarwal and Kalla proposed a new generalization of the gamma distribution by considering a modified form of Kobayashi's gamma function. The Agarwal and Kalla gamma function, the "generalized gamma function", is the integral relationship defined as follows:

Γ_r(k, n, β) = ∫₀^∞ x^(k−1) (x + n)^(−r) e^(−βx) dx;  r, k, n, β > 0 ……(2)

Formula (1) proposed by Kobayashi is related to form (2) proposed by Agarwal and Kalla through the parameters r, k and β, as stated in the following property.

Property (1): For each of the parameters r, k, n, β > 0, the following mathematical formula holds:

Γ_r(k, n, β) = β^(r−k) Γ_r(k, nβ).

Proof: Change the variable y = βx, so that x = y/β and dy = β dx, the limits x = 0, x → ∞ becoming y = 0, y → ∞. Then

x^(k−1) (x + n)^(−r) e^(−βx) = (y/β)^(k−1) ((y + nβ)/β)^(−r) e^(−y) = β^(r−k+1) y^(k−1) (y + nβ)^(−r) e^(−y),

and the integral becomes

Γ_r(k, n, β) = β^(r−k) ∫₀^∞ y^(k−1) (y + nβ)^(−r) e^(−y) dy = β^(r−k) Γ_r(k, nβ).

Remark: The factor β^(r−k) tends to 0 when 0 < β < 1 and to ∞ when β > 1 as r − k → ∞, while for β = 1 the two functions Γ_r(k, n, β) and Γ_r(k, nβ) coincide with Γ_r(k, n).

Property (2): For each of the parameters r, k, β > 0 and n = 0, the following mathematical formula holds:

Γ_r(k, 0, β) = β^(r−k) Γ(k − r), for k > r.

Proof: With n = 0 the factor (x + 0)^(−r) = x^(−r), so

Γ_r(k, 0, β) = ∫₀^∞ x^(k−1) x^(−r) e^(−βx) dx = ∫₀^∞ x^(k−r−1) e^(−βx) dx.

The change of variable y = βx, with x = y/β, dy = β dx and the limits x = 0, x → ∞ becoming y = 0, y → ∞, gives

Γ_r(k, 0, β) = β^(r−k) ∫₀^∞ y^(k−r−1) e^(−y) dy = β^(r−k) Γ(k − r).

It is possible to check the absolute value, real part and imaginary part of the extended generalized gamma function in the case β = 1 with the variable k − r.
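Properties (1) and (2) can be checked numerically; a sketch (mine) with arbitrary parameter values:

```python
# Numerical check of Gamma_r(k, n, beta) = beta^(r-k) * Gamma_r(k, n*beta)
# and Gamma_r(k, 0, beta) = beta^(r-k) * Gamma(k - r) for the Agarwal-Kalla
# function Gamma_r(k, n, beta) = int_0^inf x^(k-1) (x+n)^(-r) e^(-beta x) dx.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def ak_gamma(r, k, n, beta):
    f = lambda x: x ** (k - 1.0) * (x + n) ** (-r) * np.exp(-beta * x)
    value, _ = quad(f, 0.0, np.inf)
    return value

r, k, n, beta = 1.5, 4.0, 2.0, 0.7
lhs1 = ak_gamma(r, k, n, beta)
rhs1 = beta ** (r - k) * ak_gamma(r, k, n * beta, 1.0)
lhs2 = ak_gamma(r, k, 0.0, beta)
rhs2 = beta ** (r - k) * gamma(k - r)
print(lhs1, rhs1)  # equal, Property (1)
print(lhs2, rhs2)  # equal, Property (2)
```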


Figure 5.12: plot of the absolute value, real part and imaginary part of the gamma function.
Power-law functions are common in science and engineering. A surprising property is that the Fourier transform of a power law is also a power law. But this is only the start; there are many interesting features that soon become apparent. This may even be the key to solving an 80-year mystery in physics. The general form is x^α ↔ w^(−(α+1)), where α is a constant; for example, u(x)·x^(k−1)(x + n)^(−r). Unfortunately, there are additional terms that distort this simple relation. First, the left side contains the term u(x), the unit step function. This is defined as u(x) = 0 for x < 0, and u(x) = 1 for x ≥ 0; in other words, this makes the time domain a one-sided power law. Second, the power law in the frequency domain pertains only to the magnitude; there is a phase term that doesn't resemble a power law at all. Third, there is a scaling factor in the frequency domain, Γ(α + 1). This is the Gamma function, which is essentially a continuous version of the factorial. Don't worry too much about this strange function; think of it simply as a constant that scales the amplitude of the frequency domain, depending on the value of α, with the formula relation:

FT: u(x)·x^α  →  Mag = Γ(α + 1)·w^(−(α+1)),  Phase = −π(α + 1)/2

These two graphics show the real part (left) and imaginary part (right) of Γ_r(k, n, β) over the (r, k, n, β)-plane.

Figure 5.13: plot of the generalized gamma function for some values of r, k, n, β.

Definition: A function f(x) is concave if −f(x) is convex. Equivalently, the line connecting two points on the graph lies below the graph; in symbols, reverse the direction of the inequality for convexity. A function f(x) is log-concave if log(f(x)) is concave. The basic properties of convex functions are obvious: it is easy to show that the sum of two convex functions is convex, the maximum of two convex functions is convex, etc. It would seem that log-concave functions would be unremarkable because they are so simply related to convex functions, but they have some surprising properties.
The gamma function is finite except at the non-positive integers. Approaching from the right, it goes to +∞ at zero and the negative even integers and to −∞ at the negative odd integers. The gamma function can also be uniquely extended to an analytic function on the complex plane; the only singularities are the poles on the real axis. Here is a plot of the absolute value of the gamma function over the complex plane.


Figure 5.14: plot the absolute value of the gamma function on the complex plane. The volcano-like spikes are centered at the poles at non-positive integers. The steep ramp on the right shows how quickly the gamma function increases as the real part of the argument increases.

Figure 5.15: plot the Principal-branch logarithmic gamma function.

Figure 5.16: plot of the Extended Generalized Gamma function for some values of the parameters.

For complex functions it is also convenient to have a color function separate from the geometry function: the phase can then be plotted as color in the same picture. Color helps in visualizing complicated structure, especially in phase plots. Below, the phase (actually the sine of the phase, to make it continuous) of a Jacobi theta function θ3(1+4j/3, q) is plotted, restricted to |q| < 0.8, because it gets extremely messy for larger |q|.

Figure 5.17: plot the phase of the Barnes Generalized Gamma.

Figure 5.18: plot the phase of the Generalized Gamma. It is surprising that quantitative linear independence results have applications to the construction of so-called MIMO codes, where some particular results on the exponential function are useful. These applications are studied in the thesis of Olli Sankilampi, and below we see a picture of MIMO Mountains, where the tops give the best coding gain.


5-2- The Extended Generalized Gamma function: Since that time, many generalizations of this generalized gamma function have been considered by introducing new parameters. The Extended Generalized Gamma function was defined in its original form as first introduced by Bachioua, L., in 2004. It extends the Agarwal and Kalla generalized gamma function and is the integral relationship defined as follows:

Γ_r(k, p, m, n, β) = ∫₀^∞ x^(k−1) (x^m + n)^(−r) e^(−βx^p) dx;  k, p, m, n, β > 0, r ∈ IR …(3)

Formula (3) proposed by Bachioua is related to form (2) proposed by Agarwal and Kalla, with p = 1, through the parameters k, m, n, r and β, as stated in the following property.

Property (3): For each of the parameters k, n, β > 0 and r ∈ IR, the following mathematical formula holds:

Γ_r(k, 1, 1, n, β) = Γ_r(k, n, β).

Proof: The result follows by direct substitution of the parameter values p = m = 1; formula (2) above is therefore a special case of the generalized formula (3). In order to illustrate the proposed generalized gamma function, Figure 5.19 shows some cases of this function.

Figure 5.19: plot of some cases of the Extended Generalized Gamma function.

A new 6-parameter extended form of Kobayashi's generalized gamma function is defined, and several of its properties and recurrence formulas are derived. Based on this new function we introduce a new 6-parameter extended generalized gamma distribution with a model function. Many new distributions are shown to be particular cases of this suggested distribution, including some familiar known distributions. In terms of the model function, statistical properties, reliability and hazard functions, and estimation of some parameters of the distribution are studied. Also, the form of the distribution is considered under various forms of the model function.

Property (4): For r ∈ IR and k, p, m, n, β > 0, this 6-parameter function can be regarded as an extension of Kobayashi's generalized gamma function, since

Γ_r(k, 1, 1, n, 1) = ∫₀^∞ x^(k−1) (x + n)^(−r) e^(−x) dx = Γ_r(k, n);  for k, n, r > 0.

Also, this function is reduced to the well-known gamma function when r = 0:

Γ_0(k, 1, m, n, 1) = ∫₀^∞ x^(k−1) e^(−x) dx = Γ(k),  k > 0.

Theorem (1): The improper integral function satisfies

Γ_0(k, p, m, n, β) = (β^(−k/p)/p) Γ(k/p);  k/p > 0.

Proof: For all k, p, m, n, β > 0 and r = 0, we have for x > 0

Γ_0(k, p, m, n, β) = ∫₀^∞ x^(k−1) (x^m + n)^(−0) e^(−βx^p) dx = ∫₀^∞ x^(k−1) e^(−βx^p) dx.

Change the variable y = βx^p, so that x = (y/β)^(1/p) and dy = βp x^(p−1) dx, the limits x = 0, x → ∞ becoming y = 0, y → ∞. Then

x^(k−1) e^(−βx^p) = (y/β)^((k−1)/p) e^(−y),

and the integral becomes

Γ_0(k, p, m, n, β) = (β^(−k/p)/p) ∫₀^∞ y^(k/p − 1) e^(−y) dy = (β^(−k/p)/p) Γ(k/p),  k/p > 0.

To investigate the properties of the function Γ_r(k, p, m, n, β), we first consider the problem of the existence of the function.

Theorem (2): The improper integral function Γ_r(k, p, m, n, β) satisfies

0 < Γ_r(k, p, m, n, β) < ∞.

Proof: For all k, n, m > 0 and r ≥ 0, since x^m + n ≥ x^m, we have for x > 0

0 < x^(k−1) (x^m + n)^(−r) ≤ x^(k−rm−1).

Thus, for β, p > 0,

0 < Γ_r(k, p, m, n, β) ≤ ∫₀^∞ x^(k−rm−1) e^(−βx^p) dx.

But, by Theorem (1),

∫₀^∞ x^(k−rm−1) e^(−βx^p) dx = (β^((rm−k)/p)/p) Γ((k − rm)/p) < ∞;  for k − rm > 0.

Hence 0 < Γ_r(k, p, m, n, β) < ∞ for r ≥ 0 whenever k/m > r.
For r < 0 and all k, n, p, m, β > 0, x > 0, we have, using the binomial formula,

0 < (x^m + n)^(−r) ≤ (x^m + n)^s = Σ_{i=0}^{s} C(s, i) n^(s−i) x^(mi),

where s = −r when r is a negative integer, and s = ⌈−r⌉ otherwise (taking x^m + n ≥ 1). Thus

0 < Γ_r(k, p, m, n, β) ≤ Σ_{i=0}^{s} C(s, i) n^(s−i) (β^(−(k+im)/p)/p) Γ((k + im)/p) < ∞.

Finally, for r = 0 and all k, p, m, n, β > 0, the result of Theorem (1) gives

0 < Γ_0(k, p, m, n, β) < ∞,

which proves the theorem.

Theorem (3): The function Γ_r(k, p, m, n, β) can be written as follows:

1. Γ_r(k, p, m, n, β) = (1/p) Γ_r(k/p, 1, m/p, n, β),
2. Γ_r(k, p, m, n, β) = (β^((rm−k)/p)/p) Γ_r(k/p, 1, m/p, nβ^(m/p), 1),
3. Γ_r(k, p, m, n, β) = (1/m) Γ_r(k/m, p/m, 1, n, β).

Proof: 1) Using the substitution y = x^p, for all k, p, m, n, β > 0 we have x = y^(1/p) and dy = p x^(p−1) dx, the limits x = 0, x → ∞ becoming y = 0, y → ∞. Then

x^(k−1) (x^m + n)^(−r) e^(−βx^p) = y^((k−1)/p) (y^(m/p) + n)^(−r) e^(−βy),

and the integral becomes

Γ_r(k, p, m, n, β) = (1/p) ∫₀^∞ y^(k/p − 1) (y^(m/p) + n)^(−r) e^(−βy) dy = (1/p) Γ_r(k/p, 1, m/p, n, β).

2) For the second form, use the substitution y = βx^p, so that x = (y/β)^(1/p) and dy = βp x^(p−1) dx. Then

x^(k−1) (x^m + n)^(−r) e^(−βx^p) = (y/β)^((k−1)/p) ((y/β)^(m/p) + n)^(−r) e^(−y),

and since ((y/β)^(m/p) + n)^(−r) = β^(rm/p) (y^(m/p) + nβ^(m/p))^(−r), the integral becomes

Γ_r(k, p, m, n, β) = (β^((rm−k)/p)/p) ∫₀^∞ y^(k/p − 1) (y^(m/p) + nβ^(m/p))^(−r) e^(−y) dy = (β^((rm−k)/p)/p) Γ_r(k/p, 1, m/p, nβ^(m/p), 1).

3) Using the substitution y = x^m, we have x = y^(1/m) and dy = m x^(m−1) dx. Then

x^(k−1) (x^m + n)^(−r) e^(−βx^p) = y^((k−1)/m) (y + n)^(−r) e^(−βy^(p/m)),

and the integral becomes

Γ_r(k, p, m, n, β) = (1/m) ∫₀^∞ y^(k/m − 1) (y + n)^(−r) e^(−βy^(p/m)) dy = (1/m) Γ_r(k/m, p/m, 1, n, β).

The parameters (k, m, n, r) in Γ_r(k, p, m, n, β) are essential, while the other parameters (p, β) can be regarded as index parameters. Also, when p → ∞ or m → ∞, we have Γ_r(k, p, m, n, β) → 0.

Theorem (4): The improper integral function Γ_r(k, p, m, n, β) satisfies the following recurrence relations:

1) Γ_r(k, p, m, n, β) = (1/k)[(rm/p) Γ_{r+1}(k/p + m/p, 1, m/p, n, β) + β Γ_r(k/p + 1, 1, m/p, n, β)];

2) Γ_r(k, p, m, n, β) = (β^((rm−k)/p)/k)[(rm/p) Γ_{r+1}(k/p + m/p, 1, m/p, nβ^(m/p), 1) + Γ_r(k/p + 1, 1, m/p, nβ^(m/p), 1)].

Proof: These follow by applying integration by parts to forms (1) and (2) of Γ_r(k, p, m, n, β) given in Theorem (3), respectively. From form (1),

Γ_r(k, p, m, n, β) = (1/p) ∫₀^∞ y^(k/p − 1) (y^(m/p) + n)^(−r) e^(−βy) dy,

and integrating by parts with u = (y^(m/p) + n)^(−r) e^(−βy) and dv = y^(k/p − 1) dy gives

Γ_r(k, p, m, n, β) = (1/k) ∫₀^∞ [(rm/p) y^((k+m)/p − 1) (y^(m/p) + n)^(−r−1) + β y^(k/p) (y^(m/p) + n)^(−r)] e^(−βy) dy
= (1/k)[(rm/p) Γ_{r+1}(k/p + m/p, 1, m/p, n, β) + β Γ_r(k/p + 1, 1, m/p, n, β)].

The second relation follows in the same way from form (2) of Theorem (3).

Theorem (5): When p = m, we have

\Gamma_r(k,m,m,n,\lambda) = \frac{\lambda^{-\frac{rm+k}{m}}}{m}\left[r\,\Gamma_{r-1}\!\left(\frac{k}{m},1,1,n\lambda,1\right) + \left(\frac{k}{m}-1\right)\Gamma_r\!\left(\frac{k}{m}-1,1,1,n\lambda,1\right)\right] \approx \frac{\lambda^{-\frac{rm+k}{m}}}{m}\,\Gamma\!\left(\frac{k}{m}+r\right),

for \frac{k}{m}+r > 0 and for small n\lambda.

Proof: From the result of Theorem (4), recurrence (2), with p = m (so that m/p = 1),

\Gamma_r(k,m,m,n,\lambda) = \frac{\lambda^{-\frac{rm+k}{m}}}{m}\left[r\,\Gamma_{r-1}\!\left(\frac{k}{m},1,1,n\lambda,1\right) + \left(\frac{k}{m}-1\right)\Gamma_r\!\left(\frac{k}{m}-1,1,1,n\lambda,1\right)\right].

For small n\lambda, each term approaches the corresponding gamma integral, \Gamma_{r-1}(\frac{k}{m},1,1,0,1) = \Gamma(\frac{k}{m}+r-1) and \Gamma_r(\frac{k}{m}-1,1,1,0,1) = \Gamma(\frac{k}{m}+r-1), so the bracket approaches

\left(r+\frac{k}{m}-1\right)\Gamma\!\left(\frac{k}{m}+r-1\right) = \Gamma\!\left(\frac{k}{m}+r\right),

and the proof follows.

Remark: This theorem shows that, when p = m,

\lim_{k\to\infty}\Gamma_r(k,p,m,n,\lambda) = \begin{cases} 0, & \lambda > 1 \\ \infty, & \lambda < 1 \end{cases}

Theorem (6): When n = 0, we have

\Gamma_r(k,p,m,0,\lambda) = \frac{\lambda^{-\frac{rm+k}{p}}}{p}\,\Gamma\!\left(\frac{k+rm}{p}\right); \qquad and, for p = m, \qquad \Gamma_r(k,m,m,0,\lambda) = \frac{\lambda^{-\frac{rm+k}{m}}}{m}\,\Gamma\!\left(\frac{k}{m}+r\right).

Proof: Using the transformation y = \lambda x^p, for all k, p, m, \lambda > 0, r \ge 0 and x > 0, we have x = (y/\lambda)^{1/p}; the limits x \to 0 and x \to \infty become y \to 0 and y \to \infty; and dy = \lambda p\,x^{p-1}\,dx. For n = 0 the factor (x^m+0)^r = x^{rm}, so the integral reduces to

\Gamma_r(k,p,m,0,\lambda) = \int_0^{\infty} x^{k+rm-1} e^{-\lambda x^p}\,dx = \frac{\lambda^{-\frac{rm+k}{p}}}{p}\int_0^{\infty} y^{\frac{k+rm}{p}-1} e^{-y}\,dy = \frac{\lambda^{-\frac{rm+k}{p}}}{p}\,\Gamma\!\left(\frac{k+rm}{p}\right),

and the proof follows.

Remark: From this theorem, since \Gamma(2) = \Gamma(1) = 1, we have when n = 0 and \lambda = p:

\Gamma_r(k,p,m,0,p) = \begin{cases} p^{-2}, & for \ k+rm = p \\ p^{-3}, & for \ k+rm = 2p \end{cases}
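Theorem (6)'s closed form is easy to confirm numerically against Python's built-in gamma function. The sketch below is an illustration under our own assumptions (the helper name and the Simpson-rule integrator after x = t/(1-t) are ours):

```python
import math

def egg_n0(k, p, m, lam, r, N=20000):
    """Gamma_r(k,p,m,0,lam) = int_0^inf x^(k+rm-1) e^(-lam x^p) dx, evaluated
    numerically via x = t/(1-t) and Simpson's rule (illustrative helper)."""
    def g(t):
        if t <= 0.0 or t >= 1.0:
            return 0.0
        x = t / (1.0 - t)
        return x**(k + r * m - 1) * math.exp(-lam * x**p) / (1.0 - t)**2
    h = 1.0 / N
    return sum((4 if i % 2 else 2) * g(i * h) for i in range(1, N)) * h / 3.0

k, p, m, lam, r = 2.0, 1.5, 1.2, 0.8, 1.1
closed = lam**(-(k + r * m) / p) * math.gamma((k + r * m) / p) / p
numeric = egg_n0(k, p, m, lam, r)
print(abs(numeric - closed) < 1e-6)
```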

Another feature of the function \Gamma_r(k,p,m,n,\lambda) is that it produces several important new sub-types of the extended generalized gamma function, obtained simply by reducing the number of parameters below six through assigning suitable values to the eliminated parameters. In the next section, we shall define a new type of distribution based on the function \Gamma_r(k,p,m,n,\lambda) and study many of its properties. The extension of the gamma function also allows functions to be defined over an extended version of the generalized gamma; the resulting cases coincide with one another in some special situations and differ from one another in others, and their forms are displayed below for clarification.

Figure 5.20: The representation of the extended generalized gamma set of functions.

Because of the role of the displacement (location), scale and shape parameters, the researcher developed a general formula built on an auxiliary function \varphi, given by the following integral:

\Gamma_r(k,p,m,n,\lambda) = \int_a^b \varphi(x)^{k-1}\left(\varphi(x)^m+n\right)^{r} e^{-\lambda\,\varphi(x)^p}\,\varphi'(x)\,dx; \quad \lambda, k, p, m, n > 0;\ r \in \mathbb{R} \quad …(4)

The function \varphi(.) is required to be differentiable on the interval [a,b] of integration and to satisfy the boundary conditions

\varphi(.): [a,b] \to \mathbb{R}^{+}, \qquad x \mapsto \varphi(x); \qquad with \ \varphi(a) = 0,\ \varphi(b) = \infty.

Formula (4) extends the form (3) proposed by Bachioua; the relationship between the two, in terms of the function \varphi(.), is the content of the following Property (4).

Property (4): For every such function \varphi(.), the following mathematical identity holds:



\int_a^b \varphi(x)^{k-1}\left(\varphi(x)^m+n\right)^{r} e^{-\lambda\varphi(x)^p}\,\varphi'(x)\,dx = \int_0^{\infty} x^{k-1}(x^m+n)^{r} e^{-\lambda x^p}\,dx; \quad \lambda, k, p, m, n > 0;\ r \in \mathbb{R}.

Proof: One can verify that the integral does not depend on the choice of \varphi, because a change of variable brings it back to the same formula. Let y = \varphi(x), so that dy = \varphi'(x)\,dx; the limits x \to a and x \to b become y \to 0 and y \to \infty, and

\varphi(x)^{k-1}\left(\varphi(x)^m+n\right)^{r} e^{-\lambda\varphi(x)^p} = y^{k-1}(y^m+n)^{r} e^{-\lambda y^p}.

Hence

\Gamma_r(k,p,m,n,\lambda) = \int_a^b \varphi(x)^{k-1}(\varphi(x)^m+n)^{r} e^{-\lambda\varphi(x)^p}\varphi'(x)\,dx = \int_0^{\infty} y^{k-1}(y^m+n)^{r} e^{-\lambda y^p}\,dy.
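Property (4) can be probed numerically with any admissible \varphi. The sketch below is our own illustration (the choice \varphi(x) = \tan x on [0, \pi/2), the helper name, and the Simpson integrator are assumptions, not the author's): with n = 0 the value computed through \varphi must match the closed form of Theorem (6).

```python
import math

def egg_via_phi(k, p, m, n, lam, r, N=20000):
    """Property (4) sketch: evaluate Gamma_r through phi(x) = tan(x) on
    [0, pi/2), which satisfies phi(0) = 0 and phi -> infinity at pi/2.
    (Illustrative check only; integrator and names are ours.)"""
    b = math.pi / 2
    def g(x):
        if x <= 0.0 or x >= b:
            return 0.0
        t = math.tan(x)
        return (t**(k - 1) * (t**m + n)**r * math.exp(-lam * t**p)
                / math.cos(x)**2)            # phi'(x) = sec^2 x
    h = b / N
    return sum((4 if i % 2 else 2) * g(i * h) for i in range(1, N)) * h / 3

k, p, m, lam, r = 2.0, 1.5, 1.2, 0.8, 1.1
closed = lam**(-(k + r * m) / p) * math.gamma((k + r * m) / p) / p
via_phi = egg_via_phi(k, p, m, 0.0, lam, r)
print(abs(via_phi - closed) < 1e-6)
```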

Remark (1): For \varphi(x) = \left(\frac{x-\mu}{\delta}\right), the extended generalized gamma function takes the form of the following integral:

\Gamma_r(k,p,m,n,\lambda,\mu,\delta) = \frac{1}{\delta}\int_{\mu}^{\infty}\left(\frac{x-\mu}{\delta}\right)^{k-1}\left[\left(\frac{x-\mu}{\delta}\right)^{m}+n\right]^{r} e^{-\lambda\left(\frac{x-\mu}{\delta}\right)^{p}}\,dx \quad …(5)

where \lambda, k, p, m, n > 0; r, \mu \in \mathbb{R}, \delta \in \mathbb{R}^{*} (the limits are chosen so that \varphi(a)=0 and \varphi(b)=\infty). The formula in (5) is expressed in terms of eight parameters, which helps researchers shift and stretch the curve in all possible directions; this paves the way for defining distributions that can track changes in the underlying data, in cases where the basic form cannot be expanded to cover the targeted applications.

Remark (2): A proposed formula for the function \varphi(.) satisfying the basic conditions is:

\varphi(x) = \frac{x-a}{b-x}; \qquad x \in [a,b].

This formula is compatible with the basic conditions and allows the domain of integration to be expanded so that the generalized beta function appears as a special case of the extended gamma function. In view of the proposed formula

\varphi(x) = (x-a)(b-x)^{-1}; \qquad x \in [a,b],

let us change the variable to y = \varphi(x) = (x-a)(b-x)^{-1}; the limits x \to a and x \to b become y \to 0 and y \to \infty, and

d\varphi(x) = \frac{b-a}{(b-x)^{2}}\,dx.

Then

\Gamma_r(k,p,m,n,\lambda) = \int_a^b \varphi(x)^{k-1}\left(\varphi(x)^{m}+n\right)^{r} e^{-\lambda\varphi(x)^{p}}\,\varphi'(x)\,dx

= \int_a^b \left[(x-a)(b-x)^{-1}\right]^{k-1}\left[\left((x-a)(b-x)^{-1}\right)^{m}+n\right]^{r} e^{-\lambda\left((x-a)(b-x)^{-1}\right)^{p}}\,\frac{b-a}{(b-x)^{2}}\,dx

= (b-a)\int_a^b (x-a)^{k-1}(b-x)^{-(rm+k)-1}\left[(x-a)^{m}+n(b-x)^{m}\right]^{r} e^{-\lambda\left(\frac{x-a}{b-x}\right)^{p}}\,dx.

Result: These modifications allow the researcher to define a new model function with eight parameters, which smoothly yields a formula shared with the previously proposed generalized beta function. This function has the form:

\Gamma_r(k,p,m,n,\lambda,a,b) = (b-a)\int_a^b (x-a)^{k-1}(b-x)^{-(rm+k)-1}\left[(x-a)^{m}+n(b-x)^{m}\right]^{r} e^{-\lambda\left(\frac{x-a}{b-x}\right)^{p}}\,dx,

where \lambda, k, p, m, n > 0; r \in \mathbb{R}, a, b \in \mathbb{R}, a < b.

Although the gamma function can nowadays be calculated virtually as easily as any mathematically simpler function with a modern computer, even with a programmable pocket calculator, this was of course not always the case. The gamma function \Gamma(x) was discovered by Euler in the late 1720s in an attempt to find an analytic continuation of the factorial function. This function is a cornerstone of the theory of special functions.

Thus \Gamma(x) is a meromorphic function equal to (x-1)! when x is a positive integer. Euler found its representation both as an infinite integral and as a limit of a finite product; let us derive the latter representation following Euler's generalization of the factorial. Until the mid-20th century, mathematicians relied on hand-made tables, notably a table of the gamma function computed by Gauss in 1813 and one computed by Legendre in 1825. Tables of complex values of the gamma function, as well as hand-drawn graphs, were given in Tables of Higher Functions by Jahnke and Emde, first published in Germany in 1909. According to Michael Berry, "the publication in J&E of a three-dimensional graph showing the poles of the gamma function in the complex plane acquired an almost iconic status." There was in fact little practical need for anything but real values of the gamma function until the 1930s, when applications for the complex gamma function were discovered in theoretical physics. As electronic computers became available for the production of tables in the 1950s, several extensive tables for the complex gamma function were published to meet the demand, including a table accurate to 12 decimal places. Abramowitz and Stegun became the standard reference for this and many other special functions after its publication in 1964.

Figure 5.21: The graph of the complex extended generalized gamma function.

The plots above show the values obtained by taking the natural logarithm of the extended generalized gamma function \Gamma(z) = \Gamma_r(k,p,m,n,\lambda,a,b). Note that this introduces a complicated branch-cut structure inherited from the gamma function, here in the general case z = (r,k,p,m,n,\lambda,a,b).

Figure 5.22: The graphs of the extended generalized gamma function for some cases.

The integral representation of the extended generalized gamma function converges only for z in the right half-plane, but the function can be analytically continued to the left half-plane as well, starting from the region where it is already defined by the integral above, and so on. This works fine unless z is zero or a negative integer, since then we are eventually forced to divide by zero; the gamma function has simple poles at these points. In this sense the gamma function is a continuous version of the factorial. For example, for the absolute value of the gamma function, we have:


Figure 5.23: The graphs of simple extended generalized gamma function cases.

These special functions appearing in the extended generalized gamma function can usually be evaluated for arbitrary complex values of their arguments. Often, however, the defining relations given here apply only for some special choices of arguments. In those cases, the full function corresponds to a suitable extension or analytic continuation of the defining relations. Thus, for example, integral representations of functions are valid only when the integral exists, but the functions themselves can usually be defined elsewhere by analytic continuation.

Figure 5.24: The graphs of the extended generalized gamma function.

The extended generalized gamma functions are an extension of the ordinary generalized gamma function, whose mathematical properties have recently been investigated by my research group in connection with many physical problems, such as spontaneous or stimulated scattering processes to which the dipole approximation cannot be applied. It is worth mentioning the scattering of free or weakly bound electrons by intense laser fields, the spectral properties of synchrotron radiation emitted by relativistic electrons passing through magnetic undulators, the gain of free-electron lasers operating on higher off-axis harmonics, and stimulated nuclear beta decay in very intense electromagnetic fields.

Figure 5.25: Diagram of some extended gamma and beta function relationships.

Owing to the importance and number of these applications, the development of computational methods for the extended generalized gamma function deserves some interest and is particularly addressed in this work. Moreover, an infinite-dimensional analogue of the ordinary Anger function is presented, showing the relevant properties and connections with Fourier series of suitably smooth functions.

The corresponding generalized beta function is also introduced and its link with the generalized gamma function described. This result may be of particular importance in those problems, frequently occurring in physics, where higher-order harmonics can be neglected. In fact, in these applications, from the convergence rate of the involved Fourier series one can obtain a useful estimate of the number of variables that can be taken different from zero in the associated gamma function. A straightforward extension to the case of one-parameter many-variable beta functions is also considered, thus allowing an immediate application to the crystallographic statistics results mentioned above. Finally, it is possible to express the characteristic functions of both planes in terms of three- or more-variable beta functions and, at the same time, to provide an accurate evaluation of them, suitable for application purposes.

5-3-Extended Generalized Beta Function: There are two widely used utility functions, the gamma and beta functions, which are used in statistics to calculate distribution values. These functions always return a double value and take two double values as input. The beta function was studied by Euler and Legendre and was named by Jacques Binet. It is usually expressed as an improper integral and is equal to the quotient of the product of the values of the gamma function at each parameter divided by the value of the gamma function at the sum of the parameters. The classical formula for the Euler beta function in terms of the gamma function, the formula for a Vandermonde determinant, and Legendre's classical equation in the theory of elliptic functions are all special cases of general determinant formulas for matrices of multidimensional hypergeometric integrals; the latter formulas express the determinant as a product of critical values of the integrands. A graph is shown below:


Figure 5.26: A graph of B_{0,1}(\alpha,\beta) on the space 0 < \alpha \le 10, 0 < \beta \le 10.

The beta function B(.,.) is defined as an integral with two parameters and can be expressed in terms of the gamma function; many integrals can be reduced to the evaluation of beta functions. The generalized beta function, first introduced in its original form and used together with the gamma function to compute non-integral values, is defined as follows:

B_{a,b}(\alpha,\beta) = \int_a^b (x-a)^{\alpha-1}(b-x)^{\beta-1}\,dx; \qquad \alpha, \beta > 0,\ x \in [a,b].

To search for an extended version of the generalized beta function, the researcher repeatedly re-derived the final version, proposing to re-draft the function using the simple case under the following conditions. The function \varphi(.) is required to be differentiable on the interval [a,b] of integration and to satisfy the boundary conditions

\varphi(.): [a,b] \to \mathbb{R}, \qquad x \mapsto \varphi(x); \qquad with \ \varphi(a) = a,\ \varphi(b) = b.

Under these conditions the beta function can be rewritten in extended form as follows:

B_{a,b}(\alpha,\beta) = \int_a^b (\varphi(x)-a)^{\alpha-1}(b-\varphi(x))^{\beta-1}\,\varphi'(x)\,dx; \qquad \alpha, \beta > 0,\ x \in [a,b].

To identify the extended formulas, take the simple case where the variable of integration runs between 0 and 1. In this formulation we can write

B_{0,1}(\alpha,\beta) = B(\alpha,\beta) = \int_0^1 \varphi(x)^{\alpha-1}(1-\varphi(x))^{\beta-1}\,\varphi'(x)\,dx; \qquad \alpha, \beta > 0,\ x \in [a,b],

with the conditions \varphi(a) = 0, \varphi(b) = 1.

In determining the formula, we look for a form of the function for which the region of overlap between the gamma and beta functions is as large as possible, even if it is not fully comprehensive. This link keeps the definition of the functions simple, and it opens many formulas to the best possible investigation. The simple formula of the beta function, defined by an integral over the interval [0,1], can be adapted to a form over the interval [0,\infty); this is the content of the following Property (5). A graph is shown below:

Figure 5.27: A graph of B_{0,1}(\alpha,\beta) on the square 0 < \alpha \le 10, 0 < \beta \le 10.

Property (5): For each pair of parameters \alpha, \beta > 0, the following mathematical identity holds:

\int_0^1 \left(y^{\alpha-1}+y^{\beta-1}\right)(1+y)^{-(\alpha+\beta)}\,dy = B(\alpha,\beta) = \int_0^1 y^{\alpha-1}(1-y)^{\beta-1}\,dy.

Proof: Start from the simple formula of the beta function,

B(\alpha,\beta) = \int_0^1 x^{\alpha-1}(1-x)^{\beta-1}\,dx.

Change variables by y = \frac{x}{1-x}; the limits x \to 0 and x \to 1 become y \to 0 and y \to \infty, and

x = \frac{y}{1+y}, \qquad 1-x = \frac{1}{1+y}, \qquad dx = \frac{dy}{(1+y)^{2}}.

This yields the useful alternative form of the beta function:

B(\alpha,\beta) = \int_0^{\infty}\left(\frac{y}{1+y}\right)^{\alpha-1}\left(\frac{1}{1+y}\right)^{\beta-1}\frac{dy}{(1+y)^{2}} = \int_0^{\infty}\frac{y^{\alpha-1}}{(1+y)^{\alpha+\beta}}\,dy.

We may split the range of integration:

B(\alpha,\beta) = \int_0^{1}\frac{y^{\alpha-1}}{(1+y)^{\alpha+\beta}}\,dy + \int_1^{\infty}\frac{y^{\alpha-1}}{(1+y)^{\alpha+\beta}}\,dy.

In the second integral put y = \frac{1}{z}:

\int_1^{\infty}\frac{y^{\alpha-1}}{(1+y)^{\alpha+\beta}}\,dy = \int_0^{1}\frac{z^{1-\alpha}\,z^{\alpha+\beta}}{(1+z)^{\alpha+\beta}}\,\frac{dz}{z^{2}} = \int_0^{1}\frac{z^{\beta-1}}{(1+z)^{\alpha+\beta}}\,dz.

Then

B(\alpha,\beta) = \int_0^{1}\frac{y^{\alpha-1}+y^{\beta-1}}{(1+y)^{\alpha+\beta}}\,dy,

which gives the important formula:

\int_0^1 \left(y^{\alpha-1}+y^{\beta-1}\right)(1+y)^{-(\alpha+\beta)}\,dy = B(\alpha,\beta) = \int_0^{\infty} y^{\alpha-1}(1+y)^{-(\alpha+\beta)}\,dy.
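The "important formula" above is easy to test numerically against the gamma-function expression for B(\alpha,\beta). The sketch below is ours (the Simpson helper and the chosen parameter values are illustrative assumptions):

```python
import math

def simpson(f, a, b, N=10000):
    # Composite Simpson's rule (N even); a small helper for this check.
    h = (b - a) / N
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, N))
    return s * h / 3

alpha, beta = 2.3, 3.1
lhs = simpson(lambda y: (y**(alpha - 1) + y**(beta - 1)) * (1 + y)**-(alpha + beta),
              0.0, 1.0)
B = math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)
print(abs(lhs - B) < 1e-9)
```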

Figure 5.28: Real beta function plot.

Property (6): For each pair of parameters \alpha, \beta > 0, the beta function links the integrals over the two ranges [0,1] and [0,\infty) through the difference formula:

\int_1^{\infty} y^{\alpha-1}(1+y)^{-(\alpha+\beta)}\,dy = \int_0^{\infty} y^{\alpha-1}(1+y)^{-(\alpha+\beta)}\,dy - \int_0^{1} y^{\alpha-1}(1+y)^{-(\alpha+\beta)}\,dy.

Proof: This follows from Property (5) by re-arranging the limits of integration.

Proof: As a result of the property (5) and re-arrange the limits for the results of integration we get the result provided by the property (6). And for the generalized formula, we can let the transformation for the generalized cases; a ,b ( ,  )   ( ( x ) 1 (1   ( x ))  1  ( x )dx;  ,   0, x  a, b b

a

Is possible to change variables  ( x ) 

xa xa , then the limits the xb b x

function

 ( x)  0 ,  ( x)  

values

transformed

is

then; x 

 1  ( x)  1  d ( x )  dx    d ( x ) 2  1   ( x)2  1   ( x ) 1   ( x )  

Useful alternative forms of generalized beta-function:  1  ( x)  1  dy  dx    d ( x ) 2  1   ( x)2  1   ( x ) 1   ( x )  

245

b  ( x)  a , 1   ( x)



  ( x)    ,       1   ( x )  0

 1

  1    1   ( x) 

 1

1 d ( x ) 1   ( x ) 2



 ( x ) 1 d ( x )   0 1   ( x ) 

  ,    

We may assume; 

 ( x ) 1  ( x ) 1 d  ( x )  d ( x )          1   ( x ) 1   ( x ) 0 0

1

  ,    

In the 2nd integral put  ( x )  

1 z( x )

 ( x ) 1 1 1 1 z( x )    12 1 1   ( x)  d ( x)  0 z( x) 1  1   z( x) 2 dz( x)  0 1  z( x)  dz( x) 1    z( x)  1

1

z ( x )  1 z ( x )  1  dz    0 1  z( x)  dz( x). 0 1  z ( x )  1

1

Then   ,     z( x ) 1  z( x )  1  1  z( x ) 1

(    )

dz( x ) , and the formula;

0

 a ,b ( ,  )   ( ( x ) 1 (1   ( x ))  1  ( x )dx;  ,   0, x  a, b  b

a

  z ( x ) 1  z ( x )  1  1  z ( x ) 1

(   )

dz ( x );

0 

  z ( x ) 1 1  z ( x )

(   )

d z( x )

0

and for  ( x ) 

xa  ( x  a )(b  x ) 1 then; b x

a ,b ( ,  )    ( x ) 1 (1   ( x ))  1  ( x )dx;  ,   0, x  a, b  b

a



b

  (( x  a )(b  x ) 1 ) 1 1  ( x  a )(b  x ) 1 a



 1

(b  a )(b  x ) 2 dx;  ,   0, x  a, b 

 (b  a )  ( x  a ) 1 (b  x )1 1 2 (b  a )  2 x  dx;  ,   0, x  a, b . b

 1

a

 (b  a )  ( x  a ) 1 (b  x ) (   )1 (b  a )  2 x  dx;  ,   0, x  a, b. b

 1

a

Result: These modifications allow the researcher to define a new model function with four parameters, which smoothly yields a common formula with the generalized gamma functions proposed previously. This function takes the form:

B_{a,b}(\alpha,\beta) = \int_a^b (x-a)^{\alpha-1}(b-x)^{\beta-1}\,dx = (b-a)^{\alpha+\beta-1}\,B(\alpha,\beta),

where \alpha, \beta > 0, x \in [a,b]. The beta function is symmetric, B(\alpha,\beta) = B(\beta,\alpha), and when \alpha and \beta are positive integers it follows trivially from the definition of the gamma function that

B(\alpha,\beta) = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)} = \frac{(\alpha-1)!\,(\beta-1)!}{(\alpha+\beta-1)!}.

It has many other forms, including one as the convolution of truncated power functions. Graphs are shown below:

Figure 5.29: A graph of B_{0,1}(\alpha,\beta) on the square 0 < \alpha \le 100, 0 < \beta \le 100.

As an illustration of the shape of this function, the following graphs show the variation over a wide range of \alpha but small \beta. A graph is shown below:

Figure 5.30: A graph of B_{0,1}(\alpha,\beta) on the real plane (0 < \alpha \le 10).

Figure 5.31: A graph of B_{0,1}(.,\beta) on the real plane (-6 \le \beta \le 6).

Figure 5.32: A graph of B_{0,1}(\alpha,\beta) in the complex plane.

5-4-Examples with Applications:

Example (1): A collection and description of special mathematical functions. The functions include the error function, the psi function, the incomplete gamma function, the gamma function for complex argument, and the Pochhammer symbol. The gamma function, the logarithm of the gamma function, their first four derivatives, and the beta function and the logarithm of the beta function are part of R's base package. For example, these functions are required to value Asian options based on the theory of exponential Brownian motion. Other important functional equations for the gamma function are Euler's reflection formula,

\Gamma(1-z)\,\Gamma(z) = \frac{\pi}{\sin(\pi z)},

and the duplication formula,

\Gamma(z)\,\Gamma\!\left(z+\frac{1}{2}\right) = 2^{1-2z}\sqrt{\pi}\;\Gamma(2z).

The duplication formula is a special case of the multiplication theorem,

\Gamma(z)\,\Gamma\!\left(z+\frac{1}{m}\right)\Gamma\!\left(z+\frac{2}{m}\right)\cdots\Gamma\!\left(z+\frac{m-1}{m}\right) = (2\pi)^{(m-1)/2}\,m^{\frac{1}{2}-mz}\,\Gamma(mz).

A simple but useful property, which can be seen from the limit definition, is \overline{\Gamma(z)} = \Gamma(\bar z). Perhaps the best-known value of the gamma function at a non-integer argument is

\Gamma\!\left(\frac{1}{2}\right) = \sqrt{\pi},

which can be found by setting z = \frac{1}{2} in the reflection or duplication formulas, by using the relation to the beta function given below with x = y = \frac{1}{2}, or simply by making the substitution u = \sqrt{t} in the integral definition of the gamma function, resulting in a Gaussian integral. In general, for non-negative integer values of n we have:

\Gamma\!\left(\frac{1}{2}+n\right) = \frac{(2n)!}{4^{n}\,n!}\sqrt{\pi} = \binom{n-1/2}{n}\,n!\,\sqrt{\pi},

\Gamma\!\left(\frac{1}{2}-n\right) = \frac{(-4)^{n}\,n!}{(2n)!}\sqrt{\pi} = \frac{(-2)^{n}}{(2n-1)!!}\sqrt{\pi},

where n!! denotes the double factorial and, by convention, (-1)!! = 1. It might be tempting to generalize the result \Gamma(\frac{1}{2}) = \sqrt{\pi} by looking for a formula for other individual values \Gamma(r) where r is rational. However, these numbers are not known to be expressible by themselves in terms of elementary functions. It has been proved that \Gamma(n+r) is a transcendental number, algebraically independent of \pi, for any integer n and each of the fractions r = \frac{1}{6}, \frac{1}{4}, \frac{1}{3}, \frac{2}{3}, \frac{3}{4}, \frac{5}{6}. In general, when computing values of the gamma function, we must settle for numerical approximations.

Example (2): For the derivative of the beta function we have

\frac{\partial}{\partial x}B(x,y) = B(x,y)\left(\frac{\Gamma'(x)}{\Gamma(x)} - \frac{\Gamma'(x+y)}{\Gamma(x+y)}\right) = B(x,y)\left(\psi(x)-\psi(x+y)\right),

where \psi(x) is the digamma function. Stirling's approximation gives the asymptotic formula

B(x,y) \sim \sqrt{2\pi}\;\frac{x^{x-1/2}\,y^{y-1/2}}{(x+y)^{x+y-1/2}}

for large x and large y. If, on the other hand, x is large and y is fixed, then

B(x,y) \sim \Gamma(y)\,x^{-y}.
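A few of the identities in Examples (1) and (2) can be checked directly with Python's standard library. The sketch below is ours: the helper names, the finite-difference approximation of the digamma function via `math.lgamma`, and the step sizes and tolerances are illustrative assumptions.

```python
import math

# (a) Reflection formula and a half-integer value of the gamma function.
z = 0.3
refl_ok = abs(math.gamma(1 - z) * math.gamma(z)
              - math.pi / math.sin(math.pi * z)) < 1e-12
n = 4
half_ok = abs(math.gamma(0.5 + n)
              - math.factorial(2 * n) / (4**n * math.factorial(n))
              * math.sqrt(math.pi)) < 1e-9

# (b) dB/dx = B(x,y) * (psi(x) - psi(x+y)), with psi approximated by a
#     central difference of log-Gamma (the stdlib has no digamma).
def B(x, y):
    return math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))

def psi(x, h=1e-5):
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

x, y, h = 2.4, 1.7, 1e-5
dBdx = (B(x + h, y) - B(x - h, y)) / (2 * h)
deriv_ok = abs(dBdx - B(x, y) * (psi(x) - psi(x + y))) < 1e-6

print(refl_ok and half_ok and deriv_ok)
```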

Figure 5.33: A graph of B_{a,b}(x,y) for the same values of (a,b).

Example (3): The gamma function has the limit-product representation

\Gamma(x) = \lim_{n\to\infty}\frac{n!\;n^{x}}{\prod_{j=0}^{n}(j+x)}.

So far we have shown that we can generate the E-R function by the use of a fairly simple operational identity,

\varepsilon(x,p) = \frac{1}{(1+x)^{p}} = \frac{1}{\Gamma(p)}\int_0^{\infty}\xi^{\,p-1}e^{-\xi}\,e^{-x\xi}\,d\xi,

in which x may also be replaced by the derivative operator \hat D to give the operational form of the identity.

Example (4): The Euler beta function arises as a generalization of the binomial coefficient. Define

I(x,y) = \int_0^{\infty} e^{-x\theta}\left(1-e^{-\theta}\right)^{y-1}\,d\theta;

with the substitution e^{-\theta} = t this becomes

I(x,y) = \int_0^{1} t^{x-1}(1-t)^{y-1}\,dt = B(x,y),

and

B(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} = \frac{x+y}{xy}\binom{x+y}{x}^{-1}.

Expanding (1-t)^{y-1} by the binomial series and integrating term by term gives

B(x,y) = \sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\,\frac{\Gamma(y)}{\Gamma(y-n)}\,\frac{1}{x+n} = \sum_{m=0}^{\infty}\frac{(-1)^{m}}{m!}\,\frac{\Gamma(x)}{\Gamma(x-m)}\,\frac{1}{y+m}.
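The series for B(x,y) above converges slowly but steadily for positive non-integer y. The sketch below is ours: it builds the binomial coefficient \binom{y-1}{n} = \Gamma(y)/(n!\,\Gamma(y-n)) by recurrence (to avoid evaluating the gamma function at large negative arguments), and the truncation length and tolerance are illustrative assumptions.

```python
import math

def beta_series(x, y, terms=4000):
    """Partial sum of B(x,y) = sum_n (-1)^n binom(y-1,n) / (x+n),
    with binom(y-1,n) accumulated by the recurrence c -> c*(y-1-n)/(n+1)."""
    total, c = 0.0, 1.0           # c = binom(y-1, n), starting at n = 0
    for n in range(terms):
        total += (-1)**n * c / (x + n)
        c *= (y - 1 - n) / (n + 1)
    return total

x, y = 1.6, 2.5
exact = math.gamma(x) * math.gamma(y) / math.gamma(x + y)
approx = beta_series(x, y)
print(abs(approx - exact) < 1e-6)
```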

Example (5): Veneziano simply asked what is the simplest form of the amplitude yielding the resonances where they appear on the C.F. plot, and the "natural" answer was the Euler beta function:

A(s,t) = \frac{\Gamma(-\alpha(s))\,\Gamma(-\alpha(t))}{\Gamma(-\alpha(s)-\alpha(t))} = B(-\alpha(s),\,-\alpha(t)).

Schwinger was the first to realize a possible link between the diverging integrals of quantum field theory and Ramanujan sums. By the Euler-Maclaurin formula, the integral I(m,b) = \int_0^b p^{m}\,dp = \frac{b^{m+1}}{m+1} and the sum \sum_{n=1}^{b} n^{m} differ by correction terms involving the Bernoulli numbers B_{2r},

\sum_{n=1}^{b} n^{m} = \frac{b^{m+1}}{m+1} + \frac{b^{m}}{2} + \sum_{r\ge1}\frac{B_{2r}}{(2r)!}\,a_{m,r}\,b^{m-2r+1}, \qquad a_{m,r} = \frac{\Gamma(m+1)}{\Gamma(m-2r+2)},

which yields a recurrence expressing I(m,b) in terms of the integrals I(m-2r,b).

Example (6): Consider the integral

\int_0^{\infty} e^{-xy}\,dy = \frac{1}{x}; \qquad x > 0.

If x \ge 1, this integral converges uniformly, since for positive values of A we have

\int_A^{\infty} e^{-xy}\,dy \le \int_A^{\infty} e^{-y}\,dy = e^{-A},

where the right-hand side no longer depends on x and can be made as small as we please by choosing A sufficiently large. The same is true of the integrals of the partial derivatives of the integrand with respect to x. By repeated differentiation under the integral sign, we thus obtain

\int_0^{\infty} y\,e^{-xy}\,dy = \frac{1}{x^{2}}, \qquad \int_0^{\infty} y^{2}e^{-xy}\,dy = \frac{2}{x^{3}}, \qquad \ldots, \qquad \int_0^{\infty} y^{n}e^{-xy}\,dy = \frac{n!}{x^{n+1}}.
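The last chain of integrals is a convenient test case for numerical quadrature. The sketch below is ours (the truncation point, Simpson helper and tolerance are illustrative assumptions; the truncation error is negligible once x\,Y is large):

```python
import math

def moment(n, x, Y=60.0, N=20000):
    # int_0^Y y^n e^(-x y) dy by composite Simpson's rule (illustrative).
    h = Y / N
    f = lambda y: y**n * math.exp(-x * y)
    s = f(0.0) + f(Y) + sum((4 if i % 2 else 2) * f(i * h) for i in range(1, N))
    return s * h / 3

n, x = 3, 1.5
num = moment(n, x)
print(abs(num - math.factorial(n) / x**(n + 1)) < 1e-9)
```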

Euler's product links the zeta function with the prime numbers:

\zeta(s) = \prod_{p}\left(1-\frac{1}{p^{s}}\right)^{-1}.

The beta function once more: the standard one-loop momentum integral of dimensional regularization produces a beta-function-like ratio of gamma functions,

\int \frac{d^{D}k}{(2\pi)^{D}}\,\frac{(k^{2})^{a}}{(k^{2}+L)^{n}} = \frac{1}{(4\pi)^{D/2}}\,\frac{\Gamma\!\left(\frac{D}{2}+a\right)\Gamma\!\left(n-\frac{D}{2}-a\right)}{\Gamma\!\left(\frac{D}{2}\right)\Gamma(n)}\;L^{\frac{D}{2}+a-n}.

The main object of the present paper is to derive a number of key formulas for the fractional integration of the multivariable H-function (which is defined by a multiple contour integral of Mellin-Barnes type). Each of the general Eulerian integral formulas obtained in this paper is shown to yield interesting new results for various families of generalized hypergeometric functions of several variables. Some of these applications of the key formulas provide potentially useful generalizations of known results in the theory of fractional calculus.

Example (7): In the theory of gamma and beta functions, the Eulerian beta integrals are well known:

B(\alpha,\beta) := \int_0^1 t^{\alpha-1}(1-t)^{\beta-1}\,dt = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)},

\int_a^b (t-a)^{\alpha-1}(b-t)^{\beta-1}\,dt = (b-a)^{\alpha+\beta-1}\,B(\alpha,\beta).

For a linear factor (ut+v)^{-\lambda} one has the binomial expansion

(ut+v)^{-\lambda} = (au+v)^{-\lambda}\sum_{l=0}^{\infty}\frac{(\lambda)_l}{l!}\left(-\frac{(t-a)u}{au+v}\right)^{l}; \qquad \left|\frac{(t-a)u}{au+v}\right| < 1,\ t \in [a,b],

where (\lambda)_l = \Gamma(\lambda+l)/\Gamma(\lambda) is the Pochhammer symbol, and hence

\int_a^b (t-a)^{\alpha-1}(b-t)^{\beta-1}(ut+v)^{-\lambda}\,dt = (b-a)^{\alpha+\beta-1}(au+v)^{-\lambda}\,B(\alpha,\beta)\;{}_2F_1\!\left(\lambda,\alpha;\alpha+\beta;-\frac{(b-a)u}{au+v}\right).

Example (8): The q-gamma function is defined for positive real numbers x and q \ne 1 by

\Gamma_q(x) = (1-q)^{1-x}\prod_{n=0}^{\infty}\frac{1-q^{n+1}}{1-q^{n+x}}; \qquad 0 < q < 1,

\Gamma_q(x) = (q-1)^{1-x}\,q^{x(x-1)/2}\prod_{n=0}^{\infty}\frac{1-q^{-(n+1)}}{1-q^{-(n+x)}}; \qquad q > 1.

We note that the limit of \Gamma_q(x) as q \to 1^{-} gives back the well-known Euler gamma function,

\lim_{q\to1^{-}}\Gamma_q(x) = \Gamma(x) = \int_0^{\infty} t^{x-1}e^{-t}\,dt.

Note also that from the definition we have, for positive x and for 0 < q < 1, \Gamma_{1/q}(x) = q^{(x-1)(1-x/2)}\,\Gamma_q(x), so we see that \lim_{q\to1^{+}}\Gamma_q(x) = \Gamma(x) as well. For historical remarks on the gamma and q-gamma functions, we refer the reader to the literature.

Example (9): In the reply of Robin Chapman, the gamma function is the Mellin transform of the exponential function,

\Gamma(x) = \int_0^{\infty} t^{x}\,e^{-t}\,\frac{dt}{t},

the Mellin transform of f being

M(f)(s) = \int_0^{\infty} t^{s}f(t)\,\frac{dt}{t}.

Well, you might ask, why not absorb the final 1/t into the t^{s} and change s-1 to s? The point, though, is that dt/t should be inseparable in this context, since dt/t is the Haar measure on the multiplicative group of the positive reals. That is, for a > 0,

\int_0^{\infty} f(at)\,\frac{dt}{t} = \int_0^{\infty} f(t)\,\frac{dt}{t} = \int_0^{\infty} f(1/t)\,\frac{dt}{t}.

This becomes important when studying the zeta function and its functional equation. But note also that for \Re(s) > 1 the Mellin transform can be written as

\zeta(s)\,\Gamma(s) = \int_0^{\infty}\frac{1}{e^{t}-1}\,t^{s}\,\frac{dt}{t},

and for 0 < \Re(s) < 1 the Mellin transform is

-\frac{\zeta(s)}{s} = \int_0^{\infty}\left\{\frac{1}{t}\right\}t^{s}\,\frac{dt}{t}.

Here x is the fractional part of x . Thus the Mellin transform does not necessarily suggest the use of the Gamma function over the factorial function. However, again this is not absolutely convincing because of the following formula which might be known to Euler and is also a very useful representation, although the right hand side does not bear a special name; 1

 i 0

mn m! n! i   ( m, n )  m  n  ( m  n )!  1   1   i  i  1

Example (10): Another important function defined by an improper integral involving parameters is Euler's beta function, defined by

B(x,y) = \int_0^1 t^{x-1}(1-t)^{y-1}\,dt.

If either x or y is less than 1, the integral is improper. However, by the convergence criterion it converges uniformly in x and y, provided we restrict ourselves to intervals x \ge x_0 > 0, y \ge y_0 > 0. It therefore represents a continuous function for all positive values of x and y. We obtain a somewhat different expression by using the substitution t = \frac{1}{2}+\tau:

B(x,y) = \int_{-1/2}^{1/2}\left(\tfrac{1}{2}+\tau\right)^{x-1}\left(\tfrac{1}{2}-\tau\right)^{y-1}\,d\tau,

or, in general, if we now put t = (s+\tau)/(2s), where s > 0,

(2s)^{x+y-1}\,B(x,y) = \int_{-s}^{s}(s+t)^{x-1}(s-t)^{y-1}\,dt.

Finally, if we set t = \sin^{2}\theta in the original formula, we obtain

B(x,y) = 2\int_0^{\pi/2}\left(\sin\theta\right)^{2x-1}\left(\cos\theta\right)^{2y-1}\,d\theta.

We shall now show how the beta function can be expressed in terms of the gamma function by using a few transformations which may seem strange at first sight. If we multiply both sides of the equation

(2s)^{x+y-1}\,B(x,y) = \int_{-s}^{s}(s+t)^{x-1}(s-t)^{y-1}\,dt

by e^{-2s} and integrate with respect to s from 0 to A, we have

B(x,y)\int_0^{A} e^{-2s}(2s)^{x+y-1}\,ds = \int_0^{A} e^{-2s}\,ds\int_{-s}^{s}(s+t)^{x-1}(s-t)^{y-1}\,dt.

The double integral on the right-hand side may be regarded as an integral of the function

e^{-2s}(s+t)^{x-1}(s-t)^{y-1},

the region of integration being the isosceles triangle bounded by the lines s-t = 0, s+t = 0 and s = A. If we now apply the transformation

\tau = s+t, \qquad z = s-t,

this integral becomes

\frac{1}{2}\iint_{R} e^{-\tau-z}\,\tau^{x-1}z^{y-1}\,d\tau\,dz.

We now have as the region of integration the triangle R in the \tau z-plane bounded by the lines \tau = 0, z = 0 and \tau+z = 2A. If we now let A increase beyond all bounds, the left-hand side tends to the function

\frac{1}{2}\,B(x,y)\,\Gamma(x+y).

The right-hand side must therefore also converge, and its limit is the double integral over the entire first quadrant of the \tau z-plane, the quadrant being approximated by means of isosceles triangles. Since the integrand is positive in this region and the integral converges for a monotonic sequence of regions, this limit is independent of the mode of approximation to the quadrant. In particular, we can use squares of side A and accordingly write:

( x, y )( x  y )  lim  A0

e

  y

 x1 z y 1d dz  lim  e   x1d  e  y x1 z y 1dz A0

R

R

Hence, we obtain the important relation;

( x, y ) 

( x )( y ) ( x  y )

We see from this relation that the beta function is related to the binomial coefficients;

 n  m  (n  m)!    n! m! n  In roughly the same manner as the Beta function is related to the numbers n! In fact, for integers x  n, y  m , the function;

1 ( x  y  1)( x  1, y  1) Has the value ;

n  m   n  This equation can also be obtained from Bohr’s theorem. We first show that ( x, y ) satisfies the functional equation;

( x  1, y ) 

x ( x, y ) x y

So that the function;

U ( x, y )  ( x  y ) ( x, y ) Considered as a function of x, satisfies the functional equation of the function;

U ( x  1)  x U ( x) Since it follows from the earlier theorem that log U ( x, y ) is a convex function of x , we have;

( x  y ) ( x, y )  ( x) ( y ), And, finally, if we set x  1,  ( y )  ( y ) . Finally, we note that the definite integrals; 257

 /2

a  sin t  dt

 /2

and

0

 cos t  dt a

0

which are identical with the functions; 1 a 1 1 1 1 a 1 ( , )  ( , ) 2 2 2 2 2 2

Can be simply expressed in terms of the  -function:  /2

 /2

 sin t  dt a

=

 cos t  dt = 0

0



a

a

(

( a  1) ) 2 a ( ) 2

Example (11): Using our knowledge of the Gamma function, we shall now carry out a simple process of generalization of the concepts of differentiation and integration. We have already seen that the formula

F(x) = \frac{1}{(n-1)!}\int_0^x (x-t)^{n-1} f(t)\,dt = \frac{1}{\Gamma(n)}\int_0^x (x-t)^{n-1} f(t)\,dt

yields the n-times repeated integral of the function f(x) between the limits 0 and x. If D symbolically denotes the differentiation operator d/dx and D^{-1} the operator \int_0^x \dots\,dx, which is the inverse of differentiation, we may write

F(x) = D^{-n} f(x).

The mathematical statement conveyed by this formula is that the function F(x) and its first (n-1) derivatives vanish at x = 0, and the n-th derivative of F(x) is f(x). It is now very natural to construct a definition for the operator D^{-\lambda} even when the positive number λ is not necessarily an integer. The integral of order λ of the function f(x) between the limits 0 and x is defined by

D^{-\lambda} f(x) = \frac{1}{\Gamma(\lambda)}\int_0^x (x-t)^{\lambda-1} f(t)\,dt.

This definition may now be used to generalize n-th order differentiation, symbolized by the operator D^n or d^n/dx^n, to μ-th order differentiation, where μ is an arbitrary non-negative number. Let m be the smallest integer larger than μ, so that μ = m - ρ, where 0 < ρ ≤ 1. Then our definition is

D^{\mu} f(x) = D^{m} D^{-\rho} f(x) = \frac{d^m}{dx^m}\left[\frac{1}{\Gamma(\rho)}\int_0^x (x-t)^{\rho-1} f(t)\,dt\right].

A reversal of the order of the two processes would yield the definition

D^{\mu} f(x) = D^{-\rho} D^{m} f(x) = \frac{1}{\Gamma(\rho)}\int_0^x (x-t)^{\rho-1} f^{(m)}(t)\,dt.

It may be left as an exercise for the reader to employ the formulae for the Γ-function to prove that

D^{\lambda} D^{\mu} f(x) = D^{\mu} D^{\lambda} f(x) = D^{\lambda+\mu} f(x),

where λ and μ are arbitrary real numbers. He should show that these relations and the generalized process of differentiation have a meaning whenever the function f(x) is differentiable in the ordinary way to a sufficiently high order; in general, D^{\mu} f(x) exists if f(x) has continuous derivatives up to and including the m-th order. In this context, we may mention Abel's integral equation, which has important applications. Since Γ(1/2) = √π, the integral of a function f(x) to the order 1/2 is given by the formula

D^{-1/2} f(x) = \frac{1}{\sqrt{\pi}}\int_0^x \frac{f(t)}{\sqrt{x-t}}\,dt = \varphi(x).

If we assume that the function φ(x) on the right-hand side is given and that it is required to find f(x), then the above formula is Abel's integral equation. If the function φ(x) is continuously differentiable and vanishes at x = 0, the solution of the equation is given by

f(x) = D^{1/2}\varphi(x), \quad \text{or} \quad f(x) = \frac{1}{\sqrt{\pi}}\,\frac{d}{dx}\int_0^x \frac{\varphi(t)}{\sqrt{x-t}}\,dt.
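The half-order integral can be checked numerically. Applying the definition to a power gives D^{-λ} x^k = Γ(k+1)/Γ(k+1+λ) · x^{k+λ}; the sketch below (plain Python, with the substitution u = √(x−t) to remove the endpoint singularity; the test point x = 2 and f(t) = t are illustrative choices) verifies this for λ = 1/2.

```python
import math

def semi_integral(f, x, steps=200000):
    """D^{-1/2} f(x) = (1/√π) ∫_0^x f(t)/√(x-t) dt.
    Substituting u = √(x-t) turns this into (2/√π) ∫_0^{√x} f(x - u²) du,
    whose integrand is smooth; approximate it by the midpoint rule."""
    b = math.sqrt(x)
    h = b / steps
    total = sum(f(x - ((i + 0.5) * h) ** 2) for i in range(steps))
    return 2.0 * total * h / math.sqrt(math.pi)

x = 2.0
numeric = semi_integral(lambda t: t, x)                # half-order integral of f(t) = t
exact = math.gamma(2.0) / math.gamma(2.5) * x ** 1.5   # Γ(k+1)/Γ(k+3/2) x^{k+1/2} with k = 1
assert abs(numeric - exact) < 1e-6
```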

Example (12): We see at once that log Γ(x) is convex. In fact, if we write Γ(x) in the form

\Gamma(x) = \int_0^{\infty} e^{-t/2}\,t^{(x-1+h)/2}\cdot e^{-t/2}\,t^{(x-1-h)/2}\,dt,

where h has any positive value and x any value larger than h, and apply Schwarz's inequality, we have

\Gamma(x)^2 \le \Gamma(x+h)\,\Gamma(x-h), whence

log ( x  h)  log ( x  h)  2 log ( x)  0 Applications: In just the past thirty years several new special functions and applications have been discovered. This treatise presents an overview of the area of special functions, is one of the most important functions in analysis and its applications, the history and the development of this function in Mixture Model and the Estimation of Hazard Rate and Reliability General Mixture Gamma Distribution Model, are described in detail in a paper [References]. The Extended Generalized Gamma Function, which is the derivative of this function, is also commonly seen, and was discovered in theoretical physics. It has other applications as well, for example it's extensively in probability, and for example it's extensively in probability theory for reasons quite unrelated to its factoral connection, a mathematician recommended somebody as being very bright, very knowledgeable, and interested in applications. The Extended Generalized Gamma Function has important applications in probability theory, combinatory and most, if not all, areas of physics for same typical applications for displays as visual stimulators and factors and its application to Stirling's formula. One of the principal applications of these functions was in the compact expression of approximations to physical problems for which explicit analytical solutions could not be found. Formal grammar, while 260

interesting for its own sake, is rarely useful to those who use natural language to communicate.

Figure 5.34: A graph EGGF function in the hierarchy of diffraction catastrophes. Arguing by analogy, I wonder if that is why the formal classifications of special functions have not proved very useful in applications. Kelvin's ship-wave pattern, calculated with the Airy function, the simplest special function in the hierarchy of diffraction catastrophes, a cross section of the elliptic umbilicus, a member of the hierarchy of diffraction catastrophes. The cusp, a member of the hierarchy of diffraction catastrophes , just as new words come into the language, so the set of special functions increases. The increase is driven by more sophisticated applications, and by new technology that enables more functions to be depicted in forms that can be readily assimilated.


5-5-Exercises

Exercise (01): Several new inequalities are known for Euler's beta function B(x, y). One of these results states that the beta function can be bounded on (0, 1] × (0, 1] by rational functions as follows:

\frac{x+y-xy}{xy} \le B(x, y) \le \frac{1}{xy}.

Prove this result.

Exercise (02): Prove that the volume of the solid

\frac{x^m}{a^m}+\frac{y^m}{b^m}+\frac{z}{c}\le 1, \quad x, y, z \ge 0, \ (m > 0),

is

a\,b\,c\;\frac{\Gamma^{2}\!\left(1+\frac{1}{m}\right)}{\Gamma\!\left(2+\frac{2}{m}\right)}.

Exercise (03): Prove that

the integral

\iiint f\!\left(\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}\right) x^{p-1}\,y^{q-1}\,z^{r-1}\,dx\,dy\,dz,

taken throughout the positive octant of the ellipsoid

\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2} \le 1,

is equal to

\frac{a^{p}\,b^{q}\,c^{r}}{8}\;\frac{\Gamma\!\left(\frac{p}{2}\right)\Gamma\!\left(\frac{q}{2}\right)\Gamma\!\left(\frac{r}{2}\right)}{\Gamma\!\left(\frac{p+q+r}{2}\right)}\int_0^1 f(\lambda)\,\lambda^{\frac{p+q+r}{2}-1}\,d\lambda.

(Hint: introduce new variables ξ, η, ζ by writing

\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}=\xi, \quad x = a\sqrt{\xi(1-\eta)};

\frac{y^2}{b^2}+\frac{z^2}{c^2}=\xi\eta, \quad y = b\sqrt{\xi\eta(1-\zeta)};

\frac{z^2}{c^2}=\xi\eta\zeta, \quad z = c\sqrt{\xi\eta\zeta},

and perform the integration with respect to ξ, η, ζ.)

Exercise (04): Find the x-coordinate of the centre of mass of the solid

\left(\frac{x}{a}\right)^{1/n}+\left(\frac{y}{b}\right)^{1/n}+\left(\frac{z}{c}\right)^{1/n}\le 1, \quad x \ge 0,\ y \ge 0,\ z \ge 0.

Exercise (05): Prove that

2^{2x-1}\,\Gamma(x)\,\Gamma\!\left(x+\frac{1}{2}\right) = \sqrt{\pi}\;\Gamma(2x).

Exercise (06): Prove that

0 < \Gamma_r(k, p, m, n, \beta, a, b) < \infty,

where

\Gamma_r(k, p, m, n, \beta, a, b) = \beta\int_a^b (x-a)^{k-1}\,(b-x)^{rm+k-1}\left[(x-a)^{m}+n\,(b-x)^{m}\right]^{r}dx,

for k, p, m, n, β > 0; r ∈ IR, a ∈ IR, b ∈ IR*, a < b.

Exercise (07): Show that

\int_0^1 \left(1-x^{c}\right)^{1/c}dx = \frac{1}{c}\,B\!\left(\frac{1}{c},\,\frac{1}{c}+1\right).

Exercise (08): Evaluate \int_0^{\infty} x^{2}\,(1+x^2)^{5}\,e^{-x^2}dx.

Exercise (09): Evaluate \int_0^{\infty} (x+1)^{3/2}\,(2+x^2)^{5/2}\,e^{-x^2}dx.

Exercise (10): Evaluate \Gamma_2(1,1,1,1,1) and B_{2,5}(5/2,\,5/2).

Exercise (11): Evaluate \int_0^{\infty} (x+3)^{5/2}\,(1+x^5)^{5}\,e^{-2x}dx.

Exercise (12): Evaluate \int_a^b e^{-\beta\left(\frac{x-a}{b-x}\right)^{p}}dx and \int_0^1 (x+1)^{2/5}\,(1-x^5)^{2/3}dx.

Exercise (13): Evaluate \int_0^1 \frac{dx}{\sqrt[4]{(x+2)^{3}\,(1-x^5)}}.

Exercise (14): Evaluate \int_0^1 (x+1)^{1/5}\,(1-x^5)^{2/5}dx.

Exercise (15): Evaluate \int_0^1 x^{37}\,(1-x^3)\,dx.

Exercise (16): Evaluate \int_0^2 \left(\frac{x+2}{2}\right)^{1/2}\left(\frac{2-x}{2}\right)^{5/2}dx.

Exercise (17): Evaluate \int_0^1 (x+1)^{1/2}\,(1-x^3)^{7/3}dx.

Exercise (18): Evaluate \int_0^1 (x+1)^{1/2}\,(1-x^2)^{8/3}dx.

Exercise (19): Evaluate \int_0^1 (x^2+1)^{-1/2}\,(1-x^5)^{8/3}dx.

Exercise (20): Evaluate \int_0^{\infty} (x^8+1)^{-1/2}\,(9+x^5)^{8/3}\,e^{-3(x+3)^{8}}dx.

Exercise (21): Show that the integral defining the extended generalized gamma function \Gamma_r(k, p, m, n, \beta) converges for any r ∈ IR and k, p, m, n, β > 0, for the sample cases:

1)- k = 1, p = 1, m = 1, n = 1, β = 0.5;
2)- k = 0.1, p = 1, m = 1.5, n = 1, β = 1.5;
3)- k = 10, p = 1, m = 2, n = 1, β = 10;
4)- k = 2, p = 1, m = 1.5, n = 4, β = 0.

Exercise (22): Draw a careful sketch of the extended generalized gamma function \Gamma_r(2,1,1,1,1) in each of the following cases:
a. −1 ≤ r ≤ 1;
b. r ≥ 2;
c. r ≤ −1;
d. Describe the behaviour of the graph of \Gamma_r(k, p, m, n, \beta) at r = 0.

Exercise (23): For what values of the parameters α, β is the beta function linked to the following formula, combining integration of the function over the two intervals (0, 1) and (0, ∞):

\int_0^{\infty} y^{\alpha-1}(1+y)^{-(\alpha+\beta)}dy = \int_0^1 y^{\alpha-1}(1+y)^{-(\alpha+\beta)}dy + \int_0^1 y^{\beta-1}(1+y)^{-(\alpha+\beta)}dy = \text{constant}?

Exercise (24): For what values of the parameters k, p, m, n, β, r does the improper integral function \Gamma_r(k, p, m, n, \beta) satisfy the following recurrence relations:

1)- \Gamma_{r+1}\!\left(\frac{k}{p}+1,\,\frac{1}{p},\,\frac{m}{p},\,n,\,\beta\right) = A\left(\frac{k+rm}{p}+1\right)\Gamma_r\!\left(\frac{k}{p}+1,\,\frac{1}{p},\,\frac{m}{p},\,n,\,\beta\right);

2)- \Gamma_r\!\left(\frac{k}{p}+1,\,\frac{1}{p},\,\frac{m}{p},\,n^{p},\,1\right) = B\left(\frac{k+rm}{p}+1\right)\Gamma_r\!\left(\frac{k}{p}+1,\,\frac{1}{p},\,\frac{m}{p},\,n^{\beta},\,1\right),

where A and B are constant values?


Chapter Six: On Extended Generalized Gamma and Beta Distributions

6-Introduction: Karl Pearson was the first to formally introduce the gamma distribution. However, the symbol gamma for the gamma function, as a part of calculus, originated far earlier, with Legendre (1752 to 1833). The beta and gamma functions are related. In statistics there are many families of distributions which can be used in various areas, including the Normal, Chi-squared, Exponential, Rayleigh, Weibull, Erlang, Gamma, Extreme-Value, Lognormal and others. The relationships between these continuous families of distributions are shown in Figure 6.1.

Figure 6.1: Some family distributions which can be used in various areas (the Normal, Lognormal, Chi-Squared, Rayleigh, Exponential, Weibull, Generalized Gamma, Erlang and Extreme-Value families).

It is important to have a passing familiarity with the Gamma distribution. The sum of independent exponential random variables is gamma-distributed. The exponential distribution is tested very heavily on the exam, and there has been at least one recent exam question where it would have been helpful to know that the sum of two exponentials is a gamma. That is about all you will need to know, but you might get tested on the gamma outright; listed below are some relevant formulas for the gamma. If not, skip this and focus on the basics instead.

The valuation function of the double-bounded model involves dichotomous-choice elicitation questions, resulting in interval censoring of the individual subject values. The censored survey can use survival analysis with a wide range of parametric distributions. This study follows Stacy and obtains a family of distributions from the generalized gamma distribution density function. The history of this family of distributions was reviewed, and further properties discussed, after its introduction by Stacy in 1962; a simple model and statistical-mechanical methods can be employed to derive the three-parameter generalized gamma distribution. Subsequent work on statistical problems associated with the distribution has been done by Bain and Weeks. Special cases of the generalized gamma distribution include the Weibull, gamma, Rayleigh, exponential and Maxwell velocity distributions; the Kullback-Leibler discrepancy measure and the polygamma function also arise in this connection. Recently a new distribution, named the generalized exponential distribution or exponentiated exponential distribution, was introduced and studied quite extensively. It is observed that the generalized exponential distribution can be used as an alternative to the gamma distribution in many situations. Properties such as the monotonicity of the hazard function and the tail behaviour of the gamma distribution and the generalized exponential distribution are quite similar in nature, but the latter has a nice compact distribution function. It is observed that for a given gamma distribution there exists a generalized exponential distribution such that the two distribution functions are almost identical. Since the gamma distribution


function does not have a compact form, efficiently generating gamma random numbers is known to be problematic. We observe that for all practical purposes it is possible to generate approximate gamma random numbers using the generalized exponential distribution, and the random samples thus obtained cannot be differentiated by any statistical test. Moreover, if there is a skewed data set which the gamma distribution fits very well, the generalized exponential distribution can also be used. We use two real-life data sets and observe that the fitted distribution functions are almost identical in many respects in both cases. The most common way to describe the universe of distributions is to partition them into categories; the image below shows some of the relations between commonly used distributions, many of which will be explored later. The probability distribution function is defined in the context of a random variable, or a vector, and provides the likelihood that the random variable is observed within a specific subset of the space of its possibilities. For instance, throwing a dart at a dartboard generates a score (an integer value) indicating the landing location's proximity to the centre of the dartboard (the bull's eye), which naturally leads to a definition of the probability distribution of the dart landing in a specific area. Figure 6.2 shows the relations among the continuous families of distributions listed above. The Pearson system was originally devised in an effort to model visibly skewed observations: Pearson (1895, p. 360) identified four types of distributions (numbered I through IV) in addition to the normal distribution (which was originally known as type V). The classification depended on the support of the distributions. The Beta distribution gained prominence due to its membership in Pearson's system and was known until the 1940s as the Pearson type I distribution. (Pearson's type II distribution is a special case of type I, but it is usually no longer singled out.)
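The earlier claim that a sum of independent exponentials is gamma-distributed is easy to confirm empirically. The sketch below is a minimal check in plain Python (standard library only; the sample size and the choice of five unit-mean exponentials are illustrative, not from the text): the resulting sums should behave like a gamma variable with shape 5 and rate 1, whose mean and variance are both 5.

```python
import random
import statistics

random.seed(42)

# Sum of 5 independent Exp(1) variables; theoretically this is Gamma(shape=5, rate=1).
sums = [sum(random.expovariate(1.0) for _ in range(5)) for _ in range(200000)]

mean = statistics.fmean(sums)
var = statistics.variance(sums)
assert abs(mean - 5.0) < 0.05   # gamma mean: shape/rate = 5
assert abs(var - 5.0) < 0.15    # gamma variance: shape/rate^2 = 5
```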


Figure 6.2: Relationship between some family distributions.

The gamma distribution originated from Pearson's work (Pearson 1893, p. 331; Pearson 1895, pp. 357, 360, 373-376) and was known as the Pearson type III distribution, before acquiring its modern name in the 1930s and 1940s. Pearson's 1895 paper introduced the type IV distribution, which contains Student's t distribution as a special case. Figure 6.3 shows the Pearson system family distributions.

Figure 6.3: The Pearson system family distributions.

The Johnson System of distributions [Johnson 1949] is composed of three distribution families. These three families cover the entire allowable skewness and kurtosis plane. The basis for the Johnson System is that a distribution being approximated may be transformed in such a way that it can be considered to be normally distributed; making this transformation allows normal distribution tables to be used. The disadvantage of the Johnson System is that the methods for fitting the distribution depend on which of the three families is appropriate. Writing the families in terms of a location parameter ξ, a scale parameter λ > 0, and shape parameters γ and δ > 0, the densities are:

Johnson SL family:

f_1(x) = \frac{\delta}{\sqrt{2\pi}\,(x-\xi)}\,\exp\!\left\{-\frac{1}{2}\left[\gamma+\delta\ln(x-\xi)\right]^2\right\}, \quad x > \xi;

Johnson SB family:

f_2(x) = \frac{\delta\,\lambda}{\sqrt{2\pi}\,(x-\xi)(\xi+\lambda-x)}\,\exp\!\left\{-\frac{1}{2}\left[\gamma+\delta\ln\!\left(\frac{x-\xi}{\xi+\lambda-x}\right)\right]^2\right\}, \quad \xi < x < \xi+\lambda;

Johnson SU family:

f_3(x) = \frac{\delta}{\sqrt{2\pi}\,\sqrt{\lambda^2+(x-\xi)^2}}\,\exp\!\left\{-\frac{1}{2}\left[\gamma+\delta\sinh^{-1}\!\left(\frac{x-\xi}{\lambda}\right)\right]^2\right\}, \quad -\infty < x < \infty.

Both matching of moments and matching of percentiles may be used for each family; however, matching of moments for the Johnson SB family is very complex. Once the parameters of the particular Johnson family are found, standard normal tables may be used to calculate quantiles or percentiles. The Pearson System [Amos 1971] is composed of twelve families of distributions, all of which are solutions of the differential equation

\frac{df(x)}{dx} = \frac{(x-a)\,f(x)}{b+c\,x+d\,x^2}.

The solutions differ in the values of the parameters a, b, c, and d. The Pearson system includes the normal, Gamma, and Beta distributions among the families. The twelve families cover the entire skewness and kurtosis plane. The advantage of this system is the tabulated percentile values indexed as a function of skewness and kurtosis. However, if the actual probability density or quantiles are needed, the Pearson System is not convenient. This inconvenience stems from the number of families included in the system and the logic needed to select a specific family and match distribution parameters. Unfortunately, the calculation of a quantile is precisely what is required by tolerance analysis applications. An empirical fit is needed by tolerance analysis to estimate the fraction of assemblies which meet the design limits. Figure 6.4 shows the Johnson Distribution.
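Because each Johnson family is just a change of variables applied to a standard normal, its density must integrate to one for any admissible parameter values. The sketch below is a minimal check for the SU family in plain Python (standard library only; the parameter values, truncation range and step count are illustrative assumptions):

```python
import math

def johnson_su_pdf(x, gamma, delta, xi, lam):
    """Johnson SU density: z = gamma + delta*asinh((x - xi)/lam) is standard normal."""
    u = (x - xi) / lam
    z = gamma + delta * math.asinh(u)
    return delta / (lam * math.sqrt(2 * math.pi) * math.sqrt(1 + u * u)) * math.exp(-0.5 * z * z)

# Midpoint-rule integral over a wide range; the mass in the tails beyond it is negligible.
lo, hi, steps = -2000.0, 2000.0, 400000
h = (hi - lo) / steps
total = sum(johnson_su_pdf(lo + (i + 0.5) * h, gamma=0.5, delta=1.2, xi=1.0, lam=2.0)
            for i in range(steps)) * h
assert abs(total - 1.0) < 1e-3
```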

Figure 6.4: Johnson Distribution.

The adequacy of the Gamma distribution (GD) for monthly precipitation totals is reconsidered. The motivation for this study is the observation that the GD fails to represent precipitation in considerable areas of global observed and simulated data. This misrepresentation may lead to erroneous estimates of the Standardised Precipitation Index (SPI), evaluations of models, and assessments of climate change. In this study, the GD is compared to the Weibull (WD), Burr Type III (BD), exponentiated Weibull (EWD) and generalized Gamma (GGD) distributions. These distributions extend the GD in terms of possible shapes (skewness and kurtosis) and the behavior for large arguments.

6-1-Extended Generalized Gamma Distribution: The Extended Generalized Gamma Distribution is characterized by six parameters, r ∈ IR and k, p, m, n, β > 0, and was defined in its original form as first introduced by Bachioua in 2004. A random variable X is said to have an extended generalized gamma distribution if its probability density function is of the following form:

f_X(x) = \frac{x^{k-1}\,(x^{m}+n)^{r}\,e^{-\beta x^{p}}}{\Gamma_r(k, p, m, n, \beta)}, for 0 < x < ∞, r ∈ IR, k, p, m, n, β > 0; and 0 otherwise,

where

\Gamma_r(k, p, m, n, \beta) = \int_0^{\infty} x^{k-1}\,(x^{m}+n)^{r}\,e^{-\beta x^{p}}\,dx; \quad k, p, m, n, \beta > 0,\ r \in IR.

It follows from the definition of X that f_X(x) ≥ 0 for 0 < x < ∞, r ∈ IR and k, p, m, n, β > 0. Also, we have \int_0^{\infty} f_X(x)\,dx = 1.

In terms of the model function, the statistical properties, reliability and hazard functions, and the estimation of some parameters of the distribution are studied. Also, the form of the distribution is considered under various forms of the model of the extended generalized gamma distribution.

Figure 6.5: Illustration of the Gamma pdf.
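A direct numerical check of the normalizing function is straightforward. The sketch below (plain Python, standard library only; the parameter values, truncation point and step count are illustrative assumptions) approximates Γ_r(k, p, m, n, β) by the midpoint rule and compares the r = 0 case against the classical closed form p^{-1} β^{-k/p} Γ(k/p), to which the integral reduces when the factor (x^m + n)^r disappears.

```python
import math

def egg_integrand(x, k, p, m, n, r, beta):
    """Integrand of the extended generalized gamma function:
    x^(k-1) * (x^m + n)^r * exp(-beta * x^p)."""
    return x ** (k - 1) * (x ** m + n) ** r * math.exp(-beta * x ** p)

def egg_gamma(k, p, m, n, r, beta, upper=60.0, steps=400000):
    """Midpoint approximation of Γ_r(k,p,m,n,β) = ∫_0^∞ ..., truncated at `upper`."""
    h = upper / steps
    return sum(egg_integrand((i + 0.5) * h, k, p, m, n, r, beta)
               for i in range(steps)) * h

# With r = 0 the integral is ∫ x^(k-1) e^(-beta x^p) dx = p^(-1) beta^(-k/p) Γ(k/p).
k, p, beta = 2.3, 1.7, 0.8
numeric = egg_gamma(k, p, m=1.0, n=0.0, r=0.0, beta=beta)
closed = (1.0 / p) * beta ** (-k / p) * math.gamma(k / p)
assert abs(numeric - closed) < 1e-6
```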

Property (1): For r  IR , p , k , p, m, n,   0 . This 6-parameter extended generalized gamma distribution can be regarded as an extension of the Kobayashi's generalized gamma distribution, of the following form:

 x k 1 x  n  r e   f X ( x)   r (k , n ) 0,

x

for 0  x   , r  IR , k , n  0

,

; otherwise ,

where r (k , n)   x k 1 x  n e  x dx; r  IR, k , n  0 . 

r

0

Proof: Also, this case is reduced to the well-known gamma function when  r (k , 1, 1, n, 1)   x k 1 x  n e  x dx  r (k , n) ; for k , n , r  0 , then the 

r

0

Kobayashi's generalized gamma distribution is special cases of extended generalized gamma distribution. Property (2): For r  0 , p , k , m, n,   0 . This 6-parameter extended generalized gamma distribution can be regarded as an extension of the generalized gamma distribution, of the following form:

 x k 1e   x ,  f X ( x )   p 1   p p k 1 ( k / p ) 0,  p

for 0  x   ,

k , p, ,   0

;

otherwise ,



where (k )   x k 1e  x dx, k  0 . 0

Proof: Also, this case is reduced to the well-known gamma function when

 0 (k , p, m, n,  )  p 1  

p k 1 p

(k / p); k / p  s  0. and the graph of

pdf with p  1 are:

273

Figure.6.6: Illustration of the Gamma pdf for parameter values over

k  10,   10,9,16,25,36,48,64,81 For special case k / p  s then p 1  

p k 1 p

(k / p)  p 1   ( s); s  0 , s

then the random variable X that has a probability density function is said to have a generalized gamma distribution with three parameters of the following form:  x k 1e   x  , s f X ( x)   (k / s ) 1   ( s ) 0,  k/s

for 0  x   ,

k , s,   0

; otherwise ,



where ( s)   x s 1e  x dx, s  0 . 0

The probability density function of the gamma distribution can be expressed in terms of the gamma function parameterized in terms of a shape parameter k and scale parameter θ. Both k and θ will be positive values. Alternatively, the gamma distribution can be parameterized in terms of a shape parameter   k and an inverse scale parameter   1 /  , called a rate parameter;   x  1e   x  , f X ( x)   ( )   0,

for 0  x   ,

,   0

;

otherwise ,

274

Figure 6.7: Illustration of the Gamma pdf for same parameter values. For special case k / p  s  1, then ( s)  1 , and density function is said to have a Weibull distribution with two parameters of the following form:

 k k 1  x p x e , f X ( x)     0,

k , s, ,   0 ; otherwise ,

for 0  x   ,

6-2-Extended Generalized Function of Gamma Distribution: A continuous random variable X is said to have an extended generalized gamma distribution with 6 parameters and a model function φ(x), defined by Θ = (k, m, n, r, β, p) and denoted X ~ EGG(Θ, φ(x)), iff its pdf is given by

f_X(x) = f_X(\Theta, \varphi(x)) = \frac{\varphi'(x)\,\varphi(x)^{k-1}\left(\varphi(x)^{m}+n\right)^{r} e^{-\beta\,\varphi(x)^{p}}}{\Gamma(\Theta)}, for x ∈ (a, b); and 0 otherwise,

where k, p, m are shape parameters, β is a scale parameter, n and r are, respectively, displacement and intensity parameters, and the function

\Gamma(\Theta) = \Gamma_r(k, p, m, n, \beta) = \int_0^{\infty} x^{k-1}(x^{m}+n)^{r} e^{-\beta x^{p}}\,dx; \quad \Theta = (k, m, n, r, \beta, p).

From this definition it follows that the pdf of the random variable Y = φ(X) will be

f_Y(y) = f_Y(\Theta, y) = \frac{y^{k-1}(y^{m}+n)^{r} e^{-\beta y^{p}}}{\Gamma(\Theta)}, for y ∈ (0, ∞); and 0 otherwise,

and the following relation is true for all x₁, x₂ ∈ (a, b) with x₁ < x₂:

\int_{x_1}^{x_2} f_X(x)\,dx = \int_{\varphi(x_1)}^{\varphi(x_2)} f_Y(y)\,dy.

The random variable Y = φ(X) ~ EGG(Θ) will play an essential role in the derivation of many statistical properties of X.

Theorem (1): The function f_X(x) is a pdf.

Proof: It follows from the definition of X that f_X(x) ≥ 0 for x ∈ (a, b). Also we have

\int_a^b f_X(x)\,dx = \int_0^{\infty} f_Y(y)\,dy = 1.

The distribution function of X is defined by

F_X(x; \Theta, \varphi(x)) = \int_a^{x} f_X(t)\,dt, for x ∈ (a, b).

Hence, for a given x ∈ (a, b),

F_X(x) = \int_0^{\varphi(x)} f_Y(t)\,dt = F_Y(\varphi(x)), \quad F_Y(y) = \frac{\gamma(\Theta;\,y)}{\Gamma(\Theta)} for y ∈ (0, ∞),

where γ(Θ; y) = \int_0^{y} t^{k-1}(t^{m}+n)^{r} e^{-\beta t^{p}}\,dt denotes the corresponding incomplete integral.

Theorem (2): If Y = φ(X) ~ EGG(Θ), then the s-th moment about the origin is

\mu'_s = E[Y^{s}] = \frac{\Gamma(k+s, m, n, r, \beta, p)}{\Gamma(\Theta)}, for s = 1, 2, 3, ...

Proof: Since E[Y^{s}] = \frac{1}{\Gamma(\Theta)}\int_0^{\infty} y^{s}\,y^{k-1}(y^{m}+n)^{r} e^{-\beta y^{p}}\,dy, the proof is completed.

Theorem (3): If Y = φ(X) ~ EGG(Θ), then its respective mean and variance are

1. mean = \frac{\Gamma(k+1, m, n, r, \beta, p)}{\Gamma(\Theta)};

2. variance = \frac{\Gamma(\Theta)\,\Gamma(k+2, m, n, r, \beta, p) - \Gamma^{2}(k+1, m, n, r, \beta, p)}{\Gamma^{2}(\Theta)}.

Proof: From Theorem (2) with s = 1, 2, the proof follows.

Theorem (4): If X ~ EGG(Θ, φ(x)), then its respective reliability and hazard (or failure rate) functions are

1. R(x) = R(x; \Theta, \varphi(x)) = \frac{\Gamma(\Theta) - \gamma(\Theta;\,\varphi(x))}{\Gamma(\Theta)}, for x ∈ (a, b);

2. h(x) = h(x; \Theta, \varphi(x)) = \frac{\varphi'(x)\,\varphi(x)^{k-1}\left(\varphi(x)^{m}+n\right)^{r} e^{-\beta\,\varphi(x)^{p}}}{\Gamma(\Theta)\,R(x)}, for x ∈ (a, b).

Proof: By substituting f_X and F_X in the definitions R(x) = 1 - F_X(x) and h(x) = f_X(x)/R(x).

6-3-Some cases of the Extended Generalized Gamma Distribution: Many distributions can be derived as special cases of the extended generalized gamma distribution. This can be done by reducing the number of parameters of X below six, by assigning proper values to some of the parameters in Θ. For example, the following are some important new types of pdf of the 5-parameter extended generalized gamma family (each for x ∈ (a, b), with the notation of Section 6-2):

1. f_X(x) = \frac{\varphi'(x)\,\varphi(x)^{k+rm-1}\,e^{-\beta\varphi(x)^{p}}}{\Gamma(k, m, 0, r, \beta, p)} \quad (n = 0);

2. f_X(x) = \frac{\varphi'(x)\left(\varphi(x)^{m}+n\right)^{r} e^{-\beta\varphi(x)^{p}}}{\Gamma(1, m, n, r, \beta, p)} \quad (k = 1);

3. f_X(x) = \frac{\varphi'(x)\,\varphi(x)^{k-1}\left(\varphi(x)^{m}+n\right)^{r}}{\Gamma(k, m, n, r, 0, p)} \quad (β = 0);

4. f_X(x) = \frac{\varphi'(x)\,\varphi(x)^{k-1}\,e^{-\beta\varphi(x)^{p}}}{\Gamma(k, m, n, 0, \beta, p)} \quad (r = 0);

5. f_X(x) = \frac{\varphi'(x)\,\varphi(x)^{k-1}(1+n)^{r}\,e^{-\beta\varphi(x)^{p}}}{\Gamma(k, 0, n, r, \beta, p)} \quad (m = 0);

6. f_X(x) = \frac{\varphi'(x)\,\varphi(x)^{k-1}\left(\varphi(x)^{m}+n\right)^{r} e^{-\beta}}{\Gamma(k, m, n, r, \beta, 0)} \quad (p = 0).

Notice that all of these distributions, and others, are still defined in general form, since the model function is as yet unspecified. Many functions φ(x) can be proposed; for example, if we take

\varphi(x) = \left(\frac{x-\mu}{\alpha}\right)^{\lambda}, for x ≥ μ, α > 0, λ ≥ 1, −∞ < μ < ∞,

where μ, α, λ are, respectively, location, scale and shape parameters, then the pdf of X ~ EGG(Θ, φ(x)) becomes a 9-parameter density given by

f_X(x) = \frac{\lambda}{\alpha\,\Gamma(\Theta)}\left(\frac{x-\mu}{\alpha}\right)^{\lambda k-1}\left[\left(\frac{x-\mu}{\alpha}\right)^{\lambda m}+n\right]^{r} e^{-\beta\left(\frac{x-\mu}{\alpha}\right)^{\lambda p}}; for x ≥ μ.

The following distributions are some particular cases of this distribution.

Case (1): when n = 0, then

f_X(x) = \frac{p\,\beta^{\frac{k+rm}{p}}}{\Gamma\!\left(\frac{k+rm}{p}\right)}\cdot\frac{\lambda}{\alpha}\left(\frac{x-\mu}{\alpha}\right)^{\lambda(k+rm)-1} e^{-\beta\left(\frac{x-\mu}{\alpha}\right)^{\lambda p}}, for r > -k/m and x ≥ μ.

This form can be considered as an extended generalized Weibull distribution; and when λ(k + rm) = 1 and λp = 1, then

f_X(x) = \frac{\beta}{\alpha}\,e^{-\beta\left(\frac{x-\mu}{\alpha}\right)}, x ≥ μ,

which is the generalized 3-parameter exponential distribution.

Figure 6.8: Illustration of the generalized 3-parameter exponential distribution.

For λ(k + rm) = 1, then

f_X(x) = \frac{p\,\beta^{1/p}}{\Gamma(1/p)}\cdot\frac{\lambda}{\alpha}\,e^{-\beta\left(\frac{x-\mu}{\alpha}\right)^{\lambda p}}, x ≥ μ.

This form is an extended generalized normal distribution, and when λp = 2 and β = 1/2, f_X(x) is a half-normal distribution.

Case (2): when β = 0, from the theorem we have

f_X(x) = \frac{m\,n^{-\left(r+\frac{k}{m}\right)}}{B\!\left(\frac{k}{m},\,-r-\frac{k}{m}\right)}\cdot\frac{\lambda}{\alpha}\left(\frac{x-\mu}{\alpha}\right)^{\lambda k-1}\left[\left(\frac{x-\mu}{\alpha}\right)^{\lambda m}+n\right]^{r}, for x ≥ μ (and r < -k/m),

which can be considered as an extended generalized beta distribution.

Case (3): when r = 0, from the theorem we have

f_X(x) = \frac{p\,\beta^{k/p}}{\Gamma\!\left(\frac{k}{p}\right)}\cdot\frac{\lambda}{\alpha}\left(\frac{x-\mu}{\alpha}\right)^{\lambda k-1} e^{-\beta\left(\frac{x-\mu}{\alpha}\right)^{\lambda p}}; for x ≥ μ.

This can be considered as an extended generalized gamma distribution, and when λk = 1, f_X(x) can be regarded as an extended generalized exponential distribution.

Case (4): when m = p, we have

f_X(x) = \frac{m\,\beta^{\frac{k}{m}+r}}{\Gamma\!\left(\frac{k}{m}+r\right)}\cdot\frac{\lambda}{\alpha}\left(\frac{x-\mu}{\alpha}\right)^{\lambda k-1}\left[\left(\frac{x-\mu}{\alpha}\right)^{\lambda m}+n\right]^{r} e^{-\beta\left(\frac{x-\mu}{\alpha}\right)^{\lambda m}},

for x ≥ μ and r > -k/m, and for small n, which is another form of the generalization of gamma distributions.

6-4-Extended Generalized Beta Distribution: The Extended Generalized Beta Distribution is characterized by four parameters, a, b ∈ IR and α, β > 0. A random variable X is said to have an extended generalized beta distribution if its probability density function is of the following form:

f_X(x) = \frac{(x-a)^{\alpha-1}\,(b-x)^{\beta-1}}{B_{a,b}(\alpha, \beta)}, for a < x < b, α > 0, β > 0; and 0 otherwise,

where B_{a,b}(α, β), the generalized beta function, is defined in its original form as first introduced; it is used with the gamma function to compute non-integral values. This new type of generalized beta function is defined as follows:

B_{a,b}(\alpha, \beta) = \int_a^b (x-a)^{\alpha-1}\,(b-x)^{\beta-1}\,dx; \quad \alpha, \beta > 0,\ x \in [a, b].

To search for the extended version of the generalized beta function, the final version can be re-drafted using a model function φ(·) satisfying the following conditions: φ(·) must be differentiable, defined on the interval [a, b], and must respect the boundary values of the limits of integration, so that

\varphi(\cdot): [a, b] \to IR, \quad x \mapsto \varphi(x); with the conditions φ(a) = a, φ(b) = b.

Under these conditions, the beta function can be rewritten in extended form as follows:

B_{a,b}(\alpha, \beta) = \int_a^b (\varphi(x)-a)^{\alpha-1}\,(b-\varphi(x))^{\beta-1}\,dx; \quad \alpha, \beta > 0,\ x \in [a, b].

In order to identify the extended formulas, take the simple case where the variable of integration lies between 0 and 1. This formulation can be written as follows. A continuous random variable X is said to have an extended generalized beta distribution with 4 parameters and a model function φ(x), defined by Θ = (a, b, α, β) and denoted X ~ EGB(Θ, φ(x)), if and only if its pdf is given by

f_X(x) = f_X(\Theta, \varphi(x)) = \frac{\varphi'(x)\,(\varphi(x)-a)^{\alpha-1}\,(b-\varphi(x))^{\beta-1}}{B(\Theta)}, for x ∈ (a, b); and 0 otherwise,

where α is a shape parameter, β is a scale parameter, and a and b are, respectively, the lower and upper boundary parameters, and the function

B(\Theta) = B_{a,b}(\alpha, \beta) = \int_a^b (\varphi(x)-a)^{\alpha-1}\,(b-\varphi(x))^{\beta-1}\,dx; \quad \alpha, \beta > 0,\ x \in [a, b].

It is implied from the definition of X that f_X(x) ≥ 0 for a < x < b and α, β > 0. Also we have \int_a^b f_X(x)\,dx = 1.

Figure 6.9: Illustration of the family of Uniform Distributions.
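Substituting x = a + (b−a)t shows that B_{a,b}(α, β) = (b−a)^{α+β−1} B(α, β), so the normalizing constant reduces to the ordinary beta function. A quick numerical confirmation in plain Python (standard library only; the interval [1, 4] and the exponents are illustrative choices):

```python
import math

def gen_beta(a, b, alpha, beta, steps=400000):
    """Midpoint approximation of B_{a,b}(α,β) = ∫_a^b (x-a)^(α-1) (b-x)^(β-1) dx."""
    h = (b - a) / steps
    return sum(((i + 0.5) * h) ** (alpha - 1) * ((b - a) - (i + 0.5) * h) ** (beta - 1)
               for i in range(steps)) * h

a, b, alpha, beta = 1.0, 4.0, 2.5, 3.0
numeric = gen_beta(a, b, alpha, beta)
closed = (b - a) ** (alpha + beta - 1) * math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)
assert abs(numeric - closed) < 1e-5
```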


6-5-Cases of Extended Generalized Gamma and Beta Distributions:

6-5-1-Weibull Distribution: The Weibull probability distribution has three parameters, η, β and t₀. It can be used to represent the failure probability density function (pdf) with time:

f_w(t) = \frac{\beta}{\eta}\left(\frac{t-t_0}{\eta}\right)^{\beta-1} e^{-\left(\frac{t-t_0}{\eta}\right)^{\beta}},

for η > 0, β > 0, t > 0, −∞ < t₀ < t, where β is the shape parameter (determining what the Weibull pdf looks like) and is positive; η is a scale parameter (representing the life characteristic, at which 63.2% of the population can be expected to have failed), which is also positive; and t₀ is a location, shift or threshold parameter (sometimes called a guarantee time, failure-free time or minimum life). t₀ can be any real number. If t₀ = 0 then the Weibull distribution is said to be a two-parameter distribution. Figure 6.10 shows the diverse shapes of the Weibull pdf with t₀ = 0 and various values of η and β (= 0.5, 1, 2, 3, ...). Note that the figures are all based on the assumption that t₀ = 0.

Figure 6.10: Illustration of the Weibull pdf.

The cumulative distribution function (CDF), denoted by F(t), is

F_w(t) = 1 - e^{-\left(\frac{t-t_0}{\eta}\right)^{\beta}}.

Figure 6.11 shows the Weibull CDF with t₀ = 0 and various values of η and β (= 0.5, 1, 2, 3, 5). All curves intersect at the point (1, 0.632) (here for η = 1), the characteristic point of the Weibull CDF.
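The characteristic-point claim follows immediately from the formula: at t = t₀ + η the exponent equals −1, so F = 1 − e⁻¹ ≈ 0.632 regardless of the shape β. A one-line check in plain Python (the list of shape values mirrors those used in the figure):

```python
import math

def weibull_cdf(t, eta, beta, t0=0.0):
    """Three-parameter Weibull CDF: F(t) = 1 - exp(-((t - t0)/eta)^beta) for t > t0."""
    return 1.0 - math.exp(-((t - t0) / eta) ** beta)

# At t = t0 + eta the CDF equals 1 - 1/e ~ 0.632 for every shape beta.
for beta in (0.5, 1.0, 2.0, 3.0, 5.0):
    assert abs(weibull_cdf(1.0, eta=1.0, beta=beta) - (1.0 - math.exp(-1.0))) < 1e-12
```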

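The pdf/cdf pair above can be checked numerically. In this sketch (function names are mine, parameter values illustrative) the characteristic point F_w(t0 + η) = 1 − e^(−1) ≈ 0.632 is verified for several shape parameters, along with the relation f_w = dF_w/dt:

```python
import math

def weibull_pdf(t, eta, beta, t0=0.0):
    # f_w(t) = (beta/eta) * ((t - t0)/eta)**(beta - 1) * exp(-((t - t0)/eta)**beta)
    z = (t - t0) / eta
    return (beta / eta) * z ** (beta - 1) * math.exp(-z ** beta)

def weibull_cdf(t, eta, beta, t0=0.0):
    # F_w(t) = 1 - exp(-((t - t0)/eta)**beta)
    return 1.0 - math.exp(-((t - t0) / eta) ** beta)

# Characteristic point: at t = t0 + eta the cdf equals 1 - e**-1 ~ 0.632
# regardless of the shape parameter beta.
points = [weibull_cdf(2.0, 2.0, b) for b in (0.5, 1, 2, 3, 5)]
```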
Figure 6.11: Illustration of the Weibull cdf.

A second form of the Weibull distribution has two parameters, m and t0:

R(t) = e^(−t^m / t0)
Q(t) = 1 − e^(−t^m / t0)
f(t) = (m/t0) t^(m−1) e^(−t^m / t0)
λ(t) = (m/t0) t^(m−1)
TS = t0^(1/m) Γ(1 + 1/m)
D = t0^(2/m) [Γ(1 + 2/m) − Γ²(1 + 1/m)]

Figure 6.12: Weibull distribution with two parameters.

The Weibull distribution contains the exponential and Rayleigh distributions as special cases: for m = 1 it is the exponential distribution, and for m = 2 it is the Rayleigh distribution. For m < 1 the Weibull distribution is a very good approximation of the product in the early stage. The Weibull distribution is the second most used distribution (after the exponential).

Example (Lifetime data analysis): These notes are based on the description given in Horthy; the data below give the failure times (deaths) of components. The other times are censoring times, with the censoring also being random, because they are due to failures occurring in other parts of the system. Suppose that the Weibull distribution is to be used for the failure times:

f(x; a, b) = a b x^(b−1) exp(−a x^b), 0 ≤ x < ∞, with a, b > 0.

Suppose that a second Weibull distribution is to be used for the censoring times:

g(y; c, d) = c d y^(d−1) exp(−c y^d), 0 ≤ y < ∞, with c, d > 0.

The survival probabilities for the two distributions are:

F̄(x; a, b) = 1 − F(x; a, b) = exp(−a x^b) and Ḡ(y; c, d) = exp(−c y^d).

The log-likelihood is therefore

L = Σ_i log f(x_i; a, b) + Σ_j log F̄(y_j; a, b) + Σ_i log Ḡ(x_i; c, d) + Σ_j log g(y_j; c, d),

where the sums over i run over the observed failure times and the sums over j run over the censored times.

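The censored log-likelihood above can be evaluated directly. The sketch below uses hypothetical failure and censoring times (the book's data table is not reproduced here), with parameter values chosen only for illustration:

```python
import math

def loglik(failures, censored, a, b, c, d):
    """Log-likelihood for Weibull failure times (a, b) with random Weibull
    censoring (c, d), following
    L = sum_i log f(x_i) + sum_j log Fbar(y_j) + sum_i log Gbar(x_i) + sum_j log g(y_j).
    """
    def log_f(x, p, q):     # log density: log(p*q) + (q-1)*log(x) - p*x**q
        return math.log(p * q) + (q - 1) * math.log(x) - p * x ** q
    def log_sbar(x, p, q):  # log survival: -p*x**q
        return -p * x ** q
    L = 0.0
    for x in failures:      # observed failures contribute f(x; a,b) * Gbar(x; c,d)
        L += log_f(x, a, b) + log_sbar(x, c, d)
    for y in censored:      # censored times contribute Fbar(y; a,b) * g(y; c,d)
        L += log_sbar(y, a, b) + log_f(y, c, d)
    return L

failures = [1.2, 0.7, 2.5]   # hypothetical death times
censored = [0.9, 1.8]        # hypothetical censoring times
value = loglik(failures, censored, a=0.5, b=1.3, c=0.2, d=1.0)
```

When b = d = 1 both laws reduce to exponentials, which gives a convenient hand check of the implementation.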
The quantity of interest is the probability that a component will survive h hours, π(h; a, b) = F̄(h; a, b).

6-5-2- Family of Normal Distributions: The essence of the lognormal distribution is treated in detail, for example, in Aitchison and Brown (1957). Use of the lognormal distribution in connection with wage or income distributions is described in Bartošová (2006) or Bílková (2008). The pdf of the 3-parameter lognormal distribution is:

f_L(t) = (1/(ρ (t − t0) √(2π))) e^(−(ln((t − t0)/θ))² / (2ρ²)), for ρ, θ > 0, −∞ < t0 < t,

where ρ is the shape parameter, θ is the scale parameter, and t0 is the location parameter.

Figure 6.13: The family of normal distributions pdf.

The units of ρ, θ, and t0 are the same as in the Weibull case. The lognormal is said to be a 2-parameter distribution when t0 = 0. Figure 6.14 shows the diverse shapes of the lognormal pdf with t0 = 0 and various values of θ and ρ (= 0.4, 0.8, 1.6, 2.5, 4).


Figure 6.14: The Lognormal distributions pdf.

The corresponding lognormal cdf is the integral of the pdf from 0 to the time-to-failure t. It can be written in terms of the standard normal cdf as:

F_L(t) = Φ(ln((t − t0)/θ) / ρ),

where Φ(·) is the cdf of the standard normal distribution, defined as:

Φ(z) = (1/√(2π)) ∫_{−∞}^{z} e^(−ζ²/2) dζ,

and μ = ln θ. Φ(·) is tabulated in many publications. Figure 6.15 shows the lognormal cdf with t0 = 0 and various values of θ and ρ (= 0.4, 0.8, 1.6, 2.5, 4). It is clear that all curves intersect at the point (1, 0.5), the characteristic point of the lognormal cdf.
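Since Φ is available through the error function, Φ(z) = (1 + erf(z/√2))/2, the lognormal cdf can be evaluated without tables. A small sketch (function names mine, parameter values illustrative) confirming the characteristic point F_L(t0 + θ) = 0.5:

```python
import math

def std_normal_cdf(z):
    # Phi(z) = (1 + erf(z / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lognormal_cdf(t, theta, rho, t0=0.0):
    # F_L(t) = Phi( ln((t - t0)/theta) / rho )
    return std_normal_cdf(math.log((t - t0) / theta) / rho)

# Characteristic point: at t = t0 + theta the cdf is 0.5 for every shape rho.
half = [lognormal_cdf(3.0, 3.0, r) for r in (0.4, 0.8, 1.6, 2.5, 4.0)]
```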

Figure 6.15: The Lognormal distributions ( pdf , cdf ).


The logarithmic-normal distribution is defined by the logarithm of a random variable with normal distribution: the random variable x = ln(t) has a normal distribution with parameters μ and σ, so x ∈ (−∞, ∞) and t > 0. The normal distribution need not be truncated for the probability density:

f(x) = f(t) = (M/(σ t √(2π))) e^(−(x − μ)² / (2σ²)), x ∈ (−∞, ∞);

with respect to the variable t the exponent becomes

−(log(t) − log(t0))² / (2σ²), with μ = log(t0),

where the constant M ≈ 0.4343 accounts for the transformation ln → log.

Figure 6.16: Normal distribution parameters (pdf, cdf).

Mean time to failure: MTTF = TS = t0 · 10^(1.1513 σ²), which can be evaluated as log(TS) = log(t0) + 1.1513 σ². The dispersion is:

D = TS² (TS²/t0² − 1).

For a small shape parameter σ ≤ 0.1 the distribution is similar to the normal distribution and can be used for the time of system back-up; the previous simple distributions cannot approximate the whole lifetime of a product, hence the need for combinations of distributions.
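Assuming, as in the M-constant convention above, that σ is the standard deviation of log10(T), the relation log(TS) = log(t0) + 1.1513 σ² can be verified numerically. A sketch with hypothetical t0 and σ (names are mine):

```python
import math

def mean_lognormal_log10(t0, sigma, n=200001, span=10.0):
    """E[T] for T = 10**X with X ~ Normal(log10(t0), sigma),
    evaluated by midpoint integration as a numerical check."""
    m = math.log10(t0)
    lo, hi = m - span * sigma, m + span * sigma
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        pdf = math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += 10 ** x * pdf * h
    return total

t0, sigma = 50.0, 0.1                              # hypothetical values
ts_formula = t0 * 10 ** (1.1513 * sigma ** 2)      # TS = t0 * 10**(1.1513 * sigma**2)
ts_numeric = mean_lognormal_log10(t0, sigma)
```

The constant 1.1513 is ln(10)/2 = 1/(2M), which is where the M ≈ 0.4343 conversion enters.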


The Gaussian q-distribution is symmetric about zero and is bounded, except for the limiting case of the normal distribution. The limiting uniform distribution is on the range -1 to +1.

Figure 6.17: Plot of density and cumulative Gaussian q-distribution (pdf, cdf).

6-5-3- Exponential distribution: has one parameter λ; its reliability characteristics can be easily expressed as analytic functions:

R(t) = e^(−λt)
Q(t) = 1 − e^(−λt)
f(t) = λ e^(−λt)
λ(t) = λ
TS = 1/λ, and D = 1/λ²
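These identities are easy to confirm; the sketch below (the value of λ is illustrative) checks the constant hazard λ(t) = f(t)/R(t) = λ and the memoryless property R(s + t) = R(s)R(t):

```python
import math

lam = 0.5  # hypothetical failure rate

R = lambda t: math.exp(-lam * t)          # reliability
f = lambda t: lam * math.exp(-lam * t)    # density
hazard = lambda t: f(t) / R(t)            # lambda(t) = f(t)/R(t)

ts = 1.0 / lam                            # mean time to failure TS = 1/lam
mem = R(1.0 + 2.0) - R(1.0) * R(2.0)      # memoryless property residual
```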

Figure 6.18: Plot of density and cumulative Exponential distribution (pdf, cdf).

The exponential is the most used distribution; MTTF = 1/λ, so the distribution is defined by its mean value. It is a very good approximation of systems in the period of normal operation, but it does not match the early stage or the end-of-life stage, and systems with a flexible failure rate are not convenient for the exponential distribution.

6-5-4- Rayleigh distribution: The Rayleigh distribution is often used where two orthogonal components have an absolute value; for example, wind velocity and direction may be combined to yield a wind speed, or real and imaginary components may have absolute values that are Rayleigh distributed. It is defined by one parameter k, and the failure rate increases linearly. Suppose k > 0; then

R(t) = e^(−k t²/2)
Q(t) = 1 − e^(−k t²/2)
f(t) = k t e^(−k t²/2)
λ(t) = k t
TS = √(π/(2k)) ≈ 1.253/√k
D = (2 − π/2)/k ≈ 0.429/k

Figure 6.19: Plot of density and cumulative Rayleigh distribution (pdf, cdf).

6-5-5-Gamma distribution: The Gamma distribution is a continuous probability distribution. It models the sum of multiple independent, exponentially distributed variables; the Exponential distribution can thus be viewed as a special case of the Gamma distribution. The distribution has two parameters m > 0, c > 0 (a system with backup):

f(t) = (t^(m−1) / (c^m Γ(m))) e^(−t/c)

TS = m·c, and D = m·c². If m is a natural number, then Γ(m) = (m − 1)!, so:

f(t) = t^(m−1) e^(−t/c) / (c^m (m − 1)!)

Figure 6.20: Plot of density and cumulative Gamma distribution (pdf, cdf).

Here m changes the shape of the function f(t), and c changes the time-scale factor; for m = 1 the gamma distribution is equal to the exponential distribution with λ = 1/c.
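The reduction to the exponential at m = 1 and the mean TS = m·c can be confirmed numerically; a sketch with an illustrative c (function names mine):

```python
import math

def gamma_pdf(t, m, c):
    # f(t) = t**(m-1) / (c**m * Gamma(m)) * exp(-t/c)
    return t ** (m - 1) / (c ** m * math.gamma(m)) * math.exp(-t / c)

c = 2.0
lam = 1.0 / c
# For m = 1 the gamma density equals the exponential density with lam = 1/c.
diffs = [abs(gamma_pdf(t, 1.0, c) - lam * math.exp(-lam * t)) for t in (0.1, 1.0, 5.0)]

# Check the mean TS = m*c numerically for m = 3 (midpoint rule on [0, 100]).
h, mean = 0.001, 0.0
for i in range(100000):
    t = (i + 0.5) * h
    mean += t * gamma_pdf(t, 3.0, c) * h
```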


6-5-6-Truncated normal distribution: The two parameters are μ > 0, σ > 0;

f(t) = (c/(σ√(2π))) e^(−(t − μ)²/(2σ²)),

where c = [Φ(μ/σ)]^(−1). Reliability theory uses only t ≥ 0, which means that for t < 0 the probability of failure is zero. The characterization of reliability with the truncated normal distribution on t ≥ 0 uses Φ, the distribution function of the standard normal distribution N(0, 1):

Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^(−t²/2) dt,

and the probability density of the standard normal distribution is:

φ(x) = (1/√(2π)) e^(−x²/2).

Then:

R(t) = Φ((μ − t)/σ) / Φ(μ/σ)
Q(t) = 1 − R(t)
f(t) = φ((t − μ)/σ) / (σ Φ(μ/σ))
λ(t) = φ((t − μ)/σ) / (σ Φ((μ − t)/σ))
TS = μ + σ φ(μ/σ) / Φ(μ/σ)
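A sketch of these reliability quantities using the error function (names and the values μ = 5, σ = 2 are illustrative); it checks that R(0) = 1, that f integrates to one on t ≥ 0, and that the numerical mean matches TS:

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def trunc_norm(mu, sigma):
    R = lambda t: Phi((mu - t) / sigma) / Phi(mu / sigma)          # reliability
    f = lambda t: phi((t - mu) / sigma) / (sigma * Phi(mu / sigma))  # density
    ts = mu + sigma * phi(mu / sigma) / Phi(mu / sigma)            # mean TS
    return R, f, ts

mu, sigma = 5.0, 2.0
R, f, ts = trunc_norm(mu, sigma)

# Midpoint integration on [0, 20] (= mu + 7.5*sigma, the tail is negligible).
h = 0.001
area = mean_num = 0.0
for i in range(20000):
    t = (i + 0.5) * h
    area += f(t) * h
    mean_num += t * f(t) * h
```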


Figure 6.21: Plot of density and cumulative truncated normal distribution (pdf, cdf).

For μ ≥ 2σ, TS is very similar to μ; the truncated normal distribution is suitable for approximation of the reliability of a product in the end-of-life stage (see figure).

6-5-7- A Derivation of the standardized SGT: The probability density function of the Skewed Generalized t (SGT) distribution introduced by Theodossiou (1998) is represented as follows:

f(x | σ, κ, n, λ) = (C/σ) [1 + |x|^κ / (((n − 2)/κ) θ^κ σ^κ (1 + λ sign(x))^κ)]^(−(n+1)/κ),

where

θ = (κ/(n − 2))^(1/κ) B(1/κ, n/κ)^(1/2) B(3/κ, (n − 2)/κ)^(−1/2) S(λ)^(−1),

S(λ) = √(1 + 3λ² − 4A²λ²),

A = B(2/κ, (n − 1)/κ) B(1/κ, n/κ)^(−1/2) B(3/κ, (n − 2)/κ)^(−1/2),

C = (κ/2) S(λ) B(3/κ, (n − 2)/κ)^(1/2) B(1/κ, n/κ)^(−3/2).

The expected value of x is μ ≡ E(x) = 2λσA/S(λ) and the variance of x is var(x) = E(x²) − μ² = σ². For simplicity, the derivation of the parameters C, θ, and δ for the standardized SGT distribution is accomplished

using the transformed random variable ε = (x − μ)/σ, which has a mean of E(ε) = 0 and a variance of var(ε) = 1. The random variable x can thus be expressed as x = μ + σε; letting δ = μ/σ = 2λA/S(λ), we have

dx/dε = σ, x/σ = (μ + σε)/σ = ε + δ,

and, since σ > 0,

sign(x) = sign(μ + σε) = sign(σ(ε + δ)) = sign(ε + δ).

Substituting the above expressions into the pdf of x gives

f(ε) = f(x) · (dx/dε)
     = (C/σ) [1 + |x|^κ / (((n − 2)/κ) θ^κ σ^κ (1 + λ sign(x))^κ)]^(−(n+1)/κ) · σ
     = C [1 + |ε + δ|^κ / (((n − 2)/κ) θ^κ (1 + λ sign(ε + δ))^κ)]^(−(n+1)/κ),

where

C = (κ/2) S(λ) B(3/κ, (n − 2)/κ)^(1/2) B(1/κ, n/κ)^(−3/2),
θ = (κ/(n − 2))^(1/κ) B(1/κ, n/κ)^(1/2) B(3/κ, (n − 2)/κ)^(−1/2) S(λ)^(−1).

Where; 1

 S()  3 n  2  2  1 n  C  B ,  B ,  2 t         1



3 2

1



1

1     1 n 2  3 n  2  2     B ,  B ,  , S()  n  2         

Letting

1 1    (n  2)   1  1 n  2  3 n  2  2      B ,  B ,   S()            



and

substituting the expression into Eq. (A2) gives the equation which is the standardized SGT distribution. f   

1 2

S()  3 n  2   1 n  B ,  B ,  2     

293



3 2

      1      1  sign (  )    



n 1 

       C1      1  sign (  )    



n 1 

;

Where; 1



1

1  1 n 2  3 n  2 2 2 2 2  B ,  B ,  , S()  1  3  4A  , S()        

 2 n 1  1 n  A  B , B ,      

0.5

 3 n 2 B ,    

1

S()  3 n  2  2  1 n  C B ,  B ,  2     



3 2

0.5

, 

2A , S()

 1 n  B ,  2    

1

In particular, the SGT distribution generates the Student t distribution for λ = 0 and κ = 2. Using the recurrence identities of the Beta function,

B(a, b) = B(b, a), B(a + 1, b) = [a/(a + b)] B(a, b), and B(a − 1, b) = [(a + b − 1)/(a − 1)] B(a, b),

we can obtain

B(3/2, (n − 2)/2) = [1/(n − 2)] B(1/2, n/2).

Therefore δ = 2λA/S(λ) = 0, S(λ) = √(1 + 3λ² − 4A²λ²) = 1, and

θ* = B(1/2, n/2)^(1/2) B(3/2, (n − 2)/2)^(−1/2)
   = B(1/2, n/2)^(1/2) [(n − 2)^(−1) B(1/2, n/2)]^(−1/2) = √(n − 2),

C = B(3/2, (n − 2)/2)^(1/2) B(1/2, n/2)^(−3/2)
  = (n − 2)^(−1/2) B(1/2, n/2)^(−1) = Γ((n + 1)/2) / (√(π(n − 2)) Γ(n/2)).


The probability density function of the standardized Student t distribution can thus be represented as follows:

f(ε) = C [1 + ε²/(n − 2)]^(−(n+1)/2) = [Γ((n + 1)/2) / (Γ(n/2) √(π(n − 2)))] [1 + ε²/(n − 2)]^(−(n+1)/2).
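As a check on the derivation, the standardized t density above should integrate to one and have unit variance (that is the point of the (n − 2) scaling). A sketch for the illustrative choice n = 8:

```python
import math

def std_t_pdf(eps, n):
    """Standardized (unit-variance) Student t obtained from the SGT with
    lambda = 0, kappa = 2:
    f(eps) = Gamma((n+1)/2) / (Gamma(n/2)*sqrt(pi*(n-2))) * (1 + eps^2/(n-2))^(-(n+1)/2)
    """
    c = math.gamma((n + 1) / 2.0) / (math.gamma(n / 2.0) * math.sqrt(math.pi * (n - 2)))
    return c * (1.0 + eps * eps / (n - 2)) ** (-(n + 1) / 2.0)

# Midpoint integration on [-60, 60]; the power tail beyond is negligible for n = 8.
n = 8
h, lo = 0.001, -60.0
area = var = 0.0
for i in range(120000):
    e = lo + (i + 0.5) * h
    p = std_t_pdf(e, n)
    area += p * h
    var += e * e * p * h
```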

6-5-8- Superimposing of two exponential distributions: an approximation of reliability in the early stage and for a normally working product;

R(t) = c1 e^(−λ1 t) + c2 e^(−λ2 t)
f(t) = λ1 c1 e^(−λ1 t) + λ2 c2 e^(−λ2 t)

As the probability density must integrate to one, it needs to satisfy

∫_0^∞ f(t) dt = c1 + c2 = 1,

from which we can calculate c2 = 1 − c1. The failure rate is

λ(t) = (c1 λ1 e^(−λ1 t) + c2 λ2 e^(−λ2 t)) / (c1 e^(−λ1 t) + c2 e^(−λ2 t)).

Its starting point is λ(0) = λ1 c1 + λ2 c2, and the mean time to failure is TS = c1/λ1 + c2/λ2.

Figure 6.22: The Superimposing of two exponential distributions (pdf, cdf). Typically we have 10

a)- Find the density function.
b)- Show that if W follows a Weibull distribution, then X = (W/θ)^β follows an exponential distribution.
c)- How could a Weibull random variable be generated from a uniform random number generator?

Solution:
a)- X ~ Weibull(θ, β), with F(x) = 1 − e^(−(x/θ)^β), x ≥ 0, θ > 0, β > 0; hence

f(x) = F′(x) = (β/θ)(x/θ)^(β−1) e^(−(x/θ)^β), x ≥ 0, θ > 0, β > 0.

b)- Let X = (W/θ)^β, where W ~ Weibull(θ, β). Then W = θ X^(1/β) and the Jacobian is J = dw/dx = (θ/β) x^(1/β − 1), so

f_X(x) = f_W(θ x^(1/β)) |J| = (β/θ) x^((β−1)/β) e^(−x) (θ/β) x^(1/β − 1) = e^(−x), x ≥ 0,

i.e. X ~ exp(1).

c)- F_W(w) = 1 − e^(−(w/θ)^β). Let z = 1 − e^(−(w/θ)^β); then

1 − z = e^(−(w/θ)^β), so (w/θ)^β = −ln(1 − z) and w = θ (−ln(1 − z))^(1/β).

If Z ~ uniform[0, 1] then 1 − Z ~ uniform[0, 1], so we can first generate U ~ uniform(0, 1) and let W = θ(−ln U)^(1/β); then W ~ Weibull(θ, β).
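Part (c) is the inverse-transform method; the sketch below (seed and parameters illustrative) generates Weibull variates from uniforms and compares the empirical mean with the known value θ Γ(1 + 1/β):

```python
import math
import random

def weibull_from_uniform(theta, beta, rng):
    # W = theta * (-ln U)**(1/beta) with U ~ uniform(0, 1);
    # 1 - rng.random() keeps U in (0, 1] so the log is defined.
    u = 1.0 - rng.random()
    return theta * (-math.log(u)) ** (1.0 / beta)

rng = random.Random(12345)        # fixed seed, illustrative
theta, beta = 2.0, 1.5
sample = [weibull_from_uniform(theta, beta, rng) for _ in range(200000)]
emp_mean = sum(sample) / len(sample)
true_mean = theta * math.gamma(1.0 + 1.0 / beta)   # E[W] = theta * Gamma(1 + 1/beta)
```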

Example (8): Show that the gamma distribution (with r fixed) belongs to a natural general exponential family:

f(y | λ) = (λ^r y^(r−1) / Γ(r)) exp(−λy).

Solution: Let us write this distribution as:

f(y | λ) = exp(−λy + r log(λ) + (r − 1) log(y) − log(Γ(r))).

If we define the parameters as:

θ = −λ
B(θ) = −r log(−θ)
D(y) = (r − 1) log(y) − log(Γ(r))

then we can write:

f(y | θ) = exp(yθ − B(θ) + D(y)),

which is the expression of a natural exponential family.

Example (9): For the natural general exponential family

f(y | θ, φ) = exp((yθ − B(θ))/S(φ) + D(y, φ)),

find the moment generating function, and find the first and the second moments.

Solution: Let us assume that the moment generating function exists (it would be better still to work with the characteristic function). Note that the integral below is over the domain of definition: for example, for the normal distribution it is between minus and plus infinity, and for the exponential distribution it is between 0 and plus infinity. It has this form:

M(t) = ∫ exp(ty) f(y | θ, φ) dy.

Recall that if the function is the density of a distribution, then its integral over the whole domain of definition of y must equal 1:

∫ exp((yθ − B(θ))/S(φ) + D(y, φ)) dy = 1
⇒ exp(−B(θ)/S(φ)) ∫ exp(yθ/S(φ) + D(y, φ)) dy = 1
⇒ ∫ exp(yθ/S(φ) + D(y, φ)) dy = exp(B(θ)/S(φ)).   (1)

Now let us write the moment generating function for this family:

M(t) = ∫ exp(ty) exp((yθ − B(θ))/S(φ) + D(y, φ)) dy
     = ∫ exp(yt + yθ/S(φ) + D(y, φ) − B(θ)/S(φ)) dy
     = exp(−B(θ)/S(φ)) ∫ exp(y(θ + tS(φ))/S(φ) + D(y, φ)) dy.   (2)

If we recall equation (1), we see that the integral is equal to:

∫ exp(y(θ + tS(φ))/S(φ) + D(y, φ)) dy = exp(B(θ + tS(φ))/S(φ)).   (3)

M (t )  exp(( B( )  B(  tS ( )) / S ( )) Moments of the distributions are related to the moment generating function like this:

E( y m ) 

d m M (t ) dt m t  0

Let us calculate the first two moments:

dM (t )   B' (  tS ( )) exp(( B( )  B(  tS ( )) / S ( ))   B' ( ) t 0 dt t  0 and: d 2 M (t )  (  S ( ) B' ' (  tS ( )) exp(( B( )  B(  tS ( )) / S ( )  dt 2 t  0 ( B' (  tS ( )) 2 exp(( B( )  B(  tS ( )) / S ( )))

t0 Thus for the first and second moments we can write:

  S ( ) B' ' ( )  ( B' ( )) 2

E ( y )  B' ( ) E ( y 2 )   S ( ) B' ' ( )  ( B' ( )) 2 And for the second central moment (variance) we can write: E ( y 2 )  ( E ( y )) 2  S ( ) B' ' ( ) .
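For the gamma example, B(θ) = −r log(−θ) with θ = −λ and S(φ) = 1 (there is no dispersion parameter in that example), so the identities E(y) = B′(θ) and var(y) = S(φ) B″(θ) should reproduce the familiar gamma mean r/λ and variance r/λ². A sketch (names mine, values illustrative):

```python
# Natural-parameter functions for the gamma density written as
# exp(y*theta - B(theta) + D(y)) with theta = -lam and B(theta) = -r*log(-theta).
r = 3.0
lam = 0.5
theta = -lam

B1 = lambda th: -r / th        # B'(theta)  = -r/theta
B2 = lambda th: r / th ** 2    # B''(theta) = r/theta**2

mean_formula = B1(theta)       # should equal the gamma mean r/lam
var_formula = B2(theta)        # S(phi)*B''(theta) with S(phi) = 1: gamma variance r/lam**2
```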

It is worth noting that the first moment does not depend on φ; however, the second (central) moment does depend on it. This is a feature of the natural general exponential family.

Example (10): The time between severe earthquakes at a given region follows a log-normal distribution with a coefficient of variation of 40%. The expected time between severe earthquakes is 80 years.
a)- Determine the parameters of this log-normally distributed interval time T. (ans. 4.308, 0.385).
b)- Determine the probability that a severe earthquake will occur within 20 years from the previous one. (ans. 0.00033).
c)- Suppose the last severe earthquake in the region took place 100 years ago. What is the probability that a severe earthquake will occur over the next years? (ans. 0.034).

Solution: (a) The "parameters" λ and ζ of a log-normal R.V. are related to its mean μ and standard deviation σ as follows:

ζ² = ln(1 + (σ/μ)²), λ = ln(μ) − ζ²/2.

Substituting the given values δ_T = σ_T/μ_T = 0.4, μ_T = 80, we find ζ² = 0.14842 and λ = 4.307817, hence ζ ≈ 0.385 and λ ≈ 4.308. The importance of these parameters is that they are the standard deviation and mean of the related variable X = ln(T), which is normal.

(b) Probability calculations concerning T can be done through X, as follows:

P(T ≤ 20) = P(ln T ≤ ln 20) = P(X ≤ ln 20) = P((X − λ)/ζ ≤ (ln 20 − λ)/ζ) = P(Z ≤ −3.4058) ≈ 0.00033.
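Parts (a) and (b) can be reproduced with the error function; the sketch below follows the formulas above (function names are mine):

```python
import math

def Phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu_T, cov = 80.0, 0.4
zeta2 = math.log(1.0 + cov ** 2)           # zeta^2 = ln(1 + (sigma/mu)^2)
zeta = math.sqrt(zeta2)
lam = math.log(mu_T) - zeta2 / 2.0         # lam = ln(mu) - zeta^2/2

p20 = Phi((math.log(20.0) - lam) / zeta)   # P(T <= 20)
# Part (c) would use the conditional probability P(100 < T <= 100 + s | T > 100).
```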

(c)-"T > 100" is the given event, while "a severe quake occurs over the next years" is the event "100 < T