Proceedings of the 38th Southeastern Symposium on System Theory, Tennessee Technological University, Cookeville, TN, USA, March 5-7, 2006

MC4.3

Computing the characteristic function for sums of sinusoidal random variables

John E. Gray∗ and Stephen R. Addison†

Abstract

In this note we develop the methodology for computing the moments of the characteristic function for the superposition of sinusoidally transformed random variables. We thereby solve an important case of the Rayleigh problem, and point out how this technique can be used to solve it completely.

1 Introduction

In previous work, we established a method for computing both the probability density function (PDF) and the characteristic function (CF) of sinusoidal transforms. In this paper, we extend that work to superpositions of sinusoidal transforms of random variables. Sums of this type arise in numerous applications, including remote sensing, radar clutter modeling, communications, radar waveform analysis, and random walks [5].

Several conventions for defining the Fourier transform are used in the literature; the differences involve the placement of the factor of 2π that appears in the transform. If we adopt the Hilbert space notation for the inner product of two functions (where ∗ denotes the complex conjugate),

    ⟨f(t), g(t)⟩ = ∫_{-∞}^{∞} f∗(t) g(t) dt,    (1)

then the Fourier transform of a function f(t) can be written as

    F(ω) = ⟨e^{-iωt}, f(t)⟩ = ∫_{-∞}^{∞} e^{iωt} f(t) dt.    (2)

The inverse Fourier transform of a function F(ω) is defined as

    f(t) = (1/2π) ⟨e^{iωt}, F(ω)⟩ = (1/2π) ∫_{-∞}^{∞} e^{-iωt} F(ω) dω.    (3)

In this notation the Dirac delta function is δ(t) = (1/2π) ⟨e^{iωt}, 1⟩.

In many engineering applications, difficulty occurs when there is a change of variables. Given a random variable x̂ with density P(x), finding the density of a new variable û that is a function of the old variable,

    û = f(x̂),    (4)

is a problem encountered in many different application areas. The standard method for solving this type of problem is found in [9]. Once the characteristic function is known, the distribution can be obtained by inversion [8]. Another way to define the characteristic function is as the expected value of e^{iωf(x)},

    M_u(ω) = ⟨e^{iωf(x)}⟩ = ⟨e^{-iωf(x)}, P_x(x)⟩.    (5)
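Equation (5) is easy to check numerically. The sketch below is not from the paper; it is a minimal illustration (NumPy) that evaluates the expectation by quadrature for a standard normal P_x with f(x) = x, where the analytic CF is e^{-ω²/2}:

```python
import numpy as np

# M_u(w) = E[exp(i*w*f(x))], eq. (5), evaluated by direct quadrature
# against a standard normal density P_x. With f(x) = x the analytic
# characteristic function is exp(-w**2 / 2).
def cf_numeric(w, f, x):
    px = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal P_x
    dx = x[1] - x[0]
    return np.sum(np.exp(1j * w * f(x)) * px) * dx  # Riemann sum

x = np.linspace(-10.0, 10.0, 20001)
w = 1.5
err = abs(cf_numeric(w, lambda t: t, x) - np.exp(-w**2 / 2))
print(err)  # negligibly small for this smooth, rapidly decaying integrand
```

The grid limits and point count are illustrative; any grid that resolves the density and covers its support works equally well.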

∗ J. E. Gray is with the Advanced Science and Technology Division, Code B-23, Naval Surface Warfare Center, Dahlgren Division, Dahlgren, VA 22448-5150, USA. [email protected]

† S. R. Addison is Chair of the Department of Physics and Astronomy, College of Natural Sciences and Mathematics, University of Central Arkansas, Conway, AR 72035, USA. [email protected]

0-7803-9457-7/06/$20.00 ©2006 IEEE.

Substituting this definition of M_u(ω) into the inversion formula (3) for the density gives

    P_u(u) = (1/2π) ⟨e^{iωu}, ⟨e^{-iωf(x)}, P_x(x)⟩⟩,    (6)

which upon rearrangement of the order of integration gives

    P(u) = (1/2π) ⟨⟨e^{iωu} e^{-iωf(x)}, 1⟩, P_x(x)⟩.    (7)

It follows from the definition of the delta function [2] that

    (1/2π) ⟨e^{iω(u - f(x))}, 1⟩ = δ(u - f(x)) = Σ_i δ(x - x_i) / |f′(x_i)|,    (8)

since [6] δ(g(x)) = Σ_i δ(x - x_i) / |g′(x_i)|, where the x_i's are the solutions of g(x_i) = 0, i.e. the zeros of g. So

    P(u) = ⟨δ(u - f(x)), P_x(x)⟩ = Σ_i P(x_i) / |f′(x_i)|.    (9)

We will call equation (9) Rule 1.

Now if we have û = f(x̂, ŷ), the CF is

    M_u(θ) = ⟨e^{-iθ f(x,y)}, P_{xy}(x, y)⟩,    (10)

so the PDF is

    P(u) = (1/2π) ⟨e^{iθu}, M_u(θ)⟩
         = (1/2π) ⟨e^{iθu}, ⟨e^{-iθ f(x,y)}, P_{xy}(x, y)⟩⟩
         = (1/2π) ⟨⟨e^{iθ(u - f(x,y))}, 1⟩, P_{xy}(x, y)⟩.    (11)

Integrating out θ gives the delta function, so we have

    P(u) = ⟨δ[u - f(x, y)], P(x, y)⟩,    (12)

or

    P(u) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} δ(u - f(x, y)) P(x, y) dx dy.    (13)

We will call this Rule 2. It is now easy to determine three common rules for combining random variables.

If û = x̂ + ŷ, what is P_u(u)? We can use Rule 2 to give

    P(u) = ∫∫ δ(u - (x + y)) P(x, y) dx dy
         = ∫ P(x, u - x) dx
         = ∫ P_x(x) P_y(u - x) dx.    (Rule 2a, independence)

Here Rule 1 was used in the form

    δ(u - (x + y)) = δ((u - x) - y).

If û = x̂ŷ, what is P_u(u)? We can use Rule 2 to give

    P(u) = ∫∫ δ(u - xy) P(x, y) dx dy    (14)
         = ∫ (1/|x|) P(x, u/x) dx
         = ∫ (1/|x|) P_x(x) P_y(u/x) dx,    (Rule 2b, independence)

where Rule 1 gives

    δ(u - xy) = (1/|x|) δ(u/x - y).    (15)

If û = x̂/ŷ, what is P_u(u)? We can use Rule 2 to give

    P(u) = ∫∫ δ(u - x/y) P(x, y) dx dy
         = ∫ (|x|/u²) P(x, x/u) dx
         = ∫ (|x|/u²) P_x(x) P_y(x/u) dx,    (Rule 2c, independence)

since Rule 1 gives

    δ(u - x/y) = (|x|/u²) δ(y - x/u).    (16)

Note that it is reasonable to assume that any combination of functions of random variables can be broken down into repeated applications of Rule 1 together with Rules 2a-c.
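Rule 2a is the familiar convolution rule for the sum of independent random variables, and by the CF definition (5) it is equivalent to multiplying the individual CFs. A minimal numerical sketch (NumPy; the Gaussian test case and grid are illustrative, not from the paper) compares the discrete convolution of two Gaussian densities against the known Gaussian of summed variances:

```python
import numpy as np

# Rule 2a: P_u(u) = ∫ P_x(x) P_y(u - x) dx for independent x and y.
# For zero-mean Gaussians the sum is Gaussian with variance s1² + s2².
def gauss(t, s):
    return np.exp(-t**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

t = np.linspace(-10.0, 10.0, 2001)       # odd-length, symmetric grid
dt = t[1] - t[0]
s1, s2 = 1.0, 2.0
p_u = np.convolve(gauss(t, s1), gauss(t, s2), mode="same") * dt
exact = gauss(t, np.hypot(s1, s2))        # sqrt(s1**2 + s2**2)
print(np.max(np.abs(p_u - exact)))        # small discretization error
```

The odd-length symmetric grid keeps the `mode="same"` output aligned with the input grid, so the two curves can be compared pointwise.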

The transformation ẑ = A sin(φ̂) is an example of a nonlinear transformation that occurs in many guises in different applications, and it is extremely important to understand the analytical nuances associated with it. While the PDF for φ̂, f_φ(φ), may be both one-to-one and onto, the mapping z = A sin(φ) is onto but not one-to-one over the interval (-∞, ∞); thus it has an infinite number of zeros. It is more convenient to determine the CF directly, so the Fourier transformation of the PDF is

    M_z(ω) = ⟨e^{iωA sin(φ)}, f_φ(φ)⟩.    (17)

The exponential can be written as

    e^{iωA sin(φ)} = Σ_{n=-∞}^{∞} J_n(ωA) e^{inφ},    (18)

so the CF is given by [8]

    M_z(ω) = Σ_{n=-∞}^{∞} J_n(ωA) F(n),    (19)

where F(n) is the Fourier transform of the PDF for the angle variable, f_φ(φ), evaluated at n. Noting that the Bessel functions satisfy J_{-n}(x) = (-1)^n J_n(x), we have

    M_z^{sin}(ω) = J_0(ωA) F(0) + Σ_{n=1}^{∞} J_n(ωA) S(n),    (20)

where S(n) = [F(n) + (-1)^n F(-n)]. If F(n) is even or odd, this formula can be simplified further. The transformation x̂ = A cos(φ̂) = A sin(φ̂ + π/2) amounts to replacing φ by φ + π/2 in the exponential, so the CF is

    M_z^{cos}(ω) = J_0(ωA) F(0) + Σ_{n=1}^{∞} J_n(ωA) C(n),    (21)

where C(n) = [(-1)^n F(n) + F(-n)]. The expressions for C(n) and S(n) can be simplified further if the functions are even or odd. A uniformly distributed angle in the range (0, 2π) has

    (1/2π) Θ(θ) Θ(1 - θ/2π)  ⇔  F(n) = { 1,  n = 0;  0,  otherwise },    (22)

so M_φ^{sin}(ω) = M_φ^{cos}(ω) = J_0(ωA).

From the characteristic function, we can determine the PDF of the transformation ŷ = A sin(θ̂) by using a Fourier transform identity found in [1], which enables us to determine the PDF of the sinusoidal transform from the expression for M_φ(ω), to give

    f_y^{sin θ}(y) = Θ(1 - |y/A|) [F(0) + Σ_{n=1}^{∞} U_{sin(θ)}(n) T_n(y/A)] / (πA √(1 - (y/A)²)),    (23)

where

    U_{sin(θ)}(n) = [F(n) + (-1)^n F(-n)] (-i)^n,    (24)

and the T_n(·) are the Chebyshev polynomials.

For another example, a sinusoidal transformation of a zero-mean normal distribution: the Fourier pair is

    (1/√(2πσ_θ²)) e^{-θ²/2σ_θ²}  ⇔  e^{-σ_θ² ω²/2},    (25)

thus F(n) = e^{-σ_θ² n²/2}, which is even. The characteristic functions for the sine and cosine transformations are therefore the same, so the characteristic function is

    M_{σ_θ}^{G}(ω) = M_φ^{sin}(ω) = M_φ^{cos}(ω) = J_0(ωA) + 2 Σ_{m=1}^{∞} J_{2m}(ωA) e^{-2σ_θ² m²}.    (26)

For a non-zero-mean Gaussian, the Fourier pair is

    (1/√(2πσ_θ²)) e^{-(θ-θ_0)²/2σ_θ²}  ⇔  e^{-ω² σ_θ²/2} e^{-iωθ_0},    (27)

thus

    F(n) = e^{-n² σ_θ²/2} e^{-inθ_0},    (28)

so the characteristic function for the sine is

    M_z^{sin}(ω) = J_0(ωA) + Σ_{n=1}^{∞} J_n(ωA) e^{-n²σ_θ²/2} [e^{-inθ_0} + (-1)^n e^{inθ_0}]
                 = J_0(ωA) - 2i Σ_{m=0}^{∞} J_{2m+1}(ωA) e^{-(2m+1)²σ_θ²/2} sin((2m+1)θ_0)
                   + 2 Σ_{m=1}^{∞} J_{2m}(ωA) e^{-2m²σ_θ²} cos(2mθ_0),    (29)

and the characteristic function for the cosine is

    M_z^{cos}(ω) = J_0(ωA) + Σ_{n=1}^{∞} J_n(ωA) e^{-n²σ_θ²/2} [(-1)^n e^{-inθ_0} + e^{inθ_0}]
                 = J_0(ωA) + 2i Σ_{m=0}^{∞} J_{2m+1}(ωA) e^{-(2m+1)²σ_θ²/2} sin((2m+1)θ_0)
                   + 2 Σ_{m=1}^{∞} J_{2m}(ωA) e^{-2m²σ_θ²} cos(2mθ_0).    (30)

Other sinusoidal transforms of random variables are handled in the same manner.

2 Statistical Characterization of Sums of Random Variables

Given the individual PDF of the nonlinearly transformed random variable ẑ, what is the PDF of the N-fold random sum

    Ẑ = ẑ_1 + ẑ_2 + ... + ẑ_N ?    (31)

To evaluate such sums, it suffices to calculate them two at a time. For a sum such as ẑ_12 = ẑ_1 + ẑ_2, with distributions P_{z1}(z_1) and P_{z2}(z_2) (assuming they are uncorrelated), the distribution of ẑ_12, denoted by P_z(z), is

    P_z(z) = ∫_{-∞}^{∞} P_{z1 z2}(z_1, z - z_1) dz_1 = ∫_{-∞}^{∞} P_{z1}(z_1) P_{z2}(z - z_1) dz_1  (uncorrelated).    (32)

To compute the PDF of Ẑ, we just continue the process,

    ẑ_123 = ẑ_12 + ẑ_3,    (33)

up through

    Ẑ = ẑ_{12...N-1} + ẑ_N.    (34)

In principle the problem is solved provided the PDF f_z(z) can be calculated, so that it is then possible to calculate P_Z(z). An alternative to the N-fold calculation of the sums is to work with the characteristic function directly [7]. To determine the CF of the sum

    ŷ = Σ_{i=1}^{N} A_i cos(θ̂_i),    (35)

we multiply the CFs of the individual components. Thus the CF for the sum of N sinusoids with uniformly distributed phases is simply M_Ẑ = [J_0(ωA)]^N.

An alternative representation of the CF is as a Taylor series,

    M_{P_x}(ω) = ⟨e^{-iωx}, P_x(x)⟩ = Σ_{n=0}^{∞} ((iω)^n / n!) ⟨x^n⟩.    (36)

If we take the nth derivative of M_{P_x}(ω) with respect to ω and set ω = 0, then we can determine the moments as

    ⟨x^n⟩ = (1/i^n) ∂^n M_{P_x}(ω)/∂ω^n |_{ω=0},    (37)

so the desired representation is

    M_{P_x}(ω) = Σ_{n=0}^{∞} (ω^n / n!) ∂^n M_{P_x}(ω)/∂ω^n |_{ω=0}.    (38)

Now the recursion relation for the Bessel function is

    J_{p-1}(x) - J_{p+1}(x) = 2 J_p^{(1)}(x),    (39)

so we can evaluate higher-order derivatives of the Bessel function by repeated use of this recursion:

    (1/4) [J_{p-2}(x) - 2J_p(x) + J_{p+2}(x)] = J_p^{(2)}(x),    (40)

    (1/8) [J_{p-3}(x) - 3J_{p-1}(x) + 3J_{p+1}(x) - J_{p+3}(x)] = J_p^{(3)}(x),    (41)

    (1/16) [J_{p-4}(x) - 4J_{p-2}(x) + 6J_p(x) - 4J_{p+2}(x) + J_{p+4}(x)] = J_p^{(4)}(x).    (42)

Continuing this process gives derivatives of the Bessel functions to arbitrary order:

    J_p^{(N)}(x) = (1/2^N) [ J_{p-N}(x) - C(N,1) J_{p+2-N}(x) + C(N,2) J_{p+4-N}(x) - C(N,3) J_{p+6-N}(x) + ...
                   + (-1)^{N-1} C(N,N-1) J_{p+N-2}(x) + (-1)^N J_{p+N}(x) ],    (43)

where J_p^{(N)}(x) denotes the Nth derivative of the Bessel function and C(N,m) is the binomial coefficient, with the understanding that C(N,m) = 0 if m > N.

Noting that J_0(0) = 1 and J_n(0) = 0 for n ≠ 0, only the even derivatives of J_0(x) are nonzero at the origin. The first two moments therefore follow from

    ∂[J_0(ωA)]^N/∂ω |_{ω=0} = N A [J_0(ωA)]^{N-1} J_0^{(1)}(ωA) |_{ω=0} = 0,    (44)

    ∂²[J_0(ωA)]^N/∂ω² |_{ω=0} = -N A²/2.    (45)

Thus the characteristic function can be written as

    M_Ẑ(ω) = 1 - (N A²/4) ω² + (A⁴/8) [N(N-1)/4 + N/8] ω⁴ + ... .    (46)

In general, for the zero-mean normally distributed sinusoid, the CF of the sum of N random variables has the form

    M_Ẑ(ω) = [M_z^{cos}(ω)]^N = [ J_0(ωA) + 2 Σ_{m=1}^{∞} J_{2m}(ωA) e^{-2σ_θ² m²} ]^N.    (47)

Now

    ⟨x⟩ = (1/i) ∂[M_z^{cos}(ω)]^N/∂ω |_{ω=0}
        = (1/i) N [ J_0(ωA) + 2 Σ_{m=1}^{∞} J_{2m}(ωA) e^{-2σ_θ² m²} ]^{N-1}
          × [ A J_0^{(1)}(ωA) + 2A Σ_{m=1}^{∞} J_{2m}^{(1)}(ωA) e^{-2σ_θ² m²} ] |_{ω=0}
        = 0,    (48)

since J_m(0) = 0 for m ≠ 0 and J_{p-1}(x) - J_{p+1}(x) = 2J_p^{(1)}(x). Likewise,

    ⟨x²⟩ = (1/i²) ∂²[M_z^{cos}(ω)]^N/∂ω² |_{ω=0} = (N A²/2) (1 - e^{-2σ_θ²}).    (49)

Note that the odd higher moments are zero. The even higher moments can be calculated by continuing the use of derivatives; in the 2k-th derivative only the term with 2m = 2k contributes through the factor J_{2m}(ωA) e^{-2σ_θ² m²}, giving the leading term

    ⟨x^{2k}⟩ = (A²/4)^k e^{-2σ_θ² k²}.    (50)

If we have combinations that come from M superpositions of one type with N superpositions of another type, the CF is just the product (CF_1)^M (CF_2)^N, and the same trick used above works for the product.
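The moment machinery above is easy to sanity-check. The sketch below (SciPy/NumPy; the amplitude and N are illustrative, not from the paper) builds M_Ẑ(ω) = [J_0(ωA)]^N for uniform phases, extracts ⟨x²⟩ = -∂²M_Ẑ/∂ω²|_{ω=0} = NA²/2 by a finite difference as in (37), and compares against a Monte Carlo average of the sum of sinusoids:

```python
import numpy as np
from scipy.special import j0

A, N = 2.0, 5

# CF of the sum of N uniform-phase sinusoids of amplitude A: [J0(w*A)]^N.
def M(w):
    return j0(w * A) ** N

# Second moment via <x^2> = -M''(0), eq. (37), central finite difference.
h = 1e-4
m2_cf = -(M(h) - 2 * M(0.0) + M(-h)) / h**2

# Monte Carlo: x = sum_i A*cos(theta_i), theta_i uniform on (0, 2*pi).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(200_000, N))
m2_mc = np.mean((A * np.cos(theta)).sum(axis=1) ** 2)

print(m2_cf, m2_mc, N * A**2 / 2)   # all approximately N*A^2/2 = 10
```

The agreement of the two estimates with NA²/2 confirms both the CF-product rule for the sum and the derivative formula for the moments.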

3 Conclusions

It may be possible to develop a recursion formula for the general case of arbitrary superpositions of random variables, based on arguments similar to those used for the zero-mean normal distribution. While we have worked out the PDF for the product ẑ = Â sin(φ̂), we have not determined the equivalent CF for this product. The methodology would then allow us to determine the moment characterization of the sum

    ẑ = Σ_{i=1}^{N} Â_i sin(φ̂_i)    (51)

by the methods discussed here.

References

[1] Abramowitz, M. and Stegun, I. A., Handbook of Mathematical Functions, National Bureau of Standards Applied Mathematics Series 55, Fourth Printing, December 1965.

[2] Arfken, G., Mathematical Methods for Physicists, Third Edition, Academic Press, 1991.

[3] R. Barakat, "Probability Density Functions of Sums of Sinusoidal Waves Having Nonuniform Random Phases and Random Numbers of Multipath", J. Acoust. Soc. Am., 83 (3), March 1988.

[4] P. Beckmann and A. Spizzichino, The Scattering of Electromagnetic Waves from Rough Surfaces, Artech House Inc., 1987.

[5] R. E. Blahut, W. Miller, and C. H. Wilcox, Radar and Sonar Part I, "Theory of Remote Sensing Algorithms", Springer-Verlag, 1991.

[6] L. Cohen, Time-Frequency Analysis, Prentice-Hall, 1994.

[7] J. E. Gray, "An Exact Determination of the Probability Density Function Under Coordinate Transformation", Proceedings of the First IEEE Regional Conference on Aerospace Control Systems, Westlake Village, CA, May 25-27, 1993.

[8] J. E. Gray and S. R. Addison, "A methodology for characterizing phase noise in radar waveforms: an alternative 'Terrain' characterization method", Proceedings of SPIE, Vol. 5410: Radar Sensor Technology VIII and Passive Millimeter-Wave Imaging Technology VII, April 15, 2004.

[9] A. Papoulis, Probability, Random Variables, and Stochastic Processes, Third Edition, McGraw-Hill Book Company, 1991.
