Stochastic Processes

Adapted from Chapter 9 of Probability, Random Variables and Stochastic Processes, 4th Edition, by A. Papoulis and S. U. Pillai.

9. Stochastic Processes

Introduction

Let $\xi$ denote the random outcome of an experiment. To every such outcome suppose a waveform $X(t,\xi)$ is assigned. The collection of such waveforms forms a stochastic process. The set of outcomes $\{\xi_k\}$ and the time index $t$ can each be continuous or discrete (countably infinite or finite) (Fig. 9.1).

For fixed $\xi_i \in S$ (the set of all experimental outcomes), $X(t,\xi_i)$ is a specific time function. For fixed $t_1$,
$$X_1 = X(t_1,\xi_i)$$
is a random variable. The ensemble of all such realizations $X(t,\xi)$ over time represents the stochastic process $X(t)$ (see Fig. 9.1). For example,
$$X(t) = a\cos(\omega_0 t + \varphi),$$
where $\varphi$ is a random variable uniformly distributed in $(0,2\pi)$, represents a stochastic process. Stochastic processes are everywhere: Brownian motion, stock market fluctuations, and various queuing systems all represent stochastic phenomena.

If $X(t)$ is a stochastic process, then for fixed $t$, $X(t)$ represents a random variable. Its distribution function is given by

$$F_X(x,t) = P\{X(t) \le x\}. \tag{9-1}$$

Notice that $F_X(x,t)$ depends on $t$, since for a different $t$ we obtain a different random variable. Further,
$$f_X(x,t) = \frac{\partial F_X(x,t)}{\partial x} \tag{9-2}$$
represents the first-order probability density function of the process $X(t)$.

For $t = t_1$ and $t = t_2$, $X(t)$ represents two different random variables $X_1 = X(t_1)$ and $X_2 = X(t_2)$, respectively. Their joint distribution is given by
$$F_X(x_1,x_2,t_1,t_2) = P\{X(t_1) \le x_1,\; X(t_2) \le x_2\} \tag{9-3}$$
and
$$f_X(x_1,x_2,t_1,t_2) = \frac{\partial^2 F_X(x_1,x_2,t_1,t_2)}{\partial x_1\,\partial x_2} \tag{9-4}$$
represents the second-order density function of the process $X(t)$. Similarly $f_X(x_1,x_2,\ldots,x_n,t_1,t_2,\ldots,t_n)$ represents the $n$th-order density function of the process $X(t)$. Complete specification of the stochastic process $X(t)$ requires the knowledge of $f_X(x_1,x_2,\ldots,x_n,t_1,t_2,\ldots,t_n)$ for all $t_i$, $i = 1,2,\ldots,n$ and for all $n$ (an almost impossible task in reality).

Mean of a Stochastic Process:
$$\mu_X(t) = E\{X(t)\} = \int_{-\infty}^{\infty} x\, f_X(x,t)\,dx \tag{9-5}$$
represents the mean value of a process $X(t)$. In general, the mean of a process can depend on the time index $t$.

The autocorrelation function of a process $X(t)$ is defined as
$$R_{XX}(t_1,t_2) = E\{X(t_1)X^*(t_2)\} = \int\!\!\int x_1 x_2^*\, f_X(x_1,x_2,t_1,t_2)\,dx_1\,dx_2, \tag{9-6}$$
and it represents the interrelationship between the random variables $X_1 = X(t_1)$ and $X_2 = X(t_2)$ generated from the process $X(t)$.

Properties:

1. $R_{XX}(t_1,t_2) = R_{XX}^*(t_2,t_1) = [E\{X(t_2)X^*(t_1)\}]^*$.  (9-7)

2. $R_{XX}(t,t) = E\{|X(t)|^2\} > 0$ (average instantaneous power).

3. $R_{XX}(t_1,t_2)$ represents a nonnegative definite function, i.e., for any set of constants $\{a_i\}_{i=1}^n$,
$$\sum_{i=1}^{n}\sum_{j=1}^{n} a_i a_j^*\, R_{XX}(t_i,t_j) \ge 0. \tag{9-8}$$
Eq. (9-8) follows by noticing that $E\{|Y|^2\} \ge 0$ for $Y = \sum_{i=1}^{n} a_i X(t_i)$.

The function
$$C_{XX}(t_1,t_2) = R_{XX}(t_1,t_2) - \mu_X(t_1)\,\mu_X^*(t_2) \tag{9-9}$$
represents the autocovariance function of the process $X(t)$.

Example 9.1: Let
$$z = \int_{-T}^{T} X(t)\,dt.$$
Then
$$E[|z|^2] = \int_{-T}^{T}\!\int_{-T}^{T} E\{X(t_1)X^*(t_2)\}\,dt_1\,dt_2 = \int_{-T}^{T}\!\int_{-T}^{T} R_{XX}(t_1,t_2)\,dt_1\,dt_2. \tag{9-10}$$

Example 9.2: Let
$$X(t) = a\cos(\omega_0 t + \varphi), \qquad \varphi \sim U(0,2\pi). \tag{9-11}$$
This gives
$$\mu_X(t) = E\{X(t)\} = a\,E\{\cos(\omega_0 t + \varphi)\} = a\cos\omega_0 t\;E\{\cos\varphi\} - a\sin\omega_0 t\;E\{\sin\varphi\} = 0, \tag{9-12}$$
since $E\{\cos\varphi\} = \frac{1}{2\pi}\int_0^{2\pi}\cos\varphi\,d\varphi = 0 = E\{\sin\varphi\}$. Similarly,
$$R_{XX}(t_1,t_2) = a^2\, E\{\cos(\omega_0 t_1 + \varphi)\cos(\omega_0 t_2 + \varphi)\} = \frac{a^2}{2}\, E\{\cos\omega_0(t_1 - t_2) + \cos(\omega_0(t_1 + t_2) + 2\varphi)\} = \frac{a^2}{2}\cos\omega_0(t_1 - t_2). \tag{9-13}$$
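As a quick numerical illustration of (9-12)-(9-13), the ensemble averages can be approximated by Monte Carlo simulation. The following is a minimal sketch, assuming NumPy; the values of $a$, $\omega_0$, the time grid, and the ensemble size are illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
a, w0 = 2.0, 2 * np.pi                  # assumed amplitude and angular frequency
t = np.linspace(0.0, 3.0, 201)          # time grid
phi = rng.uniform(0.0, 2 * np.pi, size=20_000)   # random phase, U(0, 2*pi)

# Ensemble of realizations: one row per outcome phi, one column per time instant
X = a * np.cos(w0 * t[None, :] + phi[:, None])

print(np.abs(X.mean(axis=0)).max())     # ~0 for every t, cf. (9-12)

i, j = 100, 40                          # two arbitrary time instants t1, t2
R_emp = np.mean(X[:, i] * X[:, j])
R_theory = 0.5 * a**2 * np.cos(w0 * (t[i] - t[j]))   # cf. (9-13)
print(R_emp, R_theory)
```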

Stationary Stochastic Processes

Stationary processes exhibit statistical properties that are invariant to shifts of the time index. Thus, for example, second-order stationarity implies that the statistical properties of the pairs $\{X(t_1), X(t_2)\}$ and $\{X(t_1+c), X(t_2+c)\}$ are the same for any $c$. Similarly, first-order stationarity implies that the statistical properties of $X(t_i)$ and $X(t_i+c)$ are the same for any $c$. In strict terms, the statistical properties are governed by the joint probability density function. Hence a process is $n$th-order Strict-Sense Stationary (S.S.S.) if
$$f_X(x_1,x_2,\ldots,x_n,t_1,t_2,\ldots,t_n) = f_X(x_1,x_2,\ldots,x_n,t_1+c,t_2+c,\ldots,t_n+c) \tag{9-14}$$
for any $c$, where the left side represents the joint density function of the random variables $X_1 = X(t_1),\ X_2 = X(t_2),\ \ldots,\ X_n = X(t_n)$ and the right side corresponds to the joint density function of the random variables $X_1' = X(t_1+c),\ X_2' = X(t_2+c),\ \ldots,\ X_n' = X(t_n+c)$. A process $X(t)$ is said to be strict-sense stationary if (9-14) is true for all $t_i$, $i = 1,2,\ldots,n$, for all $n = 1,2,\ldots$, and for any $c$.

For a first-order strict-sense stationary process, from (9-14) we have
$$f_X(x,t) = f_X(x,t+c) \tag{9-15}$$
for any $c$. In particular, $c = -t$ gives
$$f_X(x,t) = f_X(x), \tag{9-16}$$
i.e., the first-order density of $X(t)$ is independent of $t$. In that case
$$E[X(t)] = \int_{-\infty}^{\infty} x\, f_X(x)\,dx = \mu, \ \text{a constant}. \tag{9-17}$$
Similarly, for a second-order strict-sense stationary process we have from (9-14)
$$f_X(x_1,x_2,t_1,t_2) = f_X(x_1,x_2,t_1+c,t_2+c)$$
for any $c$. For $c = -t_2$ we get
$$f_X(x_1,x_2,t_1,t_2) = f_X(x_1,x_2,t_1-t_2), \tag{9-18}$$

i.e., the second-order density function of a strict-sense stationary process depends only on the difference of the time indices $t_1 - t_2 = \tau$. In that case the autocorrelation function is given by
$$R_{XX}(t_1,t_2) = E\{X(t_1)X^*(t_2)\} = \int\!\!\int x_1 x_2^*\, f_X(x_1,x_2,\tau = t_1 - t_2)\,dx_1\,dx_2 = R_{XX}(t_1 - t_2) \triangleq R_{XX}(\tau) = R_{XX}^*(-\tau), \tag{9-19}$$
i.e., the autocorrelation function of a second-order strict-sense stationary process depends only on the difference of the time indices $\tau = t_1 - t_2$.

Notice that (9-17) and (9-19) are consequences of the stochastic process being first- and second-order strict-sense stationary. On the other hand, the basic conditions for first- and second-order stationarity, Eqs. (9-16) and (9-18), are usually difficult to verify. In that case, we often resort to a looser definition of stationarity, known as Wide-Sense Stationarity (W.S.S.), by making use of

(9-17) and (9-19) as the necessary conditions. Thus, a process $X(t)$ is said to be Wide-Sense Stationary if

(i) $E\{X(t)\} = \mu$  (9-20)

and

(ii) $E\{X(t_1)X^*(t_2)\} = R_{XX}(t_1 - t_2)$,  (9-21)

i.e., for wide-sense stationary processes, the mean is a constant and the autocorrelation function depends only on the difference between the time indices. Notice that (9-20)-(9-21) do not say anything about the nature of the probability density functions; they deal instead with the average behavior of the process. Since (9-20)-(9-21) follow from (9-16) and (9-18), strict-sense stationarity always implies wide-sense stationarity. However, the converse is not true in general, the only exception being the Gaussian process. This follows since, if $X(t)$ is a Gaussian process, then by definition $X_1 = X(t_1),\ X_2 = X(t_2),\ \ldots,\ X_n = X(t_n)$ are jointly Gaussian random variables for any $t_1, t_2, \ldots, t_n$, whose joint characteristic function is given by

$$\phi_X(\omega_1,\omega_2,\ldots,\omega_n) = \exp\!\Big( j\sum_{k=1}^{n}\mu(t_k)\,\omega_k \;-\; \tfrac{1}{2}\sum_{i,k}^{n} C_{XX}(t_i,t_k)\,\omega_i\,\omega_k \Big), \tag{9-22}$$
where $C_{XX}(t_i,t_k)$ is as defined in (9-9). If $X(t)$ is wide-sense stationary, then using (9-20)-(9-21) in (9-22) we get
$$\phi_X(\omega_1,\omega_2,\ldots,\omega_n) = \exp\!\Big( j\mu\sum_{k=1}^{n}\omega_k \;-\; \tfrac{1}{2}\sum_{i=1}^{n}\sum_{k=1}^{n} C_{XX}(t_i - t_k)\,\omega_i\,\omega_k \Big), \tag{9-23}$$
and hence if the set of time indices is shifted by a constant $c$ to generate a new set of jointly Gaussian random variables $X_1' = X(t_1+c),\ X_2' = X(t_2+c),\ \ldots,\ X_n' = X(t_n+c)$, then their joint characteristic function is identical to (9-23). Thus the sets of random variables $\{X_i\}_{i=1}^n$ and $\{X_i'\}_{i=1}^n$ have the same joint probability distribution for all $n$ and all $c$, establishing the strict-sense stationarity of Gaussian processes from their wide-sense stationarity.

To summarize, if $X(t)$ is a Gaussian process, then wide-sense stationarity (w.s.s.) $\Rightarrow$ strict-sense stationarity (s.s.s.). This is because the joint p.d.f. of Gaussian random variables depends only on their second-order statistics, which is also the basis

for wide-sense stationarity. From (9-12)-(9-13) (refer to Example 9.2), the process $X(t) = a\cos(\omega_0 t + \varphi)$ in (9-11) is wide-sense stationary; note that for a non-Gaussian process such as this one, wide-sense stationarity by itself does not establish strict-sense stationarity.

Similarly, if $X(t)$ is a zero-mean wide-sense stationary process in Example 9.1, then $\sigma_z^2$ in (9-10) reduces to
$$\sigma_z^2 = E\{|z|^2\} = \int_{-T}^{T}\!\int_{-T}^{T} R_{XX}(t_1 - t_2)\,dt_1\,dt_2.$$
As $t_1, t_2$ vary from $-T$ to $+T$, $\tau = t_1 - t_2$ varies from $-2T$ to $+2T$. Moreover, $R_{XX}(\tau)$ is constant over each strip $\tau = t_1 - t_2$ of the square in Fig. 9.2, whose area is given by (for $\tau > 0$)
$$\tfrac{1}{2}(2T-\tau)^2 - \tfrac{1}{2}(2T-\tau-d\tau)^2 = (2T-\tau)\,d\tau,$$
and hence the above integral reduces to
$$\sigma_z^2 = \int_{-2T}^{2T} R_{XX}(\tau)\,(2T - |\tau|)\,d\tau = 2T\int_{-2T}^{2T} R_{XX}(\tau)\Big(1 - \frac{|\tau|}{2T}\Big)d\tau. \tag{9-24}$$
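The reduction of the double integral to the single integral in (9-24) is easy to verify numerically. A minimal sketch, assuming NumPy and an illustrative autocorrelation $R_{XX}(\tau) = e^{-|\tau|}$ (the choice of $R_{XX}$ and $T$ is not from the text):

```python
import numpy as np

T = 1.0
R = lambda tau: np.exp(-np.abs(tau))        # assumed w.s.s. autocorrelation

# Double integral of R(t1 - t2) over the square [-T, T] x [-T, T]
t = np.linspace(-T, T, 1001)
dt = t[1] - t[0]
t1, t2 = np.meshgrid(t, t, indexing="ij")
double = np.sum(R(t1 - t2)) * dt**2

# Equivalent single integral (9-24)
tau = np.linspace(-2 * T, 2 * T, 2001)
dtau = tau[1] - tau[0]
single = np.sum(R(tau) * (2 * T - np.abs(tau))) * dtau

print(double, single)                        # both approximate sigma_z^2
```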

Systems with Stochastic Inputs

A deterministic system¹ transforms each input waveform $X(t,\xi_i)$ into an output waveform $Y(t,\xi_i) = T[X(t,\xi_i)]$ by operating only on the time variable $t$. Thus a set of realizations at the input corresponding to a process $X(t)$ generates a new set of realizations $\{Y(t,\xi)\}$ at the output associated with a new process $Y(t)$ (Fig. 9.3). Our goal is to study the output process statistics in terms of the input process statistics and the system function.

¹A stochastic system, on the other hand, operates on both the variables $t$ and $\xi$.

Deterministic Systems

Deterministic systems fall into two classes (Fig. 9.3): memoryless systems, where $Y(t) = g[X(t)]$, and systems with memory. Systems with memory may be time-varying or time-invariant, and among them linear systems $Y(t) = L[X(t)]$ are of special interest. A system that is both linear and time-invariant (LTI) is characterized by an impulse response $h(t)$, with
$$Y(t) = \int_{-\infty}^{\infty} h(t-\tau)\,X(\tau)\,d\tau = \int_{-\infty}^{\infty} h(\tau)\,X(t-\tau)\,d\tau.$$

Memoryless Systems: The output $Y(t)$ in this case depends only on the present value of the input $X(t)$, i.e.,
$$Y(t) = g\{X(t)\}. \tag{9-25}$$
As summarized in Fig. 9.4:

- A strict-sense stationary input to a memoryless system produces a strict-sense stationary output (see (9-76), Text, for a proof).
- A wide-sense stationary input to a memoryless system produces an output that need not be stationary in any sense.
- A stationary Gaussian input $X(t)$ with autocorrelation $R_{XX}(\tau)$ produces an output $Y(t)$ that is stationary, but not Gaussian, with $R_{XY}(\tau) = \eta R_{XX}(\tau)$ (see (9-26)).

Theorem: If $X(t)$ is a zero-mean stationary Gaussian process and $Y(t) = g[X(t)]$, where $g(\cdot)$ represents a nonlinear memoryless device, then
$$R_{XY}(\tau) = \eta\, R_{XX}(\tau), \qquad \eta = E\{g'(X)\}. \tag{9-26}$$

Proof:
$$R_{XY}(\tau) = E\{X(t)\,Y(t+\tau)\} = E[X(t)\,g\{X(t+\tau)\}] = \int\!\!\int x_1\, g(x_2)\, f_{X_1X_2}(x_1,x_2)\,dx_1\,dx_2, \tag{9-27}$$
where $X_1 = X(t)$ and $X_2 = X(t+\tau)$ are jointly Gaussian random variables, and hence
$$f_{X_1X_2}(x_1,x_2) = \frac{1}{2\pi\sqrt{|A|}}\, e^{-x^* A^{-1} x/2}, \qquad X = (X_1, X_2)^T, \quad x = (x_1, x_2)^T,$$
$$A = E\{X X^*\} = \begin{pmatrix} R_{XX}(0) & R_{XX}(\tau) \\ R_{XX}(\tau) & R_{XX}(0) \end{pmatrix} = L L^*,$$

where $L$ is an upper triangular factor matrix with positive diagonal entries, i.e.,
$$L = \begin{pmatrix} l_{11} & l_{12} \\ 0 & l_{22} \end{pmatrix}.$$
Consider the transformation
$$Z = L^{-1}X = (Z_1, Z_2)^T, \qquad z = L^{-1}x = (z_1, z_2)^T,$$
so that
$$E\{Z Z^*\} = L^{-1}\, E\{X X^*\}\, L^{*-1} = L^{-1} A L^{*-1} = I,$$
and hence $Z_1, Z_2$ are zero-mean independent Gaussian random variables. Also $x = Lz$ gives
$$x_1 = l_{11}z_1 + l_{12}z_2, \qquad x_2 = l_{22}z_2,$$
and hence
$$x^* A^{-1} x = z^* L^* A^{-1} L z = z^* z = z_1^2 + z_2^2.$$
The Jacobian of the transformation is given by

| J || L1 || A |1 / 2 .

Hence substituting these into (9-27), (9 27) we obtain 

RXY ( )   



  (l

11





 l11  



 l12  

z1  l12 z2 ) g (l22 z2 ) 

1 1 | J | 2 | A|1/ 2

  z g (l

22

z2 ) f z1 ( z1 ) f z2 ( z2 )dz1dz2

22

z2 ) f z1 ( z1 ) f z2 ( z2 )dz1dz2

1



  z g (l



2

e

 z12 / 2  z22 / 2

0

e



 l11   z1 f z1 ( z1 )dz d 1   g (l22 z2 ) f z2 ( z2 )dz d2 

 l12   z2 g (l22 z2 ) f z2 ( z2 ) dz2  1 e z / 2 2



l l2

12 22



  ug (u )

1 2

where u  l22 z2 . This gives

e

2 2

 u 2 / 2 l222

du, 19

PILLAI/Cha

Since $f_u'(u) = -\dfrac{u}{l_{22}^2}\, f_u(u)$, we have $u\, f_u(u) = -l_{22}^2\, f_u'(u)$, and hence
$$R_{XY}(\tau) = -l_{12}\,l_{22}\int_{-\infty}^{\infty} g(u)\, f_u'(u)\,du = -R_{XX}(\tau)\int_{-\infty}^{\infty} g(u)\, f_u'(u)\,du,$$
since $A = LL^*$ gives $l_{12}\,l_{22} = R_{XX}(\tau)$. Hence, integrating by parts,
$$R_{XY}(\tau) = R_{XX}(\tau)\Big\{ -g(u)\,f_u(u)\Big|_{-\infty}^{\infty} + \int_{-\infty}^{\infty} g'(u)\,f_u(u)\,du \Big\} = R_{XX}(\tau)\, E\{g'(X)\} = \eta\, R_{XX}(\tau),$$
the desired result, where $\eta = E[g'(X)]$; note that $f_u$ is the density of $X$, since $l_{22}^2 = R_{XX}(0)$. Thus if the input to a memoryless device is stationary Gaussian, the cross-correlation function between the input and the output is proportional to the input autocorrelation function.
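This theorem (a Bussgang-type result) is easy to check by simulation: pass a stationary Gaussian sequence through a memoryless nonlinearity and compare $R_{XY}$ with $\eta R_{XX}$. A minimal sketch, assuming NumPy, $g = \tanh$, and an AR(1)-generated Gaussian input (all illustrative choices not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 200_000, 0.8
w = rng.normal(size=n)
x = np.zeros(n)
for k in range(1, n):                       # unit-variance stationary Gaussian sequence
    x[k] = rho * x[k-1] + np.sqrt(1 - rho**2) * w[k]

y = np.tanh(x)                              # memoryless nonlinear device g(x)
eta = np.mean(1.0 / np.cosh(x)**2)          # eta = E{g'(X)}, since g'(x) = sech^2(x)

for lag in (1, 3, 5):
    Rxx = np.mean(x[:n-lag] * x[lag:])
    Rxy = np.mean(x[:n-lag] * y[lag:])
    print(Rxy, eta * Rxx)                   # the two columns should agree, cf. (9-26)
```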

Linear Systems: $L[\cdot]$ represents a linear system if
$$L\{a_1 X(t_1) + a_2 X(t_2)\} = a_1 L\{X(t_1)\} + a_2 L\{X(t_2)\}. \tag{9-28}$$
Let
$$Y(t) = L\{X(t)\} \tag{9-29}$$
represent the output of a linear system.

Time-Invariant System: $L[\cdot]$ represents a time-invariant system if
$$Y(t) = L\{X(t)\} \;\Rightarrow\; L\{X(t - t_0)\} = Y(t - t_0), \tag{9-30}$$
i.e., a shift in the input results in the same shift in the output. If $L[\cdot]$ satisfies both (9-28) and (9-30), then it corresponds to a linear time-invariant (LTI) system. LTI systems can be uniquely represented in terms of their output to a delta function: the impulse $\delta(t)$ applied to an LTI system produces the impulse response $h(t)$ (Fig. 9.5).

Then for an arbitrary input $X(t)$ (Fig. 9.6),
$$Y(t) = \int_{-\infty}^{\infty} h(t-\tau)\,X(\tau)\,d\tau = \int_{-\infty}^{\infty} h(\tau)\,X(t-\tau)\,d\tau. \tag{9-31}$$
Eq. (9-31) follows by expressing $X(t)$ as
$$X(t) = \int_{-\infty}^{\infty} X(\tau)\,\delta(t-\tau)\,d\tau \tag{9-32}$$
and applying (9-28) and (9-30) to $Y(t) = L\{X(t)\}$. Thus
$$Y(t) = L\{X(t)\} = L\Big\{\int_{-\infty}^{\infty} X(\tau)\,\delta(t-\tau)\,d\tau\Big\}$$
$$= \int_{-\infty}^{\infty} L\{X(\tau)\,\delta(t-\tau)\}\,d\tau \qquad \text{(by linearity)}$$
$$= \int_{-\infty}^{\infty} X(\tau)\, L\{\delta(t-\tau)\}\,d\tau \qquad \text{(by time invariance)}$$
$$= \int_{-\infty}^{\infty} X(\tau)\, h(t-\tau)\,d\tau = \int_{-\infty}^{\infty} h(\tau)\, X(t-\tau)\,d\tau. \tag{9-33}$$

Output Statistics: Using (9-33), the mean of the output process is given by
$$\mu_Y(t) = E\{Y(t)\} = \int_{-\infty}^{\infty} E\{X(\tau)\}\,h(t-\tau)\,d\tau = \int_{-\infty}^{\infty} \mu_X(\tau)\,h(t-\tau)\,d\tau = \mu_X(t) * h(t). \tag{9-34}$$
Similarly, the cross-correlation function between the input and output processes is given by
$$R_{XY}(t_1,t_2) = E\{X(t_1)\,Y^*(t_2)\} = E\Big\{X(t_1)\int_{-\infty}^{\infty} X^*(t_2-\alpha)\,h^*(\alpha)\,d\alpha\Big\}$$
$$= \int_{-\infty}^{\infty} E\{X(t_1)X^*(t_2-\alpha)\}\,h^*(\alpha)\,d\alpha = \int_{-\infty}^{\infty} R_{XX}(t_1,t_2-\alpha)\,h^*(\alpha)\,d\alpha = R_{XX}(t_1,t_2) * h^*(t_2). \tag{9-35}$$
Finally, the output autocorrelation function is given by

$$R_{YY}(t_1,t_2) = E\{Y(t_1)\,Y^*(t_2)\} = E\Big\{\int_{-\infty}^{\infty} X(t_1-\beta)\,h(\beta)\,d\beta\;\, Y^*(t_2)\Big\}$$
$$= \int_{-\infty}^{\infty} E\{X(t_1-\beta)\,Y^*(t_2)\}\,h(\beta)\,d\beta = \int_{-\infty}^{\infty} R_{XY}(t_1-\beta,t_2)\,h(\beta)\,d\beta = R_{XY}(t_1,t_2) * h(t_1), \tag{9-36}$$
or
$$R_{YY}(t_1,t_2) = R_{XX}(t_1,t_2) * h^*(t_2) * h(t_1). \tag{9-37}$$
(Fig. 9.7: (a) $\mu_X(t) \to h(t) \to \mu_Y(t)$; (b) $R_{XX}(t_1,t_2) \to h^*(t_2) \to R_{XY}(t_1,t_2) \to h(t_1) \to R_{YY}(t_1,t_2)$.)

In particular, if $X(t)$ is wide-sense stationary, then we have $\mu_X(t) = \mu_X$, so that from (9-34)
$$\mu_Y(t) = \mu_X \int_{-\infty}^{\infty} h(\tau)\,d\tau = \mu_X\, c, \ \text{a constant}. \tag{9-38}$$
Also $R_{XX}(t_1,t_2) = R_{XX}(t_1-t_2)$, so that (9-35) reduces to
$$R_{XY}(t_1,t_2) = \int_{-\infty}^{\infty} R_{XX}(t_1-t_2+\alpha)\,h^*(\alpha)\,d\alpha = R_{XX}(\tau) * h^*(-\tau) = R_{XY}(\tau), \qquad \tau = t_1 - t_2. \tag{9-39}$$
Thus $X(t)$ and $Y(t)$ are jointly w.s.s. Further, from (9-36), the output autocorrelation simplifies to
$$R_{YY}(t_1,t_2) = \int_{-\infty}^{\infty} R_{XY}(t_1-\beta-t_2)\,h(\beta)\,d\beta = R_{XY}(\tau) * h(\tau) = R_{YY}(\tau), \qquad \tau = t_1 - t_2. \tag{9-40}$$
From (9-37), we obtain
$$R_{YY}(\tau) = R_{XX}(\tau) * h^*(-\tau) * h(\tau). \tag{9-41}$$
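Relation (9-41) can be evaluated on a grid. A minimal sketch, assuming NumPy, a real causal impulse response $h(t) = e^{-2t}u(t)$, and $R_{XX}(\tau) = e^{-|\tau|}$ (illustrative choices): for real $h$, $h^*(-\tau) * h(\tau)$ is the deterministic autocorrelation of $h$ with itself, so (9-41) becomes a convolution of $R_{XX}$ with that autocorrelation.

```python
import numpy as np

dt = 0.01
tau = np.arange(-5.0, 5.0, dt)
Rxx = np.exp(-np.abs(tau))                  # assumed w.s.s. input autocorrelation
t = np.arange(0.0, 5.0, dt)
h = np.exp(-2.0 * t)                        # assumed causal LTI impulse response

rho = np.correlate(h, h, mode="full") * dt  # h(tau) * h(-tau), symmetric about lag 0
Ryy = np.convolve(Rxx, rho, mode="same") * dt   # (9-41) evaluated on the grid

print(Ryy[len(tau) // 2])                   # R_YY(0) = E{|Y(t)|^2} for this model
```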

From (9-38)-(9-40), the output process is also wide-sense stationary. This gives rise to the following representation (Fig. 9.8):

(a) a wide-sense stationary process $X(t)$ applied to an LTI system $h(t)$ yields a wide-sense stationary output $Y(t)$;

(b) a strict-sense stationary process $X(t)$ applied to an LTI system $h(t)$ yields a strict-sense stationary output $Y(t)$ (see Text for a proof);

(c) a Gaussian process $X(t)$ (also stationary) applied to a linear system yields a Gaussian output $Y(t)$ (also stationary).

White Noise Process: $W(t)$ is said to be a white noise process if
$$R_{WW}(t_1,t_2) = q(t_1)\,\delta(t_1 - t_2), \tag{9-42}$$
i.e., $E[W(t_1)W^*(t_2)] = 0$ unless $t_1 = t_2$. $W(t)$ is said to be wide-sense stationary (w.s.s.) white noise if $E[W(t)] = $ constant and
$$R_{WW}(t_1,t_2) = q\,\delta(t_1 - t_2) = q\,\delta(\tau). \tag{9-43}$$
If $W(t)$ is also a Gaussian process (white Gaussian noise), then all of its samples are independent random variables (why?).

For a w.s.s. white noise input $W(t)$ applied to an LTI system $h(t)$ (Fig. 9.9), the output is the colored noise $N(t) = h(t) * W(t)$, and we have
$$E[N(t)] = \mu_W \int_{-\infty}^{\infty} h(\tau)\,d\tau, \ \text{a constant}, \tag{9-44}$$
and
$$R_{nn}(\tau) = q\,\delta(\tau) * h^*(-\tau) * h(\tau) = q\,h^*(-\tau) * h(\tau) = q\,\rho(\tau), \tag{9-45}$$
where
$$\rho(\tau) = h(\tau) * h^*(-\tau) = \int_{-\infty}^{\infty} h(\alpha)\,h^*(\alpha + \tau)\,d\alpha. \tag{9-46}$$
Thus the output of a white noise process through an LTI system represents a (colored) noise process.

Note: White noise need not be Gaussian. "White" and "Gaussian" are two different concepts!
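A discrete analogue of Fig. 9.9 and (9-45): filtering white noise with an FIR impulse response produces colored noise whose autocorrelation equals $q\,\rho(n)$. A minimal sketch, assuming NumPy and an illustrative geometric impulse response:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
w = rng.normal(size=n)                      # discrete white noise, q = 1
h = 0.5 ** np.arange(20)                    # assumed FIR impulse response
x = np.convolve(w, h, mode="full")[:n]      # colored noise N = h * W

rho = lambda m: np.sum(h[:len(h)-m] * h[m:])      # discrete version of (9-46)
for m in (0, 1, 5):
    acf = np.mean(x[:n-m] * x[m:])
    print(acf, rho(m))                      # empirical vs. q * rho(m)
```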

Upcrossings and Downcrossings of a Stationary Gaussian Process:

Consider a zero-mean stationary Gaussian process $X(t)$ with autocorrelation function $R_{XX}(\tau)$. An upcrossing over the mean value occurs whenever a realization of $X(t)$ passes through zero with positive slope; a downcrossing occurs when it passes through zero with negative slope (Fig. 9.10). Let $\rho\,\Delta t$ represent the probability of such an upcrossing in the interval $(t, t+\Delta t)$. We wish to determine $\rho$.

Since $X(t)$ is a stationary Gaussian process, its derivative process $X'(t)$ is also zero-mean stationary Gaussian with autocorrelation function $R_{X'X'}(\tau) = -R_{XX}''(\tau)$ (see (9-101)-(9-106), Text). Further, $X(t)$ and $X'(t)$ are jointly Gaussian stationary processes, and since (see (9-106), Text)
$$R_{XX'}(\tau) = -\frac{dR_{XX}(\tau)}{d\tau}$$
and $R_{XX}(\tau)$ is an even function, $R_{XX'}(\tau)$ is odd, i.e., we have
$$R_{XX'}(-\tau) = -R_{XX'}(\tau), \tag{9-47}$$
which for $\tau = 0$ gives
$$R_{XX'}(0) = 0 \;\Rightarrow\; E[X(t)\,X'(t)] = 0, \tag{9-48}$$
i.e., the jointly Gaussian zero-mean random variables
$$X_1 = X(t) \quad \text{and} \quad X_2 = X'(t) \tag{9-49}$$
are uncorrelated and hence independent, with variances
$$\sigma_1^2 = R_{XX}(0) \quad \text{and} \quad \sigma_2^2 = R_{X'X'}(0) = -R_{XX}''(0) > 0, \tag{9-50}$$
respectively. Thus
$$f_{X_1X_2}(x_1,x_2) = f_{X_1}(x_1)\,f_{X_2}(x_2) = \frac{1}{2\pi\sigma_1\sigma_2}\, e^{-\left(\frac{x_1^2}{2\sigma_1^2} + \frac{x_2^2}{2\sigma_2^2}\right)}. \tag{9-51}$$
To determine $\rho$, the upcrossing rate,

we argue as follows: In an interval $(t, t+\Delta t)$, the realization moves from $X(t) = X_1$ to
$$X(t+\Delta t) \simeq X(t) + X'(t)\,\Delta t = X_1 + X_2\,\Delta t,$$
and hence the realization crosses the zero level with positive slope somewhere in that interval (Fig. 9.11) if
$$X_1 < 0, \qquad X_2 > 0, \qquad \text{and} \qquad X(t+\Delta t) = X_1 + X_2\,\Delta t > 0,$$
i.e., $X_1 > -X_2\,\Delta t$. Hence the probability of an upcrossing in $(t, t+\Delta t)$ is given by
$$\rho\,\Delta t = \int_{x_2=0}^{\infty}\int_{x_1=-x_2\Delta t}^{0} f_{X_1X_2}(x_1,x_2)\,dx_1\,dx_2 \tag{9-52}$$
$$= \int_{0}^{\infty} f_{X_2}(x_2)\,dx_2 \int_{-x_2\Delta t}^{0} f_{X_1}(x_1)\,dx_1. \tag{9-53}$$
Differentiating both sides of (9-53) with respect to $\Delta t$, we get
$$\rho = \int_{0}^{\infty} f_{X_2}(x_2)\, x_2\, f_{X_1}(-x_2\Delta t)\,dx_2, \tag{9-54}$$
and letting $\Delta t \to 0$, Eq. (9-54) reduces to
$$\rho = \int_{0}^{\infty} x_2\, f_{X_2}(x_2)\, f_{X_1}(0)\,dx_2 = \frac{1}{\sqrt{2\pi R_{XX}(0)}} \int_{0}^{\infty} x_2\, f_{X_2}(x_2)\,dx_2 = \frac{1}{\sqrt{2\pi R_{XX}(0)}}\cdot\frac{\sigma_2}{\sqrt{2\pi}} = \frac{1}{2\pi}\sqrt{\frac{-R_{XX}''(0)}{R_{XX}(0)}} \tag{9-55}$$
[where we have made use of (5-78), Text]. There is an equal probability for downcrossings, and hence the total probability of crossing the zero line in an interval $(t, t+\Delta t)$ equals $\rho_0\,\Delta t$, where
$$\rho_0 = \frac{1}{\pi}\sqrt{-R_{XX}''(0)/R_{XX}(0)} \ge 0. \tag{9-56}$$
It follows that in a long interval $T$, there will be approximately $\rho_0 T$ crossings of the mean value. If $-R_{XX}''(0)$ is large, the autocorrelation function $R_{XX}(\tau)$ decays more rapidly as $\tau$ moves away from zero, implying large random variation around the origin (mean value) for $X(t)$; the likelihood of zero crossings should then increase with an increase in $-R_{XX}''(0)$, agreeing with (9-56).
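A Monte Carlo check of (9-56), sketched under the following assumptions: NumPy is available, and the Gaussian process is synthesized by smoothing white noise with a Gaussian kernel of width $a$, for which $R_{XX}(\tau) \propto e^{-\tau^2/4a^2}$ and hence $-R_{XX}''(0)/R_{XX}(0) = 1/2a^2$ (the kernel and its width are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(3)
dt, a = 0.01, 0.5
t = np.arange(-4 * a, 4 * a, dt)
h = np.exp(-t**2 / (2 * a**2))              # Gaussian smoothing kernel
x = np.convolve(rng.normal(size=200_000), h, mode="same")   # stationary Gaussian process

crossings = np.sum(np.sign(x[:-1]) != np.sign(x[1:]))
rate_emp = crossings / (len(x) * dt)        # empirical zero-crossing rate
rate_th = (1 / np.pi) * np.sqrt(1 / (2 * a**2))   # rho_0 from (9-56) for this R_XX
print(rate_emp, rate_th)
```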

Discrete-Time Stochastic Processes:

A discrete-time stochastic process $X_n = X(nT)$ is a sequence of random variables. The mean, autocorrelation and autocovariance functions of a discrete-time process are given by
$$\mu_n = E\{X(nT)\}, \tag{9-57}$$
$$R(n_1,n_2) = E\{X(n_1T)\,X^*(n_2T)\}, \tag{9-58}$$
and
$$C(n_1,n_2) = R(n_1,n_2) - \mu_{n_1}\,\mu_{n_2}^*, \tag{9-59}$$
respectively. As before, the strict-sense and wide-sense stationarity definitions apply here also. For example, $X(nT)$ is wide-sense stationary if
$$E\{X(nT)\} = \mu, \ \text{a constant}, \tag{9-60}$$
and
$$E[X\{(k+n)T\}\,X^*\{kT\}] = R(n) = r_n = r_{-n}^*, \tag{9-61}$$
i.e., $R(n_1,n_2) = R(n_1-n_2) = R^*(n_2-n_1)$. The nonnegative-definite property of the autocorrelation sequence in (9-8) can be expressed in terms of certain Hermitian-Toeplitz matrices as follows:

Theorem: A sequence $\{r_n\}_{-\infty}^{\infty}$ forms an autocorrelation sequence of a wide-sense stationary stochastic process if and only if every Hermitian-Toeplitz matrix $T_n$ given by
$$T_n = \begin{pmatrix} r_0 & r_1 & r_2 & \cdots & r_n \\ r_1^* & r_0 & r_1 & \cdots & r_{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ r_n^* & r_{n-1}^* & r_{n-2}^* & \cdots & r_0 \end{pmatrix} = T_n^* \tag{9-62}$$
is nonnegative (positive) definite for $n = 0, 1, 2, \ldots$.

Proof: Let $a = [a_0, a_1, \ldots, a_n]^T$ represent an arbitrary constant vector. Then from (9-62),
$$a^* T_n a = \sum_{i=0}^{n}\sum_{k=0}^{n} a_i a_k^*\, r_{k-i}, \tag{9-63}$$
since the Toeplitz character gives $(T_n)_{i,k} = r_{k-i}$. Using (9-61), Eq. (9-63) reduces to
$$a^* T_n a = \sum_{i=0}^{n}\sum_{k=0}^{n} a_i a_k^*\, E\{X(kT)\,X^*(iT)\} = E\Big\{\Big|\sum_{k=0}^{n} a_k^* X(kT)\Big|^2\Big\} \ge 0. \tag{9-64}$$
From (9-64), if $X(nT)$ is a wide-sense stationary stochastic process, then $T_n$ is a nonnegative-definite matrix for every $n = 0, 1, 2, \ldots$. Similarly the converse also follows from (9-64). (See Section 9.4, Text.)

If $X(nT)$ represents a wide-sense stationary input to a discrete-time system $\{h(nT)\}$, and $Y(nT)$ the system output, then as before the cross-correlation function satisfies
$$R_{XY}(n) = R_{XX}(n) * h^*(-n) \tag{9-65}$$
and the output autocorrelation function is given by
$$R_{YY}(n) = R_{XY}(n) * h(n), \tag{9-66}$$
or
$$R_{YY}(n) = R_{XX}(n) * h^*(-n) * h(n). \tag{9-67}$$
Thus wide-sense stationarity is preserved from input to output for discrete-time systems also.
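The Hermitian-Toeplitz test (9-62) is straightforward to apply numerically: build $T_n$ from a candidate sequence and inspect its eigenvalues. A minimal sketch, assuming NumPy/SciPy; both candidate sequences are illustrative, the first being the AR(1) autocorrelation $r_k = a^k$ that appears in (9-77) below:

```python
import numpy as np
from scipy.linalg import toeplitz

def is_autocorrelation(r, tol=1e-12):
    """Check nonnegative definiteness of the Toeplitz matrix T_n in (9-62)."""
    eigs = np.linalg.eigvalsh(toeplitz(r))   # (T_n)_{i,k} = r_{|k-i|} (real case)
    return bool(np.all(eigs >= -tol)), eigs

print(is_autocorrelation(0.8 ** np.arange(4)))          # valid: r_k = a^k, |a| < 1
print(is_autocorrelation(np.array([1.0, 0.9, -0.9])))   # fails the test
```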

Auto Regressive Moving Average (ARMA) Processes

Consider an input-output representation
$$X(n) = -\sum_{k=1}^{p} a_k X(n-k) + \sum_{k=0}^{q} b_k W(n-k), \tag{9-68}$$
where $X(n)$ may be considered as the output of a system $\{h(n)\}$ driven by the input $W(n)$ (Fig. 9.12). The $z$-transform of (9-68) gives
$$X(z)\sum_{k=0}^{p} a_k z^{-k} = W(z)\sum_{k=0}^{q} b_k z^{-k}, \qquad a_0 \equiv 1, \tag{9-69}$$
or
$$H(z) = \sum_{k=0}^{\infty} h(k)\, z^{-k} = \frac{X(z)}{W(z)} = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_q z^{-q}}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_p z^{-p}} = \frac{B(z)}{A(z)} \tag{9-70}$$

represents the transfer function of the associated system response $\{h(n)\}$ in Fig. 9.12, so that
$$X(n) = \sum_{k=0}^{\infty} h(n-k)\, W(k). \tag{9-71}$$
Notice that the transfer function $H(z)$ in (9-70) is rational with $p$ poles and $q$ zeros that determine the model order of the underlying system. From (9-68), the output undergoes regression over $p$ of its previous values, and at the same time a moving average of the input based on $W(n), W(n-1), \ldots, W(n-q)$, i.e., over $(q+1)$ values, is added to it, thus generating an Auto Regressive Moving Average (ARMA(p, q)) process $X(n)$. Generally the input $\{W(n)\}$ represents a sequence of uncorrelated random variables of zero mean and constant variance $\sigma_W^2$, so that
$$R_{WW}(n) = \sigma_W^2\,\delta(n). \tag{9-72}$$
If, in addition, $\{W(n)\}$ is normally distributed, then the output $\{X(n)\}$ also represents a strict-sense stationary normal process. If $q = 0$, then (9-68) represents an AR(p) process (all-pole process), and if $p = 0$, then (9-68) represents an MA(q)

process (all-zero process). Next, we shall discuss AR(1) and AR(2) processes through explicit calculations.

AR(1) Process: An AR(1) process has the form (see (9-68))
$$X(n) = aX(n-1) + W(n), \tag{9-73}$$
and from (9-70) the corresponding system transfer function is
$$H(z) = \frac{1}{1 - az^{-1}} = \sum_{n=0}^{\infty} a^n z^{-n}, \tag{9-74}$$
provided $|a| < 1$. Thus
$$h(n) = a^n, \qquad |a| < 1, \tag{9-75}$$
represents the impulse response of a stable AR(1) system. Using (9-67) together with (9-72) and (9-75), we get the output autocorrelation sequence of an AR(1) process to be
$$R_{XX}(n) = \sigma_W^2\,\delta(n) * \{a^{-n}\} * \{a^n\} = \sigma_W^2 \sum_{k=0}^{\infty} a^{|n|+k}\, a^k = \sigma_W^2\, \frac{a^{|n|}}{1-a^2}, \tag{9-76}$$

where we have made use of the discrete version of (9-46). The normalized (in terms of $R_{XX}(0)$) output autocorrelation sequence is given by
$$\rho_X(n) = \frac{R_{XX}(n)}{R_{XX}(0)} = a^{|n|}, \qquad |n| \ge 0. \tag{9-77}$$
It is instructive to compare the AR(1) model discussed above with one obtained by superimposing a random component on it, which may be an error term associated with observing a first-order AR process $X(n)$. Thus
$$Y(n) = X(n) + V(n), \tag{9-78}$$
where $X(n) \sim$ AR(1) as in (9-73), and $V(n)$ is an uncorrelated random sequence with zero mean and variance $\sigma_V^2$ that is also uncorrelated with $\{W(n)\}$. From (9-73) and (9-78), we obtain the output autocorrelation of the observed process $Y(n)$ to be
$$R_{YY}(n) = R_{XX}(n) + R_{VV}(n) = R_{XX}(n) + \sigma_V^2\,\delta(n) = \sigma_W^2\,\frac{a^{|n|}}{1-a^2} + \sigma_V^2\,\delta(n), \tag{9-79}$$

so that its normalized version is given by
$$\rho_Y(n) = \frac{R_{YY}(n)}{R_{YY}(0)} = \begin{cases} 1, & n = 0 \\ c\,a^{|n|}, & n = \pm 1, \pm 2, \ldots \end{cases} \tag{9-80}$$
where
$$c = \frac{\sigma_W^2}{\sigma_W^2 + \sigma_V^2(1-a^2)} < 1. \tag{9-81}$$
Eqs. (9-77) and (9-80) demonstrate the effect of superimposing an error sequence on an AR(1) model. For nonzero lags, the autocorrelation of the observed sequence $\{Y(n)\}$ is reduced by a constant factor compared to the original process $\{X(n)\}$; thus $\rho_X(0) = \rho_Y(0) = 1$, while $\rho_Y(k) < \rho_X(k)$ for $k \ne 0$ (Fig. 9.13). From (9-78), the superimposed error sequence $V(n)$ only affects the corresponding term in $Y(n)$ (term by term). However, a particular term in the "input sequence" $W(n)$ affects $X(n)$ and $Y(n)$ as well as all subsequent observations.
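A simulation sketch of (9-77) and (9-80)-(9-81), assuming NumPy and illustrative values of $a$, $\sigma_W$, $\sigma_V$: the lagged sample correlations of $X(n)$ should track $a^{|n|}$, while those of the noisy observation $Y(n)$ should be scaled down by the factor $c$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, a, sw, sv = 200_000, 0.7, 1.0, 1.0
x = np.zeros(n)
w = rng.normal(scale=sw, size=n)
for k in range(1, n):
    x[k] = a * x[k-1] + w[k]                # AR(1) process (9-73)
y = x + rng.normal(scale=sv, size=n)        # observed process (9-78)

def rho(v, lag):
    return np.mean(v[:n-lag] * v[lag:]) / np.mean(v * v)

c = sw**2 / (sw**2 + sv**2 * (1 - a**2))    # attenuation factor (9-81)
for lag in (1, 2, 3):
    print(rho(x, lag), a**lag, rho(y, lag), c * a**lag)
```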

AR(2) Process: An AR(2) process has the form
$$X(n) = a_1 X(n-1) + a_2 X(n-2) + W(n), \tag{9-82}$$
and from (9-70) the corresponding transfer function is given by
$$H(z) = \sum_{n=0}^{\infty} h(n)\, z^{-n} = \frac{1}{1 - a_1 z^{-1} - a_2 z^{-2}} = \frac{b_1}{1 - \lambda_1 z^{-1}} + \frac{b_2}{1 - \lambda_2 z^{-1}}, \tag{9-83}$$
so that
$$h(0) = 1, \qquad h(1) = a_1, \qquad h(n) = a_1 h(n-1) + a_2 h(n-2), \quad n \ge 2, \tag{9-84}$$
and in terms of the poles $\lambda_1$ and $\lambda_2$ of the transfer function, from (9-83) we have
$$h(n) = b_1 \lambda_1^n + b_2 \lambda_2^n, \qquad n \ge 0, \tag{9-85}$$
which represents the impulse response of the system. From (9-84)-(9-85), we also have $b_1 + b_2 = 1$ and $b_1\lambda_1 + b_2\lambda_2 = a_1$. From (9-83),
$$\lambda_1 + \lambda_2 = a_1, \qquad \lambda_1\lambda_2 = -a_2, \tag{9-86}$$

and $H(z)$ stable implies $|\lambda_1| < 1$, $|\lambda_2| < 1$. Further, using (9-82), the output autocorrelations satisfy the recursion
$$R_{XX}(n) = E\{X(n+m)X^*(m)\} = E\{[a_1 X(n+m-1) + a_2 X(n+m-2) + W(n+m)]\,X^*(m)\} = a_1 R_{XX}(n-1) + a_2 R_{XX}(n-2), \tag{9-87}$$
since $E\{W(n+m)X^*(m)\} = 0$ for $n \ge 1$, and hence their normalized version is given by
$$\rho_X(n) = \frac{R_{XX}(n)}{R_{XX}(0)} = a_1\rho_X(n-1) + a_2\rho_X(n-2). \tag{9-88}$$
By direct calculation using (9-67), the output autocorrelations are given by
$$R_{XX}(n) = R_{WW}(n) * h^*(-n) * h(n) = \sigma_W^2 \sum_{k=0}^{\infty} h^*(n+k)\,h(k)$$
$$= \sigma_W^2\left( \frac{|b_1|^2\,\lambda_1^{*n}}{1 - |\lambda_1|^2} + \frac{b_1^* b_2\,\lambda_1^{*n}}{1 - \lambda_1^*\lambda_2} + \frac{b_1 b_2^*\,\lambda_2^{*n}}{1 - \lambda_1\lambda_2^*} + \frac{|b_2|^2\,\lambda_2^{*n}}{1 - |\lambda_2|^2} \right), \tag{9-89}$$

where we have made use of (9-85). From (9-89), the normalized output autocorrelations may be expressed as
$$\rho_X(n) = \frac{R_{XX}(n)}{R_{XX}(0)} = c_1\lambda_1^{*n} + c_2\lambda_2^{*n}, \tag{9-90}$$
where $c_1$ and $c_2$ are appropriate constants.

Damped Exponentials: When the second-order system in (9-83)-(9-85) is real and corresponds to a damped exponential response, the poles are complex conjugates, which gives $a_1^2 + 4a_2 < 0$ in (9-83). Thus
$$\lambda_1 = r\, e^{j\theta}, \qquad \lambda_2 = \lambda_1^*, \qquad r < 1. \tag{9-91}$$
In that case $c_1 = c_2^* = c\, e^{j\phi}$ in (9-90), so that the normalized correlations there reduce to
$$\rho_X(n) = 2\,\mathrm{Re}\{c_1^*\lambda_1^n\} = 2c\, r^n \cos(n\theta - \phi). \tag{9-92}$$
But from (9-86),
$$\lambda_1 + \lambda_2 = 2r\cos\theta = a_1, \qquad r^2 = -a_2 < 1, \tag{9-93}$$

and hence $2r\sin\theta = \sqrt{-(a_1^2 + 4a_2)} > 0$, which gives
$$\tan\theta = \frac{\sqrt{-(a_1^2 + 4a_2)}}{a_1}. \tag{9-94}$$
Also from (9-88),
$$\rho_X(1) = a_1\rho_X(0) + a_2\rho_X(-1) = a_1 + a_2\rho_X(1),$$
so that
$$\rho_X(1) = \frac{a_1}{1 - a_2} = 2c\,r\cos(\theta - \phi), \tag{9-95}$$
where the latter form is obtained from (9-92) with $n = 1$. But $\rho_X(0) = 1$ in (9-92) gives
$$2c\cos\phi = 1, \qquad \text{or} \qquad c = \frac{1}{2\cos\phi}. \tag{9-96}$$
Substituting (9-96) into (9-92) and (9-95), we obtain the normalized output autocorrelations to be
$$\rho_X(n) = (-a_2)^{n/2}\,\frac{\cos(n\theta - \phi)}{\cos\phi}, \qquad -a_2 < 1, \tag{9-97}$$
where $\phi$ satisfies
$$\frac{\cos(\theta - \phi)}{\cos\phi} = \frac{a_1}{1 - a_2}\cdot\frac{1}{\sqrt{-a_2}}. \tag{9-98}$$
Thus the normalized autocorrelations of a damped second-order system with real coefficients, subject to random uncorrelated impulses, satisfy (9-97).
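The damped-cosine formula (9-97)-(9-98) can be checked against a simulated AR(2) sequence. A minimal sketch, assuming NumPy and illustrative coefficients with $a_1^2 + 4a_2 < 0$ (complex-conjugate poles); $\phi$ is obtained from (9-98) by expanding $\cos(\theta - \phi)$:

```python
import numpy as np

rng = np.random.default_rng(5)
a1, a2 = 1.0, -0.64                          # poles r*e^{+/-j*theta} with r = sqrt(-a2) = 0.8
n = 200_000
x = np.zeros(n)
w = rng.normal(size=n)
for k in range(2, n):
    x[k] = a1 * x[k-1] + a2 * x[k-2] + w[k]  # AR(2) process (9-82)

theta = np.arctan2(np.sqrt(-(a1**2 + 4 * a2)), a1)          # (9-94)
kappa = a1 / ((1 - a2) * np.sqrt(-a2))                      # right side of (9-98)
phi = np.arctan((kappa - np.cos(theta)) / np.sin(theta))    # cos(theta-phi)/cos(phi) = kappa

def rho(lag):
    return np.mean(x[:n-lag] * x[lag:]) / np.mean(x * x)

for lag in (1, 2, 5):
    pred = (-a2) ** (lag / 2) * np.cos(lag * theta - phi) / np.cos(phi)   # (9-97)
    print(rho(lag), pred)
```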

More on ARMA Processes

From (9-70), an ARMA(p, q) system has only $p + q + 1$ independent coefficients ($a_k$, $k = 1, \ldots, p$; $b_i$, $i = 0, \ldots, q$), and hence its impulse response sequence $\{h_k\}$ must also exhibit a similar dependence among its terms. In fact, according to P. Dienes (The Taylor Series, 1931),

an old result due to Kronecker¹ (1881) states that the necessary and sufficient condition for $H(z) = \sum_{k=0}^{\infty} h_k z^{-k}$ to represent a rational (ARMA) system is that
$$\det H_n = 0, \qquad n \ge N \ \text{(for all sufficiently large } n\text{)}, \tag{9-99}$$
where
$$H_n = \begin{pmatrix} h_0 & h_1 & h_2 & \cdots & h_n \\ h_1 & h_2 & h_3 & \cdots & h_{n+1} \\ \vdots & \vdots & \vdots & & \vdots \\ h_n & h_{n+1} & h_{n+2} & \cdots & h_{2n} \end{pmatrix}, \tag{9-100}$$
i.e., in the case of rational systems, for all sufficiently large $n$ the Hankel matrices $H_n$ in (9-100) all have the same rank.

The necessity part easily follows from (9-70) by cross-multiplying and equating coefficients of like powers of $z^{-k}$, $k = 0, 1, 2, \ldots$.

¹Among other things, "God created the integers and the rest is the work of man." (Leopold Kronecker)

This gives
$$b_0 = h_0,$$
$$b_1 = h_0 a_1 + h_1,$$
$$\vdots$$
$$b_q = h_0 a_q + h_1 a_{q-1} + \cdots + h_q, \tag{9-101}$$
and
$$0 = h_0 a_{q+i} + h_1 a_{q+i-1} + \cdots + h_{q+i-1}a_1 + h_{q+i}, \qquad i \ge 1. \tag{9-102}$$
For systems with $q \le p - 1$, letting $i = p - q,\ p - q + 1,\ \ldots,\ 2p - q$ in (9-102), we get
$$h_0 a_p + h_1 a_{p-1} + \cdots + h_{p-1}a_1 + h_p = 0$$
$$\vdots$$
$$h_p a_p + h_{p+1} a_{p-1} + \cdots + h_{2p-1}a_1 + h_{2p} = 0, \tag{9-103}$$
which gives $\det H_p = 0$. Similarly, $i = p - q + 1, \ldots$ gives

h0 a p 1  h1a p    h p 1  0 h1a p 1  h2 a p    h p  2  0

 h p 1a p 1  h p  2 a p    h2 p  2  0,

(9-104)

and that gives det Hp+1 = 0 etc. (Notice that a p  k  0, k  1, 2,  ) (For sufficiency proof, see Dienes.) I iis possible It ibl to obtain b i similar i il determinantial d i i l conditions di i for f ARMA systems in terms of Hankel matrices generated from its output q autocorrelation sequence. Referring back to the ARMA (p, q) model in (9-68), the input white noise process w(n) there is uncorrelated with its own past sample values as well as the past values of the system output. output This gives E{w( n ) w* ( n  k )}  0,, k  1

((9-105))

E{w( n ) x* ( n  k )}  0, k  1.

(9-106)

48

PILLAI/Cha

Together with (9-68), we obtain
$$r_i = E\{x(n)\,x^*(n-i)\} = -\sum_{k=1}^{p} a_k\, E\{x(n-k)\,x^*(n-i)\} + \sum_{k=0}^{q} b_k\, E\{w(n-k)\,x^*(n-i)\}$$
$$= -\sum_{k=1}^{p} a_k\, r_{i-k} + \sum_{k=0}^{q} b_k\, E\{w(n-k)\,x^*(n-i)\}, \tag{9-107}$$
and hence, in general,
$$\sum_{k=1}^{p} a_k\, r_{i-k} + r_i \ne 0, \qquad i \le q, \tag{9-108}$$
and
$$\sum_{k=1}^{p} a_k\, r_{i-k} + r_i = 0, \qquad i \ge q + 1. \tag{9-109}$$
Notice that (9-109) is the same as (9-102) with $\{h_k\}$ replaced by $\{r_k\}$, and hence the Kronecker conditions for rational systems can be expressed in terms of the output autocorrelations as well. Thus if $X(n) \sim$ ARMA(p, q) represents a wide-sense stationary stochastic process, then its output autocorrelation sequence $\{r_k\}$ satisfies
$$\mathrm{rank}\ D_{p-1} = \mathrm{rank}\ D_{p+k} = p, \qquad k \ge 0, \tag{9-110}$$
where
$$D_k = \begin{pmatrix} r_0 & r_1 & r_2 & \cdots & r_k \\ r_1 & r_2 & r_3 & \cdots & r_{k+1} \\ \vdots & \vdots & \vdots & & \vdots \\ r_k & r_{k+1} & r_{k+2} & \cdots & r_{2k} \end{pmatrix} \tag{9-111}$$
represents the $(k+1)\times(k+1)$ Hankel matrix generated from $r_0, r_1, \ldots, r_k, \ldots, r_{2k}$. It follows that for ARMA(p, q) systems we have
$$\det D_n = 0 \qquad \text{for all sufficiently large } n. \tag{9-112}$$
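The rank conditions (9-110)-(9-112) can be inspected numerically from a model's autocorrelations. A minimal sketch, assuming NumPy/SciPy and the AR(1) sequence $r_k \propto a^k$ (so $p = 1$), for which every Hankel matrix $D_k$ should have rank 1:

```python
import numpy as np
from scipy.linalg import hankel

a, p = 0.6, 1
r = a ** np.arange(12)                       # r_0, ..., r_11 for an AR(1) model, up to scale

for k in range(p - 1, p + 3):
    Dk = hankel(r[:k+1], r[k:2*k+1])         # (k+1) x (k+1) Hankel matrix (9-111)
    print(k, np.linalg.matrix_rank(Dk, tol=1e-10))   # rank stays at p = 1, cf. (9-110)
```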