The autocorrelation matrix as a square and using the projection matrix Θ to get the square-root filter, with computer simulation

As explained in Figure 11, we have three first-order state-space models for stage m, based on the following three projections:

1. Forward linear prediction.
2. Backward linear prediction.
3. Joint-process linear estimation.

Using the one-to-one correspondence between Kalman variables and RLS variables, we break the state-space characterization of stage m into three parts.

1. Forward prediction [15]. This model is given by equations (87) and (88), where x1(n) is the state variable; the reference signal is defined by equation (89). The noise has zero mean and unit variance.
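The first-order state-space form used here (a scalar state driven by its own transition and observed through a reference signal plus zero-mean, unit-variance noise) can be sketched generically. The transition coefficient a and observation coefficient h below are illustrative placeholders, not the stage-m quantities of equations (87) to (89):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_first_order(a, h, n_steps, x0=1.0):
    """Generic first-order state-space model:
        x(n+1) = a * x(n)          (state recursion)
        y(n)   = h * x(n) + v(n)   (reference signal, v ~ N(0, 1))
    a and h are placeholders for the stage-m coefficients."""
    x = x0
    ys = []
    for _ in range(n_steps):
        v = rng.standard_normal()  # zero-mean, unit-variance noise
        ys.append(h * x + v)
        x = a * x
    return np.array(ys)

y = simulate_first_order(a=0.95, h=1.0, n_steps=100)
```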

2. Backward prediction [18]. This model is given by equations (90) and (91), where x2(n) is the second state variable; the second reference signal is defined by equation (92). Again the noise has zero mean and unit variance.

3. Joint-process estimation [17]. This model is given by equations (93) and (94), where x3(n) is the third state variable; the reference signal is defined by equation (95). The noise has zero mean and unit variance [16].

At this point we have a one-to-one correspondence, shown in Table 4, between the Kalman variables and the three sets of least-squares lattice (LSL) variables of prediction order (m - 1): forward prediction, backward prediction, and joint-process estimation. The first three lines of Table 4 follow from equations (87) to (95). To verify the remaining three lines we proceed as follows.

1. Recall the correspondence between the filtered state-error correlation matrix in Kalman filter theory and the inverse of the correlation matrix of the input vector in RLS theory.

Table 4

Using Figure 11(a), we can say the correspondence is given by equation (96).

2. Recall the correspondence between the Kalman gain vector and the gain vector in RLS theory. Using the definition of the gain vector, we may write equation (97).

3. Recall from Kalman filter theory that the innovation is given by equation (98). From Table 4 we can identify the corresponding RLS quantities. Using equation (98), then equations (79) and (80), and finally equation (77), we conclude equation (99).

In Kalman filter theory the estimation error is given by equation (100), where the state estimate is as defined in [13]. Writing out the transition matrix, applying the inverse rule, and using the third row of Table 4, it follows that we may rewrite equation (100); then, using equations (79) and (80), and in light of equation (61), we obtain equation (101). Using equations (99) and (101) together gives equation (102). Equations (96), (97), and (102) give us the basis for the last three lines of Table 4.

RQ-decomposition-based least-squares lattice filters [19]

With the state-space models and Kalman filter theory in place, we are ready to derive the order-recursive adaptive filter. This consists of three parts: 1. adaptive forward prediction [17]; 2. adaptive backward prediction [18]; 3. adaptive joint-process estimation [14]. From Kalman filter theory we have equation (103), where Θ is a unitary rotation matrix.

Arrays for adaptive forward prediction

Applying equation (103) to the forward-prediction state-space model, with the one-to-one correspondence of Table 4, we obtain equation (104). In writing equation (104) we have done some algebra to simplify things. Let us define some quantities.

1. The real-valued quantity of equation (105), which we define in accordance with RLS theory; note that this product is always real.

2. The complex-valued quantity of equation (106), where the term defined in equation (107) is the cross-correlation between the angle-normalized forward and backward prediction errors. Its relation to the forward reflection coefficient for prediction order m is given by equation (108).

The matrix Θ in equation (104) is a unitary rotation matrix; we can write it as in equation (109), where the cosine and sine parameters are given by equations (110) and (111). Substituting equation (109) into equation (104) yields the recursions (112) through (114).
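A 2-by-2 unitary rotation of the kind used in these arrays can be illustrated with a real Givens rotation. The function below is a hypothetical helper, not taken from the paper; it chooses the cosine and sine parameters so that the rotation annihilates the second entry of a 2-vector:

```python
import numpy as np

def givens_rotation(a, b):
    """Build Theta = [[c, s], [-s, c]] with c^2 + s^2 = 1, chosen so
    that Theta applied to [a, b]^T zeros the second entry and leaves
    r = sqrt(a^2 + b^2) in the first."""
    r = np.hypot(a, b)
    if r == 0.0:
        return np.eye(2), 0.0
    c, s = a / r, b / r
    return np.array([[c, s], [-s, c]]), r

theta, r = givens_rotation(3.0, 4.0)
rotated = theta @ np.array([3.0, 4.0])  # -> approximately [5, 0]
```

Because Θ is unitary it preserves the squared norm of the row it acts on, which is exactly what allows the arrays to propagate square-root quantities directly.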

Equations (105) and (110) through (114) are the solution to the adaptive forward-prediction problem.

Arrays for adaptive backward prediction

This part is described by the state-space equations (90) to (92). With the one-to-one correspondence of Table 4 we obtain equation (115). In writing equation (115) we have done some algebra; the quantities involved are defined as follows.

1. First we have equation (116); F(m-1)(n) is the sum of forward prediction-error squares, which in RLS theory satisfies the recursion (117).
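An exponentially weighted sum of error squares of this kind obeys the standard RLS recursion F(n) = λ F(n-1) + |e(n)|². A minimal sketch, with illustrative inputs rather than the paper's data:

```python
def ew_error_power(errors, lam):
    """Exponentially weighted sum of squared errors:
        F(n) = lam * F(n-1) + |e(n)|^2
    The last term e(n) * conj(e(n)) is real even for complex errors."""
    F = 0.0
    history = []
    for e in errors:
        F = lam * F + abs(e) ** 2
        history.append(F)
    return history

F = ew_error_power([1.0, 1.0, 1.0], lam=0.5)
# F == [1.0, 1.5, 1.75]
```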

The product in the last term of equation (117) is real-valued.

2. Except for the factor F^(-1/2)(m-1)(n), the complex-valued quantity of equation (118); the quantity pb(m-1)(n) is related to the backward reflection coefficient for prediction order m by equation (119) (Haykin, 1991).

The two-by-two matrix Θ in equation (115) is a unitary rotation matrix; we can find it from equation (120) [13].

The cosine and sine parameters in equation (120) are given by equations (121) and (122). Substituting equation (120) into equation (115) yields the recursions (123), (124), and (125). Equations (116) and (121) through (125) are the solution to the adaptive backward-prediction problem in a least-squares lattice sense.

Array for joint-process estimation [16]

Consider the joint-process estimation problem described by the state equations (93) to (95). With the one-to-one correspondence between the Kalman and LSL variables in Table 4, we can write equation (126). In writing equation (126) we have done some algebra to simplify things, and it contains quantities that we must describe. The first is defined in equation (127); the joint-estimation parameter is defined in equation (128). The two-by-two matrix Θ [14] is given by equation (129), where the cosine and sine parameters are defined in equations (130) and (131). Substituting equation (129) into equation (126) gives the recursions (132) and (133). At this point we have the equations we need for the joint-process estimation problem in the least-squares lattice sense.
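As an illustration of the order-recursive structure that the forward, backward, and joint-process recursions implement, here is a minimal sketch of one lattice stage in its standard textbook form. The reflection coefficients kf and kb and the regression coefficient h_m are treated as given constants, and the update equations are the generic lattice ones, not the paper's exact QRD-LSL arrays:

```python
def lattice_stage(f_prev, b_prev_delayed, kf, kb):
    """Standard lattice order update for stage m:
        f_m(n) = f_{m-1}(n)   + kf * b_{m-1}(n-1)
        b_m(n) = b_{m-1}(n-1) + kb * f_{m-1}(n)
    """
    return f_prev + kf * b_prev_delayed, b_prev_delayed + kb * f_prev

def joint_process_update(e_prev, b_m, h_m):
    """Joint-process order update: subtract the part of the error
    explained by the stage-m backward prediction error b_m(n)."""
    return e_prev - h_m * b_m

# Illustrative numbers only:
f1, b1 = lattice_stage(f_prev=1.0, b_prev_delayed=0.5, kf=-0.2, kb=-0.2)
e1 = joint_process_update(e_prev=0.8, b_m=b1, h_m=0.4)
```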

The computer simulation

In this example the input is white noise. This input drives a second-order filter to produce the desired response. The block diagram of the system is shown in Figure 12, with more detail in Figure 13. The input u(n) is fed to the system to calculate the forward and backward prediction errors in lattice stage 1. The desired signal d(n) is also fed to the system to solve the joint-process estimation problem in a least-squares lattice sense. The equations that solve this problem are all given in the section on RQ-decomposition-based least-squares lattice filters. We use Matlab for the simulation, implementing the arrays for adaptive forward prediction, the arrays for adaptive backward prediction, and the array for joint-process estimation. The Matlab realization of Figures 12 and 13 is shown in Figure 14. As the simulation shows (Figure 15), the error between the desired response and the joint-process estimate decreases.
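The paper's simulation is done in Matlab. As a stand-in, the following Python sketch reproduces the shape of the experiment: white noise passed through an assumed second-order filter gives d(n), and a standard exponentially weighted RLS filter (which solves the same least-squares estimation problem as the lattice arrays, in transversal rather than lattice form) drives the estimation error down. The filter coefficients and parameters are illustrative, not read from Figure 12:

```python
import numpy as np

rng = np.random.default_rng(1)

# White-noise input u(n) driving an assumed second-order FIR system
# (placeholder coefficients, not the ones in the block diagram).
n = 2000
u = rng.standard_normal(n)
h = np.array([1.0, -0.1, 0.72])
d = np.convolve(u, h)[:n]           # desired response d(n)

# Exponentially weighted RLS, transversal form.
M, lam, delta = 3, 0.99, 1e-2
w = np.zeros(M)                     # tap weights
P = np.eye(M) / delta               # inverse correlation matrix estimate
err = np.zeros(n)
for i in range(M, n):
    x = u[i - M + 1:i + 1][::-1]    # tap-input vector [u(n), u(n-1), u(n-2)]
    k = P @ x / (lam + x @ P @ x)   # gain vector
    e = d[i] - w @ x                # a priori estimation error
    w = w + k * e
    P = (P - np.outer(k, x @ P)) / lam
    err[i] = e

# The error power decreases as the estimator converges.
early = np.mean(err[M:200] ** 2)
late = np.mean(err[-200:] ** 2)
```

Because the desired response is a noiseless output of a second-order system, the weights converge to the system coefficients and the late-time error power falls far below the early transient, matching the behavior seen in Figure 15.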

Figure 12

Figure 13

[Simulink model: a Band-Limited White Noise source feeding a network of unit delays (z^-1), integer delays, integrators (1/s), product and divide blocks, square-root math functions, gains (including -0.1 and 0.72), and scopes that realize the lattice recursions.]

Figure 14

Figure 15

References:

[1] A. Papoulis, Probability, Random Variables, and Stochastic Processes, 2002.
[2] J. G. Proakis, Digital Communications, 2001.
[3] R. J. Schilling, Engineering Analysis, 1988.
[4] H. L. Van Trees, Detection, Estimation, and Modulation Theory, 1968.
[5] J. G. Proakis, Introduction to Digital Signal Processing, 1988.
[6] C. Chen, Linear System Theory and Design, 1984.
[7] S. Haykin, Communication Systems, 1983.
[8] T. H. Glisson, Introduction to System Analysis, 1985.
[9] Martin Schetzen, Airborne Doppler Radar, 2006.
[10] Martin Schetzen, The Volterra & Wiener Theories of Nonlinear Systems, 2006.
[11] Martin Schetzen, Discrete Systems using Matlab, 2004.
[12] Arvin Grabel, Microelectronics, 1987.
[13] Ziad Sobih, Time and Space, International Journal of Engineering, Volume (7), Issue (3), 2013.
[14] Ziad Sobih, Construction of the sampled signal up to any frequency while keeping the sampling rate fixed, Signal Processing International Journal, Volume (7), Issue (2), 2013.
[15] Ziad Sobih, Up/Down Converter Linear Model with Feed Forward and Feedback Stability Analysis, International Journal of Engineering, Volume (8), Issue (1), 2014.
[16] Ziad Sobih, Generation of any PDF from a set of equally likely random variables, Global Journal of Computer Science and Technology, Volume (14), Issue (2), Version 1.0, 2014.
[17] Ziad Sobih, Adaptive filters, Global Journal of Researches in Engineering (F), Volume (14), Issue (7), Version (1), 2014.
[18] Ziad Sobih, An adaptive filter to pick up a Wiener filter from the error with and without noise, Global Journal of Researches in Engineering (F), Volume (15), Issue (2), Version (1), 2015.
[19] Simon Haykin, Adaptive Filter Theory, 2001.
