Locally Optimal and Suboptimal Signal Detection in Transformation Noise

Oscar C. Au*

Abstract

In this paper, the efficacy of the optimal detector of a known vanishingly small signal in additive non-white transformation noise is compared with that of eleven structurally simpler suboptimal detectors. Simulation is done under various signal choices, marginal densities and correlation functions. The block g_lo and the block combination g followed by R_v^{-1} in the optimal detector structure are found to be important for good performance in constant and oscillating signals respectively. Two suboptimal detectors with these block structures, D_8 and D_10, are found to perform well consistently in all situations considered. A structurally simple suboptimal detector D_2 is found to be good in the cases with less correlated noise.

* Oscar C. Au is with the Department of Electrical and Electronic Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. Email: [email protected]

1 Introduction

In this paper, we are concerned with the discrete-time detection of a known vanishingly small signal. The binary hypothesis testing problem is

H : X = N
K : X = N + \theta s

where X = [X_1 X_2 ... X_M]^T is the vector of observed samples, N = [N_1 N_2 ... N_M]^T is the vector of noise samples, s = [s_1 s_2 ... s_M]^T is the known signal, and \theta is a positive parameter such that \theta \to 0. Here H is the noise-only hypothesis and K is the signal-in-additive-noise alternative. The noise samples can in general be correlated, or non-white. The locally optimal detector, with the efficacy E [2],

E(\psi) = \lim_{\theta \to 0} \frac{\left[ \frac{\partial}{\partial\theta} E_\theta\, \psi(X) \right]^2}{\operatorname{var}_\theta \psi(X)},

as the performance measure, was derived in [1] as follows. Let \psi be any detector nonlinearity. The efficacy of the detector can be expressed in the form

E(\psi) = \frac{\left( \operatorname{cov}_o\!\left[ -s^T \frac{\nabla f(X)}{f(X)},\ \psi(X) \right] \right)^2}{\operatorname{var}_o \psi(X)}
        = \frac{\left( E_o\!\left[ -s^T \frac{\nabla f(X)}{f(X)}\, \psi(X) \right] \right)^2}{\operatorname{var}_o \psi(X)}
        \le \operatorname{var}_o\!\left[ -s^T \frac{\nabla f(X)}{f(X)} \right]

where cov_o, E_o and var_o are the covariance, expected value and variance under the hypothesis H, and the last step follows from the Cauchy-Schwarz inequality. The efficacy is thus maximized if and only if the detector nonlinearity \psi is a constant multiple of -s^T \nabla f(X)/f(X). The optimal \psi so obtained is called the locally optimal detector.

In this paper, we restrict ourselves to the class of transformation noise, a tractable class of correlated noise that can match a given sample marginal density and autocorrelation function. Transformation noise N [4] is the noise generated by passing a source noise V with joint density \varphi through a memoryless invertible nonlinearity g^{-1}. The density function f of the transformation noise N is

f(n) = \varphi[g(n)] \prod_{i=1}^{M} |g'(n_i)|

where n = [n_1 n_2 ... n_M]^T. The locally optimal detector

\psi_{lo}(X) = -s^T \frac{\nabla f(X)}{f(X)}

becomes

\psi_{lo}(X) = -s^T \left[ \frac{\nabla\varphi(g(X))}{\varphi(g(X))} \otimes g'(X) + \frac{g''(X)}{g'(X)} \right]
             = -\sum_{i=1}^{M} s_i \left[ \left. \frac{\partial\varphi(Y)/\partial Y_i}{\varphi(Y)} \right|_{Y=g(X)} g'(X_i) + \frac{g''(X_i)}{g'(X_i)} \right]

resulting in the structure shown in Fig. 1a, where \otimes and \oplus denote term-by-term multiplication and addition. We further assume that the source noise V is second-order stationary multivariate Gaussian. The memoryless nonlinearity g of the transformation noise is then given by [4]

g(x) = \Phi^{-1}(F_N(x))

where \Phi and F_N are the cumulative distribution functions of the Gaussian noise V and the transformation noise N respectively. Let R_n and R_v be the correlation matrices of N and V respectively; the ij-th entries of R_n and R_v are \rho_n(i-j) and \rho_v(i-j), where \rho_n and \rho_v are the autocorrelation functions of N and V. The two are related by the correlation mapping [4] as follows:

\rho_n(\tau) = \sum_{k=0}^{\infty} b_k^2\, \rho_v^k(\tau)

where \{b_k\}_{k=0}^{\infty} are the coefficients in the decomposition of the function g^{-1} in terms of the Hermite polynomials \{H_k\}_{k=0}^{\infty}, normalized to be orthonormal under the standard Gaussian density \phi:

g^{-1}(x) = \sum_{k=0}^{\infty} b_k H_k(x)

b_k = \int_{-\infty}^{\infty} g^{-1}(x)\, H_k(x)\, \phi(x)\, dx
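To make the correlation mapping concrete, the coefficients b_k can be approximated numerically. The sketch below is our own illustration (not the paper's code): it evaluates the b_k by Gauss-Hermite quadrature, using orthonormalized probabilists' Hermite polynomials as assumed above, for a unit-variance Laplace marginal.

```python
import numpy as np
from math import factorial, sqrt
from numpy.polynomial.hermite_e import hermegauss, hermeval
from scipy.special import ndtr  # standard normal CDF

def hermite_coeffs(g_inv, n_max=30, n_quad=200):
    """b_k = E[g_inv(V) H_k(V)] for V ~ N(0,1), with H_k = He_k / sqrt(k!)
    (orthonormal probabilists' Hermite polynomials), via Gauss-Hermite quadrature."""
    x, w = hermegauss(n_quad)            # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)         # normalize to the N(0,1) density
    gx = g_inv(x)
    b = np.empty(n_max + 1)
    for k in range(n_max + 1):
        c = np.zeros(k + 1); c[k] = 1.0  # coefficient vector selecting He_k
        b[k] = np.sum(w * gx * hermeval(x, c)) / sqrt(factorial(k))
    return b

def corr_map(rho_v, b):
    """rho_n = sum_{k>=1} b_k^2 rho_v^k / sum_{k>=1} b_k^2 (k = 0 is the mean term)."""
    k = np.arange(1, len(b))
    return np.sum(b[1:] ** 2 * rho_v ** k) / np.sum(b[1:] ** 2)

def g_inv_laplace(x):
    """Map N(0,1) to a unit-variance Laplace marginal through the two CDFs."""
    u = np.clip(ndtr(x), 1e-15, 1.0 - 1e-15)
    return np.where(u < 0.5, np.log(2.0 * u), -np.log(2.0 * (1.0 - u))) / np.sqrt(2.0)
```

For the Laplace marginal the truncated sum of b_k^2 over k >= 1 recovers the unit variance, and corr_map shows the shrinkage of correlation through the nonlinearity (|rho_n| <= |rho_v|), the behavior plotted later in Fig. 3.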

Note that the coefficients \{b_k\}_{k=0}^{\infty} depend only on the memoryless nonlinearity g^{-1}, which in turn depends on the marginal density of N. The \{b_k\}_{k=0}^{\infty} do not depend on the dependency structure of N. With multivariate Gaussian background noise V, the locally optimal detector becomes

\psi_{lo}(X) = -\sum_{i=1}^{M} s_i \left[ \frac{f_1'(X_i)}{f_1(X_i)} + g'(X_i)\, g(X_i) - g'(X_i)\, Z_i \right]

where

Z = [Z_1\ Z_2\ Z_3\ \ldots\ Z_M]^T = R_v^{-1}\, g(X)

and f_1 is the marginal density function of the transformation noise N. The detector structure is shown in Fig. 1b, in which we use the term g_lo to mean the expression -f_1'/f_1. Among the four blocks in the locally optimal detector structure, g, g' and g_lo are memoryless but the block R_v^{-1} - I is not. All three memoryless nonlinearities require knowledge of the marginal density of the transformation noise, while the block R_v^{-1} - I requires knowledge of the correlation function. The g_lo term is effectively the locally optimal detector in the special case of independent noise; it is the degenerate form of \psi_lo when R_v is the identity matrix. When the transformation noise is Gaussian, \psi_lo degenerates into the matched filter.

Although the locally optimal detector \psi_lo was obtained in [1], the extent of its improvement over other structurally simple suboptimal detectors was not studied in depth. The \psi_lo is structurally complicated and not particularly suitable for implementation. A structurally simpler detector with close-to-optimal performance, if one exists, would be more suitable for implementation purposes. In this paper, we attempt to answer the following questions:

Q1. How good is the locally optimal detector compared with structurally simpler suboptimal detectors, such as the independent locally optimal detector or the matched filter?

Q2. How important are the individual blocks in the locally optimal detector structure? If we are to simplify the detector by removing one block, which one should we remove?

To address Q1 and Q2, we investigate the relative performance of the locally optimal detector and eleven relatively simple suboptimal detectors through simulation, in search of a suboptimal detector that is simple in structure and good in performance. In particular, to address Q2, we form the structurally simple suboptimal detectors from various combinations of the four blocks in the optimal structure.
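Before turning to the suboptimal detectors, the four-block statistic can be written out compactly. The sketch below is our own illustration (the function names are ours); it assumes the memoryless blocks g, g' and g_lo and the correlation matrix R_v are available:

```python
import numpy as np

def psi_lo(x, s, g, g_prime, g_lo, Rv):
    """Locally optimal statistic
        sum_i s_i [ g_lo(X_i) + g'(X_i) * ((Rv^{-1} - I) g(X))_i ].
    g, g_prime and g_lo are the memoryless blocks; Rv^{-1} - I is the only
    block with memory."""
    gx = g(x)
    z = np.linalg.solve(Rv, gx)          # Z = Rv^{-1} g(X)
    return float(s @ (g_lo(x) + g_prime(x) * (z - gx)))
```

With R_v = I the memory term vanishes and the statistic reduces to the independent locally optimal detector sum_i s_i g_lo(X_i); with the Gaussian blocks (g(x) = g_lo(x) = x, g'(x) = 1) it reduces to the matched filter s^T R_v^{-1} X, consistent with the degenerate cases noted above.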


2 Construction of Suboptimal Detectors and Simulation Set Up

Structurally, the locally optimal detector D_1 comprises four blocks: g, g', g_lo and R_v^{-1} - I. Here we seek to study the relative importance of the individual blocks and the performance advantage of the locally optimal detector over structurally simple suboptimal detectors. To study all possible structurally simple suboptimal detectors is an impossible task. For our purpose, we construct only a finite number of detectors with simpler structure than the locally optimal detector. As there is no standard way to achieve such a task, we choose to build the simple detectors by removing one or more blocks from the optimal structure. Again, this is by no means exhaustive, but the study turns out to be meaningful. We feel that the two blocks g and R_v^{-1} naturally should operate together, so they are treated as one unit in the construction process.

Eleven suboptimal detectors are constructed, as shown in Fig. 2. The detectors, including the optimal one, are arbitrarily named D_1, D_2, ..., D_12. Detector D_1 is the optimal detector and its efficacy should always be the largest; it has four blocks and is complicated in structure. Detector D_2 is the independent locally optimal detector; that is, it is optimal when the noise is independent. Structurally, it is D_1 with g' approximated by zero. Its advantages over D_1 are its simplicity and the fact that it does not need the dependency information. Detector D_4 is the matched filter, which is optimal in the special case of multivariate Gaussian noise. Its advantages are its simplicity and the fact that it requires no marginal density information. Detector D_3 is a hybrid of D_2 and D_4. The rest of the detectors are various combinations of the four blocks in the optimal detector D_1. Detector D_5 is D_1 with g_lo removed, or, equivalently, with g_lo approximated by zero. Detector D_9 is D_5 with the matrix I removed; alternatively, it is D_1 with the term g''/g' removed. Detector D_6 is D_9 with g' removed. Detector D_8 is D_1 with g' removed, or, equivalently, with g' approximated by unity. Detector D_7 is D_8 with g_lo removed; alternatively, it is D_1 with g' and g_lo approximated by unity and zero respectively. Detector D_10 is D_8 with I removed. Detector D_11 is D_1 with R_v^{-1} removed. Detector D_12 is D_11 with g' removed. Among all the detectors, D_2, D_11 and D_12 are memoryless. Only D_4 does not require the marginal density information.
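As an illustration of how the block descriptions translate into statistics, here is a sketch of four of the twelve detectors (our own code and naming; D_3's exact hybrid form is omitted, since only its block description is given above):

```python
import numpy as np

def D2(x, s, g_lo):                      # independent locally optimal detector
    return float(s @ g_lo(x))

def D4(x, s, Rn):                        # matched filter
    return float(s @ np.linalg.solve(Rn, x))

def D8(x, s, g, g_lo, Rv):               # D1 with g' approximated by unity
    gx = g(x)
    return float(s @ (g_lo(x) + np.linalg.solve(Rv, gx) - gx))

def D10(x, s, g, g_lo, Rv):              # D8 with the matrix I removed
    gx = g(x)
    return float(s @ (g_lo(x) + np.linalg.solve(Rv, gx)))
```

In independent noise (R_v = I) the statistic D_8 collapses to D_2; for a Laplace marginal, g_lo(x) = sqrt(2) sgn(x) and both become the sign detector discussed in Section 3.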

Naturally, the performance measure for comparison purposes is the efficacy E, because the locally optimal detector was derived using this measure. Unfortunately, an analytic comparison of the detectors using the efficacy as performance measure is intractable. We therefore address the questions Q1 and Q2 by simulation. We simulate the efficacy of the detectors in various situations: two signal choices, two marginal densities and two correlation functions. This is not exhaustive, but the idea is to compare the detectors under a wide variety of operating conditions.

We normalize the efficacy of the detectors by dividing their efficacy values by that of D_4; this eliminates any scaling problem. One reason for the normalization is that there exists a nice analytical expression for the efficacy of the matched filter D_4:

E(D_4) = s^T R_n^{-1} s

Effectively, what we are doing is estimating the asymptotic relative efficiency (ARE) [2, 6, 7] of each detector with respect to the reference detector D_4. This is due to the Pitman-Noether theorem [6], which states that, under some weak regularity conditions,

ARE_{k,4} = \frac{E(D_k)}{E(D_4)}

Thus we will loosely refer to the normalized efficacy of detector D_k as the ARE of D_k.

The two signal choices are the constant signal

s_i = 1

and the oscillating signal

s_i = (-1)^i

with i = 1, ..., M, for simplicity. In the special case of Gaussian noise and M = 2, these are the signals that give the largest signal-to-noise ratio when the correlation is negative and positive, respectively. The constant signal has the lowest possible frequency, while the oscillating signal has the highest possible frequency for samples taken at the Nyquist rate. Another good signal choice would probably be the eigenvector corresponding to the smallest eigenvalue of R_n: it maximizes E(D_4), but it also changes with R_n. For our purpose, we use only the constant and the oscillating signals. As we will find out later in the paper, detector performance is closely related to the signal choice.

The correlation functions chosen are the triangular function and the exponential function:

\rho_v(x) = \begin{cases} 1 - \dfrac{|x|}{m}, & |x| \le m \\ 0, & |x| > m \end{cases}    (1)

and

\rho_n(x) = \exp\!\left( -\frac{|x|}{a} \right)    (2)
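The role of the signal choice can be checked directly from the matched-filter efficacy E(D_4) = s^T R_n^{-1} s. The sketch below is our own illustration, with arbitrary parameter values (m = 3, a = 2) standing in for the values of Tables 1 and 2:

```python
import numpy as np
from scipy.linalg import toeplitz

def corr_matrix(rho, M):
    """M x M Toeplitz correlation matrix with entries rho(|i - j|)."""
    return toeplitz(rho(np.arange(M)))

rho_tri = lambda lag, m=3.0: np.maximum(1.0 - np.abs(lag) / m, 0.0)  # Eqn (1)
rho_exp = lambda lag, a=2.0: np.exp(-np.abs(lag) / a)                # Eqn (2)

M = 40
Rn = corr_matrix(rho_exp, M)
s_const = np.ones(M)                       # constant signal
s_osc = (-1.0) ** np.arange(1, M + 1)      # oscillating signal

E4_const = s_const @ np.linalg.solve(Rn, s_const)
E4_osc = s_osc @ np.linalg.solve(Rn, s_osc)
# With positive adjacent correlation, the oscillating signal gives the larger
# matched-filter efficacy, in line with the M = 2 signal-to-noise argument above.
```

Both sampled functions are nonnegative definite, so the Cholesky factor of either matrix can be used to generate the correlated Gaussian vectors needed in the simulation.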

As we will find out later in this paper, many detectors have markedly different performance profiles under the two correlation functions. While the triangular correlation has finite support, the exponential correlation has infinite support. The triangular correlation can be considered an extreme and uncommon case because of its linearly decaying tail. The exponential correlation, on the other hand, is more realistic, because its exponentially decaying tail is commonly found in the output processes of simple RC filters fed with white noise. The \rho_v and \rho_n are the correlation functions of the underlying Gaussian noise V and the transformation noise N respectively. We will use the term adjacent correlation to mean these functions evaluated at lag 1. Note that we have used \rho_n for the exponential correlation and \rho_v for the triangular correlation. Our original idea was to use \rho_n for both; in [1], a triangular \rho_n was studied. However, it turns out that there does not exist any nonnegative-definite correlation function for the underlying Gaussian noise that produces a triangular correlation function for a transformation noise with symmetric marginal density. A proof of this claim is given in the Appendix. To remedy this situation, we force \rho_v instead of \rho_n to be triangular.

The marginal density functions chosen are the Laplace density and the Johnson density. The Laplace density [8] is a fairly good model for some impulsive noises [9], and the Johnson density [10, 11] is actually a class of densities indexed by a shape parameter \delta, giving a wide range of tail weights. The Johnson densities are smoother than the Laplace density due to the absence of a cusp at the origin. As \delta approaches infinity, the Johnson density approaches the Gaussian density; as \delta approaches zero, it becomes very non-Gaussian with a very heavy tail. One limitation of the Johnson densities is that they cannot describe densities with tail weights lighter than Gaussian. The range of \delta corresponding to quick changes of tail weight is the region between zero and one. We thus pick three \delta values, 0.8, 1 and 10, for detailed study. The Johnson density with \delta = 10 is close to the Gaussian density; we will loosely call it "essentially Gaussian". The Johnson density with \delta = 1 has a moderately heavy tail compared with the Gaussian density, and thus we will loosely call it the "moderately non-Gaussian" case. The Johnson density with \delta = 0.8 has a very heavy tail, and we will call it the "very non-Gaussian" case.

The statistical description of the Laplace density is

f_1(x) = \frac{1}{\sqrt{2}} \exp\!\left( -\sqrt{2}\,|x| \right)

g(x) = \begin{cases} \Phi^{-1}\!\left[ \tfrac{1}{2} \exp(\sqrt{2}\, x) \right], & x \le 0 \\ \Phi^{-1}\!\left[ 1 - \tfrac{1}{2} \exp(-\sqrt{2}\, x) \right], & x > 0 \end{cases}

g'(x) = \sqrt{\pi}\, \exp\!\left( \tfrac{1}{2} g^2(x) - \sqrt{2}\,|x| \right)

g_{lo}(x) = \sqrt{2}\, \operatorname{sgn}(x)
and the statistical description of the Johnson density is

f (x) = 1

g, (x) = 1

g(x) = g0(x) =

"   #, =    !  x 1 1 , p  1+  exp , 2 sinh x 2   sinh x   sinh, x  q  1 + ( x ) 2

1 2

1

1

2

10

2

glo(x) = 1 + x "



,

2 # 1

x +  1+ x    2

"



,=

2 # 1 2

2

sinh,1



x 



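The Johnson expressions can also be implemented directly. The sketch below is our own illustration; fixing \lambda so that the marginal has unit variance (\lambda^2 = 2/(e^{2/\delta^2} - 1), for N = \lambda sinh(V/\delta) with V ~ N(0,1)) is our assumption, made to match the unit-variance Laplace case:

```python
import numpy as np

def johnson_blocks(delta):
    """Memoryless blocks g, g^{-1}, g' and g_lo for a Johnson marginal with
    shape delta; lam is chosen so that var(lam * sinh(V / delta)) = 1."""
    lam = np.sqrt(2.0 / (np.exp(2.0 / delta ** 2) - 1.0))
    g = lambda x: delta * np.arcsinh(x / lam)
    g_inv = lambda x: lam * np.sinh(x / delta)
    g_prime = lambda x: (delta / lam) / np.sqrt(1.0 + (x / lam) ** 2)
    g_lo = lambda x: (x / (lam ** 2 + x ** 2)
                      + (delta ** 2 / lam) * np.arcsinh(x / lam)
                        / np.sqrt(1.0 + (x / lam) ** 2))
    return g, g_inv, g_prime, g_lo
```

For delta = 10 the blocks are nearly the identity and unity, the "essentially Gaussian" behavior described above; for delta = 0.8 the nonlinearities become strong, with g'(0) well above one.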
When  is large, i.e. when the noise is essentially Gaussian, the g and glo are very close to the linear function f (x) = x and the g0 is very close to the constant function with unity amplitude. As  decreases below unity, the g and g0 become highly nonlinear and g0 deviates signi cantly from the constant function. When  is small, a small change in  corresponds to a large change in the detector nonlinearities. The correlation mappings for Laplace noise and Johnson noise are plotted in Figs. 3 and 4, respectively. Using the fact that

Dk (X); D (X)) = Eo[Dk (X)D (X)] E (Dk ) = covo(var D (X) var D (X) 1

1

o k

o k

we estimate the ecacy E (Dk ) by the ratio of the estimate of the numerator to the estimate of the denominator. Although this is not the optimal estimator of E (Dk ), it is a meaningful one. To estimate Eo [Dk (X)D (X)] and varo [Dk (X)], 8000 vectors each of length M are 1

generated and the following two equations are used. n X Ec [Dk (X)D (X)] = 1 Dk (Xi)D (Xi) ni n n X X d Var Dk (X) = n ,1 1 [Dk (Xi) , n1 Dk (Xj)] i j o

1

1

=1

o

=1

=1

with n = 8000. Each vector is generated by transforming multivariate Gaussian samples using the memoryless nonlinearity g, . The multivariate Gaussian samples in turn are 1

generated by an IMSL routine called from a FORTRAN program. For the constant and oscillating signal cases, M is chosen to be 50 and 40 respectively as these numbers are found experimentally to be large enough for convergence. 11

3 Simulation Results under Constant Signal In this section, we will look at the simulated ARE (or normalized ecacy) of the twelve detectors under constant signal and transformation noise with various marginal densities and correlation functions. From the results, we will nd answers to the questions raised before. We will consider in the rst subsection the cases with triangular correlation as described by Eqn. 1. The values of the m used in the study are shown in Table 1 together with the adjacent correlations, which are the correlation function evaluated at i = 1. Then in the second subsection, we will look at the cases with exponential correlation, as described by Eqn. 2. The values of the a used are shown in Table 2 with the corresponding adjacent correlation. As it turns out, the behavior of the detectors are di erent under the two correlation functions. As pointed out before, the triangular correlation can be considered as a worst case correlation function because of its slow linearly decaying tail. The exponential correlation is more realistic because of its exponentially decaying tail. 3.1

Triangular Correlation

The ARE of detectors in transformation noise with Laplace marginal density and triangular correlation under constant signal are plotted in Fig. 5. Consider the case of m = 1 or zero adjacent correlation in Fig. 5. This is the independent case in which D and D do not exist and the detectors D , D and D degenerate 5

7

1

2

8

into the sign detector. As expected, D has the highest ARE and its value is close to the 1

12

theoretical value. The theoretical expression [12]

AREsign detector;matched filter = 4 f (0) 2

2

gives a value of two in independent Laplace noise with unity variance. As adjacent correlation  increases, the ARE of D increases gradually until a spike 1

appears at around  = 0:5. Then the ARE of D starts to increase very steeply. There does 1

not seem to be any simple reason why the spike should appear at that location. Two other detectors also show special behavior at  = 0:5, namely D has a spike and D has a dip. 9

5

Both D and D have similar structures. Checking the structures of the detectors, we nd 5

9

that the common structure in D , D and D is the g followed Rv, and the term-by-term 1

5

1

9

multiplication by g0. This structure is found only in these three detectors and not in others. This suggests that the special behavior at  = 0:5 is possibly due to the presence of this structure. Detector D is worse than D at all adjacent correlation levels. Compared with 5

9

other detectors, both D and D have highly uctuating ARE and their ARE are not very 5

9

large. Thus they are not considered good suboptimal detectors. When adjacent correlation is low, the optimal detector D has an ARE that is only 1

slightly larger than the ARE of D and D . Actually, D , D , D or even D have an 8

2

3

10

11

9

ARE that is fairly close to the optimal curve of D . In other words, the optimal detector 1

can be closely approximated in performance by suboptimal detectors at low correlation. However, when adjacent correlation is high, the optimal detector is much better than any suboptimal detectors considered. This suggests that it may be worthwhile to construct the more complicated optimal detector if it is known that the noise environment has high 13

adjacent correlation. But if the noise correlation is low, we can use structurally simpler suboptimal detectors to have close-to-optimal performances. As for the performance pro les of the other detectors, we can see that D , D , D , 2

3

8

D , D and possibly D are the relatively good ones, with considerable larger ARE than 10

11

12

D , D and D . Detectors D , D , D and D are considered the relatively poor detectors 4

6

7

4

5

6

7

because they are at the bottom of Fig. 5. A common block found in the relatively good suboptimal detectors is the glo block, which is absent in the relatively poor detectors. This suggests that the block glo is important for good performance in this case of Laplace marginal with triangular correlation. At all correlation levels, detector D is considerably better than D which in turn is 8

2

better than both D and D . Detector D is slightly better than D . Notice this order of 10

3

10

3

D , D , D and D . It will come up again in other situations. As D and D have almost 8

2

10

3

8

10

identical structure, we conclude that D is superior to D in this case. Similarly, D is 8

10

2

structurally simpler than D and yet D has better performance than D . Thus we conclude 3

2

3

that D is superior to D . A comparison of D and D reveals that, while D is considerably 2

3

8

2

8

better in performance than D , detector D is considerably simpler in structure than D . 2

2

8

Detector D is signi cantly better than D at all correlation levels. While D is 11

12

11

poorer than both D and D at low correlation, its performance is better than D , D and 2

8

2

3

D , and is close to D at high correlation. The structure of D is more complicated than 10

8

11

D but is simpler than D . Thus the better ones among the six relatively good detectors are 2

8

D , D and D . 2

8

11

14

We now change the marginal density from Laplace noise to Johnson noise. The ARE of detectors in transformation noise with Johnson( = 1) and Johnson( = 0:8) marginals and triangular correlation under constant signal are plotted Figs. 6 and 7. Similar to the case for Laplace noise, at low correlation, the optimal detector D can 1

be approximated closely in performance by suboptimal detectors such as D , D and D . 2

8

11

Note that, unlike the Laplace case, here D has very good performance at low correlation. 11

It is slightly better than D for most of the correlation levels. At high correlation, D is 2

1

much better than any suboptimal detectors considered, as in the Laplace case. The relatively good detectors continue to be D , D , D , D , D and possibly D . This suggests that 2

3

8

10

11

12

the block glo is important for good performance. We again see the ordering of D , D , D 8

2

10

and D by performance. Note also that while D is not particularly good in both the cases 3

12

of Laplace marginal and Johnson( = 1), it is better in the Johnson( = 0:8) case. The relatively poor detectors are D , D , D and D . Both D and D are still as uctuating 5

6

7

8

9

5

as in the Laplace case. However, D have a spike instead of a dip at  = 0:5 in the case of 5

Johnson( = 0:8) marginal. From the three cases of detection of constant signal in transformation noise with triangular correlation and various marginal densities considered, we nd the same set of relatively good detectors sharing the common block of glo. This suggests that the block glo is important for good performance in detection of constant signal in transformation noise with triangular correlation.

15

3.2

Exponential Correlation

The ARE of detectors in transformation noise with Laplace marginal density and exponential correlation under constant signal are plotted in Fig. 8. Many features in Fig. 8 are similar to the corresponding case under triangular correlation in Fig. 5. When correlation is close to zero, the ARE of D is close to the 1

theoretical value of two for independent noise, verifying that the simulation is working well. Although detector D is the best detector among all detectors, it can be closely approximated 1

in performance by suboptimal detectors such as D , D and possibly D at low adjacent 2

8

11

correlation. The ARE of other detectors such as D , D and D are quite close to the optimal 3

9

10

curve of D as well. However, D is signi cantly better than all the suboptimal detectors 1

1

considered at high correlation. Its spike at  = 0:5 in the case of triangular correlation is absent here. So is the spike or dip of detectors D and D at the same location. Without 5

9

the spike, D is a very poor detector at high correlation. The four detectors D , D , D 9

4

5

6

and D are still relatively poor. The relatively good detectors are still D , D , D , D , D 7

2

3

8

10

11

and possibly D . This again suggests that the block glo is important for good performance 12

in transformation noise with Laplace marginal and exponential correlation. We also observe the ordering of D , D , D and D by performance. 8

2

10

3

We now change the marginal density from Laplace to Johnson noise. The two values, 0.8 and 1, of the parameter  are again chosen for study. The ARE of detectors in transformation noise with Johnson( = 1) and Johnson( = 0:8) marginals and exponential correlation under constant signal are plotted in Figs. 9 and 10. 16

Figs. 9 and 10 are similar to Fig. 8 of the Laplace case. The important features are that the optimal detector D can be fairly closely approximated by suboptimal detectors 1

such as D , D and D , except at high correlation. The relatively good suboptimal detectors 2

8

11

are still D , D , D , D , D and possibly D as in the triangular case. This again suggests 2

3

8

10

11

12

that the block glo is important for good performance. The ordering of D , D , D and 8

2

10

D by performance can still be seen. Note that D is still one of the best suboptimal 3

8

detectors at all correlation levels. As in the case of triangular correlation, D is quite good 12

in Johnson( = 0:8) marginal though it may not be very good in Laplace or Johnson( = 1) marginals. From the three cases of detection of constant signal in transformation noise with exponential correlation and various marginal densities considered, we again nd the same set of relatively good suboptimal detectors sharing the common block of glo. This suggests that the block glo is important for good performance in the detection of constant signal in transformation noise with exponential correlation.

17

4 Simulation Results with Oscillating Signal In this section, we will look at the simulated ARE of the twelve detectors for oscillating signal and transformation noise with various marginal densities and correlation functions. As in the previous section, we will show that conclusions can be very di erent when a signal choice other than the constant signal is used, indicating the importance of signal choice in a detection problem. In the rst subsection, we will look at the cases with triangular correlation. In the second subsection, we will look at the cases with exponential correlations. 4.1

Triangular Correlation

The ARE of detectors in transformation noise with Laplace marginal density and triangular correlation under oscillating signal are plotted in Fig. 11. As expected, the ARE of D in the special case of independent noise is close to the 1

theoretical value of two. At low correlation, when adjacent correlation is less than around 0.4, most of the detectors have similar performances. As correlation increases, a large upward jump can be found in the detectors D , D , D , D , D , D and D . After the jump, the 1

5

6

7

8

9

10

ARE gradually decrease. The seven detectors with this behavior have essentially the same ARE at all correlation levels. In other words, there exists suboptimal detectors that closely approximate the performance of the optimal detector at all correlation levels. The other ve detectors have relatively small ARE at medium to high correlation. In this case, the relatively good suboptimal detectors are D , D , D , D , D , and 5

18

6

7

8

9

D . The relatively poor detectors are D , D , D , D and D . Note that among the good 10

2

3

4

11

12

suboptimal detectors in the detection of constant signal, only D and D remain to be good 8

10

in the detection of oscillating signal. Actually all the memoryless detectors D , D and D 2

11

12

are poor here. The D , D , D , and D , which are poor in the constant signal case, are good 5

6

7

9

in this oscillating signal case. The common structure found in the relatively good suboptimal detectors is the combination of the memoryless block g followed by the memory block Rv, , which is not 1

found in the relatively poor detectors. This suggests that the combination g followed by Rv,

1

is important for good performance in the detection of oscillating signal in transformation noise with Laplace marginal and triangular correlation. Among the relatively good detectors, the D and D pair have almost identical 5

9

performances. The same is true for the D and D pair, or the D and D pair. The 6

7

8

10

D , D pair has the simplest structure among the three pairs. Basically in any of the 6

7

three pairs, if one can be implemented, the other can also be implemented with little extra e ort. Detectors D and D may not be good because they reduce to a constant zero, which 5

7

is useless for detection purpose, in the special case of independent noise. Among the six relatively good suboptimal detectors, both D and D have slightly larger ARE 's than the 8

10

others, suggesting that these two may be slightly better than the others. We now change the marginal density from Laplace to Johnson noise. Three values

 = 0:8; 1; 10 are used for study. The Johnson density with  = 10 is the essentially Gaussian density, included as a check. The ARE of the detectors in transformation noise with 19

Johnson() marginal and triangular correlation for oscillating signal are plotted in Figs. 12 to 14. For  = 10 when the noise is essentially Gaussian, all detectors have ARE close to unity, except for the memoryless detectors D , D and D . This is expected because in the 2

11

12

special case of Gaussian noise, the nonlinearities g, g0, and glo degenerate into

g(x) = glo(x) = x; g0(x) = 1 and detectors D , D , D and D degenerate into the matched lter D which is the optimal 1

6

8

9

4

detector in Gaussian noise. Detectors fD , D g and fD , D g are pairwise identical and 3

10

5

7

all four detectors have structures very similar to the matched lter. Both D and D 11

12

are identically zero in Gaussian noise. Apparently any reasonable detectors with memory elements have close-to-optimal performance in the essentially Gaussian situation. In the other two cases with  = 1 and  = 0:8, we have non-Gaussian noise with heavy tails. The gures for these two cases are very similar to the gure for Laplace noise. The major di erence among the three cases is the di erence in the scale of the ARE . The performance pro les of the detectors are almost identical. There exists suboptimal detectors that closely approximate the optimal detector in performance at all correlation levels: D , 5

D , D , D , D and D . The other ve detectors D , D , D , D and D are poor, 6

7

8

9

10

2

3

4

11

12

particularly at high correlation. The large jump in ARE when adjacent correlation is around 0.5 can be seen in all three cases. Again the common structure shared by all the relatively good suboptimal detectors is the combination of g followed by Rv, , which is not found in the 1

relatively poor detectors. This suggests that the combination g followed by Rv, is important 1

20

for good performance in the detection of an oscillating signal in transformation noise with triangular correlation. 4.2

Exponential Correlation

The ARE of detectors in transformation noise with Laplace marginal density and exponential correlation under oscillating signal are plotted in Fig. 15. In Fig. 15, the optimal detector D can be closely approximated by D , D and 1

8

10

possibly D . Detectors D and D are quite good at low and high correlation respectively. 9

3

9

Detectors D , D and D are considerably worse than the optimal detector at low correlation, 5

6

7

but are quite close to D at high correlation, especially D . Detector D has close-to-optimal 1

5

2

performance at low correlation, but is progressively poorer at high correlation. Both D

11

and D are poor at all correlation levels. 12

We now change the marginal density from Laplace noise to Johnson noise. The

ARE of various detectors are plotted in Figs. 16 to 18. In the case of  = 10 which is the essentially Gaussian case, we again nd that all detectors with memory have close-to-optimal performance. Only the memoryless detectors do not perform well. However, D can have 2

close-to-optimal performance at low correlation. In the cases of  = 0:8 and  = 1 in which the noise is non-Gaussian with heavy tails, the optimal detector D can be closely approximated by detectors such as D , D and 1

8

9

D . The three detectors D , D and D are again considerably poorer than the optimal 10

5

6

7

detector at low correlation but are better at high correlation. Actually D has close-to5

21

optimal performance at high correlation. As a contrast, the three detectors D , D , and 2

3

D have close-to-optimal performance at low correlation but are worse at high correlation. 11

Detectors D and D are both very poor at all correlations. 4

12

In the three cases of Laplace, Johnson( = 0:8) and Johnson( = 1) marginal densities, the relatively good detectors are D , D and D . These three detectors share the 8

9

10

common structure of g followed by Rv, . This suggests that the combination g followed by 1

Rv, is important for good performance in the detection of an oscillating signal in transfor1

mation noise with exponential correlation.

22

5 Conclusion

In this paper, we consider the simulated ARE of twelve detectors in the detection of constant and oscillating signals under transformation noise with Laplace and Johnson marginal densities and triangular and exponential correlation functions. Our goal is to address the two questions Q1 and Q2. We find that the optimal detector can be closely approximated in performance by suboptimal detectors in all the situations considered, except possibly at high adjacent correlation.

The block glo is found to be important for good performance with a constant signal, and the block combination of g followed by Rv^-1 is important for good performance with an oscillating signal. Apparently, the block g0 is the least important block in the optimal detector.

Two of the detectors, D8 and D10, are found to be consistently among the relatively good suboptimal detectors in all the cases considered, with D8 performing better than D10 in the constant signal cases. The two detectors have almost identical structures. Thus D8 emerges as the best suboptimal detector considered. Given the importance of glo in constant signal detection and the importance of the combination of g followed by Rv^-1 in oscillating signal detection, any detector with good performance for arbitrary signal choices cannot be much simpler than D8.

We also find that the structurally very simple detector D2 is good at low correlation: it closely approximates the optimal detector in performance when the adjacent correlation is less than 0.2.

6 Appendix

In this appendix, we show that any non-Gaussian transformation noise with an even symmetric marginal density cannot achieve the triangular correlation of Eqn. 1. Let N(t) be a second order stationary transformation noise generated from an underlying multivariate Gaussian process V(t), so that

N(t) = g^-1(V(t))

with g^-1 defined as

g^-1(x) = F^-1(Φ(x))

where F and Φ are the marginal cumulative distribution functions of N(t) and V(t) respectively. The case we consider here is not the degenerate case, so g is not a linear function. Without loss of generality, we assume that both N(t) and V(t) have zero mean and unit variance. It can be shown easily that the even symmetry of the marginal density of N(t) implies that g^-1 is odd symmetric.
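As a concrete sketch (not taken from the paper; the function names and the unit-variance Laplace marginal are illustrative choices), the mapping g^-1(x) = F^-1(Φ(x)) above can be built directly from its definition and checked for zero mean, unit variance and odd symmetry. An AR(1) underlying Gaussian process gives ρv an exponential correlation, matching the exponential-correlation cases discussed earlier.

```python
import math
import numpy as np

def phi_cdf(x):
    """Standard Gaussian CDF Phi(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def laplace_inv_cdf(u):
    """Inverse CDF F^-1(u) of a zero-mean Laplace law with unit variance
    (scale b = 1/sqrt(2), so that the variance 2*b^2 equals 1)."""
    b = 1.0 / math.sqrt(2.0)
    return -b * math.copysign(1.0, u - 0.5) * math.log1p(-2.0 * abs(u - 0.5))

def g_inv(x):
    """g^-1(x) = F^-1(Phi(x)): maps a standard Gaussian sample to a Laplace one."""
    return laplace_inv_cdf(phi_cdf(x))

# Transform a correlated Gaussian AR(1) path V(t) into transformation noise N(t).
rng = np.random.default_rng(0)
rho = 0.7                                     # adjacent correlation of V(t)
v = np.empty(100_000)
v[0] = rng.standard_normal()
for t in range(1, v.size):
    v[t] = rho * v[t - 1] + math.sqrt(1.0 - rho**2) * rng.standard_normal()
n = np.array([g_inv(x) for x in v])

print(float(n.mean()), float(n.var()))        # both should be close to 0 and 1
print(abs(g_inv(1.3) + g_inv(-1.3)) < 1e-12)  # odd symmetry of g^-1
```

The odd symmetry of g^-1 follows exactly as in the text: Φ(-x) = 1 - Φ(x) and the even Laplace density give F^-1(1 - u) = -F^-1(u).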

Let ρn and ρv be the autocorrelation functions of the transformation noise and the underlying noise, respectively. Assume now that ρn is triangular as in Eqn. 1. We will show that this assumption implies a contradiction. The two correlation functions are related by

ρn(τ) = Σ_{i=0}^∞ b_i^2 ρv^i(τ)

where

g^-1(x) = Σ_{i=0}^∞ b_i H_i(x)

with H_i the orthonormal Hermite polynomials, and

b_i = ∫_{-∞}^{∞} g^-1(x) H_i(x) φ(x) dx

where φ is the standard Gaussian density. Note that we have

Σ_{i=0}^∞ b_i^2 = Σ_{i=0}^∞ b_i^2 ρv^i(0) = ρn(0) = 1.

And

b_i = 0 for even i

because φ is even, g^-1 is odd, and H_i is even for even i.
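These properties of the b_i are easy to verify numerically (a sketch under our own assumptions: the unit-variance Laplace marginal of the simulations, a Gauss–Hermite quadrature we chose, and illustrative function names). The coefficients recovered by quadrature satisfy Parseval's identity Σ b_i^2 = 1, the even-index coefficients vanish, and the induced series for ρn obeys the inequality proved in Claim 6.1 below:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def g_inv(x):
    """g^-1 for a unit-variance Laplace marginal (illustrative choice).

    Clipping keeps the inverse CDF finite at the extreme quadrature nodes,
    where Phi(x) rounds to 0 or 1 in double precision.
    """
    u = np.clip(0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0))),
                1e-12, 1.0 - 1e-12)
    b = 1.0 / math.sqrt(2.0)
    return -b * np.sign(u - 0.5) * np.log1p(-2.0 * np.abs(u - 0.5))

# Gauss-Hermite_e nodes/weights integrate against exp(-x^2/2); rescale so the
# quadrature integrates against the standard Gaussian density phi.
nodes, weights = He.hermegauss(120)
weights = weights / math.sqrt(2.0 * math.pi)

def b_coef(i):
    """b_i = integral of g^-1(x) H_i(x) phi(x) dx, with the orthonormal
    Hermite polynomials H_i = He_i / sqrt(i!)."""
    Hi = He.hermeval(nodes, [0.0] * i + [1.0]) / math.sqrt(math.factorial(i))
    return float(np.sum(weights * g_inv(nodes) * Hi))

b = np.array([b_coef(i) for i in range(26)])

print(float(np.sum(b**2)))                    # Parseval: close to rho_n(0) = 1
print(bool(np.max(np.abs(b[0::2])) < 1e-8))   # even-index b_i vanish

# Claim 6.1 for the induced series rho_n(tau) = sum_i b_i^2 rho_v^i(tau):
rv = np.linspace(-0.99, 0.99, 199)
rn = sum(b[i] ** 2 * rv ** i for i in range(26))
print(bool(np.all(np.abs(rn) <= np.abs(rv) + 1e-9)))
```

Truncating the series at i = 25 is enough here because the b_i decay quickly for this smooth, quadratically growing g^-1.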

Claim 6.1 The two correlation functions ρv and ρn are related by

|ρn(τ)| ≤ |ρv(τ)|    (3)

with equality iff ρv(τ) = 0 or |ρv(τ)| = 1.

Proof: For 0 ≤ ρv(τ) ≤ 1, we have

ρn(τ) = Σ_{i=0}^∞ b_i^2 ρv^i(τ) ≤ (Σ_{i=0}^∞ b_i^2) ρv(τ) = ρv(τ)

with equality iff ρv = 0 or ρv = 1. For -1 ≤ ρv(τ) ≤ 0, we have

0 ≥ ρn(τ) = Σ_{i=0}^∞ b_{2i+1}^2 ρv^{2i+1}(τ) ≥ (Σ_{i=0}^∞ b_{2i+1}^2) ρv(τ) = ρv(τ)

with equality iff ρv = 0 or ρv = -1. Therefore Eqn. 3 is true for all 0 ≤ |ρv(τ)| ≤ 1, and Claim 6.1 is proved.

Separately, there is a theorem [13] that states that the relation

f(m/2) ≤ (1/2) f(0)    (4)

is true for any symmetric non-negative definite function f with support on (-m, m). The assumption that ρn is triangular implies that

ρv(0) = ρn(0) = 1 and ρv(m) = ρn(m) = 0,

where the latter follows because every term b_{2i+1}^2 ρv^{2i+1}(τ) of the series has the same sign as ρv(τ), so ρn(τ) = 0 forces ρv(τ) = 0; by the same argument ρv, like ρn, has support on (-m, m). Applying Claim 6.1 at τ = m/2, where the equality cases ρv = 0 and |ρv| = 1 are ruled out because ρn(m/2) = 1/2, we get

ρv(m/2) > ρn(m/2) = (1/2) ρn(0) = (1/2) ρv(0)

which contradicts Eqn. 4 applied to ρv. Therefore ρn cannot be a triangular function.
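The discrete-grid version of the theorem in Eqn. 4 is easy to check numerically (a plausibility sketch under our own discretization, not Tyan's proof): sampling a normalized correlation function at spacing m/2, the constraint ρ(τ) = 0 for |τ| ≥ m makes the k×k correlation matrix tridiagonal Toeplitz, with smallest eigenvalue 1 - 2ρ(m/2)cos(π/(k+1)), which turns negative for large k whenever ρ(m/2) > (1/2)ρ(0).

```python
import numpy as np

def min_eig_tridiag(rho_half, k):
    """Smallest eigenvalue of the k x k correlation matrix of samples spaced
    m/2 apart, for a normalized correlation function with rho(0) = 1,
    rho(m/2) = rho_half and rho(tau) = 0 for |tau| >= m."""
    R = np.eye(k) + rho_half * (np.eye(k, k=1) + np.eye(k, k=-1))
    return float(np.linalg.eigvalsh(R).min())

# The triangular value rho(m/2) = 1/2 stays nonnegative definite on any grid,
# while any value above 1/2 eventually violates nonnegative definiteness.
print(min_eig_tridiag(0.50, 50) > 0.0)   # 1 - cos(pi/51) > 0 -> True
print(min_eig_tridiag(0.55, 50) < 0.0)   # 1 - 1.1*cos(pi/51) < 0 -> True
```

This is exactly the mechanism by which the triangular ρn sits on the boundary: its value 1/2 at τ = m/2 is the largest allowed, so the strictly larger ρv(m/2) derived above cannot be a valid correlation.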


References

[1] A. Martinez, P. Swaszek and J.B. Thomas, "Locally Optimal Detection in Multivariate Non-Gaussian Noise," IEEE Trans. on Information Theory, Vol. IT-30, No. 6, pp. 815-822, Nov. 1984.

[2] S.A. Kassam, Signal Detection in Non-Gaussian Noise, Springer-Verlag, New York, 1987.

[3] O.C. Au, "A Performance Comparison of Detectors in Transformation Noise," Proc. of 24th Conf. on Information Sciences and Systems, pp. 404-409, Mar. 1990.

[4] O.C. Au and J.B. Thomas, "On Transformation Noise: Properties and Modeling," Journal of the Franklin Institute, Vol. 330, No. 4, pp. 707-720, Jul. 1993.

[5] J.B. Thomas, An Introduction to Communication Theory and Systems, Springer-Verlag, New York, 1988.

[6] H.V. Poor, An Introduction to Signal Detection and Estimation, Springer-Verlag, New York, 1988.

[7] E.L. Lehmann, Theory of Point Estimation, John Wiley and Sons, New York, 1983.

[8] N.L. Johnson and S. Kotz, Continuous Univariate Distributions-2, John Wiley and Sons, New York, 1970.

[9] E.J. Modugno III, The Detection of Signals in Impulsive Noise, PhD Thesis, Princeton University, 1982.

[10] N.L. Johnson, "Systems of Frequency Curves Generated by Methods of Translation," Biometrika, Vol. 36, pp. 149-176, 1949.

[11] N.L. Johnson, Distributions in Statistics, Vol. 2, John Wiley and Sons, New York, 1970.

[12] S.A. Kassam and J.B. Thomas, "Dead-Zone Limiter: An Application of Conditional Tests in Nonparametric Detection," J. Acoust. Soc. Am., Vol. 60, No. 4, pp. 857-862, Oct. 1976.

[13] S.G. Tyan, The Structure of Bivariate Distribution Functions and Their Relation to Markov Process, PhD Thesis, Princeton University, 1975.
