Hierarchical Closely-Spaced Object (CSO) Resolution for IR Sensor Surveillance

Daniel Macumber, Sabino Gadaleta, Allison Floyd, and Aubrey Poore
Numerica Corporation, P.O. Box 271246, Fort Collins, CO 80527

ABSTRACT

The observation of closely-spaced objects using limited-resolution Infrared (IR) sensor systems can result in merged object measurements on the focal plane. These Unresolved Closely-Spaced Objects (UCSOs) can significantly hamper the performance of surveillance systems. Algorithms are desired which robustly resolve UCSO signals such that (1) the number of targets, (2) the target locations on the focal plane, (3) the uncertainty in the location estimates, and (4) the target intensity signals are correctly preserved in the resolution process. Furthermore, decomposition of UCSO objects must be done in a way which will not overwhelm a tracking system in the event of a sudden increase in the number of objects. This paper presents a framework for obtaining UCSO resolution while meeting tracker real-time computing requirements by applying processing algorithms in a hierarchical fashion. Image restoration techniques, which are often computationally cheap, are applied first to reduce noise and improve resolution of UCSO objects on the focal plane. The CLEAN algorithm, developed to restore images of point targets, is used for illustration. Then, when processor constraints allow, more intensive algorithms are applied to further resolve UCSO objects. A novel pixel-cluster decomposition algorithm that uses a particle distribution representative of the pixel-cluster intensities to feed the Expectation Maximization (EM) algorithm is used in this work. We present simulation studies that illustrate the capability of this framework to improve correct object count on the focal plane while meeting the four goals listed above. In the presence of processing time constraints, the hierarchical framework provides an interruptible mechanism which can satisfy real-time run-time constraints while improving tracking performance.

Keywords: Infrared Sensor Surveillance, Pixel (Clump) Cluster Tracking, Single and Multi-Assignment, Pixel-Cluster Decomposition, Multiple Hypothesis Pixel-Cluster Decomposition

1. INTRODUCTION

When viewing point targets with IR sensors, pixels on the sensor focal plane above a certain threshold are taken to represent objects of interest and are input to a tracking system for position estimation and target recognition. Measured signals from Closely Spaced Objects (CSOs) may overlap on the sensor focal plane, forming a connected pixel-cluster which represents multiple objects. These Unresolved Closely Spaced Objects (UCSOs) both reduce the effective resolution of the measurement and obscure the number of true targets in the scene. Multiple Hypothesis Pixel-Cluster Decomposition (MHPCD) tracking methods have been developed1 to track UCSO objects but tend to grow computationally expensive as the number of pixel-cluster decomposition hypotheses increases. In addition, these methods do not allow individual objects within a UCSO to be identified early, which is critical for target recognition. An approach which can accurately decompose UCSOs early in flight is desired. However, it is not always advantageous to immediately decompose all objects in the scene, as a sudden, large increase in the number of reported objects may overwhelm the track initiation problem. Therefore, this paper proposes an interruptible, hierarchical approach to pixel-cluster decomposition. In this approach, relatively cheap image restoration algorithms are used first to increase sensor resolution as CSO objects begin to form in the scene. Then, after the initial computational needs of track initiation decrease, more complicated CSO algorithms are applied which further decompose UCSO objects. These algorithms may be applied to individual pixel-clusters based on the needs and computational ability of the tracker. Furthermore, data from other sensors and the global track database may be used to aid the UCSO decomposition by providing prior decomposition probabilities which are incorporated into the CSO algorithm.

Figure 1 illustrates the components in the single-sensor image processing chain. The recorded image is first improved through an image restoration or deconvolution algorithm, taking as input a focal plane image and giving as output an improved image. The process of detection eliminates pixels with intensities below a threshold and groups remaining signal pixels into object returns. CSO algorithms are then applied to candidate UCSO pixel-clusters and yield a set of (potentially resolved) object returns.

Further author information: (Send correspondence to A.F. or A.P.)
A.F.: E-mail: [email protected], Telephone: (970) 419 8343
A.P.: E-mail: [email protected], Telephone: (970) 419 8343

[Figure 1 diagram: the Single-Sensor Signal Processing Chain. A focal plane image is passed through Image Restoration/Deconvolution, then Detection; resolved object pixel-clusters are input to tracking, while unresolved object pixel-clusters are passed to CSO Resolution before being input to tracking.]

Figure 1: Image processing components for single sensor data.

This paper presents benefits of the proposed framework to the single-sensor tracking problem. The aim is to improve the quality of data from individual sensors before using MHPCD algorithms at a central fusion node. As mentioned above, as MHPCD algorithms using data from multiple sensors converge to give a best estimate of the number of true targets in the scene, this information can be fed back into the single-sensor CSO processing for improved decomposition. Section 2 provides a review of the sensor model used in our studies. A brief overview of the UCSO problem is given in Section 3. Then a review of the popular CLEAN image restoration technique is given in Section 4. An overview of three CSO algorithms, which are extensions of the UCSO particle-cluster decomposition,1 is given in Section 5. Results in Section 6 show that application of image restoration and CSO algorithms in a hierarchical manner can increase the number of correctly resolved targets in a manner consistent with the needs of tracking.

2. DESCRIPTION OF INFRARED SENSOR MODEL

Numerica has developed a medium-fidelity IR sensor model that is integrated with its Algorithm Simulator for Tracking and Observations (ALTO) Matlab prototyping and simulation environment. The model incorporates the following features: (1) received irradiance as a function of target intensity and range with Gaussian noise amplitude variation, (2) sensor and detection noise modeling through noise floor, Gaussian, and Poisson noise components, (3) adaptive thresholding based on a specified Signal-to-Noise Ratio (SNR), and (4) a merged measurement model via a Gaussian Point Spread Function (PSF) to simulate optical blur and sensor jitter. The sensor model simulates point targets and provides measurements to the tracker in the form of intensity-weighted focal plane centroids and covariances. The following briefly provides some technical details of the sensor model.

Optical Radiation. Every body emits radiation due to its internal heat energy. The radiation can be characterized by its wavelength λ or its frequency ν, which are related according to the law λ · ν = c, where c refers to the velocity of light in a vacuum, c = 299,792,458 m/s. The Long-Wavelength Infrared (LWIR) portion of this radiation, λ ∈ [5µm, 1000µm], is used in our studies. Planck's law2 describes the amplitude of radiation emitted from a blackbody as a function of radiation wavelength λ and thermodynamic temperature of the emitter T:

Lλ(λ, T) = (2hc²/λ⁵) · 1/(exp(hc/(λkT)) − 1)  [W · m⁻³ · sr⁻¹],  (1)

with the Boltzmann constant k = 1.3806503 × 10⁻²³ J · K⁻¹ and the Planck quantum of action h = 6.62606876 × 10⁻³⁴ J · s. The spectral radiance over a specific wavelength interval [λ1, λ2] is:

Lλ1−λ2(T) = ∫_{λ1}^{λ2} Lλ(λ, T) dλ.  (2)

The integral needs to be solved numerically, for example by using Simpson's rule.3 The radiant intensity I for a graybody over a given wavelength interval, for an object of effective surface area Aeff and emissivity ε, is then:

Iλ1−λ2(T) = ε Aeff Lλ1−λ2(T)  [W · sr⁻¹].  (3)
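The band radiance of Equation (2) and the graybody intensity of Equation (3) can be sketched as follows; this is an illustrative stand-alone computation, not the ALTO implementation, using composite Simpson's rule as suggested above.

```python
import math

# Physical constants with the values quoted in the text.
H = 6.62606876e-34   # Planck constant [J s]
C = 299792458.0      # speed of light in vacuum [m/s]
K = 1.3806503e-23    # Boltzmann constant [J/K]

def planck_radiance(lam, T):
    """Spectral radiance L_lambda(lam, T) of Eq. (1), in W m^-3 sr^-1."""
    return (2.0 * H * C ** 2 / lam ** 5) / (math.exp(H * C / (lam * K * T)) - 1.0)

def band_radiance(lam1, lam2, T, n=4001):
    """Integrate Eq. (1) over [lam1, lam2] with composite Simpson's rule (Eq. 2)."""
    if n % 2 == 0:
        n += 1  # Simpson's rule needs an even number of intervals
    h = (lam2 - lam1) / (n - 1)
    total = planck_radiance(lam1, T) + planck_radiance(lam2, T)
    for i in range(1, n - 1):
        total += (4.0 if i % 2 == 1 else 2.0) * planck_radiance(lam1 + i * h, T)
    return total * h / 3.0

def radiant_intensity(eps, area, lam1, lam2, T):
    """Graybody radiant intensity over the band (Eq. 3), in W/sr."""
    return eps * area * band_radiance(lam1, lam2, T)

# Example: graybody at T = 300 K with emissivity 0.5 (the paper's target
# parameters) and an assumed 1 m^2 effective area, over the 5-1000 um band.
I = radiant_intensity(0.5, 1.0, 5e-6, 1000e-6, 300.0)
```

A Stefan-Boltzmann sanity check: at 300 K the total radiance is about 146 W m⁻² sr⁻¹, and almost all of it falls inside the 5-1000 µm band, so I comes out near 72 W/sr for these parameters.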

In this paper we simulate targets with ε = 0.5 and T = 300 K. We do not simulate atmospheric or other clutter effects.

Irradiance at Aperture Entrance. The amount of energy incident on the sensor per unit area is referred to as the irradiance E, and it can be approximated as being directly proportional to the radiant intensity I, E = τI/r² [W/m²], where r is the distance between the source and the sensor, and τ is the optical transmittance.2 The total amplitude A received from a point target by a circular sensor is then given by:

A = ∫_{aperture area} E ds = (τπD²/4)(I/r²)  [W],  (4)

with the aperture diameter D of the optics and the optical transmittance τ of the system.

Point Spread Function. An important characteristic of any optical system is its Point Spread Function (PSF). The PSF is the output of the imaging system for an input point source.4 Although a PSF will typically vary from pixel to pixel and over time, a two-dimensional symmetric Gaussian distribution is often used to approximate the PSF of an optical system. Assuming that xk, yk represent continuous coordinates of a unit-amplitude point source viewed on the focal plane, the PSF is given by:

i(x, y) = (1/(2πσ²psf)) exp(−((x − xk)² + (y − yk)²)/(2σ²psf)),  (5)

with blur width parameter σpsf.5 Assuming a square detector cell with length ∆ and with center located at (xc, yc), the unit-amplitude detector response g is given by an integration of the PSF over the pixel area:

g(xc, yc) = ∫_{yc−0.5∆}^{yc+0.5∆} ∫_{xc−0.5∆}^{xc+0.5∆} i(x, y) dx dy.  (6)

Factoring i(x, y):

i(x, y) = i(x)i(y),  i(x) = (1/(√(2π)σpsf)) exp(−(x − xk)²/(2σ²psf)),  i(y) = (1/(√(2π)σpsf)) exp(−(y − yk)²/(2σ²psf)),  (7)

this integral can be expressed in the form:

g(xc, yc) = g(xc)g(yc) = (∫_{xc−0.5∆}^{xc+0.5∆} i(x) dx)(∫_{yc−0.5∆}^{yc+0.5∆} i(y) dy).  (8)

We can express the integral g(xc) through the error function:

Φ(x) = (2/√π) ∫_0^x exp(−z²) dz,  (9)

using a substitution u = (x − xk)/(√2 σpsf), to get, after some manipulation:

g(xc) = ½ (Φ(−u1) − Φ(−u2)) = ½ [Φ((xk − xc + 0.5∆)/(√2 σpsf)) − Φ((xk − xc − 0.5∆)/(√2 σpsf))].  (10)

The result for g(yc) is identical with x being replaced by y.

The size of the point spread function is primarily influenced by optics blur, sensor jitter, and platform movement. For simulation purposes we assume that jitter results in a symmetric point spread function and ignore blur due to platform movement. It follows that we can express the size of the point spread function as σ²psf = σ²blur + σ²jitter. We set the parameter σ²jitter = 4.2E-8. σ²blur, i.e., the size of the PSF due to optics alone, is related to the Energy-on-Detector (EOD), which is defined as the fraction of the energy of a point source integrated within a single detector. Assuming that the source image falls at the detector center6:

EOD = [Φ(∆/(2√2 σblur))]²,  such that  σblur = ∆/(2√2 Φ⁻¹(√EOD)),  (11)

where Φ⁻¹ denotes the inverse error function.
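A minimal sketch of Equations (8), (10), and (11), assuming pixel length ∆ = 1 and using the standard normal inverse CDF to obtain the inverse error function (Python's standard library has no direct erfinv):

```python
import math
from statistics import NormalDist

SQRT2 = math.sqrt(2.0)

def g_1d(center, source, sigma, delta=1.0):
    """Unit-amplitude detector response along one axis (Eq. 10)."""
    a = (source - center + 0.5 * delta) / (SQRT2 * sigma)
    b = (source - center - 0.5 * delta) / (SQRT2 * sigma)
    return 0.5 * (math.erf(a) - math.erf(b))

def pixel_response(xc, yc, xk, yk, sigma, delta=1.0):
    """Separable pixel response g(xc, yc) = g(xc) g(yc) (Eq. 8)."""
    return g_1d(xc, xk, sigma, delta) * g_1d(yc, yk, sigma, delta)

def sigma_blur_from_eod(eod, delta=1.0):
    """Invert Eq. (11): blur width implied by a given Energy-on-Detector.
    Uses the identity erfinv(p) = Phi_norm^-1((1 + p) / 2) / sqrt(2)."""
    erfinv = NormalDist().inv_cdf((1.0 + math.sqrt(eod)) / 2.0) / SQRT2
    return delta / (2.0 * SQRT2 * erfinv)

# A source centered on a pixel deposits EOD of its energy in that pixel;
# round-tripping through Eq. (11) recovers the assumed blur width.
sigma = 0.5
eod = pixel_response(0.0, 0.0, 0.0, 0.0, sigma)
sigma_rt = sigma_blur_from_eod(eod)
```

For σblur = 0.5 pixel the centered-source EOD is about 0.466, and the inversion returns 0.5 exactly (up to floating-point error), confirming that Equation (11) is the square of the 1-D response evaluated at the pixel center.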

Linear Sensor System Model. We now express the focal-plane sensor response as a linear model, given n point sources at locations (xi, yi), i = 1, ..., n. Then, the focal plane response can be modeled in the form7 s = Ga + n, with the spreading matrix (or steering matrix):

G(x, y) = [g(u, v, x1, y1) g(u, v, x2, y2) ... g(u, v, xn, yn)],  u = 1, ..., umax,  v = 1, ..., vmax.  (12)

The spreading matrix is a (umax vmax × n) matrix, assuming that the focal plane has umax pixels indexed u = 1, ..., umax along the x coordinate and vmax pixels indexed v = 1, ..., vmax along the y coordinate*. The spread functions g(u, v, x, y) are similar to the unit-amplitude responses g(xc, yc) above, but now indexed by pixel indices. The model can be generalized to include a non-active pixel-area width.7 The amplitude vector a can be modeled as7

a = (τπD²/4) [I1/r1² ··· In/rn²]ᵀ,

with radiant intensity Ij of point target j in units of W/sr, and distance rj of point target j to the sensor in units of meters. The vector n represents an additive noise vector. In the general case, image noise arises from sources including background clutter, atmospheric turbulence, quantum variations in source and background intensity (photon noise), and electronic noise in the IR detector components. While most noise components are generally assumed Gaussian, some of these noise sources are Poisson processes. Background clutter may not fall into any kind of easily-described statistics. For the purposes of this study, we will model noise as a linear combination of a noise floor, a Gaussian component, and a Poisson component: n = n0 + nGaussian + nPoisson. The mean of these distributions is specified by a Signal-to-Noise Ratio (SNR).
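The linear model s = Ga can be sketched as below. This is an illustrative construction, with pixel centers placed at half-integer coordinates (an assumption; the paper indexes pixels from 1) and the noise term omitted.

```python
import math

def g_1d(center, source, sigma, delta=1.0):
    """1-D unit-amplitude pixel response (Eq. 10)."""
    a = (source - center + 0.5 * delta) / (math.sqrt(2) * sigma)
    b = (source - center - 0.5 * delta) / (math.sqrt(2) * sigma)
    return 0.5 * (math.erf(a) - math.erf(b))

def spreading_matrix(sources, umax, vmax, sigma):
    """Build the (umax*vmax x n) spreading matrix G of Eq. (12).
    Each column holds the pixel responses of one point source."""
    G = []
    for u in range(umax):
        for v in range(vmax):
            G.append([g_1d(u + 0.5, x, sigma) * g_1d(v + 0.5, y, sigma)
                      for (x, y) in sources])
    return G

def focal_plane_signal(G, a):
    """Noise-free part of the linear model: s = G a."""
    return [sum(gij * aj for gij, aj in zip(row, a)) for row in G]

# Two unit-amplitude point sources on an 8x8 focal plane.
G = spreading_matrix([(3.0, 3.0), (5.0, 5.0)], 8, 8, sigma=0.7)
s = focal_plane_signal(G, [1.0, 1.0])
```

Because each column of G integrates the PSF over the pixel grid, a source well inside the plane contributes nearly unit total energy, so the sum of s is close to 2 here.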

SNR can be defined as SNR = |s|²/|n|². We will follow the definition of Ref. 7 and replace the total signal with the average signal per pixel:

SNR = 10 log10(ŝ²pixel / σn²),  with  ŝ²pixel = (Σ_{i=1}^{umax vmax} si²) / (umax vmax),  (13)

where SNR is expressed in dB. This formula allows one to express the variance of the noise term as:

σn² = ŝ²pixel / 10^{SNR/10}.  (14)
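A sketch of the noise model and of Equations (13)-(14). The equal weighting of the floor, Gaussian, and Poisson components is an assumption for illustration; the paper only states that the total noise is their linear combination.

```python
import math
import random

def noise_sigma(signal, umax, vmax, snr_db):
    """Noise standard deviation implied by the per-pixel SNR (Eqs. 13-14)."""
    s2_pixel = sum(si * si for si in signal) / (umax * vmax)
    return math.sqrt(s2_pixel / 10.0 ** (snr_db / 10.0))

def knuth_poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the small rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def add_noise(signal, sigma, floor=0.0, lam=0.0, seed=0):
    """Apply n = n0 + n_Gaussian + n_Poisson per pixel (illustrative)."""
    rng = random.Random(seed)
    return [si + floor + rng.gauss(0.0, sigma)
            + (knuth_poisson(lam, rng) if lam > 0.0 else 0.0)
            for si in signal]

# Toy 8x8 plane: 16 bright pixels of unit signal at 20 dB per-pixel SNR.
signal = [1.0] * 16 + [0.0] * 48
sigma = noise_sigma(signal, 8, 8, snr_db=20.0)
noisy = add_noise(signal, sigma)
```

For this toy plane, ŝ²pixel = 16/64 = 0.25, so 20 dB SNR gives σn = sqrt(0.25/100) = 0.05.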

Detection. We refer to the process of thresholding and segmentation as detection. The purpose of detection is to identify connected sets of object pixels that likely correspond to targets of interest and to remove background pixels that are likely caused by noise, false targets, clutter, etc. The simplest way to extract the objects from the background is to select a threshold sth such that any pixel at location (u, v) with signal s(u, v) > sth is an object pixel, while any other pixel is a background pixel.8 The simplest approach for selecting a threshold is to choose a fixed value sth as a fraction of the maximum pixel signal, sth = αs max_{u,v} s(u, v), with a detection threshold parameter αs. An adaptive thresholding approach can be formulated as follows,8 given an initial estimate sth and a weighting term αa: (1) segment the image using sth, (2) compute µ1 as the mean signal of all object pixels and µ2 as the mean signal of all background pixels, (3) update the threshold: sth = αa µ1 + (1 − αa)µ2, and (4) repeat steps 1 through 3 until sth converges or a stopping criterion is fulfilled. The term αa allows one to weight the threshold either towards the maximum peak signal or the minimum peak signal. We use adaptive thresholding with αa = 0.35 and a starting threshold of sth = 0.1.

(*In the simulation environment, u denotes the focal plane rows and v denotes the focal plane columns.)

After the detection step, the image consists of object pixels and background pixels. Cluster identification allows one to identify connected sets of object pixels, i.e., a pixel-cluster or a clump, that likely correspond to the return from the same target class. A widely used cluster identification algorithm is the Hoshen-Kopelman (HK) algorithm9 that identifies clusters in a single pass over the data by assigning pixels to temporary clusters and applying merge rules when two temporary clusters overlap.10 The 4-neighbor criterion is used to identify neighbors in this work. Berry et al.10 present an efficient implementation of this algorithm using a finite state machine, which is used in our IR sensor model.

Pixel-Cluster Representation. To support tracking, given a pixel-cluster, it is necessary to compute a representative measurement z with a covariance estimate R. We assume that the pixel-cluster is described in (u, v) focal plane row, column coordinates. Let (zuv, Ruv) denote the representative measurement on the u, v axes.

First we discuss the single-pixel pixel-cluster case. Let (u, v) denote the coordinate of the pixel that forms the pixel-cluster. Then:

zuv = [u − 0.5, v − 0.5]ᵀ,  Ruv = diag(σ²single pixel, σ²single pixel),  (15)

where we define σ²single pixel such that 3 sigma circumscribes a single pixel, i.e., 0.7071 pixel units: σ²single pixel = 2(0.5)²/9.
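The adaptive thresholding iteration described above can be sketched as follows; the bimodal test image and the convergence tolerance are illustrative assumptions.

```python
def adaptive_threshold(pixels, alpha_a=0.35, s_th=0.1, max_iter=100, tol=1e-9):
    """Iterative threshold selection: segment, take object/background
    means, and re-mix them with weight alpha_a until convergence."""
    for _ in range(max_iter):
        obj = [s for s in pixels if s > s_th]
        bkg = [s for s in pixels if s <= s_th]
        if not obj or not bkg:
            break  # degenerate segmentation; keep the current threshold
        mu1 = sum(obj) / len(obj)   # mean object signal
        mu2 = sum(bkg) / len(bkg)   # mean background signal
        new_th = alpha_a * mu1 + (1.0 - alpha_a) * mu2
        converged = abs(new_th - s_th) < tol
        s_th = new_th
        if converged:
            break
    return s_th

# Bimodal toy image: background near 0.05, objects near 0.9. With the
# paper's alpha_a = 0.35 the threshold settles at 0.35*0.9 + 0.65*0.05.
pixels = [0.05] * 90 + [0.9] * 10
th = adaptive_threshold(pixels)
```

Here the segmentation is stable after one update, so the loop converges to 0.3475, comfortably separating the two modes.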

To discuss the multi-pixel pixel-cluster case, let P = {(u1, v1), ..., (un, vn)} denote the set containing the n pixel coordinates of a pixel-cluster P. The signal at a coordinate (u, v) is given by s(u, v). As covariance estimate, we use a 2nd-moment, intensity-weighted estimate:

zuv := [zu, zv]ᵀ = (1/sP) [su, sv]ᵀ,  Ruv = [σu² σ²uv; σ²uv σv²],  (16)

where

sP = Σ_{(u,v)∈P} s(u, v),  su = Σ_{(u,v)∈P} s(u, v) u,  sv = Σ_{(u,v)∈P} s(u, v) v,  (17)

and

σu² = (1/sP) Σ_{(u,v)∈P} s(u, v)(u − zu)²,  σv² = (1/sP) Σ_{(u,v)∈P} s(u, v)(v − zv)²,  σ²uv = (1/sP) Σ_{(u,v)∈P} s(u, v)(u − zu)(v − zv).  (18)

For the case that the pixel-cluster has only a single row or column, we use the estimate σ²single pixel along this dimension instead of the above weighted values.
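Equations (16)-(18) can be sketched directly; the (u, v, s) tuple layout for a cluster is an assumption for illustration.

```python
def cluster_measurement(cluster):
    """Intensity-weighted centroid and 2nd-moment covariance (Eqs. 16-18).
    `cluster` is a list of (u, v, s) pixel tuples."""
    sP = sum(s for _, _, s in cluster)                      # Eq. (17)
    zu = sum(s * u for u, _, s in cluster) / sP
    zv = sum(s * v for _, v, s in cluster) / sP
    var_u = sum(s * (u - zu) ** 2 for u, _, s in cluster) / sP   # Eq. (18)
    var_v = sum(s * (v - zv) ** 2 for _, v, s in cluster) / sP
    cov_uv = sum(s * (u - zu) * (v - zv) for u, v, s in cluster) / sP
    return (zu, zv), [[var_u, cov_uv], [cov_uv, var_v]]

# Three-pixel cluster in one row: the bright middle pixel anchors the
# centroid, and the spread appears only along the column axis.
z, R = cluster_measurement([(1, 1, 1.0), (1, 2, 2.0), (1, 3, 1.0)])
```

Note that for this single-row cluster var_u is zero; as stated above, the paper substitutes σ²single pixel along such a degenerate dimension.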

We note that the CSO processing algorithms, described in Section 5, can be used to compute a covariance representation of the pixel-cluster. However, for consistency, we do not use the covariance returned by the CSO processing algorithm for representation of the cluster. Instead, the CSO processing algorithm is used only to decompose the pixel-cluster data. The above formulas are then used on each decomposed pixel-cluster for a consistent representation of parsed UCSO pixel-clusters.

The last step in producing measurement data for tracking is to transform the representative measurements (zuv, Ruv) from row, column coordinates to continuous-valued azimuth α and elevation ε coordinates. To this end, let iFOV denote the Instantaneous Field of View (IFOV), i.e., the size of a pixel measured in spherical α, ε coordinates. This transformation is then:

α = (zv − n/2) iFOV,  ε = (n/2 − zu) iFOV,  (19)

where n denotes the number of pixels along the vertical (or horizontal) direction (we assume a square detector). Furthermore, covariances may be transformed as:

σα² = σv² i²FOV,  σε² = σu² i²FOV,  σαε = σ²uv i²FOV.  (20)
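Equations (19)-(20) amount to a shift and a scale; a minimal sketch, with the 100-pixel plane and IFOV value as illustrative assumptions:

```python
def to_az_el(zu, zv, n, ifov):
    """Row/column centroid to azimuth/elevation (Eq. 19)."""
    az = (zv - n / 2.0) * ifov
    el = (n / 2.0 - zu) * ifov
    return az, el

def transform_covariance(var_u, var_v, cov_uv, ifov):
    """Covariance transform of Eq. (20): scale by the squared IFOV."""
    return var_v * ifov ** 2, var_u * ifov ** 2, cov_uv * ifov ** 2

# The focal plane center maps to the boresight (az = el = 0).
az, el = to_az_el(50.0, 50.0, 100, ifov=1e-4)
var_az, var_el, cov_azel = transform_covariance(0.5, 0.5, 0.0, 1e-4)
```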

3. MATHEMATICAL PROBLEM FORMULATION

The UCSO problems addressed in Section 1 are the result of finite focal plane resolution, imaging blur due to optics and jitter, as well as thermal, shot, and background noise. Assuming stationary blur and additive noise, a linear degradation model for the focal plane signal s(i, j) can be written in the form11:

s(i, j) = (1/(MN)) Σ_{k=1}^{M} Σ_{l=1}^{N} h(i − k, j − l) f(k, l) + n(i, j) = h(i, j) ∗ f(i, j) + n(i, j),  (21)

where f(i, j) represents the undistorted M × N image and n(i, j) represents an additive noise term, combining noise from Gaussian or Poisson statistics. The term h(i, j) represents the PSF of the imaging system, which is assumed to be spatially invariant in the above model, and the operator "∗" indicates convolution. Detection is applied as discussed in Section 2.


Figure 2. Detection processing applied to two closely spaced objects. (a) Individual PSFs of the two point targets. (b) Linear mixture of the two PSFs. (c) Observed raw measurement data in the focal plane. (d) Representative measurement data.

To illustrate how the system PSF acts to form UCSO returns, we simulate two equal-intensity point targets with coordinates on the focal plane (65,50) and (35,50). In this example the focal plane is taken to be the xy-plane5 and the PSF is Gaussian with width σpsf = 10. Figure 2(a) shows the individual PSFs of the two point targets. The detector response consists of the sum of the individual PSFs, as shown in Figure 2(b). The digitized image is formed and the resulting pixel-cluster in the focal plane is shown in Figure 2(c). In this case, imaging produces a pixel-cluster which represents two overlapping object signals. Depending on the detection threshold used, this image likely produces a UCSO measurement as shown in Figure 2(d).
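The two-target merging effect can be reproduced with a few lines. As a simplification, the sketch samples the summed PSFs at pixel centers rather than integrating the PSF over each pixel; this is adequate when σpsf spans several pixels, as it does here.

```python
import math

def gaussian_psf(x, y, xk, yk, sigma):
    """Unit-amplitude Gaussian PSF of Eq. (5)."""
    r2 = (x - xk) ** 2 + (y - yk) ** 2
    return math.exp(-r2 / (2.0 * sigma ** 2)) / (2.0 * math.pi * sigma ** 2)

def digitize(targets, size, sigma):
    """Sample the summed target irradiance at pixel centers (a cheap
    stand-in for per-pixel integration of the PSF)."""
    return [[sum(gaussian_psf(u + 0.5, v + 0.5, xk, yk, sigma)
                 for (xk, yk) in targets)
             for v in range(size)] for u in range(size)]

# Two equal-intensity point targets as in the text: (65, 50) and (35, 50)
# on a 100x100 plane with sigma_psf = 10.
img = digitize([(65.0, 50.0), (35.0, 50.0)], 100, 10.0)
mid = img[50][50]    # signal at the saddle between the two targets
peak = img[65][50]   # signal at a target location
```

With this separation the saddle signal exceeds half the peak signal, so any threshold low enough to detect both targets keeps the bridge pixels as well, yielding one connected UCSO pixel-cluster rather than two.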

4. IMAGE RESTORATION

Techniques used for image restoration commonly assume known models of image degradations, usually blur and noise, and then apply an inverse procedure to obtain an approximation of the original scene.11 This is equivalent to attempting to work backwards through the problem illustrated in Figure 2, in the presence of noise. These algorithms take as input a degraded focal plane image and give as output an improved image. Generally, these algorithms run in fixed time for a given number of pixels. Based on the model in Equation (21), one fundamental task of image restoration is to deconvolve the blurred image with the system PSF to obtain a restored image f̂. The quality of the restored image is strongly determined by knowledge of the PSF. However, knowledge of the PSF is not sufficient to guarantee accurate image restoration because the inverse problem is ill-conditioned and suffers severely from noise amplification. The second fundamental task of image restoration is therefore the rejection of noise. We first describe a procedure to estimate the PSF, assuming that it is Gaussian, from observed raw image data. Then we review several popular image restoration algorithms. Of these we select the CLEAN image restoration algorithm, which uses the estimated PSF to recover an original image of point targets in the presence of noise.

4.1. Estimating the PSF

In many cases, the PSF of a sensor will be estimated off-line, before the sensor is launched. Once launched, the actual PSF may vary due to changes in the optical properties of the sensor system. The estimate of the PSF can then be updated by comparing known point sources (e.g., stars) with the observed image. In these cases, information about the PSF may be communicated to the image restoration algorithm. However, if no information about the PSF is available, the PSF must be estimated from the image data received by the image restoration algorithm. Thus, an image restoration algorithm should be able to provide a PSF estimate given detected pixel-clusters. A simple algorithm is used to estimate a Gaussian PSF from reported pixel-clusters in this work.

Let P = {(i1, j1, I1), (i2, j2, I2), ..., (in, jn, In)} denote a pixel-cluster that is formed by connected pixels at locations (i1, j1), ..., (in, jn) with intensities I1, ..., In, respectively. Let Imax denote the maximum intensity observed within the pixel-cluster, Imax = max_{i=1,...,n} Ii. Given this pixel-cluster, we can estimate the Energy On Detector (EOD) as:

EOD ≈ Imax / Σ_{i=1,...,n} Ii.  (22)

We can repeat this calculation for all reported pixel-clusters, excluding single-pixel pixel-clusters, and maintain the maximum EOD estimate, EODmax, which corresponds to a minimum PSF spread. Given this value, we can construct a Gaussian PSF:

h(x, y) = (1/(2πσ²psf,est)) exp(−((x − xk)² + (y − yk)²)/(2σ²psf,est)),  (23)

with estimated blur width σpsf,est. Then we have6:

σpsf,est = ∆/(2√2 Φ⁻¹(√EODmax)),  (24)

where Φ denotes the error function and ∆ denotes the pixel detector cell length.
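The PSF estimation procedure of Equations (22) and (24) can be sketched as follows; the intensity lists used in the example are invented for illustration.

```python
import math
from statistics import NormalDist

def erfinv(p):
    """Inverse error function via the standard normal inverse CDF."""
    return NormalDist().inv_cdf((1.0 + p) / 2.0) / math.sqrt(2.0)

def estimate_psf_width(clusters, delta=1.0):
    """Estimate a Gaussian PSF width from detected pixel-clusters
    (Eqs. 22 and 24). Each cluster is a list of pixel intensities;
    single-pixel clusters are excluded, as described in the text."""
    eod_max = 0.0
    for intensities in clusters:
        if len(intensities) < 2:
            continue
        eod = max(intensities) / sum(intensities)   # Eq. (22)
        eod_max = max(eod_max, eod)
    return delta / (2.0 * math.sqrt(2.0) * erfinv(math.sqrt(eod_max)))

# The dominant cluster's brightest pixel carries ~46.6% of its energy,
# which (cf. Eq. 11 with a centered source) implies sigma ~ 0.5 pixel.
sigma_est = estimate_psf_width([[0.466, 0.2, 0.2, 0.134],
                                [0.3, 0.35, 0.35]])
```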

4.2. Review of Common Image Restoration Methods

Many algorithms implement robust image restoration procedures in the presence of noise. Three popular examples are the Wiener Filter,8,12 the Richardson-Lucy Algorithm,13,14 and the CLEAN algorithm.15 The Wiener filter operates much the same as direct inverse filtering, but uses Tikhonov regularization to impose stability constraints on the solution. The Richardson-Lucy algorithm maximizes the likelihood that the convolution of the restored image with the estimated PSF results in the original raw image, assuming Poisson noise statistics. The CLEAN algorithm is designed especially for point targets and iteratively subtracts the estimated PSF signal from the dirty beam in order to form a restored image or clean beam. All of the algorithms mentioned above, and many others not mentioned, have their own unique advantages and disadvantages. However, we select the Högbom implementation of the CLEAN algorithm15 for this work because it is designed specifically to improve images of point targets, which is an appropriate approximation for targets in midcourse IR sensor measurements. Let I = {s(i, j)}, i = 1, ..., M, j = 1, ..., N, denote a matrix that contains the corrupted signals. The algorithm proceeds as follows:

ALGORITHM 1 (CLEAN).
1. Find the strength and position of the maximum signal pixel in I.
2. Subtract from I an estimated PSF signal, centered at the peak and multiplied by the peak strength (and possibly a damping factor ≤ 1).
3. Unless the next remaining peak is below a user-specified fraction of the highest (first) peak, Imax, return to Step 1.
4. Construct an image from the removed peaks by convolving the accumulated point source model with the "sharp" Gaussian, that is, with σpsf,sharp < σpsf,est. This produces the "CLEAN" image.
5. Add the remaining residuals to the CLEAN image to obtain the final restored image.

We use 0.5σpsf,est for the spread of the "sharp" Gaussian PSF, and 0.4Imax as the termination threshold. Note that the CLEAN algorithm does not actively test whether signal pixels are likely to be noise or not. However, by "sharpening" the strongest signals in the image, the CLEAN algorithm raises the noise threshold which is then applied to the image by detection, allowing for rejection of noise signals.
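Algorithm 1 can be sketched as below. This is a minimal Högbom-style loop under simplifying assumptions: a unit-peak Gaussian PSF sampled at integer offsets, no sub-pixel peak fitting, and no clean window.

```python
import math

def make_psf(sigma):
    """Unit-peak Gaussian PSF sampled on integer pixel offsets."""
    def psf(du, dv):
        return math.exp(-(du * du + dv * dv) / (2.0 * sigma * sigma))
    return psf

def clean(image, sigma_est, gain=1.0, frac=0.4, max_iter=200):
    """Hogbom CLEAN sketch (Algorithm 1) on a list-of-lists image."""
    rows, cols = len(image), len(image[0])
    dirty = [row[:] for row in image]          # residual ("dirty") image
    psf = make_psf(sigma_est)
    sharp = make_psf(0.5 * sigma_est)          # step 4: "sharp" Gaussian
    components, first_peak = [], None
    for _ in range(max_iter):
        peak, pu, pv = max((dirty[u][v], u, v)            # step 1
                           for u in range(rows) for v in range(cols))
        if first_peak is None:
            first_peak = peak
        if peak < frac * first_peak:                      # step 3
            break
        components.append((pu, pv, gain * peak))
        for u in range(rows):                             # step 2
            for v in range(cols):
                dirty[u][v] -= gain * peak * psf(u - pu, v - pv)
    # steps 4-5: restore components with the sharp beam, add residuals
    restored = [[dirty[u][v] + sum(a * sharp(u - cu, v - cv)
                                   for cu, cv, a in components)
                 for v in range(cols)] for u in range(rows)]
    return restored, components

# Two well-separated unit-amplitude point sources, blurred by the PSF.
truth = [(5, 5), (14, 14)]
psf0 = make_psf(1.5)
img = [[sum(psf0(u - tu, v - tv) for tu, tv in truth)
        for v in range(20)] for u in range(20)]
restored, comps = clean(img, sigma_est=1.5)
```

For this noise-free input the loop removes exactly one component per source and terminates once the residual falls below 0.4 of the first peak.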

5. SINGLE-IMAGE PIXEL-CLUSTER CSO RESOLUTION

Section 4 describes image restoration algorithms that deconvolve the raw focal plane image to produce an improved image. In this section we focus on algorithms that operate on single UCSO pixel-clusters with the goal of producing a decomposition into Resolved Object (RO) pixel-clusters or smaller UCSO sub-clusters consistent with the observation characteristics of the sensor. Unlike image restoration algorithms, the run-time of these CSO algorithms is not fixed for a given number of pixels. Instead it is a function of the number of UCSO candidate pixel-clusters, the signal distribution of the pixel-clusters, and the number of decomposition hypotheses to consider. Because CSO processing is applied to individual pixel-clusters, it may be applied to pixel-clusters of greatest interest first and possibly interrupted during processing of pixel-clusters of lesser interest if necessary. We present a novel pixel-cluster decomposition algorithm that uses a particle distribution representative of the pixel-cluster signals to feed the EM algorithm for UCSO resolution. Alternative CSO algorithms have been given in the literature.5,7,16,17

5.1. Identifying Candidate UCSOs Given Single-Image Data

After a focal plane image has been segmented into pixel-cluster sets, the CSO algorithm is applied to those pixel-clusters that have been identified as candidate UCSO pixel-clusters, i.e., pixel-clusters that likely represent signals from more than one truth object. A single-pixel pixel-cluster will not be considered a candidate UCSO. However, any pixel-cluster that has more than one isolated signal peak is considered a candidate UCSO. A signal pixel is determined to be an isolated peak if its intensity is greater than or equal to the intensity of all other pixels in its 8-neighbor area. A pixel-cluster without isolated peaks may also be considered a candidate UCSO after comparing the size of the pixel-cluster with the size of the sensor's estimated PSF. Each pixel-cluster is represented by a centroid and a 2nd-moment, intensity-weighted covariance:

Ruv = [σu² σ²uv; σ²uv σv²].  (25)

Let rpixel spread denote the pixel spread relative to the PSF, defined as:

rpixel spread = max(σu, σv)/σpsf.  (26)

The CSO algorithms described in this section are applied to all pixel-clusters in this work which are formed by at least two pixels and have either more than one isolated signal peak or pixel spread rpixel spread > 1.
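The candidate-selection rule above can be sketched as follows; the dict-based cluster representation is an assumption for illustration.

```python
def isolated_peaks(cluster):
    """Count pixels whose intensity >= all 8-neighbors within the cluster.
    `cluster` maps (u, v) -> intensity."""
    peaks = 0
    for (u, v), s in cluster.items():
        nbrs = [cluster[(u + du, v + dv)]
                for du in (-1, 0, 1) for dv in (-1, 0, 1)
                if (du, dv) != (0, 0) and (u + du, v + dv) in cluster]
        if all(s >= t for t in nbrs):
            peaks += 1
    return peaks

def is_candidate_ucso(cluster, sigma_u, sigma_v, sigma_psf):
    """Selection rule from the text: at least two pixels, and either
    multiple isolated peaks or pixel spread r > 1 (Eq. 26)."""
    if len(cluster) < 2:
        return False
    r = max(sigma_u, sigma_v) / sigma_psf
    return isolated_peaks(cluster) > 1 or r > 1.0

# Two peaks separated by a dimmer saddle pixel: a candidate UCSO even
# though its spread relative to the PSF is below 1.
cluster = {(0, 0): 1.0, (0, 1): 0.4, (0, 2): 0.9}
flag = is_candidate_ucso(cluster, sigma_u=0.3, sigma_v=0.8, sigma_psf=1.0)
```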

5.2. Prior-Knowledge Particle Clustering (PPC) Decomposition

As input, the CSO algorithm is given the signal pixel intensities of a candidate UCSO pixel-cluster and a guess, nobj,guess, concerning the number of objects likely represented by the pixel-cluster. This guess could be computed from the number of peaks in the pixel-cluster, the size of the pixel-cluster, or information from a central track database. We also refer to nobj,guess as the decomposition number. The PPC algorithm decomposes the pixel-cluster into exactly J := nobj,guess sub-clusters. This is performed in two steps: (1) distribute particles over the pixel-cluster according to a distribution that reflects the pixel intensity signals, and (2) cluster the set of particles into J Gaussian distributions with representative centroids and covariances using the EM algorithm. The clustering will produce J representative measurements on the focal plane of the sensor that parse the pixel-cluster into J sub-clusters and that can be transformed into the desired coordinate system for input to the tracking system. Figure 3 illustrates this algorithm for a sample pixel-cluster consisting of 21 signal pixels.


Figure 3. (a) Pixel-cluster that consists of 21 signal pixels. (b) 1000 particles distributed over the pixel-cluster to match the signal distribution of the pixel-cluster. (c) Clustering of particles and resulting pixel-cluster decomposition into 3 clusters. The covariances are shown at the 3σ level.

The EM algorithm finds a (locally optimal) J-component Gaussian Mixture Model (GMM) which maximizes the probability that the distributed particles were drawn from this distribution. Details of the particle distribution and use of the EM algorithm for clustering may be found elsewhere.1 Since the particles approximate the signal distribution of the pixel-cluster, the final clustering decomposes the pixel-cluster into J sub-clusters which provide a decomposition of the focal plane signal into J separate Gaussian sources with measurement centroid and covariance useful for tracking. In this work 150 particles are distributed over each UCSO pixel-cluster. When estimating covariances, we use the estimated PSF as an input to the clustering algorithm and prevent model covariances from shrinking below the size of the estimated PSF. Additionally, the membership function of particles to sub-clusters computed by the EM algorithm gives a way to assign fractions of the total original pixel-cluster intensity to each sub-cluster for discrimination. This fractional intensity map is then used to compute the final measurement centroid and covariance as discussed in Section 2 in order to be consistent with other types of processing.
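The two PPC steps can be sketched as below. Several simplifications are assumed for brevity: isotropic Gaussian components instead of the paper's full covariances, a deterministic initialization, and a fixed variance floor standing in for the PSF-based covariance floor.

```python
import math
import random

def distribute_particles(cluster, n=150, seed=0):
    """Step (1): scatter ~n particles over the cluster so each pixel gets
    a share proportional to its signal, uniform within the pixel.
    `cluster` is a list of (u, v, s) tuples (illustrative layout)."""
    rng = random.Random(seed)
    total = sum(s for _, _, s in cluster)
    pts = []
    for u, v, s in cluster:
        for _ in range(round(n * s / total)):
            pts.append((u + rng.random(), v + rng.random()))
    return pts

def em_isotropic(pts, J, iters=50):
    """Step (2): EM fit of a J-component isotropic GMM to the particles."""
    # deterministic init: spread initial means across the particle list
    means = [pts[(len(pts) - 1) * j // max(J - 1, 1)] for j in range(J)]
    var = [1.0] * J
    w = [1.0 / J] * J
    for _ in range(iters):
        # E-step: particle-to-component responsibilities
        resp = []
        for x, y in pts:
            pj = [w[j] / (2 * math.pi * var[j]) *
                  math.exp(-((x - means[j][0]) ** 2 +
                             (y - means[j][1]) ** 2) / (2 * var[j]))
                  for j in range(J)]
            z = sum(pj) or 1e-300
            resp.append([p / z for p in pj])
        # M-step: re-estimate weights, means, and (floored) variances
        for j in range(J):
            nj = sum(r[j] for r in resp) or 1e-12
            w[j] = nj / len(pts)
            means[j] = (sum(r[j] * x for r, (x, y) in zip(resp, pts)) / nj,
                        sum(r[j] * y for r, (x, y) in zip(resp, pts)) / nj)
            var[j] = max(sum(r[j] * ((x - means[j][0]) ** 2 +
                                     (y - means[j][1]) ** 2)
                             for r, (x, y) in zip(resp, pts)) / (2 * nj),
                         1e-4)
    return means, var, w

# Two equally bright pixels far apart: the particles split cleanly into
# two clusters with near-equal weights.
pts = distribute_particles([(0, 0, 1.0), (10, 10, 1.0)], n=150)
means, var, w = em_isotropic(pts, J=2)
```

The component responsibilities computed in the E-step are exactly the membership function described above, so the same quantities can be reused to apportion the original cluster intensity among sub-clusters.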

5.3. Constructive Particle Clustering (CPC) The CPC algorithm is an extension of the PPC algorithm which allows decompositions of a pixel-cluster into J components, with J being allowed to grow greater or less than the prior guess on the number of sub-clusters. The goal is to resolve the pixel-cluster into the most likely number of clusters rather than simply what is suggested by n obj,guess . To accomplish this we use a hierarchical clustering approach that produces J-component particle clusters with J being selected from the set: J ∈ J = {1, . . . , nobj,guess , . . . , nobj,guess + nextra }, where nextra denotes the number of model components considered beyond nobj,guess . We use nextra = 2 in our simulations. The task now becomes that of model selection, i.e., deciding which of the J-component clusters, J ∈ J , describes best the data observed (which is approximated through the particle set). Many approaches exist for model selection.18, 19 Of these we implement mixture splitting where the model is initialized with a small number of components and new components are added to the model until some maximum number is reached or a higher-order model fails to improve a certain measure of model fitness. Critical is the measure used to

assess the value of a given model, i.e., the value of a J-component clustering. If the clustering cardinality selected is too small, the model fails to capture important features evident in the data and targets remain unresolved. If the clustering cardinality selected is too large, noise in the data is fitted and false targets are created. To avoid false targets, the model value measure can incorporate terms that penalize more complex models. The Bayesian Information Criterion (BIC) or Schwarz criterion20 incorporates this penalty (we also include a prior probability P(Θ) of the cluster model parameters Θ in the selection criterion21):

\[ \mathrm{BIC}(\Theta) = -2\log L(\Theta) - 2\log P(\Theta) + p\log N, \tag{27} \]

with model likelihood L(Θ), total number of model parameters p, and number of particles N. For a Gaussian mixture, Θ describes the means, covariances, and weights of the mixture components. Assuming a mixture of J components and letting d denote the dimension of the data to be clustered (d = 2 for focal plane data), the number of model parameters (mean, covariance, and weight parameters) is given by

\[ p = dJ + \tfrac{1}{2}d(d+1)J + (J - 1). \tag{28} \]

The above form of the BIC represents a negative penalized log-likelihood function that is to be minimized in model selection. The probability P(Θ) allows one to incorporate a prior belief into the model selection criterion. We "design" P(Θ) so that it depends on the estimated SNR in a manner that gives very high prior belief to J = n_obj,guess in low-SNR environments, while relaxing this belief in high-SNR environments where it is safer to decompose beyond the prior guess. This is done by defining P(Θ) as a normal distribution in J, centered on the prior guess:

\[ P_{\text{prior}}(J, \mathrm{SNR}) = \frac{1}{\sigma_{\text{prior}}(\mathrm{SNR})\sqrt{2\pi}} \exp\!\left( -\frac{(J - n_{\text{obj,guess}})^2}{2\sigma_{\text{prior}}^2(\mathrm{SNR})} \right). \tag{29} \]

The standard deviation of the distribution is selected to depend on the estimated SNR:

\[ \sigma_{\text{prior}}(\mathrm{SNR}) = \sigma_{\text{prior}}^{\max} \left( \frac{\mathrm{SNR}}{20} \right)^{\gamma_{\text{prior}}}, \tag{30} \]

where σ_prior^max denotes a maximum size of the deviation and γ_prior denotes a power coefficient that controls how fast the size of the distribution grows to the maximum allowed size with SNR. In this work we use γ_prior = 1.0 and σ_prior^max = 0.75.
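Equations (27)–(30) together define the model-selection rule. A minimal sketch follows, assuming the per-hypothesis log likelihoods have already been computed by the EM fits; all names are illustrative, and Eq. (30) is applied literally as printed (σ_prior^max scales the width, reached at SNR = 20):

```python
import numpy as np

def n_params(J, d=2):
    # Eq. (28): mean + covariance + weight parameters of a J-component GMM.
    return d * J + d * (d + 1) // 2 * J + (J - 1)

def sigma_prior(snr, sigma_max=0.75, gamma=1.0):
    # Eq. (30): prior width grows with estimated SNR.
    return sigma_max * (snr / 20.0) ** gamma

def log_prior(J, n_guess, snr):
    # Log of Eq. (29): Gaussian prior on J centered at the prior guess.
    s = sigma_prior(snr)
    return -0.5 * ((J - n_guess) / s) ** 2 - np.log(s * np.sqrt(2.0 * np.pi))

def bic(loglik, J, N, n_guess, snr, d=2):
    # Eq. (27): penalized negative log likelihood; lower is better.
    return -2.0 * loglik - 2.0 * log_prior(J, n_guess, snr) + n_params(J, d) * np.log(N)

def select_model(logliks, N, n_guess, snr):
    """Pick the hypothesis J that minimizes the BIC.

    `logliks` maps each candidate J to its fitted model log likelihood.
    """
    return min(logliks, key=lambda J: bic(logliks[J], J, N, n_guess, snr))
```

With a fixed likelihood gain for the larger model, the same two hypotheses can resolve differently at low and high SNR, reproducing the intended behavior: the prior pins the selection to n_obj,guess at low SNR and relaxes at high SNR.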

5.3.1. Covariance Constrained Constructive Particle Clustering (C3PC)

Assuming that the PSF estimate is sufficiently accurate, we can use the PSF more strictly within the EM clustering algorithm. We remove the covariances from the model parameters and instead constrain the covariances of the model clusters to equal the estimated PSF covariance. This forces the algorithm to find point targets in the pixel-cluster decomposition rather than allowing other covariance shapes, which would more likely represent UCSO measurements. The CPC algorithm is then modified to not update covariance estimates during iteration, and the BIC criterion is adjusted to remove the covariance terms from the model parameter count. We refer to this Covariance Constrained Constructive Particle Clustering algorithm as the C3PC algorithm.
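The C3PC modification reduces to changing the M-step: means and weights are still re-estimated from the responsibilities, while every component covariance stays pinned to the PSF covariance. A sketch under that assumption (names illustrative, not the paper's implementation):

```python
import numpy as np

def c3pc_m_step(particles, resp, psf_cov):
    """M-step of the covariance-constrained EM (C3PC) variant.

    `resp` holds the E-step responsibilities (one row per particle, one
    column per component). Covariances are fixed at the estimated PSF
    covariance and never updated, forcing point-target decompositions.
    """
    N, J = resp.shape
    Nj = resp.sum(axis=0)
    weights = Nj / N
    means = (resp.T @ particles) / Nj[:, None]
    covs = np.repeat(psf_cov[None, :, :], J, axis=0)  # pinned to the PSF
    return means, covs, weights
```

The matching BIC adjustment is simply to drop the d(d+1)J/2 covariance term from the parameter count in Eq. (28).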

6. RESULTS

We show one representative scene to display the hierarchical application of image restoration and CSO processing. The following example simulates equal levels of Gaussian and Poisson noise at SNR = 10 and background noise at SNR = 50; all targets are simulated with equal intensity as discussed in Section 2. Figure 4 shows a sample focal plane with n_t = 60 targets (a) after detection with no image restoration, (b) after image restoration with the CLEAN algorithm, and (c) after applying the CPC algorithm to candidate UCSO pixel-clusters remaining after the CLEAN algorithm. True target locations are shown as red diamonds and return centroids as black stars. The covariance of each return is plotted at the 3σ level, and a black line connects targets to their assigned return centroid. False targets are marked by a white circle. After detection the raw focal plane contains 31 returns, of which 30 are correct and 1 is due to noise. After processing with the CLEAN algorithm the number of correct returns increases to 35 and the false target due to

Figure 4. 24×24 focal plane with 60 targets distributed randomly over the pixel array. (a) After detection without image restoration. (b) After image restoration with the CLEAN algorithm. (c) After image restoration with the CLEAN algorithm and applying the CPC algorithm on candidate UCSO pixel-clusters.

noise is eliminated. CPC processing is then applied and is able to decompose 6 additional correct targets without creating any false returns. Details of the computation of these metrics are given later in this section. To study the effect of these processing algorithms on focal plane data in general, we perform 50 Monte-Carlo simulations with n_t = 60, Gaussian and Poisson noise at SNR = 6, 8, 10, 12.5, 15, 20, 30, and 50, and with background noise and target intensity as in Figure 4. After each simulation we compute the following metrics related to tracking performance: (1) number of correct reports, (2) number of false reports, (3) number of lost targets, (4) average distance error between returns and the targets represented by each return, (5) average number of targets associated with each correct return, (6) number of Resolved Object (RO) returns, (7) number of Resolved Intensity Object (RIO) returns, (8) average covariance consistency between returns and the targets represented by each return, and (9) run-time of the image restoration and CSO algorithms. We compute these metrics by performing an iterative multi-assignment between truth targets and returns after each processing stage.22 No target is assigned to a return with Mahalanobis distance greater than 9.21, which is the 99% confidence level reported by the return. A return is tagged as correct if it originates from one or more truth target signals; thus both an RO and a UCSO are tagged as correct returns. A return is false if it is not associated with a truth target signal. A target is lost if no reported pixel-cluster return is associated with the target. The average distance error is computed as the sum of distances from return centroids to each of the targets represented by those returns, divided by the number of correct returns. The average number of targets per return is computed as the number of targets associated with a return divided by the number of correct returns.
A return is tagged as an RO return if it represents only one target. A return is tagged as an RIO return if it is an RO return and the reported intensity is within 10% of the target's true intensity. The average covariance consistency is computed as in 23 but is normalized by the number of degrees of freedom in the data, which is 2 for focal plane data. This metric is related to the accuracy of the reported return uncertainty; the ideal value for this metric is 1. Values greater than 1 indicate that uncertainties are underestimated and values less than 1 indicate that uncertainties are overestimated. Finally, run-time is recorded as the average run-time of the image restoration and CSO algorithms, excluding the time taken to compute performance metrics. All simulations were performed with a single instance of Matlab running on a dual 3.06 GHz Xeon 32-bit processor machine with 8 GB of RAM. Results for these metrics are presented in Figures 5, 6, and 7. The key used to label these plots is Restoration Method-CSO Method, so that CLEAN-None means that a given image was processed with the CLEAN algorithm with no further CSO processing applied.
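The gating and consistency computations above can be sketched as follows. The 9.21 gate is the 99% chi-square quantile for 2 degrees of freedom (applied to the squared Mahalanobis distance), and the consistency statistic is the average normalized estimation error squared (NEES) divided by the degrees of freedom; names are illustrative:

```python
import numpy as np

GATE = 9.21  # 99% chi-square quantile, 2 degrees of freedom

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance of a target x from a return (mean, cov)."""
    d = np.asarray(x, float) - np.asarray(mean, float)
    return float(d @ np.linalg.solve(cov, d))

def covariance_consistency(errors, covs, dof=2):
    """Average NEES per degree of freedom over assigned return/target pairs.

    ~1 means the reported covariances match the observed errors; >1 means
    uncertainties are underestimated; <1 means they are overestimated.
    """
    nees = [e @ np.linalg.solve(P, e) for e, P in zip(errors, covs)]
    return float(np.mean(nees)) / dof
```

In the multi-assignment, a target would be associated to a return only if `mahalanobis_sq(target, centroid, cov) <= GATE`.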

Figure 5. Metrics plotted against SNR. (a) Total number of correct returns. (b) Total number of false returns. (c) Total number of lost targets.

Figure 5 (a) shows that application of the CLEAN image restoration algorithm results in a higher number of correct returns and that subsequent CSO processing further increases this number. The average number of correct returns is almost the same for all values of SNR for None-None, CLEAN-None, and CLEAN-PPC. However, CLEAN-CPC and CLEAN-C3PC, which are allowed to decompose pixel-clusters into a number of returns other than the prior guess, decompose more correct targets at high SNR. This occurs due to the relaxation in the weight given to n_obj,guess at high SNR by Equation (29). Figure 5 (b) shows that application of the CLEAN algorithm significantly reduces the number of false targets due to noise at low SNR. CSO processing applied after image restoration then benefits from this reduction in false targets at low SNR. Here the relaxation of the weight given to the prior decomposition number at high SNR by Equation (29) allows CLEAN-CPC and CLEAN-C3PC to fit more false targets than the other processing types. Figure 5 (c) shows the number of lost targets after application of the CLEAN algorithm. The CLEAN algorithm has a tendency to lose true target signals at low SNR. Because 0-component cluster hypotheses are not considered by the CSO algorithms, CSO processing will not result in additional lost targets after image restoration.

Figure 6. Metrics plotted against SNR. (a) Average distance error between return centroids and truth targets. (b) Average number of targets per return. (c) Total number of resolved objects.

Figure 6 (a) shows that the average distance between targets and their associated returns decreases after application of the CLEAN algorithm and decreases again after CSO processing. This metric is closely related to Figure 6 (b), because the decrease in the number of targets represented by each return allows each return to better represent the targets that do assign to it. Figure 6 (c) is almost the inverse of Figures 6 (a) and (b), because the number of single-target RO returns increases as the number of targets per return decreases. Notice that all three metrics in Figure 6 improve after CLEAN image processing. They improve further after CSO processing, with CLEAN-CPC outperforming CLEAN-PPC and CLEAN-C3PC outperforming CLEAN-CPC.

Figure 7. Metrics plotted against SNR. (a) Total number of resolved intensity objects. (b) Average covariance consistency. (c) Run-time necessary for image restoration and CSO algorithms.

Figure 7 (a) shows the number of resolved intensity objects and is similar to Figure 6 (c) except for the performance of the None-None processing. This is due to the difference in definition between RO and RIO objects: in addition to representing only a single target, an RIO must also have a reported intensity within 10% of the target's true intensity. Because the PSF in the raw image data is rather wide, large portions of a target's intensity signal are spread over other pixels at low intensity. At low SNR additional noise allows some of these pixels to pass the adaptive threshold, but at high SNR these pixels are often rejected as noise. Although the central peak passes and is returned as an RO return, the intensity returned after thresholding does not accurately describe the target. Because the CLEAN algorithm "sharpens" signal peaks, it pulls some of the energy spread out by the PSF back into the central peak. Therefore, signal pixels which pass thresholding after the CLEAN algorithm more closely resemble the intensity signal of the original target. Subsequent CSO processing then benefits from this effect as well. Figure 7 (b) shows the average covariance consistency for the various processing combinations. A covariance consistency of 1 indicates that the return covariances are indeed consistent with the distribution of target distance errors. None-None processing is seen to have a very low covariance consistency, meaning that returns detected in the raw image overestimate their own uncertainty. While this is not catastrophic for tracking, it does indicate that the positions of targets are better known than what is reported. CLEAN-None processing brings the covariance consistency closer to 1, which is even more significant considering that the average size of the return covariance is also decreasing.
Further application of CSO processing results in a covariance consistency similar to CLEAN-None, with the CLEAN-C3PC algorithm producing pixel-cluster partitions with the most consistent covariances. Finally, Figure 7 (c) shows the run-time necessary to perform the image restoration and CSO processing algorithms. As expected, application of the CLEAN algorithm is very cheap and requires little more processing time than detection on the raw image alone. The CSO algorithms require much more time to run. Furthermore, the CPC and C3PC algorithms, which consider multiple decomposition hypotheses, take longer to run than the PPC algorithm, which considers only one decomposition hypothesis per pixel-cluster. The results in Figures 5, 6, and 7 paint an overall positive picture of this hierarchical scheme. Application of image restoration techniques increases the correct target count, and successive application of CSO processing algorithms further increases this number. One cost of this increase is an increase in the number of false targets fitted by the CSO algorithms as well as an increase in the number of targets lost by the CLEAN algorithm. Because a multi-target tracker can typically handle these types of errors to a certain extent, it is unknown how these penalties will weigh against the benefits of higher position accuracy and correct target returns in the tracking

problem. Key to this result will be whether or not the targets lost by CLEAN or falsely created by CSO processing are consistent from frame to frame. Further testing will be done to determine these properties. Figure 8 shows a possible hierarchical application of these algorithms at SNR = 10. Figure 8 (a) shows the number of correct and false targets likely to be returned after applying (i) no image restoration or CSO processing, (ii) CLEAN image restoration only, (iii) CLEAN image restoration followed by the PPC algorithm, and (iv) the CLEAN algorithm followed by C3PC processing. This order is chosen so that the total number of targets reported to the tracker is monotonically increasing. After each successive algorithm is applied, the tracker is faced with an increase in the number of targets and must be given time to solve the track initiation problem before further processing is applied. The hierarchical application of these algorithms allows the number of tracked targets to increase gradually, spreading the load of the track initiation problem. Because the CSO algorithms are applied to pixel-clusters remaining after image restoration, tracks on decomposed returns in each pixel-cluster may be initialized with track data maintained for the larger pixel-cluster, aiding the track initiation problem. Figure 8 (b) shows the run-time necessary for application of the successive processing algorithms. Processing costs remain relatively low until the final application of the C3PC algorithm. This cost may be reduced by applying the C3PC or CPC algorithms only to pixel-clusters of particular interest, rather than to all UCSO candidates. In any case, the run-time costs of these algorithms make up only part of the total tracking cost, which also includes track initiation and track maintenance. These costs must be balanced at all times to run within a fixed processing budget.

Figure 8. Progression chart showing the (a) total number of correct and false targets and (b) run-time necessary for image restoration and CSO processing.

7. CONCLUSION

Results obtained in this work by applying the CLEAN algorithm for image restoration before a novel particle-clustering CSO algorithm support the hierarchical approach to CSO resolution. The capability of these algorithms to successively decompose UCSO returns into more correct returns with higher position accuracy was demonstrated. Furthermore, other metrics identified as meaningful to tracking and discrimination were also improved after successive image restoration and CSO processing. The CLEAN algorithm was found to lose some true target signals at low SNR, and some CSO algorithms were found to fit false targets at high SNR. The ability of a tracker to reject these errors while incorporating the benefits discussed above is a topic of future investigation.

ACKNOWLEDGMENTS

This work was supported by the Missile Defense Agency through Contract Number W9113M-04-C-0046.

REFERENCES

1. S. Gadaleta, A. Poore, and B. Slocumb, "Pixel-cluster decomposition tracking for multiple IR-sensor surveillance," in SPIE Vol. 5204, Signal and Data Processing of Small Targets, pp. 270–282, 2003.
2. K. Seyrafi and S. Hovanessian, Introduction to Electro-Optical Imaging and Tracking Systems, Artech House, Boston, London, 1993.
3. J. Stewart, Calculus, Brooks/Cole Publishing Company, 1999.
4. T. Veldhuizen, "Grid filters for local nonlinear image restoration," Master's thesis, Dept. of Systems Design Engineering, University of Waterloo, 1998.
5. Y. Yardimci, J. Cadzow, and M. Zhu, "Comparative study of high resolution algorithms for multiple point source location via infrared focal plane arrays," Proceedings of the SPIE, Signal and Data Processing of Small Targets 1954, pp. 59–69, 1993.
6. J. Korn, H. Holtz, and M. Farber, "Trajectory estimation of closely spaced objects (CSO) using infrared focal plane data of an STSS (Space Tracking and Surveillance System) platform," in SPIE Vol. 5428, pp. 387–399, 2004.
7. J. Reagan and T. Abatzoglou, "Model-based superresolution CSO processing," Proceedings of the SPIE, Signal and Data Processing of Small Targets 1954, pp. 204–218, 1993.
8. R. Gonzalez and R. Woods, Digital Image Processing, Pearson Education, 2002.
9. J. Hoshen and R. Kopelman, "Percolation and cluster distribution. I. Cluster multiple labeling technique and critical concentration algorithm," Physical Review B 14(8), pp. 3438–3445, 1976.
10. M. Berry, J. Constantin, and B. V. Zanden, "Parallelization of the Hoshen-Kopelman algorithm using a finite state machine," Tech. Rep. CS-95-300, University of Tennessee, Dept. of Computer Science, 1995.
11. M. Banham and A. Katsaggelos, "Digital image restoration," IEEE Signal Processing Magazine, pp. 24–41, March 1997.
12. J. Starck, E. Pantin, and F. Murtagh, "Deconvolution in astronomy: A review," Publications of the Astronomical Society of the Pacific 114, pp. 1051–1069, 2002.
13. W. Richardson, "Bayesian-based iterative method of image restoration," Journal of the Optical Society of America, pp. 55–59, 1972.
14. L. Lucy, "An iterative technique for the rectification of observed distributions," The Astronomical Journal, pp. 745–754, 1974.
15. T. Cornwell and A. Bridle, "Deconvolution tutorial," tech. rep., National Radio Astronomy Observatory, 1996. http://www.cv.nrao.edu/~abridle/deconvol/deconvol.html.
16. T. Bartolac and E. Andert, "Feed-forward neural networks to solve the closely-spaced objects problem," in Proc. SPIE Vol. 2235, Signal and Data Processing of Small Targets 1994, pp. 94–105, 1994.
17. W. Lillo and N. Schulenburg, "A Bayesian closely spaced object resolution technique," in Proc. SPIE Vol. 2235, pp. 2–13, 1994.
18. S. Theodoridis and K. Koutroumbas, Pattern Recognition, Academic Press, 1999.
19. G. McLachlan and D. Peel, Finite Mixture Models, Wiley Inter-Science, John Wiley, New York, 2000.
20. G. Schwarz, "Estimating the dimension of a model," Annals of Statistics 6, pp. 461–464, 1978.
21. E. Scheirer, "Music-listening systems," Ph.D. thesis, Massachusetts Institute of Technology, Media Arts and Sciences, 2000.
22. T. Kirubarajan, Y. Bar-Shalom, and K. Pattipati, "Multiassignment for tracking a large number of overlapping objects," IEEE Transactions on Aerospace and Electronic Systems 37, pp. 2–21, 2001.
23. Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation, John Wiley & Sons, Inc., New York, 2001.
