Image-Domain Moving Target Tracking with Advanced Image Registration and Time-Differencing Techniques

Hai-Wen Chen and Dennis Braunreiter
Sensor Systems Operation, Science Applications International Corporation, San Diego, California 92121

ABSTRACT

An advanced image registration technique with sub-pixel accuracy has been developed and applied for the TD (time-differencing) process [1]. The TD process helps to suppress heavy background clutter for improved moving target detection. After applying a CFAR (constant false alarm rate) thresholding detector to the time-differenced image frames, we have developed and applied an image-domain moving target tracking (IDMTT) process for robust moving target tracking. The IDMTT process uses a unique location feature, mapping and associating the real moving targets in the previous time-differenced frame with the ghost moving targets in the current time-differenced frame. The accurate location mapping and association information between time frames is provided by the registration process. Preliminary tests of the IDMTT process are promising: robust moving target tracking can be achieved even at a quite low signal-to-clutter noise ratio (SCNR = 0.5).

1. INTRODUCTION

1.1 Target Detection and Tracking Evaluation Criteria

In target detection using IR (infrared) sensor systems or visible-light CCD (charge-coupled device) camera systems, a challenging task is to detect moving targets that are embedded in a heavy-clutter background. In this case, the targets' SCNR (signal-to-clutter noise ratio) may be too low, resulting in a high Pfa (probability of false detection) after a CFAR (constant false alarm rate) thresholding process. Target detection performance is generally evaluated using ROC (receiver operating characteristic) curves, which plot Pd (probability of detection) vs. Pfa. For a specific Pd (e.g., 95%), a lower Pfa means better performance. However, a more important probability for target detection and tracking evaluation is Pdec (probability of target declaration): the higher the Pdec, the better the performance. For example, suppose a single-target detection process (with 100% Pd) produces ten detections, meaning that one of the ten detections is the real target and the other nine are false detections. Without any prior knowledge or cueing information, we must consider all ten detections equally likely to be declared the target. That is, the Pdec for each detection is only 10%, although the Pd is 100%. To improve Pdec toward 100%, we have to reduce the nine false detections to zero. In general, a target tracking process is applied after the detector to reduce false detections and thus increase Pdec.

1.2 Detection and Tracking Processing

Conventionally, target detection is conducted in the image domain, while target tracking is conducted in the inertial (three-dimensional (3D) space) domain. A tracking process establishes tracking files for new detections in the current time frame, and associates detections in the current frame with tracking files established from previous time frames.
A good tracker will update and hold valid moving target tracking files, and gradually prune false tracking files, based on useful target temporal dynamics and kinematic features. One powerful temporal integration method used by a tracker is the persistency test (also called post-detection temporal integration). For a moving window of N time frames, the k (k < N) detections that are in different frames but belong to the same detection tracking file are evaluated out of the N frames. For example, for a criterion of 5 out of 7, if an object (tracked by a detection tracking file) was detected in five or more frames within a moving window of seven frames, the detected object is considered a target; otherwise, it is considered a noise or clutter detection track. (Please refer to reference [2] for a more detailed discussion of this issue.) In a conventional setting, the detector hands over the detection information in the image domain to the tracker in the 3D space domain. The information may include detection x- and y-locations, detection peak or average intensities and SNR, detection sizes in each individual time frame, etc. However, there are many more target features in the image domain (e.g., target shapes, and target intensity and color profiles) that are useful for moving target tracking and that are lost in the transition from the image domain to the 3D space domain. In this paper, an image-domain tracking process is developed that can utilize many more of these target features for efficient moving target tracking.

Acquisition, Tracking, Pointing, and Laser Systems Technologies XXIII, edited by Steven L. Chodos, William E. Thompson, Proc. of SPIE Vol. 7338, 73380C · © 2009 SPIE · CCC code: 0277-786X/09/$18 · doi: 10.1117/12.818230

1.3 Time-Differencing Processing and Ghost Targets

An advanced image registration technique with sub-pixel accuracy has been developed and applied for the TD (time-differencing) process [1]. During the TD process, we use the current image frame as the reference frame, and register the previous frame (as the search frame) to the current frame. The registered search frame is then subtracted from the reference frame to obtain a time-differenced image in which all the static background clutter is suppressed. The TD process can thus help to suppress heavy background clutter for improved moving target detection. In general, for a successful TD process, the moving target should have a speed that moves it a distance larger than the target size during the time interval between the reference (current) frame and the search (previous) frame.
In this case, the moving targets in the current frame will stay at the same locations in the time-differenced frame with the same intensity signs, while the moving targets from the previous frame will appear in the time-differenced frame at different locations with reversed intensity signs. Since the detections with reversed intensity signs in the time-differenced frame are not real targets in the current frame, we call them "ghost" targets. As discussed in [1], image registration and TD performance can be evaluated using the SCNR_gain, calculated by taking the ratio between the STD (standard deviation) of the original reference image and the STD of the time-differenced image:

SCNR_gain = STD(ref_img) / STD(TD_img).    Eq. (1)

In general, for an SCNR_gain of 3-5, the Pfa (probability of false detection) can be reduced by a factor of 10-100.

1.4 IDMTT Processing and Second Adaptive Local CFAR Thresholding

We have developed and applied an IDMTT (image-domain moving target tracking) process for robust moving target tracking. After applying a CFAR (constant false alarm rate) thresholding detector to the time-differenced image frames, the IDMTT process uses a unique location feature, mapping and associating the real moving targets in the previous time-differenced frame with the ghost moving targets in the current time-differenced frame. The accurate location mapping and association information between time frames is provided by the registration process. (Note: theoretically, if there is no registration error, the ghost-real target pair should be mapped to exactly the same location.) The false detections, on the other hand, are more prone to random noise, and their locations vary considerably from frame to frame; it would be rare for a previous false detection to be mapped onto a current false detection with sub-pixel accuracy. Another important feature for extracting the ghost-real target pairs is that the intensity sign of a ghost target detection should be the reverse of that of its real target detection, whereas false detections caused by leakage of static background clutter generally keep the same intensity contrast sign across time frames. A dilemma in a global CFAR thresholding process is that we want to lower the threshold level to increase Pd, but a lower global threshold will considerably increase Pfa. A second adaptive local CFAR thresholding process is developed to solve this problem, which is possible because we conduct both the detection and tracking processes in the same image domain. In this setting, a relatively high threshold level is set for the first global CFAR process so that we have a relatively low Pfa, though the Pd will be low too. A second local CFAR thresholding process is then applied in a small local area around each ghost target detection location. The second threshold level is lower than the first, and thus improves Pd; nevertheless, because the second thresholding process is conducted over a much smaller local area than the first global thresholding, it does not greatly increase Pfa. In this case, the locations of the ghost targets provide useful cueing information about the local locations of the moving targets. Preliminary tests of the IDMTT process are promising: robust moving target tracking can be achieved even for an SCNR condition as low as 0.5. Performance of the IDMTT process was tested and evaluated using long-wave infrared (LWIR) imagery taken from a boat. The LWIR video imagery was taken at 30 frames per second from a boat in bayside water, looking at buildings along the shore. An unresolved moving target was inserted into the LWIR imagery.
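The two-stage thresholding logic can be sketched as follows. This is a simplified illustration, not the paper's implementation: the thresholds are written as multiples of the image STD rather than exceedance counts (P_det/N_det), and `ghost_locs` stands in for the ghost-target cues produced by the IDMTT mapping.

```python
import numpy as np

def cfar_detect(img, k):
    """First-stage global threshold: flag pixels whose magnitude exceeds k * STD."""
    t = k * np.std(img)
    ys, xs = np.nonzero(np.abs(img) > t)
    return list(zip(ys, xs))

def local_redetect(img, ghost_locs, k_low, half=5):
    """Second-stage, lower threshold applied only in small windows around cues."""
    t = k_low * np.std(img)
    hits = []
    for gy, gx in ghost_locs:
        y0, y1 = max(gy - half, 0), min(gy + half + 1, img.shape[0])
        x0, x1 = max(gx - half, 0), min(gx + half + 1, img.shape[1])
        win = img[y0:y1, x0:x1]
        for dy, dx in zip(*np.nonzero(np.abs(win) > t)):
            hits.append((y0 + dy, x0 + dx))
    return hits

rng = np.random.default_rng(1)
td_img = rng.normal(0.0, 1.0, (64, 64))
td_img[30, 30] = 3.0    # weak target: below the high global threshold

global_dets = cfar_detect(td_img, k=5.0)                  # high threshold: low Pfa, low Pd
cued_dets = local_redetect(td_img, [(30, 30)], k_low=2.5) # cued, lower local threshold
print((30, 30) in cued_dets)  # True
```

The design point is the same as in the text: the lower threshold is only ever applied over a few small windows, so the extra false-alarm area is a small fraction of the full image.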

2. TARGET DETECTION

Depending on the target size, target detection can be divided into three classes: 1) unresolved target detection; 2) small extended target detection; and 3) large extended target detection. An unresolved target is contained within a single pixel, and thus the unresolved target source can be considered a point source. An extended target extends from several pixels to several hundred pixels. In general, we call a target containing fewer than 50 pixels a small extended target; otherwise, we call it a large extended target. The MF (matched filter) method is currently a popular approach for unresolved target detection using IR FPAs (focal plane arrays) and CCD cameras as sensor detectors. In the MF method, the DPSF (discrete point spread function, sampled by the discrete pixels of a 2D sensor) is estimated from the CPSF (continuous point spread function), which is available from the sensor optical and lens designs. A matched spatial filter is obtained by dividing the DPSF by the covariance matrix of the background clutter. This matched filter is optimal in an MSE (mean-square-error) sense, in that it provides the maximum SCNR (signal-to-clutter noise ratio) for a point-source (unresolved) target. In current advanced optical designs, most of the DPSF energy can be contained within a 3x3 pixel area, and the PVF (point visibility function) can be as high as 0.6-0.75. A PVF of 0.7 means that if the peak of a CPSF is located at the center of a pixel, then this pixel contains 70% of the CPSF energy, with the remaining 30% spread over the neighboring pixels. In addition, a neighboring-background normalization process may further improve unresolved target detection performance. The MF method is much more difficult to apply to extended target detection, since extended targets can have many different shapes and intensity profiles; many different matched filters would be needed to match the different target shapes and intensity profiles.
Furthermore, many moving targets, such as walking or running people and animals, do not have fixed shapes, making it almost impossible to apply the MF method. In general, for small extended target detection, we can apply a mean filter and an anti-mean filter [3] to the image to remove local low-spatial-frequency slopes, improving weak target detection, and then directly apply a CFAR thresholding process for target detection. In addition, a neighboring-background normalization process may further improve small extended target detection performance. For large extended target detection, we have recently developed a promising detection method using nonlinear image processing (morphological operations and nonlinear logic). The results will be presented in a future SPIE paper.
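The anti-mean prefilter followed by CFAR thresholding can be sketched as follows. This is our reading of the approach referenced to [3]: "anti-mean" is interpreted as subtracting the local mean (a high-pass operation), and the window size and threshold multiplier are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def anti_mean(img, size=9):
    """Subtract the local mean, removing low-spatial-frequency background slopes."""
    return img - uniform_filter(img, size=size)

# Toy scene: a strong intensity ramp (slope) hiding a small 3x3 extended target.
y, x = np.mgrid[0:128, 0:128]
scene = 0.5 * (x + y).astype(float)   # low-spatial-frequency background ramp
scene[60:63, 60:63] += 8.0            # weak "small extended" target at (61, 61)

flat = anti_mean(scene, size=9)       # ramp is removed; target residual remains
thresh = 5.0 * np.std(flat)           # simple CFAR-style threshold
print(flat[61, 61] > thresh)  # True: the target survives the prefilter
```

In the interior of the image the local mean of a linear ramp equals the ramp itself, so the slope cancels exactly and only the target residual (about 8 minus its own contribution to the local mean) remains above the threshold.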

3. IMAGE-DOMAIN MOVING TARGET TRACKING PROCESS

To understand how the IDMTT process works, imagine that we have three original image frames. We obtain the first time-differenced image by subtracting the first original image from the second after image registration, and the second time-differenced image by subtracting the second original image from the third after image registration. The real target detections in the first time-differenced image and the ghost target detections in the second time-differenced image come from the same source: the moving targets in the second original image. Therefore, when we map (associate) the detections in the first time-differenced image to the second time-differenced image, after compensating for the sensor platform motion estimated by the image registration process, the real target detections from the first time-differenced image should fall at the same (x, y) locations as the ghost target detections in the second time-differenced image, but with reversed intensity signs. That is, if a real target has a positive intensity contrast, then its ghost target should have a negative intensity contrast, and vice versa.
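The ghost-real relationship above can be checked with a toy one-dimensional example. We assume a static platform (so no registration is needed) and a single positive-contrast target stepping through three frames; the numbers are invented for illustration.

```python
import numpy as np

clutter = np.array([5.0, 9.0, 4.0, 7.0, 6.0, 8.0, 3.0])  # static background

def frame(pos):
    f = clutter.copy()
    f[pos] += 10.0   # positive-contrast moving target
    return f

f1, f2, f3 = frame(1), frame(3), frame(5)   # target moves 1 -> 3 -> 5
td1 = f2 - f1   # 1st TD image: real target at 3 (+10), ghost at 1 (-10)
td2 = f3 - f2   # 2nd TD image: real target at 5 (+10), ghost at 3 (-10)

# The real detection in td1 and the ghost detection in td2 share the same
# source (the target in frame 2): same location, opposite intensity sign.
print(td1[3], td2[3])  # 10.0 -10.0
```

The static clutter cancels in both differences, and the target in the middle frame shows up twice: as the real (positive) detection in the first TD image and as the ghost (negative) detection at the same location in the second.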


Here are the general descriptions of the target detection and IDMTT processing:

1) TD processing:
a) Conduct image registration between the current image frame and the previous frame. The current frame serves as the reference frame and is unchanged.
b) Subtract the registered image frame from the current frame to obtain a time-differenced image frame.

2) Target detection processing:
a) The matched filtering method is applied to the time-differenced image for unresolved target detection, and the anti-mean filtering method is applied to the time-differenced image for small extended target detection.
b) A first global CFAR thresholding process is applied to the whole filtered image. The thresholding levels are specified by the positive detection (exceedance) number and the negative detection number: P_det and N_det.
c) Conduct detection centroiding for each detection (exceedance).

3) IDMTT processing:
a) Map the stored detection locations in the previous time-differenced image to locations in the current time-differenced image after compensating for the sensor platform motion. The pixelwise image displacement (optical flow) between the current and previous images is caused by the sensor platform motion, and can be estimated by the image registration process with sub-pixel accuracy [1].
b) Calculate the distances between the mapped previous detections and the current detections.
c) Set a maximum allowed distance error parameter, Dist_err, for a possible ghost-real target pair (the error is related to the accuracy of the image registration).
d) Find all the detection pairs that have separation distances less than Dist_err and that have reversed intensity contrasts. These detection pairs are assigned as the ghost-real detection pairs, and the detections belonging to these pairs in the current time-differenced image are assigned as the ghost target detections in the current time-differenced image.
e) If a current detection meets the three following conditions, we assign it as the current real target: i) its intensity contrast sign is the reverse of the ghost detection's; ii) it is the closest neighbor to a ghost detection; and iii) its distance to the ghost detection location is less than a distance gate, Dist_gate, which is related to the moving target velocity.
f) If no ghost or real target is detected in the current time-differenced image, it is possible that the first global thresholding level was set too high for this frame, so that the target intensity is below the threshold. We then assign the previous real target detections that have been mapped to the current time-differenced image as the ghost target detections, and conduct a second local CFAR thresholding around the ghost detection locations to detect the real moving targets in the current time-differenced image. The setting parameters in this process are the local area size (lcl_sz) and the positive/negative detection numbers (P_det2 and N_det2).
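Steps 3(a)-(d) above can be sketched as a simple association routine. This is an illustrative simplification, not the paper's code: it assumes the registration offset and the detection lists are already available, represents each detection as (x, y, sign), and uses a brute-force pairwise search.

```python
import numpy as np

def pair_ghost_real(prev_dets, curr_dets, offset, dist_err=0.7):
    """Map previous detections into the current frame and find ghost-real pairs.

    prev_dets, curr_dets: lists of (x, y, sign) with sign = +1 / -1 contrast.
    offset: (dx, dy) platform motion estimated by image registration.
    Returns index pairs (i_prev, j_curr) closer than dist_err with opposite signs.
    """
    pairs = []
    for i, (px, py, psign) in enumerate(prev_dets):
        mx, my = px + offset[0], py + offset[1]     # step 3(a): compensate motion
        for j, (cx, cy, csign) in enumerate(curr_dets):
            d = np.hypot(mx - cx, my - cy)          # step 3(b): separation distance
            if d < dist_err and psign == -csign:    # steps 3(c)-(d): gate + sign test
                pairs.append((i, j))
    return pairs

# A previous real target (+) maps onto a current ghost (-) within Dist_err;
# the far-apart false detections keep the same sign and never pair up.
prev_dets = [(20.0, 40.0, +1), (90.0, 15.0, +1)]
curr_dets = [(22.1, 41.0, -1), (60.0, 70.0, +1)]
offset = (2.0, 1.0)   # platform motion from registration
print(pair_ghost_real(prev_dets, curr_dets, offset))  # [(0, 0)]
```

The sub-pixel gate is what makes the feature selective: as the paper notes, it is rare for two unrelated false detections to land within a fraction of a pixel of each other after motion compensation, and the sign test then rejects static clutter leakage.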

4. PERFORMANCE EVALUATION

4.1 LWIR Imagery with an Inserted Unresolved Moving Target (SCNR = 0.5)

The LWIR video imagery was taken at 30 frames per second (a time interval of 33.3 ms between frames) from a boat in bayside water, looking at buildings along the shore. We under-sampled the frame rate (8:1 under-sampling) to 3.75 Hz, giving a time interval of 0.267 s between the reference frame and the search frame for image registration and TD processing. An original image (the third image) is shown in figure 1(a); the first time-differenced image, obtained by subtracting the first original image from the second original image, is shown in figure 1(b); and the second time-differenced image, obtained by subtracting the second original image from the third original image, is shown in figure 1(c). The STD (standard deviation) of the third original image is 803, and the STD of the second time-differenced image is 34.3.


According to Equation (1), the SCNR_gain = 803 / 34.3 = 23.4. This is quite a high gain, which means that the image registration and time-differencing process did a good job of suppressing the static background clutter.

[Figure omitted: four image panels. (a) 3rd original image, STD = 802.8571; (b) 1st TD image, STD = 55.0018 (circles: negative detections; triangles: positive detections); (c) 2nd TD image, STD = 34.3442; (d) 2nd TD image detection mapping (black: previous detections; white: current detections).]

Figure 1. Original Image, Time-Differenced Images, and Detection Mapping

An unresolved moving target was inserted into the LWIR imagery. The inserted target is a 3-by-3 center-phase DPSF function with a PVF of 0.61. The target was moving from the left side to the right side. The target x- and y-locations in the eleven consecutive original frames (under-sampled to the 3.75 Hz frame rate) are: Tgt_x = [20, 28, 36, 44, 52, 60, 68, 76, 84, 92, 100], and Tgt_y = [40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40]. The target is inserted with positive intensity contrast. Its peak intensity is set to half of the original image STD, and thus the SCNR = 0.5. It is quite a weak target compared with the surrounding background clutter. The inserted target can barely be seen by eye and is indicated by the small black square in figure 1(a). Nevertheless, the target really pops out in the second time-differenced image in figure 1(c), because the TD process suppressed most of the surrounding heavy clutter. The negative and positive detections after the first global CFAR thresholding process for the 1st TD image are shown in figure 1(b) as circle and triangle symbols, respectively. Similarly, the negative and positive detections for the 2nd TD image are shown in figure 1(c). The detection process parameters were set as:


P_det = 10, N_det = 7, Dist_err = 0.7, P_det2 = 1, lcl_sz = 10, and Dist_gate = 10. In general, the detection process using TD and the MF approach works quite well. After the first global CFAR thresholding process with a threshold level of P_det = 10 and N_det = 7, the Pd is about 100% for an SCNR of 1 or above, about 94% for an SCNR of 0.8, and about 72% for an SCNR of 0.5. Nevertheless, when we applied the second adaptive local CFAR thresholding process with a threshold level of P_det2 = 1, we obtained Pd = 100% for SCNR = 0.5. The detections from the first TD image were mapped to the second TD image after compensating for the sensor platform motion. The mapping results are shown in figure 1(d). The black symbols are the detections from the previous (first) TD image, and the white symbols are the detections from the current (second) TD image. It can be seen that the ghost-real detection pair (the previous real target paired with the current ghost target) has the shortest location separation (distance error) with reversed intensity contrast; the distance error is Dist_err = 0.11 in figure 1(d). Note: as shown in figure 1(d), there are some black-triangle/white-triangle pairs that may have a shorter distance error. However, they do not have reversed intensity contrasts, and thus are excluded from the possible ghost-real detection pairs.

[Figure omitted: plot of sub-pixel distance error (0 to 0.35 pixel) vs. frame number (1-8); mean error = 0.16739 pixel.]

Figure 2. Ghost-Real Detection Distance Errors from Multiple TD Images

The ghost-real target detection pairs in eight TD images were estimated and plotted in figure 2. All the distance errors are sub-pixel (less than one pixel); the maximum error is 0.31 pixel. In fact, as shown in figure 3 (detection mapping between two TD images after the first global CFAR thresholding process), for the nine TD frames we tested, the minimum separation distance for all the false detections with reversed intensity contrast is larger than 0.7 pixel. Therefore, when we set the parameter Dist_err = 0.7 in the IDMTT process, all the false detections in the nine TD images are eliminated, and only the ghost-real target detection pair is left in each of the nine TD images. The results are shown in figure 4 and figure 5. The circle detections in figure 4 are the ghost target detections of the current TD images (i.e., the real target detections of the previous TD images). The triangle detections are the real targets in the current TD images. The performance is surprisingly good: as soon as the second TD time frame (the third original image time frame), all the false detections are gone, and we achieve not only Pd = 100% but also Pdec = 100%. The ghost-real target pair in each image provides us with useful tracking information. The length of the vector from the centroided ghost target location to the centroided real target location gives the target's moving distance and velocity (distance divided by the frame time interval), and the direction of the vector gives the target's moving direction.
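The velocity and heading recovered from a ghost-real pair can be computed directly from the two centroids. A short sketch (centroid locations in pixels, frame interval in seconds, so speed is in pixels per second; the specific numbers below echo the 8-pixel-per-frame geometry of Section 4 and are illustrative):

```python
import numpy as np

def target_velocity(ghost_xy, real_xy, dt):
    """Velocity vector from the ghost centroid to the real-target centroid."""
    v = (np.asarray(real_xy, dtype=float) - np.asarray(ghost_xy, dtype=float)) / dt
    speed = float(np.hypot(v[0], v[1]))                  # pixels per second
    heading_deg = float(np.degrees(np.arctan2(v[1], v[0])))  # 0 deg = +x direction
    return v, speed, heading_deg

# The inserted target steps 8 pixels in x between under-sampled frames
# spaced 0.267 s apart (the 3.75 Hz rate of Section 4).
v, speed, heading = target_velocity((20.0, 40.0), (28.0, 40.0), dt=0.267)
print(round(speed, 1), heading)  # 30.0 0.0  (~30 pixels/s, heading along +x)
```

Converting the pixel velocity to an inertial velocity would additionally require the sensor geometry (range and pixel footprint), which is outside the scope of this sketch.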


[Figure omitted: nine detection-mapping panels, 2nd through 10th TD images (black: previous detections; white: current detections; circles: negative detections; triangles: positive detections).]

Figure 3. Detection Mapping between Two TD Images

Similar results are shown in figure 5 on the original image frames. The target locations in the previous nine frames and the location in the current frame are plotted on the eleventh (last) original image frame. The result indicates the tracking file of this moving target. The target was originally inserted at a constant y-coordinate (y = 40). However, the sensor platform (the boat) moves with the water wave motion, and thus the track of the moving target along the y-direction also follows the water wave motion, as indicated in figure 5. The boat motion estimated by the image registration process along the x- and y-directions is plotted in figure 6 (pixel movement vs. time frame number).


[Figure omitted: nine panels, 2nd through 10th TD images (circles: ghost targets; triangles: real targets).]

Figure 4. Ghost-Real Target Detection Pair in Each TD Image


[Figure omitted: nine panels, 3rd through 11th original images (circles: ghost targets; triangles: real targets), showing the track of the moving target.]

Figure 5. Ghost-Real Target Detection and Moving Target Tracking

[Figure omitted: plot of sensor platform motion, pixel movement (-50 to 50) vs. time frame number (0 to 120), for the x- and y-directions.]

Figure 6. Sensor Platform (Boat) Motion

5. DISCUSSION AND SUMMARY

The time-differencing process, with the help of image registration techniques, is useful for image-domain moving target detection under heavy clutter conditions. Time-differencing between two well-registered image frames can significantly suppress the heavy static background clutter and thus improve moving target detection. In this paper, we have developed and applied an image-domain moving target tracking (IDMTT) process for robust moving target tracking. The IDMTT process uses a unique location feature, mapping and associating the real moving targets in the previous time-differenced frame with the ghost moving targets in the current time-differenced frame. In the conventional TD process, the ghost target detections are unwanted by-products, since they are not real detections in the current image frame. Nevertheless, in this paper we show that the accurately mapped locations of the ghost target detections can provide very useful information and features for reducing Pfa and for robust moving target detection and tracking. The location mapping accuracy of the ghost target detections depends on the image registration accuracy; we achieved sub-pixel local mapping accuracy (Dist_err < 0.31 pixel) for the boat platform. In addition, the detection centroiding process for extended targets may result in higher location errors than the detection centroiding process for the unresolved targets shown in this paper. Robust target detection and tracking have been demonstrated in the examples of unresolved target detection and tracking using LWIR imagery. The preliminary performance results in these examples are promising: as soon as the second TD time frame (the third original time frame), all the false detections had been eliminated, achieving not only Pd = 100% but also Pdec = 100%.
Furthermore, the accurately mapped locations of the ghost target detections provide useful cueing information about the local locations of the real moving targets, which allows us to develop a second adaptive local CFAR thresholding process for further reducing Pfa and improving Pd and Pdec. As shown in the LWIR imagery case, for a very low SCNR (= 0.5), the Pd is only about 72% after the first global CFAR thresholding process (with a threshold level of P_det = 10 and N_det = 7). Nevertheless, when we applied the second adaptive local CFAR thresholding process (P_det2 = 1), we improved the detection and tracking performance to Pd = Pdec = 100% as soon as the third original time frame. In conventional target detection and tracking designs, the detector hands over the detection information in the image domain to the tracker in the 3D space domain, and thus many useful target features in the image domain are lost. In this paper, a moving target tracking process is developed that works in the same image domain as the detection process. The results in this paper have shown that ghost-real target pair mapping with reversed detection intensity contrast signs is a good image-domain feature for improving detection and tracking performance for unresolved and small extended targets. Moreover, there are many other useful target features in the image domain that can help to further improve performance. For example, for relatively large extended targets, correlations between different target shapes and target intensity/color profiles are potentially useful features for moving target tracking, especially for multiple, closely spaced moving targets. In fact, we have recently developed a robust image-domain moving target tracking process using an adaptive local target correlation tracker. As shown by the results in this paper, the performance of a conventional tracking process depends on the performance of the detection process.
However, we may lose detection of a moving target from time to time under heavy clutter conditions (Pd < 100%), and we may also lose detection of a moving target when it stops moving; for example, a moving vehicle will temporarily stop at a red light or a stop sign. Nevertheless, in our new approach, once we start to track a target, the correlation tracker can continue to track it whether or not we can still detect it in future image frames. These new results will be presented and published at this conference as a separate technical paper [4].


6. REFERENCES

[1]. Hai-Wen Chen, Dennis Braunreiter, and Dennis Healy, "Advanced Image Registration Techniques and Applications," SPIE Defense & Security Symposium, Proceedings of Independent Component Analyses, Wavelets, Unsupervised Nano-Biomimetic Sensors and Neural Networks VI, vol. 6979, pp. OX01-OX14, Orlando, FL, 17-19 March 2008.
[2]. Hai-Wen Chen, Surachai Sutha, and Teresa Olson, "Target Detection and Recognition Improvements Using Spatio-temporal Fusion," Journal of Applied Optics, vol. 43 (2), pp. 403-415, January 2004.
[3]. Gonzalez and Woods, Digital Image Processing, Second Edition, Prentice Hall, 2002.
[4]. Hai-Wen Chen and Dennis Braunreiter, "Robust Image-Domain Target Tracking Process under Heavy Clutter Conditions," to be published in SPIE Defense & Security Symposium, Proceedings of Acquisition, Tracking, Pointing, and Laser Systems Technologies XXIII, vol. 7338-23, Orlando, FL, 13-17 April 2009.
