IEEE SENSORS JOURNAL, VOL. 10, NO. 3, MARCH 2010


Target Detection and Verification via Airborne Hyperspectral and High-Resolution Imagery Processing and Fusion

Doron E. Bar, Karni Wolowelsky, Yoram Swirski, Zvi Figov, Ariel Michaeli, Yana Vaynzof, Yoram Abramovitz, Amnon Ben-Dov, Ofer Yaron, Lior Weizman, and Renen Adar

Abstract—Remote sensing is often used for detection of predefined targets, such as vehicles, man-made objects, or other specified objects. We describe a new technique that combines both spectral and spatial analysis for detection and classification of such targets. Fusion of data from two sources, a hyperspectral cube and a high-resolution image, is used as the basis of this technique. Hyperspectral imagers supply information about the physical properties of an object while suffering from low spatial resolution. The use of high-resolution imagers enables high-fidelity spatial analysis in addition to the spectral analysis. This paper presents a detection technique accomplished in two steps: anomaly detection based on the spectral data and the classification phase, which relies on spatial analysis. At the classification step, the detection points are projected on the high-resolution images via registration algorithms. Then each detected point is classified using linear discrimination functions and decision surfaces on spatial features. The two detection steps possess orthogonal information: spectral and spatial. At the spectral detection step, we want very high probability of detection, while at the spatial step, we reduce the number of false alarms. Thus, we obtain a lower false alarm rate for a given probability of detection, in comparison to detection via one of the steps only. We checked the method over a few tens of square kilometers, and here we present the system and field test results.

Index Terms—Anomaly suspect, high-resolution chip, probability of detection–false alarm rate (PD–FAR) curve, spatial algorithm.

I. INTRODUCTION

We describe a new technique that combines both spectral and spatial analysis for detection and classification of predefined targets, such as vehicles, man-made objects, or other specified objects. Fusion of data from two sources, a hyperspectral cube and a high-resolution image, is used as the basis of this technique.

Manuscript received August 31, 2008; revised January 08, 2009; accepted January 08, 2009. Current version published February 24, 2010. The associate editor coordinating the review of this paper and approving it for publication was Dr. Neelam Gupta. D. E. Bar, K. Wolowelsky, Y. Swirski, A. Michaeli, Y. Abramovitz, A. Ben-Dov, O. Yaron, and R. Adar are with Rafael Advanced Defense Systems Ltd., Haifa 31021, Israel (e-mail: [email protected]). Z. Figov was with Rafael Advanced Defense Systems Ltd., Haifa 31021, Israel. He is now with MATE Intelligent Video, Jerusalem 91450, Israel. Y. Vaynzof was with Rafael Advanced Defense Systems Ltd., Haifa 31021, Israel. She is now with the Optoelectronics Group, Cavendish Laboratory, University of Cambridge, Cambridge CB2 1TN, U.K. L. Weizman was with Rafael Advanced Defense Systems Ltd., Haifa 31021, Israel. He is now with the School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem 91904, Israel. Digital Object Identifier 10.1109/JSEN.2009.2038664

The Compact Army Spectral Sensor (COMPASS) is a hyperspectral sensor that also includes a high-resolution panchromatic imager. Using COMPASS, Simi et al. [1] describe the following technique: hyperspectral anomalies were extracted, and a subregion of the high-resolution image (referred to in the following text as a "chip") was matched to each anomaly and displayed for the operator. We take this technique one step further and apply automatic spatial algorithms to the chips at the classification phase. The technique is described in the next section. Data and results are described in Sections III and IV. A summary concludes this paper.

II. TECHNIQUE

We mounted a hyperspectral imager and a high-resolution imager on an airborne platform. The boresighting of the two cameras was verified. We collected data over different areas, landscapes, and seasons. The data were transferred to the algorithm block, whose main steps are as follows.
1) Extract hyperspectral anomaly suspects.
2) Match each suspect to a high-resolution chip.
3) Apply spatial algorithms to each chip in order to incriminate or exonerate the suspects.
4) Pass incriminations on for further investigation.
The first step, extracting hyperspectral anomaly suspects, was done using unsupervised detection algorithms on the hyperspectral data. Two detection algorithms were used: local [2] and global [3]. The result of these algorithms is a fuzzy map of scores, represented by nonnegative numbers. We used a four-connected neighborhood criterion to group pixels with scores above a given threshold into segments. The centers of mass of these segments form the list of suspect points.
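The segment grouping in the first step can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and the plain-list score map are assumptions made for the example.

```python
from collections import deque

def suspect_points(score_map, threshold):
    """Group pixels whose anomaly score exceeds `threshold` into
    4-connected segments and return each segment's center of mass
    (row, column) as the suspect location."""
    rows, cols = len(score_map), len(score_map[0])
    seen = [[False] * cols for _ in range(rows)]
    centers = []
    for r in range(rows):
        for c in range(cols):
            if score_map[r][c] <= threshold or seen[r][c]:
                continue
            # BFS flood fill over the 4-connected neighborhood.
            queue, segment = deque([(r, c)]), []
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                segment.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and score_map[ny][nx] > threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            # Center of mass of the segment's pixel coordinates.
            cy = sum(p[0] for p in segment) / len(segment)
            cx = sum(p[1] for p in segment) / len(segment)
            centers.append((cy, cx))
    return centers
```

For example, a score map with one two-pixel segment and one isolated hot pixel yields two suspect points, one per segment.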
The second step, matching high-resolution chips to the suspect points, was done in three substeps: approximate translation based on global positioning system (GPS) time tags, improved translation based on global image-matching algorithms (feature based or region based), and final translation based on local algorithms. At the end of this process, each suspect point in the hyperspectral image is matched to a point in the high-resolution image. This point is defined as the center of the chip for further analysis. In the third step, each detected point is classified as incriminated or exonerated using linear discrimination functions and decision surfaces on spatial features. The spatial features are built

1530-437X/$26.00 © 2010 IEEE Authorized licensed use limited to: Hebrew University. Downloaded on June 20,2010 at 12:50:52 UTC from IEEE Xplore. Restrictions apply.



Fig. 1. Hyperspectral image with anomaly suspects and the matched chips.

III. DATA

Two cameras were used to demonstrate this technique.
• A hyperspectral AISA camera from the SPECIM company. The camera operates in the visual near-IR range with a dispersive system based on a prism-grating-prism component. The data are collected via the pushbroom technique. Instantaneous field of view (IFOV): 1 mrad.
• A high-resolution, 11-megapixel, red-green-blue (RGB) Redlake camera. IFOV: 0.1 mrad.
The optics were chosen so that the RGB resolution would be ten times higher than the resolution of the hyperspectral image. The image capture rate was chosen, depending on flight speed, to ensure a slight overlap between images.
We mounted the imagers on a light aircraft. An inertial navigation system and GPS were mounted on the platform and recorded for the duration of the flight.
Preprocessing of the data was performed for each algorithm block.
• For the hyperspectral stage: calibration of the hyperspectral raw data and coregistration of the hyperspectral bands [5].
• For the matching stage: georectification of the hyperspectral data, and RGB image enhancement and cropping.
• For the spatial stage: conversion of the RGB chips to gray-level chips.
We collected data at three different times of year: summer, spring, and winter. The cloud cover was 0/8, 4/8, and 7/8, respectively, with some rain during the winter data collection. The landscape included open fields, forests, and various types of roads and buildings.

IV. RESULTS

Fig. 2. High-resolution image with incriminated chips (bold squares) and exonerated chips (thin squares). For chips A–D, see the detailed explanation in the text.

in three substeps: extract line segments and shadow segments, build vehicle hypotheses from the lines, and then match a shadow segment to each hypothesis [4]. Each hypothesis is assigned a nonnegative score. The shadow segments are used as support for the vehicle hypotheses; thus, when shadows are only partially present, the third substep is omitted. We used MATLAB and ENVI/IDL software to implement the algorithms.
To demonstrate the technique, we plot an example in Figs. 1 and 2. At the center of Fig. 1, the 700-nm band from the hyperspectral cube is plotted. White patches represent pixels that received a score higher than a given threshold. A high-resolution chip is matched to each segment of such pixels. Those chips are shown around the spectral image. In Fig. 2, a high-resolution image is plotted with the chips that were incriminated by the spatial analysis (bold squares) and those that were exonerated (thin squares). For three chips, vehicle hypotheses and shadow segments are plotted. Since no shadow segment was matched to the vehicle hypothesis in chip A, this chip was exonerated. In chip D, no vehicle was found, though there was a car in the hyperspectral image. This is because the car was moving and there is a slight time difference between the capture of the scene by the two imagers.
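The linear discrimination step can be sketched as a sign test on a weighted sum of spatial features. The specific feature vector, weights, and bias below are hypothetical, chosen only to illustrate the decision-surface idea:

```python
def classify_chip(features, weights, bias):
    """Classify one chip by a linear discrimination function.
    The decision surface is w.x + b = 0: a chip whose weighted
    feature sum falls on the positive side is incriminated,
    otherwise it is exonerated."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return "incriminated" if score > 0 else "exonerated"
```

For instance, with illustrative features (line support, shadow support), weights (1.0, 2.0), and bias -1.5, a chip with strong line and shadow evidence is incriminated, while a chip with weak evidence is exonerated.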

A. Comparing the Parts to the Whole

Operations research was done at each algorithm step by marking the targets on the images (hyperspectral and high resolution) and calculating the probability of detection (PD) and false alarm rate (FAR). The calculation of the PD–FAR graph is different for each algorithm block.
At the hyperspectral algorithm step, one obtains a score map. This map is cut at a threshold to give a list of pixels above the threshold. We used a connection criterion to generate segments from those pixels. For each segment, its center of mass is taken as the location of the suspect. Thus, we count suspects that fall inside a marked target as hits and the others as false alarms. We repeat the process for different thresholds to obtain a PD–FAR graph for this algorithm step. A PD–FAR curve is calculated from these data based on the total number of targets N_T in the area considered and the area A of that region:

PD = N_hits / N_T,   FAR = N_FA / A.   (1)

At the spatial algorithm step, we looked at the hypotheses' scores. Each chip gets the highest score of the hypotheses that appear inside it, or a zero score if there is no hypothesis. A chip is considered a hit if its score is above the threshold and there are marked targets inside it.
To check a spatial-only algorithm, we divided the high-resolution images into chips and performed the aforementioned check



Fig. 3. Results: comparing the parts to the whole. PD–FAR curves are plotted for the hyperspectral-only algorithm step (dash-dot line), for the spatial-only algorithm step (thin line), and for the combined system (bold line). This analysis was performed over 2.94 km² with 11 marked targets.
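The threshold sweep behind such PD–FAR curves can be sketched as follows, assuming each suspect is reduced to a (score, hit-target-id-or-None) pair; this compact representation is an assumption introduced here for illustration:

```python
def pd_far_curve(suspects, target_ids, area_km2, thresholds):
    """Sweep a score threshold over suspect points and compute the
    (PD, FAR) operating point at each threshold, in the spirit of (1):
    PD = detected targets / total targets, FAR = false alarms / area.
    `suspects` is a list of (score, target_id) pairs, where target_id
    is the marked target the suspect's center of mass falls inside,
    or None for a suspect outside all marked targets."""
    curve = []
    for t in thresholds:
        above = [(s, tid) for s, tid in suspects if s > t]
        hit_ids = {tid for _, tid in above if tid is not None}
        false_alarms = sum(1 for _, tid in above if tid is None)
        pd = len(hit_ids) / len(target_ids)
        far = false_alarms / area_km2
        curve.append((pd, far))
    return curve
```

Lowering the threshold moves the operating point toward higher PD at the cost of higher FAR, tracing out the curve.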


Fig. 4. Comparison of two flight altitudes.

for all chips. For the combined algorithm, we checked the chips that had been extracted by the matching step. Each target in the checked area is assigned the score of the chip it is in, or zero if it is not included in any chip. If a target is inside multiple chips, it is assigned the highest score of those chips. All chips with no targets in them are considered false alarms. Thus, for a given score s, we count the number of targets detected and the number of false alarms. Define N_d(s) and N_FA(s) as the number of targets hit and the number of false alarms, respectively, with a score higher than the given score:

N_d(s) = #{targets with assigned score > s},   (2)
N_FA(s) = #{chips with no target and score > s}.   (3)

A PD–FAR curve is then calculated as in (1).
We present a comparison between the algorithms over an area of 2.94 km² with 11 marked targets. In Fig. 3, the results for the hyperspectral-only algorithm step, the spatial-only algorithm step, and the combined steps are presented. Inspection of the results shows that the number of false alarms is reduced by an order of magnitude as a result of combining the two algorithms. At 80% PD, the false alarm incidences are 14, 13, and 1.1 for the three algorithms, respectively. At 90% PD, the false alarm incidences are 28, 24, and 2.5, respectively.

B. Resolution Comparison

To check the dependency of the algorithms on ground resolution, we compared two flight lines at two different altitudes: 4000 and 6000 ft above ground level. These altitudes produce hyperspectral ground resolutions of 1.0 × 1.0 and 1.5 × 1.5 m² per pixel, respectively. A comparison of the results at these two altitudes is shown in Fig. 4. There is a reduction in the PD for a given FAR at the higher altitude flight line. This is due to the loss of spatial information in the high-resolution images.
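The combined-algorithm counting of (2) and (3) can be sketched as follows; the chip-table layout (chip id mapped to score and contained target ids) is an assumption made for the example:

```python
def combined_counts(chips, score):
    """Count N_d(s) and N_FA(s) for the combined algorithm:
    each target takes the highest score among the chips containing
    it (zero if it lies in no chip), and every chip containing no
    target counts as a false alarm when its score exceeds `score`.
    `chips` maps a chip id to (chip_score, set_of_target_ids)."""
    target_best = {}
    n_fa = 0
    for chip_score, tids in chips.values():
        if not tids:
            # Chip with no targets: false alarm above the threshold.
            if chip_score > score:
                n_fa += 1
            continue
        for tid in tids:
            # Target keeps the highest score over all its chips.
            target_best[tid] = max(target_best.get(tid, 0.0), chip_score)
    n_d = sum(1 for s in target_best.values() if s > score)
    return n_d, n_fa
```

Sweeping `score` over the chip-score range and normalizing the counts as in (1) then yields the combined PD–FAR curve.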

Fig. 5. Combined results over several databases.

C. Combined Results Over Several Databases

We checked the whole algorithm on the combination of three databases totaling 55 km² and 270 targets, with different cloud and weather conditions and different landscapes. The overall results are shown in Fig. 5. The ground resolutions were in the range of 1.0 × 1.0 to 1.5 × 1.5 m² per pixel. A global threshold was chosen for the hyperspectral algorithm step.
Both the winter and summer experiments showed similar results of 3 false alarms per square kilometer at 80% detection. In the spring experiment, we saw a reduction in the detection: 62% detection at 3 false alarms per square kilometer. This may be due to the scattered cloud cover during this experiment.
Although the three databases are very different from each other, the individual PD–FAR curves PD_i(FAR) (where i ranges over the set of databases) are similar. Therefore, it was justified to combine those curves to get a global PD–FAR curve


PD(s) = Σ_i N_d,i(s) / Σ_i N_T,i,   FAR(s) = Σ_i N_FA,i(s) / Σ_i A_i.   (4)
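Reading the global curve as per-database counts pooled before dividing (one plausible interpretation, consistent with (1)), the combination can be sketched as follows; the database names and counts are illustrative:

```python
def global_pd_far(db_counts):
    """Pool per-database counts, already tallied at a common score
    threshold, into one global (PD, FAR) operating point: total hits
    over total targets and total false alarms over total area.
    `db_counts` maps a database name to the tuple
    (hits_above_score, false_alarms_above_score, n_targets, area_km2)."""
    hits = sum(c[0] for c in db_counts.values())
    false_alarms = sum(c[1] for c in db_counts.values())
    targets = sum(c[2] for c in db_counts.values())
    area = sum(c[3] for c in db_counts.values())
    return hits / targets, false_alarms / area
```

Repeating this at each threshold traces the global PD–FAR curve over all databases.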



For the databases we checked, one may expect 2.5 false alarms per square kilometer at 70% detection, or 4 false alarms per square kilometer at 80% detection.

V. SUMMARY

A system composed of airborne sensors and various automatic algorithms was presented, together with results comparing the system to its parts. The results show a reduction by an order of magnitude in the number of false alarms for a given PD. Thus, the fusion of spectral and spatial algorithms is better than the sum of its parts.
A reduction in the PD was observed at higher altitude. This is due to the loss of spatial information in the high-resolution images.
The results of the combined algorithms are similar when the cloud cover is 0/8 or 8/8. However, when there is scattered cloud cover, such as 4/8, a reduction in the PD is observed, and further investigation is needed.
Reduction of the false alarm rate also needs further investigation. Due to the algorithmic process, the false alarms detected were rectangular; thus, improving the spatial algorithms might not improve results significantly. However, improving the unsupervised algorithms on the hyperspectral data might reduce false alarms through a better understanding of the background model [6].
We checked the system over different landscapes (open fields, forests, roads, buildings) and in different illumination and weather conditions (seasons, sun angles, and cloud coverage). The results showed that the algorithms are robust.

ACKNOWLEDGMENT

The authors would like to thank the members of the Image Processing Group at Rafael Advanced Defense Systems, Ltd., for various algorithms used in this research, and all the people who helped in preparing and carrying out the experiments and data collection. They would also like to thank A. Kershenbaum for his helpful comments.

REFERENCES

[1] C. G. Simi, E. M. Winter, M. J. Schlangen, and A. B. Hill, "On-board processing for the COMPASS," in Algorithms for Multispectral, Hyperspectral, and Ultraspectral Imagery VII, S. S. Shen and M. R. Descour, Eds., Proc. SPIE, vol. 4381, pp. 137–142, 2001.
[2] I. R. Reed and X. Yu, "Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution," IEEE Trans. Acoust., Speech, Signal Process., vol. 38, no. 10, pp. 1760–1770, Oct. 1990.
[3] O. Kuybeda, D. Malah, and M. Barzohar, "Rank estimation and redundancy reduction of high-dimensional noisy signals with preservation of rare vectors," IEEE Trans. Signal Process., vol. 55, no. 12, pp. 5579–5592, Dec. 2007.
[4] Z. W. Kim and R. Nevatia, "Uncertain reasoning and learning for feature grouping," Comput. Vis. Image Understanding, vol. 76, pp. 278–288, 1999.
[5] Z. Figov, K. Wolowelsky, and N. Goldberg, "Co-registration of hyperspectral bands," in Image and Signal Processing for Remote Sensing XIII, L. Bruzzone, Ed., Proc. SPIE, vol. 6748, pp. 67480S-1–67480S-12, 2007.
[6] L. Boker, S. R. Rotman, and D. G. Blumberg, "Coping with mixtures of backgrounds in a sliding window anomaly detection algorithm," in Proc. SPIE, Electro-Optical and Infrared Systems: Technology and Applications V, 2008, vol. 7113, pp. 711315-1–711315-12.

Doron E. Bar was born in 1962. He received the Ph.D. degree in applied mathematics from the Technion, Haifa, Israel, in 1996. Since 1999, he has been with Rafael Advanced Defense Systems Ltd., Haifa, Israel, as an Image Processing Engineer. His current research interests include computer vision, image processing, and remote sensing tasks.

Karni Wolowelsky, photograph and biography not available at the time of publication.

Yoram Swirski was born in Jerusalem, Israel, in 1955. He received the B.Sc. degree in physics from Tel-Aviv University, Tel-Aviv, Israel, in 1976, the M.Sc. degree (cum laude) in applied physics and electrooptics from The Hebrew University of Jerusalem, Israel, in 1978, and the Ph.D. degree in physics from the Technion, Haifa, Israel, in 1992. Since 1979, he has been with Rafael Advanced Defense Systems Ltd., Haifa, where he is currently engaged in research on IR radiometry, image generation and simulation, and computer vision.

Zvi Figov studied computer science and mathematics at Bar-Ilan University, Ramat-Gan, Israel, where he received the B.Sc. degree in 1999 and the M.Sc. degree, with specialization in neuroscience, in 2002. From 2001 to 2008, he was with Rafael Advanced Defense Systems Ltd. (formerly Rafael Armament Development Authority), Israel, where he was engaged in research on image processing and remote sensing. He is currently with MATE Intelligent Video, Jerusalem, Israel, where he is engaged in developing video analytics, computer vision, real-time analytics, and remote sensing.

Ariel Michaeli, photograph and biography not available at the time of publication.

Yana Vaynzof was born in Tashkent, Uzbekistan, on December 2, 1981, and immigrated to Israel in 1991. She received the B.Sc. degree (summa cum laude) in electrical engineering from the Technion-Israel Institute of Technology, Haifa, Israel, in 2006, and the M.Sc. degree in electrical engineering from Princeton University, Princeton, NJ, in 2008. She is currently working towards the Ph.D. degree in physics at the Optoelectronics Group, Cavendish Laboratory, University of Cambridge, Cambridge, U.K. During her undergraduate studies, she worked part-time as a Student Engineer at Rafael Advanced Defense Systems Ltd., Haifa, in the Image Processing Group of the Missile Division. During 2000–2002, she was with the Israeli Defense Forces as an Instructor in the Flight Academy. Her current research interests include the development of hybrid polymer solar cells and the improvement of their efficiency and stability. Miss Vaynzof was the recipient of a number of fellowships and awards, including the Pinzi Award for Academic Excellence (2004), the Knesset (Israeli Parliament) Award for contribution to the Israeli Society (2005), the Gordon Y. Wu Fellowship (2006–2008), and the Cavendish Laboratories Award (2008).



Yoram Abramovitz was born in Afula, Israel, in 1962. He received the B.Sc. and M.Sc. degrees in physics from the Technion, Haifa, Israel, in 1994. Since 2000, he has been with Rafael Advanced Defense Systems Ltd., Haifa, where he is currently engaged in remote sensing, R&D of electrooptical systems, and radiometric measurements.

Amnon Ben-Dov was born in 1955. Since 1981, he has been an Electronics Practical Engineer at the Physics Development Laboratories, Rafael Advanced Defense Systems Ltd., Haifa, Israel.

Ofer Yaron was born in 1965. He received the B.Sc. degree in physics from the Technion, Haifa, Israel, in 1992, and the M.Sc. degree in physics from Tel-Aviv University, Tel-Aviv, Israel, in 1998. Since 1992, he has been with Rafael Advanced Defense Systems Ltd., Haifa, where he is currently engaged in research on remote sensing, image generation, and simulation.


Lior Weizman received the B.Sc. (with distinction) and M.Sc. degrees in electrical engineering from Ben-Gurion University of the Negev, Beer-Sheva, Israel, in 2002 and 2004, respectively. He is currently working towards the Ph.D. degree at the School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel. From 2005 to 2008, he was with Rafael Advanced Defense Systems Ltd., Haifa. His current research interests include image processing, pattern recognition, and statistical signal processing.

Renen Adar was born in Afula, Israel, in 1955. He received the B.Sc. and M.Sc. (cum laude) degrees in physics and mathematics from The Hebrew University of Jerusalem, Israel, and the D.Sc. degree in microelectronics from the Department of Electrical Engineering, Technion-Israel Institute of Technology, Haifa, Israel. From 1989 to 1993, he was a Member of Technical Staff with the Passive Optical Component Research, AT&T Bell Laboratories, Murray Hill, NJ. Since 1994, he has been with Rafael Advanced Defense Systems Ltd., Haifa, where he is currently engaged in research on algorithm development activities related to machine vision and image recognition tasks.

