
Appl Geomat (2013) 5:299–310 DOI 10.1007/s12518-013-0118-4

ORIGINAL PAPER

Mapping shadows in very high-resolution satellite data using HSV and edge detection techniques

Sunil Bhaskaran & Swaroopa Devi & Sanjiv Bhatia & Ashok Samal & Leroy Brown

Received: 5 December 2012 / Accepted: 5 September 2013 / Published online: 18 September 2013
© Società Italiana di Fotogrammetria e Topografia (SIFET) 2013

Abstract Multispectral scanners (MSS) such as IKONOS have very high spatial resolution and therefore provide an excellent source of information about terrestrial features. The images from these scanners may contain shadows that can cause partial or complete loss of radiometric information, leading to misinterpretation or inaccurate classification. In addition, the identification of shadows is critical for several applications. The goal of this study is to develop computer-based algorithms to detect shadows in IKONOS panchromatic (1×1 m) and MSS bands (4×4 m). We converted subsets of IKONOS pan and MSS images over New York City to HSV color space and used histogram analysis to determine an intensity threshold. Potential sunlit and shadow areas were demarcated, and edge detection techniques were employed to eliminate the non-shadow, low-intensity areas and identify shadow areas on the image subsets. We tested the results on a time series of datasets to develop a robust model that has the capability to detect shadows and extract them from high-resolution satellite imagery.

Keywords Shadow detection · Shadow removal · Image processing · Remote sensing

S. Bhaskaran · L. Brown
Bronx Community College and Earth and Environmental Studies (E.E.S), Graduate Center, City University of New York, New York, NY, USA

S. Devi · S. Bhatia (*)
Department of Mathematics and Computer Science, University of Missouri—St. Louis, St. Louis, MO, USA
e-mail: [email protected]

A. Samal
Department of Computer Science and Engineering, University of Nebraska–Lincoln, Lincoln, NE, USA

Introduction

Classification of high-resolution imagery helps several agencies make informed decisions. These agencies include agricultural, transportation, emergency management, marketing, and business entities. They may make use of satellite data that is typically acquired at different time intervals. Depending on the time each satellite image is acquired, there is a strong likelihood of shadows being present in the imagery. This has both positive and negative impacts. For example, the height of a tall structure may be easily estimated from the shadow cast by that structure. On the other hand, a shadow may cause a loss of radiometric information content that can lead to inaccurate classification of data (Arevalo et al. 2008). Shadows in images have, therefore, been a major concern to remote-sensing scientists, and this concern has resulted in the development of various methods to remove or reduce their effects.

The presence of shadows in satellite imagery can lead to anomalies in classification. Shadows arise from the interplay between light and objects in the scene, and shadowed pixels may be misclassified as surface features. This fact underscores the importance of detecting shadows in remotely sensed imagery (Xu et al. 2006). A number of researchers have worked on the detection of shadows in different terrain regions. For example, Asner and Warner (2003) quantified the spatial variation due to canopy shadow in tropical forests and savannas. Chung et al. (2009) iterate over local thresholds to mark a set of pixels as shadows and later iterate over those candidate shadow pixels to find real shadows. Zhou et al. (2009) used an object-based approach to detect shadows. Bhaskaran et al. (2011) investigated a combination of object-oriented and spatial autocorrelation techniques to identify urban features.

In this paper, we present a computer-based algorithm to detect shadows from IKONOS panchromatic (1×1 m) and MSS bands (4×4 m). We have used subsets of IKONOS panchromatic and


MSS images over New York City to test our algorithm and to present the results. Our algorithm clearly demarcates potential sunlit and shadow areas in the images. We have also used edge detection techniques to eliminate the non-shadow, low-intensity areas and identify shadow areas on the image subsets.

This paper is organized as follows. In the next section, we review the literature on the detection of shadows. This is followed by our algorithm to detect shadows. We conclude with sections describing our data, experiments, and results.

Literature review

A study conducted by Cucchiara et al. (2001) discusses a technique for shadow detection and suppression used in a system to detect and track moving visual objects. The analysis was carried out in the hue/saturation/value (HSV) color space to improve the accuracy of shadow detection. The study describes the signal-processing and optical motivations of the approach and outlines the integration of the shadow detection module into the system, along with an evaluation of the experimental results.

Guo et al. (2010) described a method to remove shadows from Google Earth images by using the height of buildings. They explain that their method of shadow removal is suitable for urban areas with tall building structures, and they describe the cause of shadows and the algorithm used to remove shadows from applicable images. The presented method manipulated images in the red, green, and blue (RGB) color space. They showed that the wavelength of the shadow increased as the shadow intensity got stronger. The method is suited to urban areas because it relies on the Lambertian property of surface reflections found in high-rise buildings and metropolitan locations.

Arevalo et al. (2008) used a region-growing process to segment shadow regions in high-resolution color satellite images. Their technique is based on imposing restrictions on the saturation and intensity values of the shadow pixels and their edge ingredients, using a region-growing process in a specific band. They used data from the QuickBird satellite under different lighting conditions in both urban and rural areas. However, their method requires manual input of thresholds, and the technique resulted in a large number of false positives in some urban applications.

Singh et al. (2012) present an efficient and simple approach for shadow detection and removal, based on the HSV color model, in complex urban remote-sensing imagery. They detect shadows using a normalized difference index and subsequent thresholding based on Otsu's method. After detection, they classify shadows and estimate a non-shadow area around each shadow, termed the buffer area, using morphological operators. They use the mean and variance of these buffer areas to compensate for the shadow regions.
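As a rough illustration of the kind of thresholding step Singh et al. describe, the sketch below applies Otsu's method to the value channel of an HSV image with OpenCV. The file name is hypothetical, and we substitute the V channel for their normalized difference index, so this is only a sketch of the idea, not their implementation.

```python
import cv2

# Hypothetical RGB export of a remote-sensing scene (file name is ours).
img = cv2.imread("scene.tif", cv2.IMREAD_COLOR)            # OpenCV loads BGR
value = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]      # 8-bit V channel

# Otsu's method picks the threshold that minimizes intra-class variance;
# THRESH_BINARY_INV marks the dark side, i.e., candidate shadows, as 255.
t, mask = cv2.threshold(value, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

print("Otsu threshold on the V channel:", t)
cv2.imwrite("shadow_candidates.png", mask)
```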


Richter and Muller (2005) introduced a de-shadowing concept that can be applied to terrain with less than 25 % cloud cover. They specify that water bodies should be excluded as much as possible when using this method, because of the difficulty of distinguishing water bodies from cloud-shadow areas. The advantage of this method, as demonstrated in the paper, is its fast processing performance: it relies on spectral calculations and neglects time-consuming geometric cloud/shadow pattern considerations. Simpson et al. (2000) have also focused on removing shadows cast by clouds. They used algebraic formulations, functions, and graphs to explain their de-shadowing method, which treats the sun angles as a fundamental input for both detecting and eliminating cloud shadows.

Young et al. (2005) describe an algorithm to detect and identify terrain shadowing effects from a flight perspective. They use digital elevation models to extract shadows and argue in favor of representing terrain features from both model and sensor measurements. Premože et al. (1999) investigated shadow management for snowy and mountainous topography. This method requires information about the geometry and the photometry of the scene. The technique makes use of an orthorectification process; the perspective image is warped to remove the effects of the lens projection, camera orientation, and terrain. The multispectral satellite data can be converted to RGB that approximates the visual color. In this scheme, producing a usable ortho-image first requires managing the brightness of pixels and their neighborhoods, along with criteria such as slope and aspect.

Giles (2001) focuses on removing shadows in rugged terrain and mountains. He delves into the issues of topographic protrusions and the angle at which shadows form due to the ruggedness of the mountain, and describes an algorithm to delineate and correct the cast shadow (a minimal clustering sketch in this spirit is given at the end of this section). Some of the data shown use a mountain peak at an elevation of 2,500 m. Pixels are clustered into three groups using a K-means algorithm. The results of the cast-shadow delineation were compared against delineation by a human volunteer and showed 85.9 % accuracy.

Dare (2005) reviewed and named previously used shadow-manipulation techniques and identifies three approaches to removing shadows from high-resolution imagery: masking, multisource data fusion, and radiometric enhancement. The article surveys related references, the protocols necessary to execute the various applicable tasks, and some issues with each approach. Dare notes that the simplest algorithms provide the best chance of separating shadow from non-shadow regions.

Li et al. (2005) presented an approach to automatically detect and de-shadow high-resolution urban aerial images for GIS applications. The shadow is computed from a digital surface model (DSM) and the solar altitudes. They used a ray-tracing method to determine the visibility of shadows in the image. The



Fig. 1 Overall steps to identify shadows in satellite imagery of urban areas

shadow was segmented from the RGB image at the base of the traced image shadows. The paper describes a set of innovative techniques using photogrammetric and image analysis methods. The DSM lacks fine detail, but the traced shadows are located largely correctly in the image.

Lo and Yang (2006) described an algorithm that uses color, shading, texture, neighborhoods, and temporal consistency for the efficient and reliable detection of shadows in a scene. This algorithm can detect umbra and penumbra in different scenarios under various illumination conditions. Lu (2007) describes two approaches to detect shadows, based on maximum and minimum filters, using tests carried out on second-moment texture measures.

Ollis and Stentz (1997) depicted an agricultural scenario in which a large field is scanned by a harvester that distinguishes between cut and uncut crops. Their system compensates for illumination in order to remove noise caused by shadows when delineating the field, and the algorithm they developed provides boundaries between cut and uncut crops. They presented a technique to remove this noise by using differences in the spectral power distribution of the light illuminating the shadowed and non-shadowed regions, as captured by an RGB camera. The authors also added obstacle detection for objects within the crop area.

Another study, by Ozdemir (2008), examines the relationship between field-measured stem volume and tree attributes, including tree crown and tree shadow areas, measured from pan-sharpened QuickBird imagery in a forested area. This study computed and modeled stem volume using linear regression

and statistical analysis and showed that the stem volume is correlated with shadow and crown areas. Arellano (2003) described a method that uses a wavelet transformation to deal with clouds and shadows that cover remotely sensed areas. He discusses how many of the conventional methods to remove clouds and their shadows are based on time series, but emphasizes that spatial information is superior for this purpose. The wavelet analysis used is a refinement of Fourier analysis, based on functions that split the spatial signal into its frequency components. Fourier analysis cannot handle a signal that changes over time, whereas wavelets provide localization in both time and frequency.
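As flagged in the discussion of Giles (2001) above, intensity clustering is one simple way to separate cast shadow from lit terrain. The sketch below clusters grey levels into three groups with OpenCV's k-means; the file name, the k-means++ seeding, and the use of raw intensities as the only feature are our assumptions, not details from that paper.

```python
import cv2
import numpy as np

# Hypothetical single-band export of mountainous-terrain imagery (file name is ours).
band = cv2.imread("terrain.tif", cv2.IMREAD_GRAYSCALE)

# Cluster raw grey-level intensities into three groups
# (e.g., cast shadow, partly lit, and sunlit pixels).
samples = band.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1.0)
_, labels, centers = cv2.kmeans(samples, 3, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)

# The cluster with the darkest mean intensity is taken as the cast-shadow class.
shadow_cluster = int(np.argmin(centers))
shadow_mask = (labels.reshape(band.shape) == shadow_cluster).astype(np.uint8) * 255
cv2.imwrite("kmeans_shadow_mask.png", shadow_mask)
```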

Fig. 2 Histogram of the panchromatic image

Fig. 3 Distribution of pixel counts in histogram

Detection of shadows

We have examined very high-resolution panchromatic and multispectral bands of IKONOS satellite imagery for this study. However, the method can be applied to images in other bands as well. We have taken samples of both monochromatic and color images and found the method to be effective on both.

Our approach to detecting shadows starts with a histogram analysis of the image. The method analyzes the intensity of the pixels in the image to determine an exact threshold point at which to segment the image into candidate regions that contain shadows. We define this threshold as the point in the histogram where the count of pixels shows a sharp difference between the number of pixels at low intensity and the

number of pixels at high intensity. We found that performing the analysis in the HSV space, as compared to the RGB space, gives a better threshold for delineating sunlit and shadow regions. This analysis correctly identifies the intensity of the shadowed areas but cannot isolate all the shadows from other low-intensity regions. We improved the system's performance through a novel algorithm that uses edge detection to discriminate the actual shadows from other low-intensity objects, such as water bodies and roads, in the image.

Fig. 4 A sunlit pixel

Table 1 Pixel classification criterion

Initial guess    Pixel intensity    Final classification
Shadow           Sunlit             Sunlit
Shadow           Shadow             Shadow
Sunlit           Sunlit             Sunlit
Sunlit           Shadow             Sunlit

Before continuing with the analysis, we would like to justify our use of the HSV color space in relation to the human vision system. The human vision system is based on the sensing of light energy by photoreceptors in the retina. These photoreceptors are known as cones and rods. Rods are responsible for scotopic, or low-light, vision. Cones are highly sensitive to color and are responsible for photopic, or bright-light, vision. Cones respond to the three primary colors (red, green, and blue) with different levels of sensitivity. Of the 6 to 7 million cones in the human eye, about 65 % respond to red, 33 % respond to green, and just 2 % respond to blue, the latter being the most sensitive (Gonzalez and Woods 2008). The three primary colors were identified and standardized in 1931 by the International Commission on Illumination (CIE). The digital imaging era saw the use of these three
colors for storage and display of color images. The standard way to store images assigns 8 bits, or 256 shades, to each of the three colors. However, due to the sensitivity characteristics of the human eye, the three colors have different effects on perception. Therefore, these values are not very useful for reasoning about images. Humans tend to describe colors in images as, say, "dark reddish" rather than quantifying the exact amount of each color. Such considerations have led to the development of different color systems for performing reasoning or comparison with colors, among them HSV and L*a*b*. Hue describes the dominant color as perceived by a human observer, saturation indicates the relative purity of the color, and value gives the brightness (Poynton 1996). HSV has proved to be one of the best systems for reasoning with color images and, hence, we converted our images to HSV to detect shadows. It should be noted that RGB remains the dominant space for storage of images, so we convert the images to HSV just prior to reasoning with them.

Data analysis

We have used the raw images from the IKONOS subsets covering urban locations in New York City to detect shadows. The urban locations pose unique challenges due to the presence of different objects such as buildings, trees, roads, and water bodies. Our objective is to

Table 2 IKONOS satellite data characteristics

Spatial resolution       0.82×0.82 m
Spectral range           Pan band 526–929 nm; Blue band 445–516 nm; Green band 506–595 nm; Red band 632–698 nm; Near IR 757–853 nm
Swath width              11.3 km
Off-nadir imaging        Up to 60°
Dynamic range            11 bits per pixel
Mission life expected    >8.3 years
Revisit time             Approximately 3 days
Orbital altitude         681 km
Nodal crossing           10:30 a.m.
Acquisition date/time    (a) May 2008, (b) September 2003

Fig. 5 A shadow pixel



Fig. 6 Study site—New York City

achieve the closest result to the ground truth in the detection of shadows for any IKONOS image without using multiple images to train an algorithm. We limited ourselves to using a single panchromatic image to achieve accurate shadow detection. The process of identifying the shadow area from an

image is outlined in the flowchart in Fig. 1. A more detailed description of the steps in the flowchart is given below.

Step 1 Converting from RGB to HSV space

The color images from IKONOS are stored as



Fig. 7 Result of shadow detection in gray scale and HSV space. a Original image. b Shadows using RGB space. c Shadows using HSV space

RGB value for each pixel in the image. This combination contains the primary colors in the visual portion of the electromagnetic spectrum and is typically known as a pixel vector. The three values in the vector for each pixel are uniformly mapped on the same scale; but as noted above, their contribution to the human visual response (HVR) is not uniform. For example, the green values have a much larger effect on HVR than the blue values. We can transform the pixel vector

containing the RGB values into an intensity value in order to compare the brightness of different pixels. However, we found that the variation of the intensity calculated in the RGB space did not give us a threshold value that can clearly discriminate between sunlit and shadow regions, because the RGB components contribute different weights towards brightness as perceived by the HVR. To overcome this problem, we converted the image into HSV space, where the difference in the intensity of pixels is more in tune with the human perception of brightness and, hence, a better outcome can be obtained. In HSV space, hue is an angle that indicates the color, with the red primary at 0°, the green primary at 120°, and the blue primary at 240°. Saturation gives the purity of the color, with 0 indicating a lack of color and 1 (or the maximum value on a scale) indicating the pure color. Value specifies the largest component of any color, with 0 indicating black (Poynton 1996).

Step 2 Histogram analysis to find intensity threshold

We use the image from the previous step to perform histogram analysis in HSV space. The histogram gives us a count of the individual intensity values in the image. It is computed by creating a set of bins to hold the counts of pixels, each bin corresponding to an intensity value, and incrementing a bin's contents by 1 whenever a pixel with the corresponding value is encountered. In this step, we create the histogram by mapping pixels of similar intensity into a single bin; in our implementation, a range of five intensity values goes into one bin. After creating the histogram, we calculate the variance of the pixel counts in each bin from the mean value and identify the bins with a high degree of variance. We would like to emphasize that the variance is not in terms of pixel intensity but in the count of pixels at given intensities. We set the acceptable level of variance at about half of the maximum variance in pixel count over all bins. This is shown with the solid vertical red line in Fig. 2. The bins with variance higher than the acceptable variance are excluded from the threshold calculations (Fig. 3). These bins do not contain the threshold point that discriminates shadows; rather, they represent large geographic areas with the same kind of characteristics, such as lakes, meadows, or desert regions.
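To make Steps 1 and 2 concrete, here is a minimal sketch using OpenCV and NumPy. It assumes an 8-bit RGB export of the imagery and a hypothetical file name; the bin width of five and the cut-off at half of the maximum deviation follow the description above, while the selection of the largest low-to-high jump between retained bins follows the rule given later in this step.

```python
import cv2
import numpy as np

# Hypothetical 8-bit RGB export of an IKONOS subset (file name is ours).
img = cv2.imread("ikonos_subset.tif", cv2.IMREAD_COLOR)   # OpenCV loads BGR
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)                 # Step 1: RGB -> HSV
value = hsv[:, :, 2]                                       # V channel in [0, 255]

# Step 2: histogram with bins five intensity levels wide.
bin_width = 5
bin_edges = np.arange(0, 256 + bin_width, bin_width)
counts, _ = np.histogram(value, bins=bin_edges)

# Bins whose pixel count deviates from the mean count by more than half of the
# maximum deviation are treated as large homogeneous regions (water, meadows,
# desert) and excluded from the threshold search.
deviation = np.abs(counts - counts.mean())
keep = deviation <= 0.5 * deviation.max()

# Threshold = intensity separating the first pair of retained adjacent bins
# with the largest low-to-high jump in pixel count.
jumps = np.diff(counts.astype(np.int64))
jumps = np.where(keep[:-1] & keep[1:], jumps, -1)
threshold = int(bin_edges[int(np.argmax(jumps)) + 1])
print("Intensity threshold:", threshold)
```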



Fig. 8 Detection of shadows on MSS IKONOS datasets. a IKONOS stack subset (13th January). b Application of intensity threshold. c Laplace edge detection. d Final output (shadows colored in blue)

The histogram is plotted using only the value portion of the HSV pixels. In our implementation, the value portion is mapped to an 8-bit depth that gives us the range of value as [0, 255]. Our approach is to find bins with the first largest positive difference (low to high increase) in the pixel counts between the bins and take the intensity value that separates these bins as the threshold. Taking the first largest positive

difference ensures that the threshold point being computed is in the lower intensity region. The threshold point is represented by the vertical red line in Fig. 2.

Step 3 Identify potential sunlit and shadow area

Once the threshold value is identified, we create a binary image from the original image by painting all the pixels with intensity value higher than the threshold as white and shading



Fig. 9 Illustration of shadows and low-intensity areas. a IKONOS stack subset (October 26, 2001). b Image converted to grayscale. c Intensity thresholding. d Final output with shadows and low-intensity areas

the remaining pixels black. We use the following criterion to create the binary image. Let the computed threshold be denoted by T(x, y), the source pixel in the original image by src(x, y), and the corresponding destination pixel in the binary image by dst(x, y). Then, we have

$$ dst(x, y) = \begin{cases} \text{maxValue} & \text{if } src(x, y) > T(x, y) \\ 0 & \text{otherwise} \end{cases} $$

This accurately identifies the shadows in most urban settings. However, if there are objects with lower intensity than shadows, the shadows are not isolated.
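Read literally, the criterion above is a per-pixel comparison against the threshold. A minimal NumPy sketch, with function and variable names of our own choosing:

```python
import numpy as np

def binarize(src: np.ndarray, threshold: int, max_value: int = 255) -> np.ndarray:
    """dst(x, y) = max_value if src(x, y) > T(x, y), and 0 otherwise."""
    return np.where(src > threshold, max_value, 0).astype(np.uint8)

# Toy example: a 3x3 patch with one bright pixel and a threshold of 100.
patch = np.array([[20, 30, 25],
                  [28, 240, 22],
                  [31, 27, 26]], dtype=np.uint8)
print(binarize(patch, 100))
```

The same operation is available in OpenCV as cv2.threshold with the THRESH_BINARY flag.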



Fig. 10 Shadow noon data; pan bands. a Pan Test 1. b Laplace edge. c Intensity thresholding. d Final output with shadows

Step 4 Edge detection to isolate non-shadow, low-intensity areas

To isolate shadows from the lower-intensity areas in the image, we have developed an algorithm that makes use of the fact that a shadow always exists adjacent to some edge, especially at the scale of satellite images such as those from IKONOS. Edges are defined as a sharp change in pixel intensity along a contour. Typically, we can identify edges by using a transform based on the second-order partial derivative in both the horizontal and vertical directions of the image. We first perform edge detection on the image to identify all the edges in the image. In the current system, we have used the Laplacian operator (Gonzalez and Woods 2008) to perform the edge detection. Once we have identified the edges, we analyze the proximity of each pixel in the edge-detected image to an edge. If the pixel is very close to an edge, then we make a guess that the corresponding pixel in the original image is probably a shadow. In the alternate case, where there is no edge in the proximity of the pixel, we conclude that the corresponding pixel is in the sunlit area but is of low intensity. We illustrate this using Figs. 4 and 5. In Fig. 4, we show a 3×3 pixel area with a center pixel surrounded by 8 dark pixels. We reason that the pixel

at the center is a sunlit pixel due to the absence of a nearby edge. In Fig. 5, we classify the center pixel as a shadow pixel because some of its neighboring pixels are bright while others are dark. We should observe here that a pixel determined to be a shadow pixel need not be a shadow in the original image. To detect the proximity of an edge, we convolve the image with an appropriately sized kernel. If any one of the pixels in the kernel is an edge pixel (white in color), we determine that the pixel is close to the edge and can therefore potentially be a shadow pixel. Once we make a determination about each pixel, we compare this assignment with the corresponding pixel in the binary image and decide whether the pixel is a shadow using the criterion described in Table 1.

Step 5 (Extension): Eliminating false shadows at the edges

As a result of the previous step, we identify all the shadows in the image. However, there still remain a few false shadows at the edges of non-shadow, low-intensity areas. These are the low-intensity pixels that lie beyond the distance of the kernel from the edge. These false shadows can be eliminated if we can identify them positively as belonging to a non-shadow, low-intensity area. To achieve this, we consider not only the proximity of the pixels but also the direction. If the



Fig. 11 Shadow noon; bands B-NIR-R. a Input image. b Laplace edge. c Intensity thresholding. d Final output with shadows

pixel under consideration is in a low-intensity, non-shadow area, we also make sure that low-intensity pixels are present in all directions (east, south, west, and north) before marking it as a non-shadow, low-intensity pixel.
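The following is a compact sketch of how Steps 4 and 5 might be wired together with OpenCV and NumPy. The kernel size, the cut-off on the Laplacian response, and the helper names are our assumptions rather than values given in the paper.

```python
import cv2
import numpy as np

def classify_shadows(gray, dark_mask, ksize=7, edge_thresh=30):
    """Combine intensity and edge evidence roughly as in Table 1 and Step 5.

    gray      : 8-bit intensity (V-channel) image
    dark_mask : 255 where intensity <= threshold (candidate shadows), else 0
    Returns boolean masks (shadow, non_shadow_low_intensity).
    """
    # Step 4: Laplacian edge map; strong absolute responses are treated as edges.
    lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)
    edges = (np.abs(lap) > edge_thresh).astype(np.uint8)

    # "Any edge pixel inside the kernel" is equivalent to dilating the edge map
    # with a ksize x ksize structuring element.
    near_edge = cv2.dilate(edges, np.ones((ksize, ksize), np.uint8)) > 0

    dark = dark_mask > 0
    shadow = dark & near_edge               # Table 1: dark AND near an edge
    low_intensity = dark & ~near_edge       # non-shadow, low-intensity area

    # Step 5 (extension): a dark pixel with dark neighbours in all four
    # directions at distance ksize is assumed to lie inside a large
    # low-intensity region (border wrap-around ignored in this sketch).
    inside = (np.roll(dark,  ksize, axis=0) & np.roll(dark, -ksize, axis=0) &
              np.roll(dark,  ksize, axis=1) & np.roll(dark, -ksize, axis=1))
    shadow &= ~inside
    low_intensity |= dark & inside
    return shadow, low_intensity
```

The two returned masks correspond to the regions painted blue (shadow) and red (non-shadow, low intensity) in Figs. 8 and 9.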

Data

We have used two cloud-free and orthorectified IKONOS satellite images (May 2008 and September 2003) in this study. The IKONOS data comprise a panchromatic band and four multispectral bands, with spatial resolution

ranging from 1×1 to 4×4 m. The images were projected to the Universal Transverse Mercator (UTM) coordinate system, WGS84 datum, zone 18, and have a radiometric resolution of 11 bits per pixel. The data specifications and resolution characteristics are shown in Table 2.

Study area

The study site consists of the Queens and Brooklyn boroughs in New York City as well as the North Bergen/Guttenberg area in New Jersey (Fig. 6). In both study sites, the land use varies from densely built up to low-density built up, recreational sites, open spaces, trees, and water bodies. In the Queens/Brooklyn area, the East River virtually bisects the study area into two sections. The northern boundary of the image comprises dense residential apartments with some trees along the sides of the roads.



Fig. 12 Shadow noon; bands B-NIR-R. a Input image. b Laplacian edge. c Intensity thresholding. d Final output with shadows

Some recreational areas (parks) are also found in the northern parts of the study area. The northern and western parts of the image consist of large industrial buildings, open spaces with scattered vegetation, trees, and some residential houses. Several industrial areas and small residential houses are located in the southern and eastern parts of the study site. The North Bergen/Guttenberg area comprises the same land-use types as the Queens/Brooklyn area. Its western boundary comprises large commercial buildings bordered by the New Jersey Turnpike, swamps, and open space. The eastern boundary comprises commercial/shopping districts, such as the Palisades Medical Center and the Promenade, bordered by the Hudson River. The James J. Braddock North Hudson County Park is found in the east-central region of the study site. The areas surrounding the park to the north and south mainly comprise dense residential districts, shopping complexes, and vegetation in the form of trees and recreational parks. The study sites in New York and New Jersey are shown in Fig. 6.

Results

We developed processes using IKONOS bands to detect shadows from a time series of MSS data. We were able to

demonstrate shadow detection techniques on a single image by using extensive digital image analysis. The result of our work is displayed in Fig. 7. We have illustrated the process in Fig. 8 using an image of New York City's urban area from the stacked IKONOS MSS bands (red, NIR, and blue). This image is good for illustration as it contains the shadows cast by the tall buildings in the urban area and shows the effectiveness of our approach. Figure 8b shows the application of thresholding on the original image. The result of edge detection using the Laplacian technique is shown in Fig. 8c. In Fig. 8d, we show the shadow regions in blue. It should be noted that our program shows the non-shadow, low-intensity regions in red but did not detect any such areas in this image. Figure 9 shows similar operations and illustrates the detection of non-shadow, low-intensity regions in red. It is interesting to note in Fig. 9 that our program was able to distinguish actual shadows from the presence of a cloud and large bodies of water.

We applied the procedure to images that were acquired closer to noon. We selected images that contained features of different shapes and sizes, such as tall buildings and spherical objects. We present an example of such an image in Fig. 10a. Figure 10b–d shows, respectively, the results after applying the Laplacian edge filter, intensity thresholding, and the shadows that were successfully identified. We also tested the same procedure on an MSS image that covers a section of


densely urban Manhattan overlooking Central Park (Fig. 11a). The tall buildings cast long, near-vertical shadows. The procedure was very effective and produced good results (Fig. 11b–d). We repeated the steps on a 4×4-m MSS band for the same area as Fig. 10a. Once again, the shadows were clearly identified, as shown in Fig. 12b–d.

Conclusion and discussions

Detection of shadows is an important step in image analysis, since shadows obscure vital information and lead to inaccurate classification of targets. We have demonstrated a methodology to detect shadows and tested its robustness on time series of both panchromatic and MSS IKONOS datasets; however, this methodology may have to be modified before it is applied to datasets of other resolutions. Our future work will focus on testing this methodology to detect shadows on other datasets such as RapidEye, GeoEye-1, and airborne hyperspectral data such as HyMap.

References

Arellano P (2003) Missing information in remote sensing: wavelet approach to detect and remove clouds and their shadows. International Institute for Geo-Information Science and Earth Observation, Enschede, The Netherlands
Arevalo V, Gonzalez J, Ambrosio G (2008) Shadow detection in colour high-resolution satellite images. Int J Remote Sens 29(7):1945–1963
Asner GP, Warner AS (2003) Canopy shadow in IKONOS satellite observations of tropical forests and savannas. Remote Sens Environ 87(4):521–533
Bhaskaran S, Ramnarayan M, Paramananda S (2011) Determination of optimal scale parameters for segmentation of urban features from multispectral IKONOS imagery. Asian J Geoinformatics 11(2)
Chung K-L, Lin Y-R, Huang Y-H (2009) Efficient shadow detection of color aerial images based on successive thresholding scheme. IEEE Trans Geosci Remote Sens 47(2):671–682
Cucchiara R, Grana C, Piccardi M, Prati A, Sirotti S (2001) Improving shadow suppression in moving object detection with HSV color information. Proceedings of the IEEE Conference on Intelligent Transport Systems, pp 334–339
Dare PM (2005) Shadow analysis in high-resolution satellite imagery of urban areas. Photogramm Eng Remote Sens 71:169–177
Giles PT (2001) Remote sensing and cast shadows in mountainous terrain. Photogramm Eng Remote Sens 67(7):833–839
Gonzalez RC, Woods RE (2008) Digital image processing. Pearson Prentice Hall, Upper Saddle River, NJ
Guo J, Liang L, Gong P (2010) Removing shadows from Google Earth images. Int J Remote Sens 31(6):1379–1389
Li Y, Gong P, Sasagawa T (2005) Integrated shadow removal based on photogrammetry and image analysis. Int J Remote Sens 26(18):3911–3929
Lo K-H, Yang M-T (2006) Shadow detection by integrating multiple features. Proceedings of the 18th International Conference on Pattern Recognition, 1, pp 743–746
Lu D (2007) Detection and substitution of clouds/hazes and their cast shadows on IKONOS images. Int J Remote Sens 28(18):4027–4035
Ollis M, Stentz A (1997) Vision-based perception for an automated harvester. IROS '97: Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robots and Systems, 3. Grenoble, France, pp 1838–1844
Ozdemir I (2008) Estimating stem volume by tree crown area and tree shadow area extracted from pan-sharpened QuickBird imagery in open Crimean juniper forests. Int J Remote Sens 29(19):5643–5655
Poynton C (1996) A technical introduction to digital video. Wiley, New York
Premože S, Thompson WB, Shirley P (1999) Geospecific rendering of Alpine terrain. In: Lischinski D, Larson GW (eds) Rendering techniques. Springer, Vienna, pp 107–118
Richter R, Muller A (2005) De-shadowing of satellite/airborne imagery. Int J Remote Sens 26(15):3137–3148
Simpson JJ, Zhonghai J, Stitt JR (2000) Cloud shadow detection under arbitrary viewing and illumination conditions. IEEE Trans Geosci Remote Sens 38(2):972–976
Singh KK, Pal K, Nigam MJ (2012) Shadow detection and removal from remote sensing images using NDI and morphological operators. Int J Comput Appl 42(10):37–40
Xu L, Qi F, Jiang R, Hao Y, Wu G (2006) Shadow detection and removal in real images: a survey. Computer Vision Lab, Dept. of Computer Science and Engineering, Shanghai JiaoTong University, Shanghai
Young SD, Kakarlapudi S, Uijt de Haag M (2005) A shadow detection and extraction algorithm using digital elevation models and x-band weather radar measurements. Int J Remote Sens 26(8):1531–1549
Zhou W, Huang G, Troy A, Cadenasso ML (2009) Object-based land cover classification of shaded areas in high spatial resolution imagery of urban areas. Remote Sens Environ 113:1769–1777
