Remote Sensing | Article

Earthquake-Induced Building Damage Detection with Post-Event Sub-Meter VHR TerraSAR-X Staring Spotlight Imagery

Lixia Gong 1,2, Chao Wang 3, Fan Wu 3,*, Jingfa Zhang 1, Hong Zhang 3 and Qiang Li 1

1 Institute of Crustal Dynamics, China Earthquake Administration, Beijing 100085, China; [email protected] (L.G.); [email protected] (J.Z.); [email protected] (Q.L.)
2 School of Civil Engineering and Geosciences, Newcastle University, Newcastle Upon Tyne NE1 7RU, UK
3 Key Laboratory of Digital Earth Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100094, China; [email protected] (C.W.); [email protected] (H.Z.)
* Correspondence: [email protected]; Tel.: +86-10-8217-8159

Academic Editors: Roberto Tomas, Zhenhong Li, Zhong Lu, Richard Gloaguen and Prasad S. Thenkabail
Received: 30 June 2016; Accepted: 24 October 2016; Published: 27 October 2016

Abstract: Compared with optical sensors, Synthetic Aperture Radar (SAR) can provide important damage information due to its ability to map areas affected by earthquakes independently from weather conditions and solar illumination. In 2013, a new TerraSAR-X mode named staring spotlight (ST), whose azimuth resolution was improved to 0.24 m, was introduced for various applications. This data source made it possible to extract detailed information from individual buildings. In this paper, we present a new concept for individual building damage assessment using a post-event sub-meter very high resolution (VHR) SAR image and a building footprint map. With the building footprint map, the original footprints of buildings can be located in the SAR image. Based on the imaging analysis of a building in the SAR image, the features in the building footprint can be extracted to identify standing and collapsed buildings. Three machine learning classifiers, including random forest (RF), support vector machine (SVM) and K-nearest neighbor (K-NN), are used in the experiments. The results show that the proposed method can obtain good overall accuracy, which is above 80% with the three classifiers. The efficiency of the proposed method is demonstrated based on samples of buildings using descending and ascending sub-meter VHR ST images, which were all acquired from the same area in old Beichuan County, China.

Keywords: earthquake; damage assessment; building; Synthetic Aperture Radar; TerraSAR-X; high resolution

Remote Sens. 2016, 8, 887; doi:10.3390/rs8110887; www.mdpi.com/journal/remotesensing

1. Introduction

Damage detection after earthquakes is an important issue for post-disaster emergency response, impact assessment and relief activities. Building damage detection is particularly crucial for identifying areas that require urgent rescue efforts. Remote sensing has shown excellent capability for use in rapid impact assessments, as it can provide information for damage mapping in large areas and in an uncensored manner, particularly when information networks are inoperative and road connections are destroyed in areas impacted by earthquakes. Compared with optical sensors, Synthetic Aperture Radar (SAR) can provide important damage information due to its ability to map affected areas independently from the weather conditions and solar illumination, representing an important data source for damage assessment.

Many approaches for damage detection with SAR imagery have been introduced [1,2]. Change detection with pre- and post-event SAR images is widely used for damage assessment, and it has become a standard procedure [3]. The differences between the backscattering intensity and intensity correlation coefficients of images before and after earthquakes indicate potential damage areas. In addition, the interferometric coherence of pre- and post-event SAR data can also provide important information for damage assessment [1]. In the case of change detection, the pre- and post-event data are needed for comparison. However, in many areas, almost no archived SAR images, especially high-resolution SAR data, are available. Therefore, research on building damage assessment with only post-event SAR images is necessary.

The new generation of high-resolution SAR satellite systems was first deployed for earthquake monitoring during the Wenchuan Earthquake in 2008 [3]. Meter-resolution SAR images from TerraSAR-X and COSMO-SkyMed were acquired for damage assessment. Although the image resolution had improved, it is difficult to identify collapsed buildings using single meter-resolution SAR images [3,4]. After 2013, a new TerraSAR-X mode named staring spotlight (ST), whose azimuth resolution was improved to 0.24 m, was introduced. This sub-meter very high resolution (VHR) SAR data source provides new opportunities for earthquake damage mapping because more details can be observed in the images, making it possible to focus an analysis on individual buildings.

Previous works have generally concentrated on the building block level of damage assessment with low- or medium-resolution SAR images. Few studies have focused on individual buildings due to the lack of VHR SAR images. Balz et al. [3] analyzed the appearances of destroyed and partly destroyed buildings in high-resolution SAR images and proposed a technical workflow to identify building damage using post-seismic high-resolution SAR satellite data.
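The intensity-based change detection summarized above can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it assumes two co-registered, speckled intensity images represented as NumPy arrays, derives a log-ratio change map that highlights altered pixels, and computes a global intensity correlation coefficient as a similarity summary.

```python
import numpy as np

def log_ratio(pre, post, eps=1e-6):
    """Log-ratio change indicator between pre- and post-event SAR intensity."""
    return np.abs(np.log((post + eps) / (pre + eps)))

def correlation_coefficient(pre, post):
    """Global intensity correlation coefficient of two co-registered images."""
    p = pre.ravel() - pre.mean()
    q = post.ravel() - post.mean()
    return float((p @ q) / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12))

# Synthetic example: gamma-distributed speckle, with a "damaged" patch
# whose backscatter statistics change after the event.
rng = np.random.default_rng(0)
pre = rng.gamma(shape=4.0, scale=1.0, size=(64, 64))
post = pre.copy()
post[16:32, 16:32] = rng.gamma(4.0, 3.0, size=(16, 16))  # simulated damage

change = log_ratio(pre, post)
print("mean log-ratio in damaged patch:", change[16:32, 16:32].mean())
print("mean log-ratio elsewhere:", change[:16, :16].mean())
print("correlation:", correlation_coefficient(pre, post))
```

In practice the log-ratio map would be thresholded (or compared with interferometric coherence) to delineate candidate damage areas; the point here is only the shape of the computation.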
Another method was developed to detect building damage in individual building units using pre-event high-resolution optical images and post-event high-resolution SAR data based on the similarity measurement between simulated and real SAR images [5]. Wang et al. [6] proposed an approach for post-earthquake building damage assessment using multi-mutual information from pre-event optical images and post-event SAR images. Brunner et al. [4] analyzed a set of decimeter-resolution SAR images of an artificial village with different types of destroyed buildings. The analysis showed that decimeter-resolution SAR data are fine enough to classify building signatures according to basic damage classes. Kuny et al. [7] used simulated SAR images of different types of building damage for feature analysis and proposed a method for discriminating between debris heaps and high vegetation with simulated SAR data and TerraSAR-X images [8]. Wu et al. [9,10] analyzed different building damage types in TerraSAR-X ST mode data by visual interpretation, statistical comparison and classification assessment. The results showed that the data effectively separated standing buildings from collapsed buildings. However, the damaged building samples were obtained by expert interpretation and manual extraction. This method is time consuming and technique dependent because the exact edges of buildings and debris must be acquired.

In sub-meter VHR SAR images, more features, such as points and edges of objects, become visible; more details of individual buildings can be observed; and new features of buildings appear [11]. Nevertheless, it is difficult to detect specific buildings in complicated surroundings in VHR images. Moreover, collapsed building debris is visually similar to other objects such as high vegetation [8], which could lead to false alarms when detecting debris heaps.
Therefore, in this paper, we propose a method to assess the damage to individual buildings affected by earthquakes using single post-event VHR SAR imagery and a building footprint map. Given a building footprint map, which can be provided by a cadastral map or extracted from a pre-event VHR optical image, the original footprints of buildings can be located in the geometrically-rectified post-event VHR SAR image. Because collapsed and standing buildings exhibit different characteristics in their original footprints, features can be extracted from the image. Then, the buildings can be classified as different damage types. We demonstrate the feasibility of the proposed method in Qushan town (31.833° N, 104.459° E), old Beichuan County, China, which was heavily damaged in the Wenchuan Earthquake on 12 May 2008. The ruins in the area are preserved as the Beichuan Earthquake Memorial Park; therefore, some damaged buildings in the area can be used for algorithm analysis. In our experiments, post-event descending and ascending TerraSAR-X ST data are used for building damage assessment. Post-event airborne optical images, light detection and ranging (LIDAR) data, as well as in situ photography are used as reference data. The main novelty of the paper is the use of VHR TerraSAR-X ST data for individual building damage analysis and the demonstration of the concept of collapsed building assessment with building footprints.

The remainder of this paper is structured as follows. In Section 2, the properties of damaged buildings in VHR SAR images and damage detection problems are stated. In Section 3, we introduce the basic principle of the proposed method and describe damage detection with sub-meter VHR SAR data in detail. The study area and dataset are introduced in Section 4. Section 5 gives the results and analysis of the experiments. Finally, the discussion and conclusions are presented in Sections 6 and 7, respectively.

2. Properties of Damaged Buildings in VHR SAR Images and Damage Detection Problems

Due to the side-looking imaging mode of SAR sensors, some unique features, such as layover, foreshortening effects and multi-path propagation, can be observed in SAR images. Those effects make the interpretation of SAR images difficult compared to optical images. However, the effects are also unique patterns of buildings, which can be used to differentiate buildings from other objects in SAR images. In general, buildings have very distinct appearances in SAR images due to their cube-like or regular shapes. Buildings in SAR images typically consist of four zones: a layover area, corner reflection, roof area and shadow area. When an earthquake occurs, buildings may be destroyed or collapse. Destroyed buildings have different scattering distributions. Strong double-bounce scattering may be missing because the vertical wall facing the sensor may have collapsed, and the wall-ground dihedral corner reflector may be destroyed. The layover area may also be missing. Furthermore, the shadow area of a building will be reduced and potentially missing, depending on the structure and slope of the ruins.

Figure 1 illustrates a standing building after an earthquake. Figure 1a is an in situ photograph; Figure 1b is an optical image; and Figure 1c,d show the building in ascending and descending TerraSAR-X ST mode SAR images, respectively. The azimuth direction of Figure 1c is from bottom to top, and the range direction is from left to right. The azimuth direction of Figure 1d is from top to bottom, and the range direction is from right to left. The building has a flat roof with slight damage. In the SAR images, the double-bounce line, layover area and shadow area are obvious. The layover areas can be detected as the areas facing the sensor. The shadow areas are distinguishable from their surroundings.

Figure 1. A standing building: (a) in situ photograph; (b) optical image acquired in 2013; (c) TerraSAR-X staring spotlight (ST) ascending SAR image (the azimuth direction is from bottom to top, and the range direction is from left to right); (d) TerraSAR-X ST descending SAR image (the azimuth direction is from top to bottom, and the range direction is from right to left).

Figure 2 illustrates a totally collapsed building after an earthquake. Figure 2a is an in situ photograph; Figure 2b is an optical image; and Figure 2c,d show the building in TerraSAR-X ST ascending and descending SAR images, respectively. The azimuth direction of Figure 2c is from bottom to top, and the range direction is from left to right. The azimuth direction of Figure 2d is from top to bottom, and the range direction is from right to left. The building collapsed entirely, leaving a heap of randomly-oriented planes mainly made of concrete. In the SAR image, the heap of debris of the collapsed building shows random scattering effects. Some bright scattering spots can be found, which are caused by corner reflectors resulting from the composition of different planes. However, the double-bounce line, layover area and shadow area are missing in this case.

Figure 2. A collapsed building: (a) in situ photograph; (b) optical image acquired in 2013; (c) TerraSAR-X ST ascending SAR image (the azimuth direction is from bottom to top, and the range direction is from left to right); (d) TerraSAR-X ST descending SAR image (the azimuth direction is from top to bottom, and the range direction is from right to left).

Thus, in Figures 1 and 2, we can see that the standing building and the totally collapsed building each have their own unique features in the VHR SAR images. However, high vegetation shows features that are similar to those of the debris of the totally collapsed building and may cause false alarms [8]. Moreover, it is not easy to extract the exact edges of the debris due to the similarity between the debris and its surroundings. Therefore, although the signature of a standing building shows different features compared to those of a collapsed building in VHR SAR images, detecting collapsed building debris using only post-event VHR SAR images with traditional image processing methods remains problematic. Overcoming this issue is the motivation of the paper.

3. Proposed Methodology for Damage Detection Using VHR SAR Images

3.1. Concept of Damage Detection

Figure 3 gives an ideal example of SAR imaging of a flat roof building. In Figure 3b, the layover, double-bounce line and shadow area can be observed. The shadow caused by the standing building covers the majority of the building’s footprint (denoted by the red line). Generally, there is no return from the building or the ground in the shadow area. Therefore, the backscattering intensity in the area is lower than that in the surrounding area.
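This footprint-shadow reasoning suggests a simple per-building feature. The sketch below is our own illustration, not code from the paper: it assumes a geocoded intensity image and a rasterized footprint mask (kept away from the image border), and compares the mean backscatter inside the footprint against a surrounding ring. A standing building whose footprint is shadowed should yield a ratio well below 1, while collapsed debris scattering back into the footprint would push the ratio up.

```python
import numpy as np

def footprint_contrast(image, mask, margin=4):
    """Ratio of mean intensity inside a footprint to a surrounding ring.

    Assumes the footprint plus margin lies fully inside the image.
    """
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min() - margin, ys.max() + margin + 1
    x0, x1 = xs.min() - margin, xs.max() + margin + 1
    ring = np.zeros_like(mask)
    ring[y0:y1, x0:x1] = True
    ring &= ~mask                      # ring = dilated box minus footprint
    inside = image[mask].mean()
    around = image[ring].mean()
    return inside / (around + 1e-12)

# Synthetic standing building: gamma-distributed clutter, with the
# footprint darkened to mimic the radar shadow covering it.
rng = np.random.default_rng(1)
img = rng.gamma(4.0, 1.0, size=(64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
img[mask] *= 0.1                       # simulated shadow over the footprint
ratio_standing = footprint_contrast(img, mask)
print("standing-building contrast ratio:", ratio_standing)
```

A real pipeline would also need the footprint polygon projected into the SAR geometry; here the mask is simply given.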

Figure 3. An illustration of SAR imaging of a flat roof building: (a) relationship between the sensor and the building; (b) the building in the image plane.

For further investigation, Figures 4 and 5 show examples of the backscattering range profiles of flat roof and gable roof buildings under different boundary conditions [12]. In Figure 4, the scattering effects of flat roof buildings for three different situations can be observed according to the boundary conditions among the building height h, width w and incidence angle θ (Figure 4a–c). The symbols l and s denote the lengths of the layover and shadow in the ground-projected image space, respectively; b is the double bounce caused by the dihedral corner reflector formed by the vertical wall of the building and the surrounding ground; and e represents the shadow area from which there is no return from the building or the ground. In Figure 4, some regularity can be observed. The double bounce always appears as a linear feature corresponding to the building’s front wall, and it can indicate the presence of a building. Near the double bounce, the shadow areas (e in Figure 4b,c) usually appear in the image and cover the area of the building’s footprint. In Figure 4a, there is backscattering return from the roof between the double bounce and shadow. If the roof is flat, minimal backscattering from the roof is received by the sensor. Thus, d (backscattering from the roof) in Figure 4a usually appears as a dark region in SAR images. Figure 5 shows the scattering effects of a gable roof building. It shows three examples of backscattering profiles from a gable roof building with roof inclination angles α for different incidence angles θ. Near the double-bounce line, the shadow areas (e in Figure 5) also cover the majority of the building’s footprint.

Figure 4. Backscattering range profiles from a simple flat roof building model with width w and different heights h. Ground scattering a, double bounce b, scattering from the vertical wall c, backscattering from the roof d and shadow area e are labeled. The gray values in the backscattering profiles correspond to the relative amplitudes. (a) h < w·tan(θ); (b) h = w·tan(θ); (c) h > w·tan(θ) (re-edited from [12]).
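The boundary conditions in the Figure 4 caption can be made concrete with the standard ground-range approximations for a flat roof building: layover extent l = h/tan(θ) and shadow extent s = h·tan(θ). The helper below is a hedged sketch under those approximations; the function name and the textual case labels are our own, not the paper's.

```python
import math

def flat_roof_profile(h, w, theta_deg):
    """Ground-range layover/shadow lengths and the Figure 4 regime.

    Uses the common flat-terrain approximations l = h/tan(theta) and
    s = h*tan(theta); the roof return d is visible between the double
    bounce and the shadow only while h < w*tan(theta).
    """
    t = math.tan(math.radians(theta_deg))
    layover = h / t
    shadow = h * t
    if math.isclose(h, w * t):
        case = "b: layover exactly reaches the far roof edge"
    elif h < w * t:
        case = "a: roof partly visible between double bounce and shadow"
    else:
        case = "c: layover extends beyond the building footprint"
    return layover, shadow, case

l, s, case = flat_roof_profile(h=6.0, w=15.0, theta_deg=45.0)
print(f"layover = {l:.1f} m, shadow = {s:.1f} m, case {case}")
```

At θ = 45° the layover and shadow extents both equal the building height, and a 6 m building on a 15 m footprint falls into case (a) of Figure 4.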

Figure 5. Backscattering range profiles from a gable roof building with roof inclination angles α. The legend is similar to that used for the flat roof buildings in Figure 3. The gray values in the backscattering profiles correspond to the relative amplitudes. (a) θ < α; (b) θ = α; (c) θ > α (re-edited from [12]).

Generally, for a flat or gable roof building, no matter how the aspect angle is changed, the building profile incised by the incidence plane of the radar wave will keep its basic shape. Figure 6 shows some examples of the profiles of a flat roof and a gable roof building with different aspect angles. In Figure 6, φ1–φ4 show examples of four different aspect angles; w1–w4 are the widths of the four building profiles. The figures show that the building profiles keep their basic shapes, which are similar to the building profiles shown in Figures 4 and 5, if the aspect angle is changed. Therefore, a conclusion can be made that when flat roof and gable roof buildings are illuminated by the radar at different aspect angles, the backscattering range profiles of the buildings can also be explained by Figures 4 and 5.

(a)

(b)

(c)

(d)
Figure 6. Illustrations of the building profiles of a flat roof building and a gable roof building with different aspect angles. (a) A flat roof building model illuminated by radar waves in four directions; (b) building profiles corresponding to the four different directions; (c) a gable roof building model illuminated by radar waves in four directions; (d) building profiles corresponding to the four different directions.

Based on the analysis above, we can conclude that if a building is still standing, the region of the building's footprint in an SAR image usually has low backscattering intensity and is dark compared to the surrounding areas under various incidence angles and aspect angles. However, if the building is totally collapsed, debris piles form. The debris exhibits brighter scattering spots caused by small corner reflectors resulting from the composition of different planes [4]. Therefore, if the footprint of a building can be obtained, standing buildings and collapsed buildings can be separated based on features derived from the original footprint of the building. Based on this concept, we propose a new scheme for detecting totally destroyed buildings with post-event single polarization VHR SAR images (Figure 7).
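The separability idea above can be illustrated with a toy example: inside its footprint, a standing building (radar shadow) stays dark and homogeneous, while collapsed debris is bright and heterogeneous. This is a minimal sketch with synthetic backscatter values, not real TerraSAR-X data; the array and footprint definitions are made up for illustration.

```python
import numpy as np

# Synthetic sigma naught image (dB): dark shadow for a standing building,
# bright heterogeneous debris for a collapsed one.
rng = np.random.default_rng(0)

sar_db = np.full((50, 50), -18.0)                 # homogeneous background
standing = (slice(5, 15), slice(5, 25))           # footprint of a standing building
collapsed = (slice(30, 40), slice(10, 30))        # footprint of a collapsed building
sar_db[standing] = -25.0 + rng.normal(0, 0.5, (10, 20))   # shadow: dark, homogeneous
sar_db[collapsed] = -10.0 + rng.normal(0, 4.0, (10, 20))  # debris: bright, heterogeneous

def footprint_stats(image, footprint):
    """Mean and standard deviation of backscatter inside one footprint."""
    patch = image[footprint]
    return patch.mean(), patch.std()

mean_s, std_s = footprint_stats(sar_db, standing)
mean_c, std_c = footprint_stats(sar_db, collapsed)
print(f"standing:  mean={mean_s:.1f} dB, std={std_s:.2f}")
print(f"collapsed: mean={mean_c:.1f} dB, std={std_c:.2f}")
```

In this toy setup the collapsed footprint has both a higher mean and a higher spread, which is exactly the contrast the footprint-based features are meant to capture.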

Figure 7. Block scheme of the proposed methodology.

3.2. Preprocessing

The main inputs of the method are a sub-meter resolution SAR image and a building footprint map. Before analysis, the digital numbers of the TerraSAR-X SAR data were converted into radar sigma naught values as follows [13]:

σ° = (ks × |DN|² − NEBN) × sin θ_loc    (1)

where ks is the calibration and processor scaling factor given by the parameter calFactor in the annotation file, DN is the pixel intensity value and NEBN is the noise equivalent beta naught, which represents the influences of different noise contributions on the signal. θ_loc is the local incidence angle [13].

The building footprint map in Figure 7 can be provided by a cadastral map from the government. If there is no building footprint map of the affected area, the map can be obtained by interpreting high-resolution pre-earthquake optical images. Unfortunately, we have no pre-earthquake high-resolution remote sensing images covering the research area. Therefore, in our experiment we obtained the building footprint map by careful interpretation of the post-earthquake optical image together with information from the in situ investigation and LIDAR data of the research area.

To register the building footprint map, the SAR data should be geometrically rectified. The ground control points, which provide the relationships between pixel coordinates in the image and geo-coordinates, can be found in the XML file of the SAR products. After geometric rectification, the SAR image can be matched to the building footprint map, which also has geo-coordinate information. To improve the registration precision, registration between the geo-rectified SAR images and the optical image is implemented with ENVI software by manually selecting control points in the images. The SAR images we used are single look complex (SLC) products. For real operation, the geocoded ellipsoid corrected (GEC) or enhanced ellipsoid corrected (EEC) products of the SAR data can be ordered by users to avoid geometric rectification. Although layover and shadow areas of a building usually occur in very high resolution SAR images, the footprint indicates the real location of the building.
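As an illustration, the calibration in Equation (1) can be sketched in a few lines of NumPy. The function name and the calibration constants below are made-up placeholders, not values from a real TerraSAR-X annotation file.

```python
import numpy as np

# Sketch of Equation (1): sigma0 = (ks * |DN|^2 - NEBN) * sin(theta_loc).
# In a real product, ks (calFactor), NEBN and theta_loc come from the XML
# annotation; the values used here are illustrative only.
def dn_to_sigma_naught(dn, ks, nebn, theta_loc_deg):
    """Convert complex (or detected) digital numbers to sigma naught (linear)."""
    beta0 = ks * np.abs(dn) ** 2 - nebn            # noise-corrected beta naught
    return beta0 * np.sin(np.deg2rad(theta_loc_deg))

dn = np.array([100 + 50j, 300 - 120j])             # hypothetical SLC pixels
sigma0 = dn_to_sigma_naught(dn, ks=1.5e-5, nebn=0.02, theta_loc_deg=42.8)
sigma0_db = 10 * np.log10(sigma0)                  # often reported in dB
print(sigma0_db)
```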
Therefore, with the building footprint map, the original footprints of buildings can be located in the descending and ascending SAR images. Then, features can be extracted based on the footprints in the SAR image.

3.3. Features for Classification

Four first-order statistics and eight second-order image statistical measures are investigated for analysis and damage type classification.

3.3.1. First-Order Statistics

Statistical features including the mean, variance, skewness and kurtosis are selected for first-order statistical analysis. Setting f1 = µ as the mean value and f2 = σ² as the variance value, the skewness and kurtosis can be expressed as follows.

(1) Skewness:

f3 = E[(x − µ)³] / σ³    (2)

In Equation (2), E(·) represents the expected value of variables. Skewness is a measure of the asymmetry of the data distribution around the sample mean. If skewness is negative, the data are spread out more to the left of the mean than to the right. If skewness is positive, the data are spread out more to the right. The skewness of the normal distribution is zero [14].

(2) Kurtosis:

f4 = E[(x − µ)⁴] / σ⁴    (3)


Kurtosis describes how peaked or flat a distribution is, with reference to the normal distribution. The kurtosis of the normal distribution is 3. A high kurtosis value indicates a sharply peaked distribution of intensity values about the mean with a longer, fat tail, while a low kurtosis value indicates a flat distribution of intensity values with a short, thin tail [15].

Skewness and kurtosis are features that describe the position and shape of a data distribution. SAR data usually exhibit long-tail distributions, and standing and collapsed buildings show different pixel intensities based on the above analysis. Therefore, skewness and kurtosis are selected as features to describe these differences.

3.3.2. Second-Order Image Statistics

The second-order image statistics measure the relationships between pixels and their neighbors [16]. To calculate the second-order statistics, the pixel values for a given scale of interest are first translated into a gray level co-occurrence matrix (GLCM). Texture measures derived from the GLCM have been widely used for land cover classification and other applications with radar and optical data. The GLCM describes the frequency of one gray tone appearing in a specified spatial linear relationship with another gray tone in the area under investigation [17]. The GLCM-based textural features are second-order features widely used in texture analysis and image classification. Recently, GLCM features have been used to analyze the signatures of damaged buildings in real and simulated VHR SAR images [8,10,18]. Haralick et al. proposed 14 features based on the GLCM for textural analysis [19]. These features measure different aspects of the GLCM, and some of them are correlated. In this paper, eight texture features are chosen for analysis: the mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment and correlation.
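Before turning to the GLCM measures, the first-order features f1–f4 of Section 3.3.1 can be sketched in a few lines; the sample values are illustrative only.

```python
import numpy as np

# First-order features over the pixels inside one building footprint:
# f1 = mean, f2 = variance, f3 = skewness (Eq. 2), f4 = kurtosis (Eq. 3).
def first_order_features(pixels):
    x = np.asarray(pixels, dtype=float).ravel()
    mu = x.mean()                                  # f1
    var = x.var()                                  # f2
    sigma = np.sqrt(var)
    skew = np.mean((x - mu) ** 3) / sigma ** 3     # f3, Equation (2)
    kurt = np.mean((x - mu) ** 4) / sigma ** 4     # f4, Equation (3)
    return mu, var, skew, kurt

# A symmetric sample has skewness 0; a normal distribution has kurtosis 3.
f1, f2, f3, f4 = first_order_features([1, 2, 3, 4, 5])
print(f1, f2, f3, f4)
```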
Let p(i, j) be the (i, j)-th entry in a normalized GLCM. The features are as follows [19–21].

(1) Mean:

g1 = ∑_i ∑_j i × p(i, j)    (4)

(2) Variance:

g2 = ∑_i ∑_j (i − µ)² × p(i, j)    (5)

(3) Homogeneity:

g3 = ∑_i ∑_j p(i, j) / (1 + (i − j)²)    (6)

(4) Contrast:

g4 = ∑_{n=0}^{Ng−1} n² { ∑_{i=1}^{Ng} ∑_{j=1}^{Ng} p(i, j) : |i − j| = n }    (7)

In Equation (7), Ng is the number of distinct gray levels in the quantized image.

(5) Dissimilarity:

g5 = ∑_i ∑_j |i − j| × p(i, j)    (8)

(6) Entropy:

g6 = − ∑_i ∑_j p(i, j) × log(p(i, j))    (9)

(7) Second moment:

g7 = ∑_i ∑_j {p(i, j)}²    (10)

(8) Correlation:

g8 = [∑_i ∑_j (i × j) × p(i, j) − µ_x µ_y] / (σ_x σ_y)    (11)

where µ_x, µ_y, σ_x and σ_y are the means and standard deviations of p_x and p_y.

The GLCM mean measures the average gray level in the GLCM window. The GLCM variance measures the gray level variance and is a measure of heterogeneity. GLCM homogeneity, which is also called the inverse difference moment, measures the image homogeneity, assuming larger values for smaller gray tone differences in pair elements. GLCM contrast is a measure of the amount of local variation in pixel values among neighboring pixels. It is the opposite of homogeneity and is related to first-order statistical contrast; thus, high contrast values reflect highly contrasting textures. GLCM dissimilarity is similar to contrast and inversely related to homogeneity. GLCM entropy measures the disorder in an image; when the image is not texturally uniform, the value will be very large. The GLCM second moment, also called energy, measures textural uniformity. If the image patch is homogeneous, the value of the second moment will be close to its maximum, which is equal to 1. GLCM correlation is a measure of the linear dependency of pixel values on those of neighboring pixels; a high correlation value (close to 1) suggests a linear relationship between the gray levels of pixel pairs.

Therefore, we can see that the GLCM homogeneity and second moment are measures of homogeneous pixel values across an image, while the GLCM contrast, dissimilarity and variance are measures of heterogeneous pixel values. Based on the analysis of Section 2, we can observe that a standing building shows relative homogeneity in the footprint, while a collapsed building shows more heterogeneity. Therefore, these features are selected for analysis. In addition, the GLCM mean, entropy and correlation are also considered for analysis. In this paper, image texture measures are calculated for every pixel using ENVI software.
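The eight GLCM measures g1–g8 can be computed directly from a normalized co-occurrence matrix. The NumPy sketch below follows Equations (4)–(11) literally; it is not ENVI's implementation.

```python
import numpy as np

# g1-g8 of Equations (4)-(11) from a (possibly unnormalized) GLCM.
def glcm_features(p):
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                                  # normalize to probabilities
    ng = p.shape[0]
    i, j = np.mgrid[0:ng, 0:ng]
    px, py = p.sum(axis=1), p.sum(axis=0)            # marginal distributions
    mu = (i * p).sum()                               # g1: mean, Eq. (4)
    var = ((i - mu) ** 2 * p).sum()                  # g2: variance, Eq. (5)
    hom = (p / (1 + (i - j) ** 2)).sum()             # g3: homogeneity, Eq. (6)
    con = ((i - j) ** 2 * p).sum()                   # g4: contrast, Eq. (7)
    dis = (np.abs(i - j) * p).sum()                  # g5: dissimilarity, Eq. (8)
    ent = -(p[p > 0] * np.log(p[p > 0])).sum()       # g6: entropy, Eq. (9)
    asm = (p ** 2).sum()                             # g7: second moment, Eq. (10)
    k = np.arange(ng)
    mx, my = (k * px).sum(), (k * py).sum()
    sx = np.sqrt(((k - mx) ** 2 * px).sum())
    sy = np.sqrt(((k - my) ** 2 * py).sum())
    cor = ((i * j * p).sum() - mx * my) / (sx * sy)  # g8: correlation, Eq. (11)
    return np.array([mu, var, hom, con, dis, ent, asm, cor])

# A purely diagonal GLCM is maximally homogeneous: g3 = 1, g4 = 0, g8 = 1.
g = glcm_features(np.eye(4))
print(g)
```

Note that the contrast in Equation (7) reduces to ∑_i ∑_j (i − j)² p(i, j), which is the form used in the sketch.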
3.4. Classifier

Three machine learning methods, namely random forest (RF) [22], support vector machine (SVM) [23] and K-nearest neighbor (K-NN) [24] classifiers, are used to classify collapsed and standing buildings in our experiments. Ensemble learning algorithms (e.g., random forest) have received increased interest because they filter noise more accurately and robustly than single classifiers [22,25,26]. RF is an ensemble of classification trees, where each tree contributes a single vote to the assignment of the most frequent class of the input data. An RF can handle a large number of input variables without variable deletion, and it estimates the importance of variables in the classification. SVM is a supervised learning model with associated learning algorithms that analyze data and recognize patterns based on the structural risk minimization principle in statistics. The K-NN algorithm is a statistical instance-based learning method [27,28]. Given a training set of m labeled patterns, it decides that a certain pattern X belongs to the same category as its closest neighbors. The algorithm estimates the probabilities of the classes using a Euclidean metric. The three machine learning methods are supervised classifiers, and they have been widely used for remote sensing image classification. We adopt these methods to evaluate the results of building damage detection using VHR SAR images.

4. Study Area and Materials

The study area is located in old Beichuan County. On 12 May 2008, the Wenchuan Earthquake with a magnitude of 8.0 occurred in Sichuan Province, China [29]. The earthquake devastated a huge area in Sichuan Province, killing and injuring a large number of people and causing major damage to buildings and other infrastructure. Beichuan County (centered at approximately 31.833°N and 104.459°E) was one of the most affected areas. The Beichuan County seat was originally located in Qushan Town.
After the earthquake, the county seat was moved to Yongchang Town, which belonged to An County before the earthquake. The old Beichuan County seat was abandoned, and some ruins were preserved as the Beichuan Earthquake Memorial Park. The study area belongs to the Beichuan Earthquake Memorial Park. Therefore, some damaged buildings are preserved in the area and can be selected for analysis.

Three decimeter-level resolution TerraSAR-X ST mode SAR images acquired in 2014 and 2015 are used for analysis and experimentation. One is an ascending SAR image, and two are descending SAR images. TerraSAR-X ST is a new SAR mode that was introduced in 2013 by the German Aerospace Center (DLR). With the new mode, a spatial resolution of approximately 0.24 m can be achieved in the azimuth direction by widening the azimuth beam steering angle range [30]. With coarse spatial resolution data, damage detection is usually performed at the building group or residential area level, while in VHR SAR images, many more building features, such as edges and point structures, become visible. Therefore, it is possible to focus an analysis on individual buildings. Table 1 gives detailed information regarding the data.
Table 1. Detailed image information.

Number  Polarization  Orbit Direction  Date of Acquisition  Ground Range Resolution (m)  Azimuth Resolution (m)  Look Direction  Incidence Angle (°)
1       HH            Ascending        4 December 2014      0.86                         0.23                    Right           43.0
2       HH            Descending       8 December 2014      0.86                         0.23                    Right           42.6
3       HH            Descending       8 April 2015         0.86                         0.23                    Right           42.8

An optical multispectral image and LIDAR data covering the study area are used as reference data. In May 2013, 5 years after the earthquake, the Airborne Remote Sensing Center (ARSC) of the Institute of Remote Sensing and Digital Earth carried out a campaign for monitoring the areas affected by the Wenchuan earthquake of 2008. One remote sensing airplane carrying an optical ADS80 sensor collected a number of images of the areas, including the old Beichuan County. The data acquired by ADS80 are multi-spectral images, and the resolution of the images is approximately 0.6 m. The optical image had been rectified when we obtained it from ARSC. In situ investigations were carried out in 2014 and 2015, corresponding to the ST SAR data acquisition dates. Based on the post-earthquake optical image, LIDAR data and in situ investigation, a reference map of the standing and collapsed buildings can be generated by careful interpretation. The reference map can be regarded as the ground truth for analysis and validation.

Figure 8 illustrates the research area, the SAR image and the building footprint map. Figure 9 presents the optical image (Figure 9a) and the reference map (Figure 9b). Figure 9c,d illustrates a descending and an ascending SAR image superimposed by the reference map, respectively. In order to illustrate the images more clearly, the values of the SAR images in Figure 9c,d have been stretched to the 0–255 gray level. The building footprint map and reference map are generated from the optical image, LIDAR data and in situ investigations.

(a)

(b)

Figure 8. The study area: (a) a post-earthquake descending SAR image (the azimuth direction is from top to bottom, and the range direction is from right to left); and (b) building footprint map.


(a)

(b)

(c)

(d)

Figure 9. Reference data: (a) optical image acquired in 2013; (b) reference map; (c) a post-earthquake descending SAR image superimposed by the reference map (the azimuth direction of the SAR image is from top to bottom, and the range direction is from right to left); (d) a post-earthquake ascending SAR image superimposed by the reference map (the azimuth direction of the SAR image is from bottom to top, and the range direction is from left to right). The SAR images have been stretched so as to be illustrated more clearly.

5. Experimental Results and Analysis

Three sub-meter VHR TerraSAR-X ST SAR images are used in the experiments and algorithm evaluation. With the building footprint map, the original footprints of the buildings can be obtained in the SAR images. Then, the features are calculated, forming a feature vector FV = {f1, f2, f3, f4, g1, g2, g3, g4, g5, g6, g7, g8} for classification.

5.1. Parameters of GLCM-Based Textural Features

Generally, if the textural measures derived from the GLCM are used, some fundamental parameters should be defined, including the quantization level of the image, the window size used to calculate the GLCM and the displacement and orientation values of the measurements.

5.1.1. Quantization Level

The quantization level is a fundamental parameter. A value that is too low can lead to substantial information reduction. However, fewer levels reduce noise-induced effects and directly reduce the computing cost, because the number of bins determines the size of the co-occurrence matrix [7]. This tradeoff was discussed by Soh et al. [20], with the conclusion that 64 levels are suggested for sea ice in SAR images. Additionally, they found that a 256-level representation is not necessary and an eight-level representation is undesirable. A level of 256 was applied in [16]; a level of 128 was adopted in [7]; and a 64-level quantization was applied in [8] to detect and distinguish debris in SAR images. Additionally, 16-level, 64-level and 32-level quantization was adopted in [31–33], respectively, for sea ice classification and other applications. Based on [20] and [7,8], a 64-level quantization using ENVI software is adopted in our experiments. Figure 10 illustrates the histograms of the sigma naught of the SAR image (Figure 10a) and the 64-level quantization image by ENVI (Figure 10b). It shows that the shapes of the histograms of the images are kept well after quantization.

(a)

(b)

Figure 10. Histograms of (a) sigma naught of the SAR image and (b) 64-level quantization image by ENVI.

5.1.2. Window Size

In order to calculate texture features for each pixel in an image, a moving window is usually used to define the neighborhood of a pixel, and the texture measurement calculated from the window is assigned to the center pixel [26,34]. The window size for texture analysis is usually related to the image resolution and the contents within the image. However, it might be difficult to determine the window size through a formula. In the experiments, we chose 13 × 13 pixels as the window size to calculate the texture features. The analysis is as follows.

(1) Estimation by classification result:

The overall accuracy or kappa coefficient of the classification results is usually applied for estimating the window size [26,34]. If the texture features can give the highest classification accuracy, then the window size will be chosen as the optimal size. Figure 11 shows the influence of window size on the overall accuracy and the accuracy of the classification results of texture features for collapsed and standing buildings. The classifier is random forest. With the whole set of 192 samples (36 collapsed buildings, 156 standing buildings), 1/3 of the samples are used for training, and 2/3 of the samples are used for testing. The figure shows that the classification results of small window sizes are not stable. The overall accuracy increases when the window size is larger than 7 × 7 pixels and decreases when the window size is larger than 15 × 15 pixels. The overall accuracy is stable when the window size is between 9 × 9 pixels and 15 × 15 pixels. It appears that the window sizes of 13 × 13 pixels and 15 × 15 pixels can be identified as appropriate to achieve the best overall accuracy.
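The evaluation protocol just described can be sketched with scikit-learn. The features here are synthetic, and `texture_features_for_window()` is a hypothetical stand-in for the real per-window GLCM computation; only the protocol (192 samples, 1/3 training, RF, overall accuracy per window size) mirrors the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
labels = np.array([1] * 36 + [0] * 156)           # 36 collapsed, 156 standing

def texture_features_for_window(window_size):
    """Hypothetical per-building texture features for one window size."""
    sep = 3.0 if 9 <= window_size <= 15 else 0.2  # mid-range windows separate best
    return rng.normal(labels[:, None] * sep, 1.0, (labels.size, 8))

oa_by_w = {}
for w in (5, 9, 13, 17):
    X = texture_features_for_window(w)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, train_size=1 / 3, stratify=labels, random_state=0)
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    oa_by_w[w] = accuracy_score(y_te, rf.predict(X_te))
    print(f"{w}x{w} window: overall accuracy = {oa_by_w[w]:.3f}")
```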


Figure 11. Influence of window size on the overall accuracy and the accuracy of the classification results of texture features for collapsed and standing buildings.

(2) Estimation by coefficient of variation of texture measures:

To facilitate the choice of an optimal texture window, the coefficient of variation of each texture measure for each class in relation to window size was also calculated [35]. The chosen optimal window size is usually that in which the value of the coefficient of variation starts to stabilize at the smallest value [35]. Figure 12 presents the evolution of the variation coefficient in relation to different window sizes for the textural measures. Figure 12a–c shows the results of the second moment, homogeneity and entropy, respectively. The red starred line indicates the collapsed building values, and the blue open-circled line represents the standing building values. The variances of the textural features of the second moment and homogeneity decrease at larger window sizes, while the values of entropy show generally increasing trends at small window sizes and decrease with increasing window size. The variation starts to stabilize at a 13 × 13 pixels window in Figure 12a–c. The results show that a 13 × 13 pixels window size can be a good choice for the textural measures.

Figure 12. Evolution of the variation coefficient in relation to different window sizes for textural measures of the (a) second moment, (b) homogeneity and (c) entropy.
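The window-size selection by coefficient of variation described above can be sketched as follows. This is a minimal illustration, not the authors' code; the per-building texture values are invented placeholders.

```python
import numpy as np

def coefficient_of_variation(values):
    """CV = standard deviation / mean of a texture measure over one class."""
    values = np.asarray(values, dtype=float)
    return values.std() / values.mean()

# Hypothetical homogeneity values for a set of buildings of one class,
# computed at two window sizes; the optimal window is the one at which
# the CV stabilizes at its smallest value.
cv_5x5 = coefficient_of_variation([0.42, 0.58, 0.35, 0.61])
cv_13x13 = coefficient_of_variation([0.48, 0.52, 0.47, 0.53])
```

In practice, the CV would be plotted against a full range of window sizes for each class, as in Figure 12, and the smallest size at which the curves flatten would be selected.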

(3) Estimation by visual interpretation: If the building is still standing, the footprint is usually covered by a shadow area. If the building is collapsed, the footprint is usually covered by debris. Figure 13 shows examples of shadow and debris in building footprints in the SAR image. The red and yellow rectangles indicate the footprints of standing and collapsed buildings, respectively. The debris usually contains some heaps of randomly-oriented planes mainly made of concrete. The planes are usually not very large and may be 1–3 m in width or length. After geometric rectification, the SAR image is resampled to a 0.6-m pixel spacing, the same as the optical image. Therefore, in the 0.6-m pixel spacing SAR image, the planes may usually be 2–5 pixels wide (Figure 13). Consequently, we think a window size of 13 × 13 pixels may be proper for texture analysis.

Remote Sens. 2016, 8, 887

14 of 21


Figure 13. Examples of shadows of standing buildings (red rectangle) and collapsed building (yellow rectangle) debris in building footprints in the SAR image.

Generally, the larger the window size is, the coarser the information that can be provided by textural features. Therefore, the window size used to compute textural features should not be too large. Based on the quantitative and qualitative analysis, a 13 × 13 pixels window size is chosen to calculate the GLCMs in the experiments.

5.1.3. Orientation and Displacement

The orientation parameter is less important compared to other factors in the co-occurrence matrix [20]. We investigated the values of the textural measures of collapsed (red starred line) and standing buildings (blue open-circled line) in four directions: 0°, 45°, 90° and 135°. Figure 14 gives the values of the GLCM-based textural features with different orientations. It shows that the features have relatively stable values when the orientation changes. However, we also found that the texture features at 45° and 135° have similar values, and these differ somewhat from the values at 0° and 90°. In Figure 9, it can be observed that most of the buildings' orientations are about 135° or 45°. This may be the reason that the values of the features at 45° and 135° have a similar tendency. In the experiments, the orientation of 135° is chosen for the calculation of the GLCM.

Figure 14. GLCM features with different orientations. (a) Mean of second moment; (b) Mean of homogeneity; (c) Mean of entropy.

The GLCM describes the probability of finding two given pixel gray-level values in a defined relative position in an image [36]. Displacement indicates the inter-pixel distance. Figure 15 presents the investigation of GLCM features with different displacements. In the experiment, the features are calculated at an orientation of 135°, with a 13 × 13 window size. The results show that the values changed little when the displacement increased, indicating that the displacement parameter is not important in the co-occurrence matrix. The ideal pixel offset distance depends on the level of detail of the texture that is analyzed. The GLCM cannot capture fine-scale textural information if a large pixel offset is used [7]. Accordingly, in our experiment, an orientation of 135° and a displacement of one are selected.


Figure 15. GLCM features with different displacements. (a) Mean of second moment; (b) Mean of homogeneity; (c) Mean of entropy.

5.2. Classification Results


5.2.1. Features for Classification

Classification experiments were carried out for feature analysis, with random forest (RF) as the classifier. Based on the reference map, we obtained the original footprints of 12 totally collapsed buildings and 52 standing buildings (Figure 9b) in each image. For each test, 6 collapsed buildings and 17 standing buildings are used as training samples, and the corresponding numbers of test samples are 6 and 35, respectively. Table 2 shows the classification results with only first-order statistics, Table 3 the results with only second-order statistics, and Table 4 the results with all features.

Table 2. Classification results with only first-order statistics (CO = totally collapsed building, ST = standing building; Pa = producer's accuracy, Ua = user's accuracy).

Type     | 8 April 2015        | 4 December 2014     | 8 December 2014
         | CO    ST     Pa     | CO    ST     Pa     | CO    ST     Pa
CO       | 5     1      83.3%  | 4     2      66.7%  | 6     0      100%
ST       | 6     29     82.8%  | 6     29     82.8%  | 3     32     91.4%
Ua       | 45.5% 96.7%         | 40%   93.5%         | 66.7% 100%
Overall  | 82.9%               | 80.5%               | 92.7%

Table 3. Classification results with only second-order statistics (CO = totally collapsed building, ST = standing building; Pa = producer's accuracy, Ua = user's accuracy).

Type     | 8 April 2015        | 4 December 2014     | 8 December 2014
         | CO    ST     Pa     | CO    ST     Pa     | CO    ST     Pa
CO       | 5     1      83.3%  | 5     1      83.3%  | 5     1      83.3%
ST       | 3     32     91.4%  | 5     30     85.7%  | 6     29     82.9%
Ua       | 62.5% 97.0%         | 50%   96.8%         | 45.5% 96.7%
Overall  | 90.2%               | 85.4%               | 82.9%

Table 4. Classification results with all features (CO = totally collapsed building, ST = standing building; Pa = producer's accuracy, Ua = user's accuracy).

Type     | 8 April 2015        | 4 December 2014     | 8 December 2014
         | CO    ST     Pa     | CO    ST     Pa     | CO    ST     Pa
CO       | 6     0      100%   | 5     1      83.3%  | 6     0      100%
ST       | 5     30     85.7%  | 3     32     91.4%  | 4     31     88.6%
Ua       | 54.5% 100%          | 62.5% 97.0%         | 60%   100%
Overall  | 87.8%               | 90.2%               | 90.2%



Figure 16 illustrates the overall accuracy of the classification results of the different images with different features. The red line in Figure 16 is the results with only first-order statistics of the images. The green line indicates the results with only second-order statistics of the images. The blue line is the results with all features of the images. It shows that the classification result with all features is good and stable for all images, and the overall accuracies are all above 85%. However, the results are relatively unstable with only first-order statistics and only second-order statistics. As mentioned in Section 3.3, the mean and variance values can describe the difference in backscattering of the standing and collapsed buildings. The skewness and kurtosis reflect the difference in the shapes of the histograms of the standing and collapsed buildings. The textural measures can reveal the textural characteristics of the two kinds of buildings. Therefore, all features are recommended for classification due to their good ability to discriminate the standing and collapsed buildings.
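The first-order statistics mentioned above can be computed directly from the intensity values inside a footprint. A minimal numpy sketch (the feature ordering and function name are our choices):

```python
import numpy as np

def first_order_features(footprint_pixels):
    """Mean, variance, skewness and excess kurtosis of the SAR intensity
    values inside one building footprint: mean/variance separate the dark
    shadow of a standing building from bright debris, while skewness and
    kurtosis describe the shape of the intensity histogram."""
    x = np.asarray(footprint_pixels, dtype=float).ravel()
    m, s = x.mean(), x.std()
    skewness = np.mean((x - m) ** 3) / s ** 3
    kurt = np.mean((x - m) ** 4) / s ** 4 - 3.0  # excess kurtosis
    return np.array([m, x.var(), skewness, kurt])
```

These four values would then be concatenated with the GLCM-based measures to form the full feature vector for a building.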

Figure 16. Overall accuracy of the classification results of different images with different features.

5.2.2. Classification Results

Based on the reference map, we obtained the original footprints of 12 totally collapsed buildings and 52 standing buildings (Figure 8b) in each image. For each classifier, six collapsed buildings and 17 standing buildings are used as training samples, and the corresponding numbers of test samples are six and 35, respectively. In the experiments, 10, 20, 50, 100 and 200 trees are used for the RF classifier, and the classification result with the highest accuracy is recorded and reported. For the SVM classifier, the C-support vector classification (C-SVC), which was introduced by Cortes et al. [37] and presented by Chang and Lin [23], is adopted. As suggested by Chang, the radial basis function (RBF) is chosen as the kernel function to construct the classifier. Furthermore, the two parameters of the RBF (the kernel width and penalty coefficient) are determined by performing a grid search with cross-validation in our study.

Tables 5–7 present the classification results of the different classifiers on the three images. For Image 1, Table 5 shows that the collapsed buildings can be classified well by the three classifiers. Only one collapsed building was misclassified as a standing building by the RF. The K-NN classifier gives the lowest overall accuracy of 80.5%, as there are six standing buildings misclassified as collapsed buildings. The overall accuracies of RF and SVM reach 90.2%. For Image 2, Table 6 shows that RF and SVM yield the same results, with overall accuracies of 90.2%. The overall accuracy of K-NN is 87.8%. For Image 3, Table 7 shows that SVM gives the best result, with an overall accuracy of 92.7%.

The overall accuracies of RF and K-NN are 87.8% and 82.9%, respectively.

Table 5. Classification results of Image 1 using different methods (CO = totally collapsed building, ST = standing building; Pa = producer's accuracy, Ua = user's accuracy).

Type     | RF                  | SVM                 | K-NN
         | CO    ST     Pa     | CO    ST     Pa     | CO    ST     Pa
CO       | 5     1      83.3%  | 6     0      100%   | 6     0      100%
ST       | 3     32     91.4%  | 4     31     88.6%  | 9     26     74.3%
Ua       | 62.5% 97.0%         | 60%   100%          | 40%   100%
Overall  | 90.2%               | 90.2%               | 80.5%

Table 6. Classification results of Image 2 using different methods (CO = totally collapsed building, ST = standing building; Pa = producer's accuracy, Ua = user's accuracy).

Type     | RF                  | SVM                 | K-NN
         | CO    ST     Pa     | CO    ST     Pa     | CO    ST     Pa
CO       | 6     0      100%   | 6     0      100%   | 6     0      100%
ST       | 4     31     88.6%  | 4     31     88.6%  | 5     30     85.7%
Ua       | 60%   100%          | 60%   100%          | 54.5% 100%
Overall  | 90.2%               | 90.2%               | 87.8%

Table 7. Classification results of Image 3 using different methods (CO = totally collapsed building, ST = standing building; Pa = producer's accuracy, Ua = user's accuracy).

Type     | RF                  | SVM                 | K-NN
         | CO    ST     Pa     | CO    ST     Pa     | CO    ST     Pa
CO       | 6     0      100%   | 6     0      100%   | 6     0      100%
ST       | 5     30     85.7%  | 3     32     91.4%  | 7     28     80%
Ua       | 54.5% 100%          | 66.7% 100%          | 46.2% 100%
Overall  | 87.8%               | 92.7%               | 82.9%
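The classifier comparison can be sketched with scikit-learn as follows, assuming the 23 training and 41 test feature vectors described above. The feature data here are random stand-ins, and the grid values for the SVM search are our assumption; only the overall setup (RF, RBF C-SVC with grid search, K-NN) follows the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# 6 collapsed (1) + 17 standing (0) training buildings, 41 test buildings;
# the 7-dimensional feature vectors are random placeholders.
rng = np.random.default_rng(0)
X_train = rng.random((23, 7))
y_train = np.array([1] * 6 + [0] * 17)
X_test = rng.random((41, 7))

# Random forest; in the experiments 10-200 trees were tried.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# C-SVC with an RBF kernel; kernel width (gamma) and penalty (C) are
# selected by grid search with cross-validation, as in the text.
svm = GridSearchCV(SVC(kernel="rbf"),
                   {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1]},
                   cv=3).fit(X_train, y_train)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

predictions = {name: clf.predict(X_test)
               for name, clf in (("RF", rf), ("SVM", svm), ("K-NN", knn))}
```

With real feature vectors, the confusion matrices of Tables 5–7 follow directly from comparing `predictions` with the reference labels.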

Based on the analysis above, collapsed buildings and standing buildings can be effectively separated by the three classifiers, as the overall accuracies of the three classifiers all exceed 80%. The results of RF and SVM are similar, and both outperform the K-NN classifier. Thus, collapsed buildings can be detected well by the proposed method.

5.2.3. Further Analysis about the Classification

In our experiments, the classifiers adopted are supervised machine learning classifiers. The training dataset is very important for these classifiers, and to obtain the training samples, the ground truth must be known. However, after the training dataset has been built, the classifier can be applied when a new SAR image is input. Table 8 shows the results of


three tests by the RF classifier. The aim of the tests is to evaluate the ability to classify a new image based on an existing training dataset. In Test 1 and Test 2, the training dataset consists of samples from Image 3 (8 April 2015) and Image 2 (8 December 2014), respectively. In Test 3, the training dataset is all samples from Images 2 and 3. Image 1 (4 December 2014) is the input image for the classification in all of the tests. As mentioned in Table 1, Images 2 and 3 are descending images, and Image 1 is an ascending image. In Test 1, the accuracy for collapsed buildings is not good. However, the result of Test 2 is acceptable. In Test 3, the results are stable and slightly improved. The results indicate that it is possible to classify a new image if there is an existing training dataset for the classifier. The result will be better if the training dataset contains more training samples. In future work, more samples should be acquired, and images of other research sites should be tested for a more precise conclusion.

Table 8. RF classification results of different tests (CO = totally collapsed building, ST = standing building, Pa = producer's accuracy, Ua = user's accuracy. In Test 1 and Test 2, the training dataset is samples from Image 3 (8 April 2015) and Image 2 (8 December 2014), respectively. In Test 3, the training dataset is all samples from Images 2 and 3. Image 1 (4 December 2014) is the input for classification in all of the tests).

Type     | Test 1              | Test 2              | Test 3
         | CO    ST     Pa     | CO    ST     Pa     | CO    ST     Pa
CO       | 6     6      50.0%  | 8     4      66.7%  | 8     4      66.7%
ST       | 1     51     98.1%  | 4     48     92.3%  | 3     49     94.2%
Ua       | 85.7% 89.5%         | 66.7% 92.3%         | 72.7% 92.5%
Overall  | 89.1%               | 87.5%               | 89.1%

6. Discussion

Compared with low- or medium-resolution SAR images, VHR SAR images can provide more detailed information on man-made objects, such as buildings. In 2013, the TerraSAR-X mission was extended by implementing two new modes, including the staring spotlight. The azimuth resolution of this mode is significantly increased to approximately 0.24 m by widening the azimuth beam steering angle range [30]. This sub-meter VHR SAR data source provides new opportunities for earthquake damage mapping and makes it possible to focus an analysis on individual buildings. A new method is proposed in this paper to use these VHR SAR data for collapsed building detection. The method is based on the concept that the original footprints of collapsed and standing buildings show different features in sub-meter VHR SAR images. In an SAR image, the shadow of a standing building covers the majority of the building's footprint; thus, the footprint of a standing building usually exhibits low backscattering intensity. For a collapsed building, debris piles up above the footprint, and some bright spots can be observed, caused by corner reflectors resulting from the composition of different planes. Therefore, a collapsed building displays different features than a standing building does in its original footprint. For the proposed method, the features for classification are derived from building footprints. Therefore, high image resolution is the key factor to ensure that the building footprint contains more pixels for classification. A large building size also benefits building damage detection. The proposed method avoids the difficulties of finding exact edges for building damage detection. In VHR SAR images, more features, such as the points and edges of objects, become visible. Debris exhibits features similar to those of its surroundings (e.g., vegetation), and it is not easy to identify debris from such surroundings with a single SAR image.
To solve this problem, the proposed method uses a building footprint map to locate the original footprints of buildings. The key algorithm begins with the footprint of a building in an SAR image. The footprint of a building can be provided by cadastral maps or directly extracted from a pre-event SAR image or other VHR optical data.
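How a footprint polygon is turned into a pixel mask on the geocoded SAR grid can be sketched as follows. The four-number `transform` convention and all names here are illustrative assumptions; a real product would carry a full geotransform in its metadata.

```python
import numpy as np

def footprint_mask(poly_xy, transform, shape):
    """Boolean mask of the pixels whose centers fall inside a building
    footprint polygon, via even-odd ray casting.

    poly_xy   : (N, 2) map coordinates (x, y) of the footprint vertices
    transform : (x0, dx, y0, dy) so pixel (row, col) sits at
                (x0 + col * dx, y0 + row * dy)
    shape     : (rows, cols) of the geocoded SAR image
    """
    x0, dx, y0, dy = transform
    cols = (poly_xy[:, 0] - x0) / dx  # vertices in pixel coordinates
    rows = (poly_xy[:, 1] - y0) / dy
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    inside = np.zeros(shape, dtype=bool)
    n = len(cols)
    for i in range(n):
        j = (i - 1) % n
        # Does the edge (j, i) cross the horizontal line through each pixel?
        crosses = (rows[i] > rr) != (rows[j] > rr)
        x_int = (cols[j] - cols[i]) * (rr - rows[i]) / (rows[j] - rows[i] + 1e-12) + cols[i]
        inside ^= crosses & (cc < x_int)
    return inside
```

The masked intensity values are then the input to the first-order and textural feature extraction for that building.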


The approach shows a good ability for isolated buildings. However, if the buildings are too close, the footprint of a low building may be affected by a nearby tall building's layover in the image. Because layover usually shows relatively high backscattering intensity, the low standing building may be misclassified as a collapsed building by the method. If the debris of a collapsed building is near a standing building, the incident wave from the sensor may be obstructed by the standing building, so that little of the backscattered wave from the collapsed building can be received by the sensor. In this case, the collapsed building also shows low backscattering intensity in the footprint, and it may be misclassified as a standing building. The high resolution of an SAR image usually means small coverage of a single scene. The scene extent of a TerraSAR-X ST mode image usually varies between 2.5 and 2.8 km in the azimuth direction and between 4.6 and 7.5 km in range [30,38]. The small coverage of ST mode data may not be adequate for damage detection over a wide area with a single image. Commercial TerraSAR-X services commenced in January 2008. A TerraSAR-X rebuild was launched as TanDEM-X in 2010 to fly in close formation with TerraSAR-X [39]. The almost identical Spanish PAZ satellite will be launched in 2016 [40] and added to the TerraSAR-X reference orbit. The three almost identical satellites will be operated collectively and will deliver optimized revisit times, increased coverage and improved services [41]. When we monitor Earth surface deformation caused by earthquakes, SAR images with large coverage are preferable. When detecting building damage in a town or a city, the coverage requirements may be smaller. If the study area is a large city, data can be acquired from TerraSAR-X, TanDEM-X and PAZ to cover a large area.
Therefore, in real applications, we can use large-coverage SAR images to locate the city or town influenced by an earthquake and perform preliminary damage assessments. ST data from TerraSAR-X, TanDEM-X and PAZ can then be obtained to monitor the region for a more detailed and accurate assessment. In addition, airborne VHR SAR images covering the study area can be used to detect building damage.

7. Conclusions

In this paper, we present a new damage assessment method for buildings using single post-earthquake sub-meter resolution VHR SAR images and original building footprint maps. The method can work at the individual building level and determines whether a building is destroyed after an earthquake or is still standing. First, a building footprint map covering the study area is obtained as prior knowledge. Then, the SAR image is geometrically rectified using the ground control points provided with the SAR product files. After rectification, the SAR image can be registered to the building footprint map. Thus, with the building footprint map, the original footprint of a building can be located in the SAR image. Then, features can be extracted from the image patch of a building's footprint to form a feature vector. Finally, the buildings can be classified into damage classes with classifiers. We demonstrated the effectiveness of the proposed approach using spaceborne post-event sub-meter VHR TerraSAR-X ST mode data from old Beichuan County, China, which was heavily damaged in the Wenchuan earthquake in May 2008. The results show that the method is able to distinguish between collapsed and standing buildings, with high overall accuracies of approximately 90% for the RF and SVM classifiers. We tested the method using both ascending and descending data, demonstrating the effectiveness of the proposed method.
The results also show that the first-order statistics and second-order image features derived from building footprints have a good ability to discriminate standing and collapsed buildings. Furthermore, for the supervised classifiers in our experiments, once the training dataset has been built, the classifier performs well when a new SAR image is input. In future work, more samples should be acquired, and images of other research sites should be tested for a more precise conclusion.


Acknowledgments: This work was supported in part by the National Natural Science Foundation of China under Grant No. 41371413 and by the Key Program of the National Natural Science Foundation of China under Grant No. 41331176. The authors would like to thank the German Aerospace Center for providing the TerraSAR-X ST data from the TerraSAR-X AO project (LAN2456). The airborne optical image and LIDAR data were provided by the Airborne Remote Sensing Center of the Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences.

Author Contributions: Chao Wang and Hong Zhang conceived of and designed the in situ investigations. Lixia Gong, Fan Wu and Qiang Li performed the experiments and analyzed the data. Jingfa Zhang and Qiang Li provided some in situ investigation photographs and data. Lixia Gong and Fan Wu wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.
