International Journal of Geo-Information, Article

Texture-Cognition-Based 3D Building Model Generalization

Po Liu 1, Chengming Li 1,* and Fei Li 2

1 Chinese Academy of Surveying & Mapping, Beijing 100830, China; [email protected]
2 Key Laboratory of Watershed Geographic Sciences, Nanjing Institute of Geography and Limnology, Chinese Academy of Sciences, Nanjing 210008, China; [email protected]
* Correspondence: [email protected]; Tel./Fax: +86-010-6388-0555

Academic Editors: Jianming Liang, Jianhua Gong, Yu Liu and Wolfgang Kainz Received: 28 May 2017; Accepted: 21 August 2017; Published: 23 August 2017

Abstract: Three-dimensional (3D) building models have been widely used in the fields of urban planning, navigation and virtual geographic environments. These models incorporate many details to address the complexities of urban environments. Level-of-detail (LOD) technology is commonly used for the progressive transmission and visualization of such models, and detailed groups of models can be replaced by a single model through generalization. In this paper, texture features are introduced into the generalization process for the first time, and a self-organizing map (SOM)-based algorithm is used for texture classification. In addition, a new cognition-based hierarchical algorithm is proposed for model-group clustering. First, a constrained Delaunay triangulation (CDT) is constructed using the footprints of the building models, which are segmented by the road network, and a preliminary proximity graph is extracted from the CDT by visibility analysis. Second, the graph is further segmented by the texture features and landmark models. Third, a minimum spanning tree (MST) is created from the segmented graph, and the final groups are obtained by linear detection and discrete-model conflation. Finally, these groups are conflated using small-triangle removal while preserving the original textures. The experimental results demonstrate the effectiveness of this algorithm.

Keywords: 3D building generalization; texture; SOM; cognition; constrained Delaunay triangulation

1. Introduction

Virtual Geographic Environments (VGEs) have been proposed as a new generation of geographic analysis tools to improve our understanding of the geographic world and to assist in solving geographic problems at a deeper level [1,2]. Cities are the centers of modern life, where most people work and live, and three-dimensional urban-building models are key elements of cities that have been widely applied in urban planning, navigation, virtual geographic environments and other fields [3–8]. Building visualization is the basis for collaborative work, planning and decision making in city VGEs. These models include many details to address the complexities of urban environments; thus, level-of-detail (LOD) technology is often used to reduce the amount of data and increase the rendering speed [9]. Detailed models can be replaced by less-detailed models at more distant views using generalization [10]. Building generalization involves two different tasks: single-building simplification and model-group generalization [11]. Single-building simplification has been widely studied [10,12,13], but few works have considered model-group generalization, which is the focus of this paper. For large-scale city models, single-building simplification alone does not suffice for visualization: the huge number of single objects contributes most to both the computational complexity and the cognitive load. Building LOD models can significantly assist the interactive rendering of massive urban-building models. Therefore, we must apply the principles of generalization to their visualization to efficiently communicate spatial information [14].


A building-generalization technique for linear building groups was created by [15]. Chang et al. [16] proposed a single-link-based clustering method to group city-sized collections of 2.5D buildings while maintaining the legibility of the city [17]. Glander and Döllner [18] used an infrastructure network to group building models and replace them with cell blocks while preserving local landmarks. Kada [19] extended cell decomposition to use morphological operations on a raster representation of the initial vector data to generalize building groups. Yang, Zhang, Ma, Xie and Liu [14] proposed a new distance measurement to improve the single-link clustering algorithm that was proposed by Chang, Butkiewicz, Ziemkiewicz, Wartell, Ribarsky and Pollard [16]. Guercke et al. [20] expressed different aspects of the aggregation of building models in the form of mixed-integer-programming problems. He et al. [21] proposed a footprint-based generalization approach for 3D building groups in the context of city visualization by converting 3D generalization tasks into 2D issues via the buildings' footprints, reducing both the geometric complexity and the information density. Zhang et al. [22] proposed a spatial-cognition analysis technique for building displacement and deformation in city maps based on Gestalt and urban-morphology principles.

These approaches are mainly based on geometric characteristics. However, several problems remain unresolved. (1) Texture features have not yet been fully considered for generalization; compared to geometric features, texture features more closely reflect people's visual perception. (2) Landmarks play an important role in cities and thus should be preserved during generalization. (3) Gestalt clustering in local regions is still a difficult problem. In addition, the discrete models left after segmentation must be further combined, and detailed textures should be preserved in the different LOD models.

This paper is the first to introduce texture features into 3D-model-group clustering and generalization based on Gestalt and urban-morphology principles. A self-organizing map (SOM)-based classification approach is proposed for texture-feature analysis. In addition, a new hierarchical-clustering method is proposed. First, a constrained Delaunay triangulation (CDT) is constructed from the footprints of the models and other urban morphologies, which are segmented by the road network. Then, the preliminary graph is extracted from the CDT by visibility analysis, the proximity graph is further segmented by texture analysis and landmark detection, and the final groups are obtained by Gestalt-based local-structure analysis. Finally, these groups of models are conflated using a small-triangle-removal strategy.

The remainder of this paper is organized as follows. Section 2 describes the related work, namely, cognition-based 3D-model generalization and texture-feature analysis. Section 3 introduces the framework of the algorithm; the cognition-based hierarchical generalization and the SOM-based texture-classification approach are described in detail. Section 4 presents the experiment. Finally, the study's results are summarized and discussed in Section 5.

2. Related Work

This section discusses building-model generalization and texture features.

2.1. Cognition-Based 3D-Building Generalization

A large number of simplification algorithms based on different perspectives have been proposed in computer graphics.
Garland and Heckbert [23] developed the first simplification method based on quadric matrices to produce high-quality approximations of polygonal models. An improved quadric error metric for simplifying meshes was further proposed based on geometric correspondence in 3D to preserve the color, texture and normal attributes [24,25]. Lindstrom and Turk [26] introduced a framework that uses images to determine the portions of a model that must be simplified. High-quality textured surfaces are produced with the edge-collapse algorithm. These works are mainly based on single buildings. As an exception, Chang, Butkiewicz, Ziemkiewicz, Wartell, Ribarsky and Pollard [16] proposed a single link-based clustering method to group city-sized collections of 2.5D buildings while maintaining the legibility of the city, and the texture for each face was generated by placing an orthographic camera so that its near clipping plane lay on the face.


Buildings are the main content of map generalization, and preserving their structural characteristics is a main purpose of generalization. Urban-morphology theory and Gestalt principles are used to form global and local constraints, respectively, during building-model generalization [27]. Regnauld [28] detected and organized building relationships for 2D map generalization using a minimum spanning tree (MST) and a proximity graph of the building set according to various criteria from Gestalt theory. Li et al. [29] presented a cognition method that combines urban morphology and Gestalt theory for 2D-building grouping and generalization, but this method is difficult to apply to complex building shapes. Yan et al. (2008) improved this method for buildings, but the map-generalization efficiency is still rather low. The existing map-generalization methods can be extended to 3D-model generalization. Urban-morphology theory is suitable for global grouping. Mao et al. [30] used an urban-road network to segment models while maintaining the significant city layout. Yang, Zhang, Ma, Xie and Liu [14] proposed an approach based on the single-link clustering method to generalize and render urban-building models while maintaining their main characteristics, such as neighborhoods, paths and intersections [17]. Gestalt theory can also be used to cluster buildings. Mao et al. [31] proposed a linear group-detection method for building classification based on the MST, but this method is not suitable for complex building models in urban environments. Zhang, Hao, Dong and Zhen [22] proposed a support vector machine (SVM)-based method to group models according to their proximity, similarity and direction. Local constraints from Gestalt theory and global constraints from urban morphology have also been used to update and visualize groups of 3D models [32,33]. These generalization methods are based on geometric characteristics and rarely consider the texture. In this paper, texture characteristics are used to assist generalization, building on these cognition-based 3D-building-generalization approaches.

2.2. Texture Features

Texture is typically defined as a local image feature, that is, an indicator computed over the pixels of a local region. Texture characterizes the internal variance of a region and is associated with the location, size and shape of the texture, but not with its overall brightness. Structural differences in texture space can be converted into different gray values and expressed by a mathematical model. The texture itself contains the direction, position and shape of these characteristics, which can be described by the geometrical characteristics of the model. Texture-analysis methods include both statistical and structural analysis. Statistical characteristics include the gray histogram of the texture area, the gray mean and variance, and the minimum and maximum pixel values. Structural analysis is similar to the geometric analysis of building footprints and is highly suitable for artificial textures, such as buildings. Structural analysis is treated as a geometric clustering problem in this paper. Current 3D-model generalization is mainly based on geometric features because texture information has still not received sufficient attention. Unlike other methods, Noskov and Doytsher [11] proposed a generalization method that is based on the rasterization of the 2D footprints of 3D buildings, but the pixel characteristics still do not include the texture.
The texture can be used as an important classification indicator during generalization, which is more in line with visual-cognitive habits. Cabral et al. [34] proposed a geometry- and texture-matching method to reshape texture models that can be applied to an existing model of a small grid but is not suitable for a large-scale 3D-city model. Zhang et al. [35] realized the unified management of the texture and geometric structure but did not make full use of texture information. In this paper, the texture features of a 3D model are combined with geometric information for model generalization. The texture must occasionally be reshaped after changing the geometry of the model groups [16,18,36] because the original textures are often eliminated during model conflation; this task is also addressed in this paper through the triangle-removal strategy.


3. Cognition-Based Hierarchical Clustering and Generalization

First, the texture is introduced based on urban morphology and Gestalt theory to assist the group clustering and generalization of 3D-building models, and the detailed textures are preserved after conflation. Figure 1 shows the main framework of the algorithm, which includes six steps: (1) data preprocessing: extract the footprint and texture of the model; (2) urban morphology-based global clustering: CDT construction from the building footprints, segmentation by the road network according to urban-morphology theory, and extraction of the initial proximity graph by visibility analysis; (3) further segmentation of the proximity graph by texture analysis and landmark detection, which is obviously different from the 2D generalization method; (4) Gestalt theory-based local clustering: generation of an MST from the segmented graph based on the nearest distance, followed by linear detection and discrete-polygon conflation, which originate from Gestalt theory; (5) conflation of the grouped models and regeneration of the texture; and (6) visualization of these models.

Figure 1. Algorithm framework.

3.1. Data Processing

First, the 3D model must be preprocessed using the following steps: (1) extract the footprint: the model is projected onto the horizontal plane, and the projected faces are merged into a polygon [37]; the GEOS library can be used for the polygon merging; (2) calculate the centric coordinates of the footprints and obtain the building height, taking the maximum Z of the vertices as the building height; and (3) extract the roof and walls of the model from the normals of the triangles; the roof and facade textures can also be extracted from the original models. Figure 2a shows a visualization of an original model with detailed textures; the green line below the model marks the footprint that was extracted from it. Figure 2b presents its texture image.
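To make steps (1)-(2) concrete, the following minimal sketch (in Python, with the Shapely bindings to GEOS standing in for the GEOS DLLs mentioned above) projects a triangle mesh onto the horizontal plane, merges the projected triangles into a footprint polygon, and takes the maximum Z as the building height. The function name and array layout are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of Section 3.1, steps (1)-(2): footprint by projection and
# merging (GEOS via Shapely), centroid, and maximum-Z building height.
import numpy as np
from shapely.geometry import Polygon
from shapely.ops import unary_union

def footprint_and_height(vertices, triangles):
    """vertices: (n, 3) array of x, y, z; triangles: (m, 3) array of vertex indices."""
    vertices = np.asarray(vertices, dtype=float)
    tris_2d = []
    for tri in triangles:
        poly = Polygon(vertices[tri, :2])      # drop Z: project onto the horizontal plane
        if poly.is_valid and poly.area > 0:    # skip degenerate (vertical) triangles
            tris_2d.append(poly)
    footprint = unary_union(tris_2d)           # polygon merging with GEOS
    height = float(vertices[:, 2].max())       # maximum Z taken as the building height
    c = footprint.centroid                     # centric coordinate of the footprint
    return footprint, (c.x, c.y), height
```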

Figure 2. Experimental data: (a) original model-visualization effect with footprints; (b) texture image of the selected model.


3.2. Urban Morphology-Based Global Clustering

We select a typical region from the research data, which contains 63 buildings, to better explain the process and effect of the algorithm; the algorithm is described with these polygons in this section. In this paper, the building models are preliminarily clustered according to urban-morphology theory using proximity graphs, which involves the following two steps: (1) a CDT is generated from the building footprints and further segmented by the road network; and (2) a proximity graph is extracted from the CDT via visibility analysis.

3.2.1. CDT

Proximity graphs are widely used in map or model generalization [22,28,29]. Zhang, Hao, Dong and Zhen [22] constructed an MST from an improved Delaunay triangulation in which the vertices of the triangles are often located at the centers of the buildings (Figure 3a). However, only adjacent polygons within the visual scope can be directly consolidated during model merging. In this paper, we construct a constrained Delaunay triangulation based on the footprints of the 2.5D models (Figure 3b), and the triangles inside the polygons are removed.

Figure 3. Delaunay triangulation: (a) non-constraint; (b) boundary constraint.

This Delaunay triangulation must be further segmented based on urban-morphology theory. In this paper, roads are used to divide these triangles, and the triangles that intersect with roads are removed. In Figure 3b, the triangles that intersect with roads are represented by dotted lines, while the remaining triangles conform to the urban-morphology distribution. When no road data are available, the models can be preliminarily segmented by the buffer-like image-based method of Zhang, Zhang, Takis Mathiopoulos, Xie, Ding and Wang [35].

3.2.2. Visibility Analysis

We used the CDT to detect the topological adjacency relationships of buildings and to generate two-building groups. After the triangles were segmented, the proximity graph was further extracted by visibility analysis. Figure 4a shows a portion of the Delaunay triangulation in Figure 3a. Polygons a and e, and polygons e and c, cannot be directly combined in the generalization, so the edges in Figure 4a do not represent likely conflation relationships. The corresponding CDT is represented by dotted lines in Figure 4b. These triangles can be divided into two categories. The first category has one or two vertices on one polygon and the remaining two or one vertices on another polygon. The second category, which connects three polygons, was removed. Finally, if two buildings still had linking triangles, they were connected by a line whose vertices lie at the buildings' centers of gravity. Figure 4c presents the initial proximity graph from the visibility analysis.


Each edge represents a direct merging relationship between two adjacent buildings. Subsequent classifications were all based on this proximity graph.

Figure 4. Visibility analysis: (a) Delaunay triangulation (DT); (b) constrained Delaunay triangulation (CDT); (c) preliminary proximity graph.
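As an illustration of Sections 3.2.1 and 3.2.2, the sketch below builds a CDT over the footprint boundaries, discards triangles that fall inside a footprint or intersect a road, and keeps an edge between two buildings whenever a remaining triangle links exactly those two footprints. It assumes the third-party `triangle` package (a wrapper of Shewchuk's Triangle) and Shapely; the function name and the centroid-based inside test are illustrative choices rather than the paper's exact procedure.

```python
# Hedged sketch of Sections 3.2.1-3.2.2: CDT construction, road-based
# segmentation, and extraction of the building proximity graph.
import numpy as np
import triangle
from shapely.geometry import Polygon

def proximity_graph(footprints, roads):
    """footprints: list of (k, 2) arrays (exterior rings); roads: list of shapely LineStrings."""
    verts, segs, owner = [], [], []
    for b, ring in enumerate(footprints):
        start, n = len(verts), len(ring)
        verts.extend([tuple(p) for p in ring])
        owner.extend([b] * n)                                  # which building each vertex belongs to
        segs.extend([(start + i, start + (i + 1) % n) for i in range(n)])
    cdt = triangle.triangulate({'vertices': np.asarray(verts, float),
                                'segments': np.asarray(segs)}, 'p')
    polys = [Polygon(r) for r in footprints]
    edges = set()
    for tri in cdt['triangles']:
        owners = {owner[i] for i in tri if i < len(owner)}
        if len(owners) != 2:                                   # keep triangles bridging exactly two buildings
            continue
        tri_poly = Polygon(cdt['vertices'][tri])
        if any(tri_poly.centroid.within(p) for p in polys):
            continue                                           # triangle lies inside a footprint
        if any(tri_poly.intersects(road) for road in roads):
            continue                                           # removed by the road network
        edges.add(tuple(sorted(owners)))
    return edges                                               # each pair marks two adjacent buildings
```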

3.3. SOM-Based Texture Classification and Analysis

Texture is an unquantifiable feature; the demarcation between texture features is often unclear, although it is intuitive in human vision. Thus, an intelligent algorithm is a realistic choice. To increase the degree of automation of the algorithm, we used an SOM-based method to automatically classify the textures in this paper, with appropriate classification indicators and strategies carefully selected.

3.3.1. Texture-Feature Analysis

Textures have color and image characteristics. These features reflect both the structure of the gray information and the spatial relationships within the texture image's gray-level distribution. 3D-building models mainly include facade and roof textures; only roof textures were analyzed in this paper. Building models in the same region often have similar characteristics because buildings that are constructed at the same time may have the same texture.

We assumed that the set of planes for a building mesh is defined as f(v) = {(a, b, c, d) | ax + by + cz + d = 0}. The roof and facade planes were initially clustered based on the normals of the planes. If we assume one plane F, the zenith angle $\theta$ of its normal is defined as

$$\theta = \arccos\left(\frac{c}{\sqrt{a^2 + b^2 + c^2}}\right). \quad (1)$$

If the angle is less than 10 degrees, then the plane is catalogued as a roof. The characteristics of the texture pixels can be evaluated by texture mapping. As shown in Figure 5, triangle a(V4 V5 V6) has three vertices in the model mesh, and triangle b(t4 t5 t6) is its corresponding triangle in the texture image. If we assume that the width and height of the texture image are w and h, and the texture coordinates of V4 are (x, y), where x and y are between 0 and 1, then the coordinate of t4 = (t_x, t_y) in Figure 5c is

$$t_x = w \cdot x, \qquad t_y = h \cdot y. \quad (2)$$

The pixels inside the triangle can be computed by an intersection operation, and the mean and variance values of triangle a are defined as

$$\bar{p} = \frac{\sum_{i=1}^{n} p_i}{n}, \quad (3)$$

$$\delta = \sqrt{\frac{\sum_{i=1}^{n} (p_i - \bar{p})^2}{n}}, \quad (4)$$

where n is the number of pixels inside triangle b, and $p_i$ is the pixel value.

Figure 5. Texture mapping: (a) original triangle; (b) texture image; (c) pixels in the triangles.
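The following sketch illustrates Equations (1)-(4): a plane is catalogued as a roof from the zenith angle of its normal, the texture coordinates of a mesh triangle are mapped to pixel coordinates, and the mean and standard deviation of the pixels inside the corresponding texture triangle are computed. It uses NumPy and Pillow; the function names, the HSV conversion via Pillow, and the neglect of any vertical flip of the texture coordinates are assumptions made for brevity.

```python
# Hedged sketch of Equations (1)-(4): roof detection and per-triangle texture statistics.
import numpy as np
from PIL import Image, ImageDraw

def is_roof(a, b, c, threshold_deg=10.0):
    """Equation (1): zenith angle of the plane normal (a, b, c) against the 10-degree threshold."""
    theta = np.degrees(np.arccos(c / np.sqrt(a * a + b * b + c * c)))
    return theta < threshold_deg

def triangle_pixel_stats(texture_path, uv_triangle):
    """uv_triangle: three (u, v) texture coordinates in [0, 1]; returns hue mean and std."""
    img = np.asarray(Image.open(texture_path).convert('HSV'), dtype=float)
    h, w = img.shape[:2]
    # Equation (2): texture coordinates -> pixel coordinates (any V-axis flip is ignored here).
    pixel_tri = [(u * w, v * h) for u, v in uv_triangle]
    mask = Image.new('L', (w, h), 0)
    ImageDraw.Draw(mask).polygon(pixel_tri, outline=1, fill=1)
    inside = np.asarray(mask, dtype=bool)
    hue = img[..., 0][inside]          # pixels inside the texture triangle (Figure 5c)
    return hue.mean(), hue.std()       # Equations (3) and (4)
```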

Figure 6 shows the buildings with detailed textures in cities. Distinguishing buildings 5 and 7 from surrounding buildings (such as buildings 1, 2, 3, 4, 5 and 6) is difficult when using only the geometric features, such as the area, direction and height index, but this process is easier when using the texture.

Figure 6. Texture features.

3.3.2. Texture Classification

A statistical texture-analysis method was used in this paper, and the mean value and variance of the texture pixels were used as the two classification indices. The pixels of a texture are represented by color. The RGB color values must be converted to HSV values before classification, and the hue is an input value of the training. Figure 7 shows the clustering results from the mean value of the pixels; the numbers in the polygons represent the identifications of the buildings, and the blue lines represent roads. Distinguishing buildings 51-2 and 50-4 is difficult, but they can be easily distinguished if the variance index is added as an indicator. The numbers 51 and 50 are the FIDs of the buildings, whereas 2 and 4 are the corresponding texture-classification categories.
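A minimal sketch of the SOM-based classification is given below: each building is described by the mean and variance of its roof-texture hue, and the winning neuron of a small SOM serves as its texture category. The paper's classifier was implemented in MATLAB; MiniSom, the grid size and the iteration count here are only illustrative substitutes.

```python
# Hedged sketch of Section 3.3.2: SOM clustering on [hue mean, hue variance].
import numpy as np
from minisom import MiniSom

def classify_textures(features, grid=(3, 3), iterations=2000):
    """features: (n_buildings, 2) array of [hue mean, hue variance] per building."""
    features = np.asarray(features, dtype=float)
    data = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-9)  # normalize indices
    som = MiniSom(grid[0], grid[1], data.shape[1], sigma=1.0, learning_rate=0.5)
    som.random_weights_init(data)
    som.train_random(data, iterations)
    # The winning neuron of each building is taken as its texture-classification category.
    return [som.winner(x) for x in data]
```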

Figure 7. Texture classification by statistical analysis.

Classification in local regions can also significantly improve the accuracy of the SOM-clustering method. Figure 8a,b present the texture-clustering results in the global and local regions, respectively. Buildings 46 and 47 in Figure 8a are divided into different categories but belong to the same category in Figure 8b. We often demarcate a texture relative to the surrounding buildings; therefore, texture classification in the global region is not a good choice. The local regions can be obtained from the urban-morphology clustering.

Figure 8. Texture classification in the (a) global region and (b) local region.

Building-texture features imply the structural characteristics of the 3D model, and buildings that are clustered by texture more closely match human-vision characteristics. Figure 9 shows the final proximity graph, which is segmented by the types of textures.


Figure 9. Proximity graph that is segmented by the types of textures.

3.4. Landmark Detection

Landmarks are important components of a city because they considerably distinguish the city from others during generalization [16], and the sum of their mesh planes is often complex [17]. Landmarks are widely used in navigation and must be preserved during model generalization [14]. Landmark models can be detected from the surrounding models by calculating differences in the height, area and volume [38]. Of these geometric features, the height of a building is extremely important and must be carefully considered during generalization [16].

Here, we demonstrate height-based detection. The height of building B1 in Figure 10 is h, and the surrounding buildings B2, B3, B4, B5, B6, B7 and B8 in the visual region have corresponding heights h_i. The average height of the surrounding buildings and the height difference are

$$\bar{h} = \frac{\sum_{i=1}^{n} h_i}{n}, \qquad \delta = h - \bar{h}. \quad (5)$$

The δ of each model can be calculated. If the height difference is greater than a certain threshold, then the building can be considered a landmark. The threshold can be obtained through data training; the default value is 10 m. The heights of buildings 63 and 24 (in Figure 7) are significantly different from those of the surrounding buildings and thus can be easily distinguished through height detection, although other geometric features (such as the area) can also be used. Figure 10b shows the proximity graph, which is segmented by landmarks.

Figure 10. Landmark detection: (a) proximity relationship; (b) graph from landmark detection.
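A short sketch of the height-based landmark test in Equation (5) follows; `neighbors` is assumed to come from the proximity graph, and the function name and the 10-m default threshold follow the description above.

```python
# Hedged sketch of Equation (5): a building is flagged as a landmark when its
# height exceeds the average height of the surrounding buildings by a threshold.
import numpy as np

def detect_landmarks(heights, neighbors, threshold=10.0):
    """heights: dict building_id -> height; neighbors: dict building_id -> ids in the visual region."""
    landmarks = set()
    for b, h in heights.items():
        around = [heights[n] for n in neighbors.get(b, []) if n in heights]
        if not around:
            continue
        delta = h - np.mean(around)        # Equation (5)
        if delta > threshold:
            landmarks.add(b)
    return landmarks
```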


3.5. Gestalt Theory-Based Local Clustering

If the nearest building is not suitable for merging during generalization, other characteristics, such as the direction or spacing, should be considered. Three principles of Gestalt theory, namely, proximity, similarity and common direction, were considered to extract potential Gestalt building clusters in local regions. Texture features also include geometric-structure characteristics, and the proximity graph requires further structural analysis. In this paper, the structural analysis included three steps: MST construction, linear detection and discrete-polygon conflation.

3.5.1. MST

MSTs (minimum spanning trees) are a powerful support tool for spatial clustering. An MST is a graph over a group of adjacent polygons that links all the points, contains no closed rings, and minimizes the total length of the connection edges. In this paper, MSTs were extracted, based on the nearest distance, from the proximity graph that was segmented through texture analysis and landmark detection. Figure 11 shows the MST of the proximity graph in Figure 10b: its vertices are the buildings' centers of gravity, and each edge is weighted by the distance between the adjacent buildings.

Figure 11. Minimum spanning tree (MST).
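The MST construction can be sketched as follows, using the distance between building centers of gravity as the edge weight over the segmented proximity graph. SciPy's spanning-tree routine stands in for the paper's own implementation; in practice, the distance between footprint boundaries could be used instead of the centroid distance.

```python
# Hedged sketch of Section 3.5.1: MST over the proximity graph with nearest-distance weights.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def build_mst(centroids, edges):
    """centroids: (n, 2) building centers of gravity; edges: iterable of (i, j) proximity pairs."""
    n = len(centroids)
    weights = np.zeros((n, n))
    for i, j in edges:
        d = np.linalg.norm(np.asarray(centroids[i]) - np.asarray(centroids[j]))
        weights[i, j] = weights[j, i] = d
    mst = minimum_spanning_tree(csr_matrix(weights))   # sparse matrix of the kept edges
    return [(int(i), int(j), float(mst[i, j])) for i, j in zip(*mst.nonzero())]
```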

3.5.2. Linear Detection

In visual perception, we tend to merge buildings in the collinear direction [39], so the MST requires further linear detection. Figure 12a shows the preliminary proximity graph, and Figure 12b shows the corresponding MST; the letters in the polygons mark the buildings' centers of gravity. Linear detection involves three steps. (1) From the starting point a of the MST (Figure 12b), ab is the starting edge and bc is the next edge; if the direction difference between ab and bc is below a certain threshold, remove bc. (2) Look up the initial proximity graph (in Figure 8b) and find the edge bc that links point b; if the direction difference between ba and bc is below a certain threshold, then link bc. (3) After processing polygon b, repeat the first and second steps for polygon c until the entire tree is traversed. Figure 12c is the final proximity graph from the linear detection.
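A small helper for the direction test used in steps (1) and (2) might look as follows; the 15-degree threshold is only an assumed value, since the paper does not state one.

```python
# Hedged sketch of the collinearity test in Section 3.5.2.
import numpy as np

def direction_difference(p_a, p_b, p_c):
    """Angle (degrees) between edges ab and bc, given building centroids a, b, c."""
    v1 = np.asarray(p_b, float) - np.asarray(p_a, float)
    v2 = np.asarray(p_c, float) - np.asarray(p_b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def is_collinear(p_a, p_b, p_c, threshold_deg=15.0):
    # True when the two consecutive edges continue roughly the same direction.
    return direction_difference(p_a, p_b, p_c) < threshold_deg
```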


Figure 12. Linear detection: (a) preliminary proximity graph; (b) MST; (c) final graph.

3.5.3. Discrete-Polygon Clustering

After linear detection, any discrete polygons must be further processed [29]. Compared to the preliminary graph, discrete polygons can be combined with the existing clustering groups according to the orientation, height and area features. Figure 13a shows the discrete polygons g and h; g may link to polygons e and f in the preliminary graph. If the direction difference between he and eb is below a certain threshold, then polygons h and e can be linked, and polygons g and e are also linked by linear detection. Figure 13b is the final proximity graph from discrete-polygon clustering.

Figure 13. Discrete-model clustering: (a) discrete-polygon linking; (b) the final proximity graph.

3.6. Conflation

Building models must be further conflated after clustering. Yang et al. (2011) used Delaunay triangulation for 3D-model simplification and conflation; however, this approach is not practical for a detailed texture model. When the vertices are changed, the original textures must be merged. A simple model-conflation algorithm is proposed in this paper to merge the original textures. When the models belong to the same group, they are conflated and simplified by an area threshold. The main process is as follows:

(1) Compute the area of every triangle and sort the triangles by area.
(2) Compute the average area of the triangles in the group and calculate the area threshold as follows:

$$\Delta s = \delta \cdot \frac{\sum_{i=1}^{n} S_i}{n}, \quad (6)$$

where n is the number of triangles in the group, S_i is the area of triangle i of the 3D model, and δ is a multiplication coefficient.

(3) Traverse all the triangles, and remove any triangles whose area is below the threshold.
(4) Conflate the remaining triangles in the group and compress their textures.

Figure 14a shows the conflated and simplified building models, while the original group models are shown in Figure 14b. As shown in Figure 14, the original models contained 2010 triangles with 6030 vertices; after conflation (with δ set to 0.2), the models contained 1330 triangles with 3990 vertices. The texture coordinates and vertex coordinates of the conflated models were the same as in the original models because of the triangle-removal process. The threshold δ can be determined from experience or tested with existing models; the main features must be preserved after the triangles are removed.

Figure 14. Visualization of model conflation: (a) conflation model; (b) original model.
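A hedged sketch of the conflation in Section 3.6 is given below: the triangles of a group are sorted by area, and those whose area falls below δ times the group's average triangle area are removed, so the surviving triangles keep their original vertex and texture coordinates. The function name and data layout are illustrative.

```python
# Hedged sketch of Section 3.6, steps (1)-(4): area-threshold triangle removal.
import numpy as np

def conflate_group(triangles, delta=0.2):
    """triangles: list of (3, 3) arrays of vertex coordinates; returns indices of kept triangles."""
    def area(tri):
        return 0.5 * np.linalg.norm(np.cross(tri[1] - tri[0], tri[2] - tri[0]))
    areas = np.array([area(np.asarray(t, float)) for t in triangles])
    threshold = delta * areas.mean()                          # Equation (6)
    order = np.argsort(areas)                                 # step (1): sort by area
    return [int(i) for i in order if areas[i] >= threshold]   # steps (3)-(4): drop small triangles
```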

3.7. Model Visualization

We used a tree structure to store the simplified models; these data structures were discussed in detail by [14]. We only generated a new mesh, and the original textures were all preserved. As shown in Figure 15a, the original model footprints a, b, c, d, e and f were sequentially clustered in the order that is shown in Figure 15b. The nodes that were generated by the previous steps could be merged into one node by model conflation. In the real world, the distances between adjacent buildings in the same district are small, whereas the distances between different districts may be much larger. After the discrete-model clustering, these models were further conflated, and the depth of the nodes was computed by the method that was proposed in [16].

We established the LOD urban-building models in the previous steps. The building models near the viewpoint have a high level of detail, whereas more distant models can be rendered by coarse models. Different from the criterion of LOD division that was proposed by [16], the original and coarse building models were switched according to their computer-screen pixels in this paper. As shown in Figure 16, when a model is rendered, its screen pixel size is computed as

$$mat_{ModelView} = Matrix_{model} \cdot Matrix_{view}$$
$$d_{depth} = -(center[0] \cdot mat(0,2) + center[1] \cdot mat(1,2) + center[2] \cdot mat(2,2) + mat(3,2))$$
$$ratio = 2.0 \cdot d_{Near} / (d_{Top} - d_{Bottom})$$
$$screenPixel = ratio \cdot radius / d_{depth}, \quad (7)$$

where $Matrix_{model}$ and $Matrix_{view}$ are the model and view matrices of the visualization pipeline, center is the center of the model, and radius is the radius of the bounding sphere of the model. ratio is a camera parameter of the rendering; the related near-plane parameters $d_{Near}$, $d_{Top}$ and $d_{Bottom}$ can be referenced in the OpenGL manual. The screen pixel size, which considers both the size of the model and the position of the camera, was used for model visualization. When the camera moves, the conflated models are dynamically scheduled and rendered.
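Equation (7) can be sketched as follows, following the paper's row-major matrix notation; the result is compared against the pixel threshold (120 pixels in the experiments) to decide which LOD node to render. The function name and matrix conventions are assumptions.

```python
# Hedged sketch of Equation (7): approximate on-screen size of a model's
# bounding sphere from the model-view matrix and the near-plane extents.
import numpy as np

def screen_pixels(center, radius, matrix_model, matrix_view, d_near, d_top, d_bottom):
    """center: (3,) bounding-sphere center; matrices: 4x4 row-major arrays."""
    mat = matrix_model @ matrix_view            # mat_ModelView, as in Equation (7)
    d_depth = -(center[0] * mat[0, 2] + center[1] * mat[1, 2]
                + center[2] * mat[2, 2] + mat[3, 2])
    ratio = 2.0 * d_near / (d_top - d_bottom)
    return ratio * radius / d_depth             # compared with the LOD pixel threshold
```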

Figure 15. Model organization: (a) model's footprints; (b) clustering order.

Figure 16. Level of detail (LOD) switch with pixel threshold.

This approach considers both the distance between the viewpoint and the building models and the geometrical complexity of the cluster itself. Large physical objects, such as landmarks, are usually much taller and more complex than their surrounding objects. Nodes that consist of a merged landmark and surrounding buildings are always large. Thus, landmarks can be rendered even if they are located far from the viewpoint.



4. Experiment

We used an experimental platform on a Microsoft Windows 7 operating system to verify the proposed method. The clustering algorithms were realized in MATLAB 7.0, and the visualization of the 3D-model groups was based on the OpenGL development kit. The experiments were performed on a personal computer with a 1.7-GHz Intel Core i7 CPU, 4 GB of main memory and an NVIDIA QUADRO FX880M graphics card. Evaluating visualization results is very difficult, and the adequacy of a generalization and visualization result depends on various factors.

For this paper, we selected a typical district in Beijing as the experimental area. The total number of buildings was 56,783; these man-made models were mainly constructed using the 3ds Max/Maya software. The total size of the original texture images is 11.5 GB, and the size of the geometry mesh is about 1.5 GB; the total numbers of triangles and vertices are 106,524,968 and 319,574,904, respectively. After conflation, the volume of the textures, including the original textures, is 12.6 GB, and the geometry mesh increases to 1.86 GB through LOD generation; the total numbers of triangles and vertices increase to 131,025,710 and 393,077,130, respectively. The model clustering took 45 min, while the model conflation lasted 1 h and 24 min because of the large number of triangles that were sorted and removed.

During model conflation, the multiplication coefficient δ was 0.2. This value was tested for the building models in the Beijing district, and it must be retested in other regions. During rendering, the pixel threshold of the LOD was 120 pixels; when the screen-pixel size of a model fell below 120 pixels, the coarser conflated model was rendered in its place, and the more detailed models were used otherwise. Figure 17a shows the visualization effect from a distant viewpoint; more detailed models were rendered from closer viewpoints in Figure 17b,c.

Figure 17. Model visualization from the (a) far viewpoint; (b) medium viewpoint; and (c) near viewpoint.

Figure 18 shows views of the scene. Figure 18a,c show the rendering results of the generalized models, and Figure 18b,d show the rendering results of the original building models. Some obvious cracks exist in the conflated model because of the triangle removal, but these cracks can be kept from appearing by adjusting the LOD threshold during rendering; thus, the original texture was preserved and the rendering speed could be improved.

Figure 18. Three-dimensional building-model visualization: (a,c) conflation model with triangle removal; (b,d) original detailed-texture model.

The topological errors among the clusters barely influenced the final visual effects. Figure 19a–c show the visualization effects of different screen-pixel thresholds (60, 120 and 400 pixels, respectively) when using our hierarchical method. The group models of different resolutions were scheduled based on the camera position and the size of the model. When the camera moved, if the screen-pixel size of a group model exceeded the threshold, the corresponding more detailed models were scheduled and rendered in the scene; otherwise, the coarse group model was retained. As shown in Figure 19, coarser models were rendered with a larger threshold, while more detailed models were rendered with a smaller threshold at the same camera position.

Figure 20 illustrates the frame rates for rendering the building models with and without our hierarchical generalization method in the Beijing district with 56,783 buildings. These two figures demonstrate that our proposed approach effectively reduced the rendering time and struck a good balance between rendering efficiency and visual quality while maintaining the texture features. This method can notably improve the rendering speed for larger numbers of buildings during model visualization.
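Under the scheduling rule above (a group model is kept as long as its projected size stays below the screen-pixel threshold, and is otherwise replaced by its more detailed children), the selection can be sketched as a recursive traversal of the conflation tree. The node layout, the function names and the reuse of the screenPixelSize() helper from the previous sketch are illustrative assumptions, not the paper's actual data structures.

```cpp
#include <vector>

// Mat4, Vec3 and screenPixelSize() are taken from the previous sketch.

// Hypothetical node of the conflation tree: a coarse conflated mesh for the
// whole group plus the finer child groups (leaves are single buildings).
struct LODNode {
    Vec3 center;                      // bounding-sphere center of the group
    float radius;                     // bounding-sphere radius of the group
    int meshId;                       // conflated mesh of this group
    std::vector<LODNode*> children;   // finer groups / single buildings
};

void drawMesh(int /*meshId*/) { /* issue the OpenGL draw calls for the mesh (omitted) */ }

// Render the tree so that every drawn node either projects to fewer pixels
// than the threshold (120 px in the experiments) or is a leaf; larger nodes
// are refined into their more detailed children.
void renderLOD(const LODNode* node,
               const Mat4& modelMatrix, const Mat4& viewMatrix,
               float dNear, float dTop, float dBottom, float pixelThreshold) {
    float px = screenPixelSize(modelMatrix, viewMatrix, node->center, node->radius,
                               dNear, dTop, dBottom);
    if (px < pixelThreshold || node->children.empty()) {
        drawMesh(node->meshId);       // the coarse conflated model is sufficient
    } else {
        for (const LODNode* child : node->children)
            renderLOD(child, modelMatrix, viewMatrix,
                      dNear, dTop, dBottom, pixelThreshold);
    }
}
```

With this rule, raising the threshold makes nodes acceptable earlier in the traversal, which produces the coarser renderings of Figure 19c, while lowering it forces deeper refinement and more detailed models at the same camera position.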





Figure 19. Building-LOD-model visualization with different screen-pixel thresholds: (a) 60 pixels; (b) 120 pixels; (c) 400 pixels.

Figure 20. Comparison of the frame rates between rendering the original models and the LOD models.

A similar idea to our approach was proposed by Chang, Butkiewicz, Ziemkiewicz, Wartell, Ribarsky and Pollard [16]. These authors clustered buildings according to their distance and simplified the hulls for each cluster.

On the one hand, this method does not use texture information in the clustering, although texture can be used as an effective index for city-building clustering, as was shown in the previous section. On the other hand, the texture for each face was generated by placing an orthographic camera so that its near clipping plane lay on the face. This texture generation is very complex, and the original texture becomes faded; more detailed textures can be retained with our method.

Compared to existing methods [22,35], our method exhibits three significant advantages. First, the primary proximity graph is used together with an MST; consequently, this method can process intricate relationships in complex urban environments. In addition, this method can process discrete polygons. Second, texture is introduced into model generalization alongside an SOM-based classification method for texture analysis to assist three-dimensional-model clustering, and landmark detection is independently introduced during generalization. Third, the group models are conflated using a small-triangle-removal strategy that preserves the original textures and image coordinates.

5. Discussion and Conclusions

Building-model generalization is a complex problem that has been studied by researchers for many years. This paper introduced the texture features of 3D models into the generalization process and used an SOM-based classification method for texture analysis to assist three-dimensional-model clustering. The mean value and variance of the texture pixels were used to improve the classification accuracy. The results of an experimental comparison showed that this intelligent classification algorithm should be applied in local regions. In addition, this paper proposed a new hierarchical-clustering method. First, a proximity graph was constructed by CDT and visibility analysis; each edge of the graph represented a potential two-building group, which could be used for further linear and directional detection according to the Gestalt principles. Second, texture analysis and landmark detection were independently introduced during generalization. Third, models were further structurally analyzed using MST, linear detection and discrete-polygon clustering. The experimental results demonstrated the effectiveness of this algorithm.

This method is mainly intended for the visualization of large numbers of building models in VGEs and GIScience; many applications and systems incorporate 3D city models as an integral component to serve urban planning and redevelopment, facility management, security, telecommunications, location-based services, real-estate portals, and urban-related entertainment and education products [7]. However, the method need not be applied to building models in small regions, where building-model simplification is sufficient. On the one hand, the model conflation is very time-consuming because of the large number of triangles that must be sorted and removed; on the other hand, the data volume is larger than that of the original models, which may create transmission pressure. This method still requires further optimization because the semantic information of 3D models has not yet been considered. Semantic identification is mainly based on names and properties, which are matched by similarity [40]. Semantic information can also be used to identify significant landmark models. This information will be integrated into the clustering and generalization processes in future work.
Acknowledgments: The authors thank the editors and the anonymous reviewers for their valuable comments and suggestions, which have helped improve the content and the presentation of this article. This research was supported by the National Key Technology R&D Program of China (2015BAJ06B01), the Research Projects of Surveying and Mapping, Geographic Information Public Welfare Industry in China (201412003, 201512020), and the Fundamental Research Project of the Chinese Academy of Surveying and Mapping (7771720).

Author Contributions: Po Liu is mainly responsible for the writing of this paper, the main algorithm and the technical implementation. Chengming Li provided the main research idea and relevant technical and data support. Fei Li is mainly responsible for the design and technical guidance of the SOM-based texture classification algorithm.

Conflicts of Interest: The authors declare no conflict of interest.



References

1. Priestnall, G.; Jarvis, C.; Burton, A.; Smith, M.J.; Mount, N.J. Virtual Geographic Environments; John Wiley & Sons, Ltd.: Etobicoke, ON, Canada, 2009; pp. 255–288.
2. Lin, H.; Chen, M.; Lu, G.; Zhu, Q.; Gong, J.; You, X.; Wen, Y.; Xu, B.; Hu, M. Virtual Geographic Environments (VGEs): A New Generation of Geographic Analysis Tool. Earth-Sci. Rev. 2013, 126, 74–84.
3. Goodchild, M.F.; Guo, H.; Annoni, A.; Bian, L.; de Bie, K.; Campbell, F.; Craglia, M.; Ehlers, M.; van Genderen, J.; Jackson, D. Next-generation digital earth. Proc. Natl. Acad. Sci. USA 2012, 109, 11088–11094.
4. Ehlers, M. Advancing Digital Earth-Beyond the Next Generation. Int. J. Digit. Earth 2014, 7, 3–16.
5. Liu, P.; Gong, J.; Yu, M. Graphics processing unit-based dynamic volume rendering for typhoons on a virtual globe. Int. J. Digit. Earth 2014, 8, 431–450.
6. Liu, P.; Gong, J.; Yu, M. Visualizing and analyzing dynamic meteorological data with virtual globes: A case study of tropical cyclones. Environ. Model. Softw. 2015, 64, 80–93.
7. Liang, J.; Shen, S.; Gong, J.; Liu, J.; Zhang, J. Embedding user-generated content into oblique airborne photogrammetry-based 3D city model. Int. J. Geogr. Inf. Sci. 2017, 31, 1–16.
8. Liang, J.; Gong, J.; Li, W.; Ibrahim, A.N. A visualization-oriented 3D method for efficient computation of urban solar radiation based on 3D–2D surface mapping. Int. J. Geogr. Inf. Sci. 2014, 28, 780–798.
9. Danovaro, E.; De Floriani, L.; Magillo, P.; Puppo, E.; Sobrero, D. Level-of-detail for data analysis and exploration: A historical overview and some new perspectives. Comput. Graph. 2006, 30, 334–344.
10. Fan, H.; Meng, L. A three-step approach of simplifying 3D buildings modeled by CityGML. Int. J. Geogr. Inf. Sci. 2012, 26, 1091–1107.
11. Noskov, A.; Doytsher, Y. Urban Perspectives: A Raster-Based Approach to 3D Generalization of Groups of Buildings. In Proceedings of the GEOProcessing 2013, the Fifth International Conference on Advanced Geographic Information Systems, Applications, and Services, Nice, France, 24 February–1 March 2013; pp. 67–72.
12. Zhao, J.; Zhu, Q.; Du, Z.; Feng, T.; Zhang, Y. Mathematical morphology-based generalization of complex 3D building models incorporating semantic relationships. ISPRS J. Photogramm. Remote Sens. 2012, 68, 95–111.
13. Forberg, A. Generalization of 3D building data based on a scale-space approach. ISPRS J. Photogramm. Remote Sens. 2007, 62, 104–111.
14. Yang, L.; Zhang, L.; Ma, J.; Xie, J.; Liu, L. Interactive visualization of multi-resolution urban building models considering spatial cognition. Int. J. Geogr. Inf. Sci. 2011, 25, 5–24.
15. Karl-heinrich, A. Level of Detail Generation of 3D Building Groups by Aggregation and Typification. In Proceedings of the XXII International Cartographic Conference, A Coruña, Spain, 9–16 July 2005.
16. Chang, R.; Butkiewicz, T.; Ziemkiewicz, C.; Wartell, Z.; Ribarsky, W.; Pollard, N. Legible Simplification of Textured Urban Models. IEEE Comput. Graph. Appl. 2008, 28, 27–36.
17. Lynch, K. The Image of the City; MIT Press: Cambridge, MA, USA, 1960; Volume 1.
18. Glander, T.; Döllner, J. Abstract representations for interactive visualization of virtual 3D city models. Comput. Environ. Urban Syst. 2009, 33, 375–387.
19. Kada, M. Aggregation of 3D Buildings Using a Hybrid Data Approach. Cartogr. Geogr. Inf. Sci. 2011, 38, 153–160.
20. Guercke, R.; Götzelmann, T.; Brenner, C.; Sester, M. Aggregation of LoD 1 building models as an optimization problem. ISPRS J. Photogramm. Remote Sens. 2011, 66, 209–222.
21. He, S.; Moreau, G.; Martin, J.-Y. Footprint-Based 3D Generalization of Building Groups for Virtual City Visualization. In Proceedings of the GEOProcessing 2012, the Fourth International Conference on Advanced Geographic Information Systems, Applications, and Services, Valencia, Spain, 30 January–4 February 2012; pp. 177–182.
22. Zhang, L.; Hao, D.; Dong, C.; Zhen, W. A spatial cognition-based urban building clustering approach and its applications. Int. J. Geogr. Inf. Sci. 2013, 27, 721–740.
23. Garland, M.; Heckbert, P.S. Surface simplification using quadric error metrics. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 3–8 August 1997; pp. 209–216.
24. Garland, M.; Heckbert, P.S. Simplifying surfaces with color and texture using quadric error metrics. In Proceedings of the Conference on Visualization '98, Research Triangle Park, NC, USA, 18–23 October 1998; pp. 263–269.
25. Hoppe, H. New quadric metric for simplifying meshes with appearance attributes. In Proceedings of the Conference on Visualization '99: Celebrating Ten Years, San Francisco, CA, USA, 24–29 October 1999; pp. 59–66.
26. Lindstrom, P.; Turk, G. Image-driven simplification. ACM Trans. Graph. 2000, 19, 204–241.
27. Patricios, N. Urban design principles of the original neighborhood concepts. Urban Morphol. 2002, 6, 21–32.
28. Regnauld, N. Contextual building typification in automated map generalization. Algorithmica 2001, 30, 312–333.
29. Li, Z.; Yan, H.; Ai, T.; Chen, J. Automated building generalization based on urban morphology and Gestalt theory. Int. J. Geogr. Inf. Sci. 2004, 18, 513–534.
30. Mao, B.; Ban, Y.; Harrie, L. A multiple representation data structure for dynamic visualisation of generalised 3D city models. ISPRS J. Photogramm. Remote Sens. 2011, 66, 198–208.
31. Mao, B.; Harrie, L.; Ban, Y. Detection and typification of linear structures for dynamic visualization of 3D city models. Comput. Environ. Urban Syst. 2012, 36, 233–244.
32. Zhang, L.; Han, C.; Zhang, L.; Zhang, X.; Li, J. Web-based visualization of large 3D urban building models. Int. J. Digit. Earth 2014, 7, 53–67.
33. Xie, J.; Zhang, L.; Li, J.; Wang, H.; Yang, L. Automatic simplification and visualization of 3D urban building models. Int. J. Appl. Earth Obs. Geoinf. 2012, 18, 222–231.
34. Cabral, M.; Lefebvre, S.; Dachsbacher, C.; Drettakis, G. Structure-Preserving Reshape for Textured Architectural Scenes. In Proceedings of the Computer Graphics Forum, Munich, Germany, 30 March–3 April 2009; pp. 469–480.
35. Zhang, M.; Zhang, L.; Takis Mathiopoulos, P.; Xie, W.; Ding, Y.; Wang, H. A geometry and texture coupled flexible generalization of urban building models. ISPRS J. Photogramm. Remote Sens. 2012, 70, 1–14.
36. Glander, T.; Döllner, J. Automated cell based generalization of virtual 3D city models with dynamic landmark highlighting. In Proceedings of the 11th ICA Workshop on Generalization and Multiple Representation, Montpellier, France, 20–21 June 2008.
37. Fan, H.; Meng, L.; Jahnke, M. Generalization of 3D Buildings Modelled by CityGML. In Advances in GIScience; Sester, M., Bernard, L., Paelke, V., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 387–405.
38. Ganitseva, J.; Coors, V. Automatic landmark detection for 3D urban models. Archives 2010, XXXVIII, 37–43.
39. Zhang, X.; Ai, T.; Stoter, J.; Kraak, M.-J.; Molenaar, M. Building pattern recognition in topographic data: Examples on collinear and curvilinear alignments. GeoInformatica 2013, 17, 1–33.
40. Al-Bakri, M.; Fairbairn, D. Assessing similarity matching for possible integration of feature classifications of geospatial data from official and informal sources. Int. J. Geogr. Inf. Sci. 2012, 26, 1437–1456.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
