International Journal of Geo-Information

Article

Semi-Automatic 3D City Model Generation from Large-Format Aerial Images

Mehmet Buyukdemircioglu 1, Sultan Kocaman 1,* and Umit Isikdag 2

1 Department of Geomatics Engineering, Hacettepe University, Ankara 06800, Turkey; [email protected]
2 Department of Informatics, Mimar Sinan Fine Arts University, Istanbul 34427, Turkey; [email protected]
* Correspondence: [email protected]; Tel.: +90-312-780-5792

Received: 28 June 2018; Accepted: 20 August 2018; Published: 22 August 2018

 

Abstract: 3D city models have become crucial for better city management, and can be used for various purposes such as disaster management, navigation, solar potential computation, and planning simulations. 3D city models are not only visual models; they can also be used for thematic queries and analyses with the help of semantic data. The models can be produced using different data sources and methods. In this study, vector basemaps and large-format aerial images, which are regularly produced in accordance with the large-scale map production regulations in Turkey, have been used to develop a workflow for semi-automatic 3D city model generation. The aim of this study is to propose a procedure for the production of 3D city models from existing aerial photogrammetric datasets without additional data acquisition efforts and/or costly manual editing. To prove the methodology, a 3D city model has been generated with semi-automatic methods at LoD2 (Level of Detail 2) of CityGML (City Geography Markup Language) using the data of the study area over Cesme Town of Izmir Province, Turkey. The generated model is automatically textured, and additional developments have been performed for 3D visualization of the model on the web. The problems encountered throughout the study and the approaches to solve them are presented here. Consequently, the approach introduced in this study yields promising results for low-cost 3D city model production with the data at hand.

Keywords: 3D city model; visualization; DSM; DTM; aerial imagery

ISPRS Int. J. Geo-Inf. 2018, 7, 339; doi:10.3390/ijgi7090339

1. Introduction

According to the United Nations World Urbanization Prospects [1], more than half of the world's population (54%) currently lives in cities, and this number will increase to 66% by 2050. For the efficient management of large cities and urban populations, new technologies must be developed so that higher living standards can be achieved. Geographical Information Systems (GIS) are indispensable platforms for the efficient management of cities, and 3D modeling and visualization have become a crucial component of GIS. A 3D city model is a digital representation of the buildings and objects in an urban area, as well as the terrain model [2]. 3D city models usually consist of digital terrain models (DTMs), building models, street-space models, and green-space models [3]. They can also be used to perform simulations of different scenarios in virtual environments [4]. The focus of 3D GIS lies in the management of comprehensive, wide-area, georeferenced 3D models for domains such as architecture, engineering, construction, and facility management. 3D city models are becoming important tools for urban decision-making processes and information systems, especially in planning, simulation, documentation, heritage planning, mobile network planning, and navigation. 3D city models are being produced and developed



in many countries nationwide. The Berlin [5], Vienna [6], and Konya [7] city models can be cited as pioneering examples. 3D city model reconstruction can be performed from different data sources, such as LiDAR point clouds [8], airborne images [9,10], satellite images [11,12], UAV (unmanned aerial vehicle) images, or a combination of DSM (digital surface model) data with cadastral maps [13]. Buildings are the base elements of such models, and building reconstruction is mainly done in three process steps: building detection, extraction, and reconstruction [14]. The reconstruction of buildings is the process of generating a model using features obtained from building detection and extraction. With photogrammetric methods, building rooftops can be measured manually from stereo pairs, extracted by edge detection methods using their radiometric characteristics, or obtained by classification methods. They can also be extracted from point clouds generated by photogrammetric image matching methods [15] or LiDAR sensors. Additionally, combining different data sources is a popular approach [16,17]. CityGML is an XML-based open data format for storing and exchanging 3D city models [18]. This format has been adopted by the Open Geospatial Consortium as an international standard [19]. CityGML has a structure suitable for representing city models together with semantic data. These semantic features allow users to perform functions that are not possible without the metadata provided by CityGML. Modules in CityGML reflect the appearance, spatial, and thematic characteristics of an object [19]. In addition to the geometric information of the city model, CityGML contains semantic and thematic information. These data enable users to perform new simulations, queries, and analyses on 3D city models [20]. There are five different levels of detail (LoD) defined in the CityGML data schema.
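To make the semantic side of CityGML concrete, the following sketch parses a tiny, hand-written CityGML 2.0 fragment with Python's standard library and reads building attributes. The fragment and its attribute values are illustrative only (not taken from the paper's dataset); the namespaces are those of the CityGML 2.0 core and building modules.

```python
# Minimal sketch: reading building semantics from a CityGML 2.0 fragment
# using only the Python standard library. The document below is an
# illustrative toy example, not data from the Cesme model.
import xml.etree.ElementTree as ET

CITYGML = """<?xml version="1.0" encoding="UTF-8"?>
<core:CityModel xmlns:core="http://www.opengis.net/citygml/2.0"
                xmlns:bldg="http://www.opengis.net/citygml/building/2.0"
                xmlns:gml="http://www.opengis.net/gml">
  <core:cityObjectMember>
    <bldg:Building gml:id="B_0001">
      <bldg:function>residential</bldg:function>
      <bldg:roofType>gabled</bldg:roofType>
      <bldg:measuredHeight uom="m">7.5</bldg:measuredHeight>
    </bldg:Building>
  </core:cityObjectMember>
</core:CityModel>"""

NS = {
    "core": "http://www.opengis.net/citygml/2.0",
    "bldg": "http://www.opengis.net/citygml/building/2.0",
    "gml": "http://www.opengis.net/gml",
}

root = ET.fromstring(CITYGML)
buildings = root.findall(".//bldg:Building", NS)
for b in buildings:
    # Thematic attributes like function and measuredHeight are what make
    # queries and analyses possible beyond pure visualization.
    height = float(b.find("bldg:measuredHeight", NS).text)
    print(b.get(f"{{{NS['gml']}}}id"), b.find("bldg:function", NS).text, height)
```

The same pattern scales to querying a full city model export, e.g., selecting all residential buildings above a height threshold.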
As the LoD level increases, the model represents the structures with more architectural detail and complexity, so different levels of detail can be used for different purposes. CityGML allows the same objects to be visualized at different levels of detail; thus, analyses can be performed at different LoDs [20]. A building in LoD0 is represented by 2.5D polygons either at the roof-level height or at the ground-level height [21]. LoD0 representations can also be called building footprints. In LoD1, the building is represented as a solid model or a multi-faced block model without any roof structures. The building can be separated into different building surfaces called "BuildingParts". LoD2 adds generalized roof structures to LoD1. In addition, thematic details can be used to represent the boundary surfaces of a building. The main differences between LoD1 and LoD2 are that the outer walls and roof of a building can be represented by more than one face, and that the curved geometry of the building can be represented in the model structure [22]. LoD3 extends LoD2 with openings (windows, doors), detailed roof structures (roof windows, chimneys, rooftops), and detailed façade constructions. LoD4 is the highest level of detail and adds the interior architecture of the building. At this level of detail, all interior details, such as furniture and rooms, are represented with textures. Building reconstruction methods can be divided into two categories: model-driven and data-driven [23]. In data-driven methods, the algorithm reconstructs buildings by using the DSM as the main data source and analyzes it as a whole without associating it with any set of parameters [24]. The aim of the model-driven method is to match the roof geometry generated from the DSM with the roof types in its library [8], and to use the best-matched roof type with preliminary assumptions made about the roof [24].
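The model-driven matching idea can be sketched in a few lines: compare a DSM-derived roof profile against a small library of parametric roof types and keep the best fit. This is a deliberately simplified 1D cross-section with synthetic numbers; a production tool such as BuildingReconstruction operates on 2.5D DSM patches per footprint.

```python
# Illustrative sketch of model-driven roof reconstruction: fit a DSM
# cross-section against a tiny library of parametric roof models and
# select the type with the lowest RMSE. Toy data, 1D only.
import math

def flat_roof(x, width, eave, ridge):
    return eave  # constant height along the cross-section

def gable_roof(x, width, eave, ridge):
    # height rises linearly to the ridge at the center, then falls
    half = width / 2.0
    return eave + (ridge - eave) * (1.0 - abs(x - half) / half)

def rmse(profile, model, width, eave, ridge):
    n = len(profile)
    xs = [i * width / (n - 1) for i in range(n)]
    return math.sqrt(sum((z - model(x, width, eave, ridge)) ** 2
                         for x, z in zip(xs, profile)) / n)

# Synthetic DSM cross-section of a gabled roof (width 10 m, eaves at 6 m,
# ridge at 9 m) with small alternating "matching noise"
width, eave, ridge = 10.0, 6.0, 9.0
profile = [gable_roof(i * width / 20, width, eave, ridge) + 0.05 * ((-1) ** i)
           for i in range(21)]

library = {"flat": flat_roof, "gable": gable_roof}
best = min(library, key=lambda k: rmse(profile, library[k], width, eave, ridge))
print("best-matching roof type:", best)  # gable
```

The guarantee mentioned in the text follows from this design: whichever library model wins, its geometry is topologically valid by construction, but a roof shape absent from the library can only be approximated.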
This approach guarantees that the reconstructed roof model is topologically correct, though problems may occur if the roof shapes do not have any match in the roof library [23]. Other studies using model-driven building reconstruction can be found in [24–26]. The method proposed here uses a model-driven approach. Manual mensuration of building geometries, especially in large cities, requires a great deal of time, money, and labor. Photogrammetric approaches are widely used for 3D extraction of surfaces and features, and photogrammetry is one of the most preferred technologies for data production for 3D city models [27]. With the help of photogrammetric methods, feature geometries and digital elevation models (DEMs) can be extracted automatically from stereo images [28]. Semi-automatic reconstruction of buildings can be done by combining DSMs with the cadastral footprints so that manual production time and labor


can be reduced. Such DSMs are mostly obtained by stereo and/or multi-image matching approaches and by using aerial LiDAR sensors. So far, it is not possible to reconstruct building geometries fully automatically with high accuracy and level of detail (i.e., resolution). Fully-automatic 3D city model generation is still an active research agenda that attracts the attention of many researchers, and there are studies on automation of the production and on reliability improvements [29]. One of the main criteria for 3D city model production is to create models that are close to reality both in geometry and in appearance (visual fidelity). Textures obtained from optical images are used to provide visually correct models quickly and economically. There are also cases where untextured building models are in use; however, 3D city models should be textured where visualization is in the foreground and the user needs to get an idea of the area at first sight. Regardless of the level of detail in their geometries, untextured 3D city models will always be visually incomplete. With the development of aerial photogrammetric systems equipped with nadir and oblique camera systems, automatic texturing of 3D city models has become feasible. Nadir cameras are the most suitable for texturing the roofs of buildings, whereas oblique cameras are better at texturing the building façades. Accordingly, for automatic texturing of 3D city models, the best strategy is to select the rooftop textures from the nadir images and the building façades from the oblique images. The view of the façades can be captured clearly with this approach, and the spatial resolution is also higher. As with nadir image acquisition, the season and the time of the flight affect the radiometric quality of the oblique images. Früh et al. [30] used an approach that combines a 3D city model obtained from aerial and ground-based laser scans with oblique aerial imagery for texture mapping.
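At its core, texture mapping requires projecting 3D model vertices into a photo to obtain their texture coordinates. The following is a minimal pinhole-camera sketch with synthetic numbers: no camera rotation, no lens distortion, and no self-occlusion handling, all of which a real texturing pipeline must add.

```python
# Minimal pinhole-projection sketch for facade texturing: find where a 3D
# model vertex lands in the image plane. Synthetic values; a real pipeline
# also applies camera rotation, lens distortion, and occlusion tests.
def project(point, cam_pos, focal_px):
    # camera at cam_pos looking down the +Y axis; no rotation for simplicity
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]   # depth in front of the camera
    dz = point[2] - cam_pos[2]
    return (focal_px * dx / dy, focal_px * dz / dy)

# A facade corner 20 m in front of the camera, 5 m to the right, 3 m up
u, v = project((5.0, 20.0, 3.0), (0.0, 0.0, 0.0), focal_px=1000.0)
print(u, v)   # 250.0 150.0
```

Repeating this for all vertices of a façade polygon yields the image region to crop as its texture patch.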
Früh and Zakhor [31] presented a method that merges ground-based and airborne laser scans and images. The additional terrestrial data used in the method naturally leads to more detailed models, but also increases the size of the datasets considerably. A more effective technique to model the façades is to extract textures from terrestrial images and place them on the polygonal models. However, this process is done manually, and texturing a single building is a tedious task that can easily take up to several hours. For a large number of building façades, such an approach is not efficient and usually not applicable. Kada et al. [32] presented an approach that automatically extracts façade textures from terrestrial photographs and maps them to geo-referenced 3D building models. In contrast to standard viewers, this technique allows modeling of even complex geometric effects, such as self-occlusion or lens distortion, and enables very fast and flexible on-the-fly generation of façade textures from real-world imagery. Aerial photogrammetry is a widely used approach for map production throughout the world. Aerial LiDAR sensors are not yet commonly used for point cloud acquisition due to their high initial costs [33]. In addition, although LiDAR sensors can provide highly accurate and dense elevation data, the spectral and radiometric resolutions of the intensity images are quite low in comparison to large-format aerial cameras, and are thus not suitable for feature extraction for basemap production purposes. Stereo data acquisition with large-format aerial cameras is still the most suitable approach for 3D city model generation, as point clouds and other types of information can be extracted from them with sufficient density, accuracy, and resolution (level of detail, feature types, etc.) at the city scale.
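The point densities achievable with dense image matching follow directly from the ground sampling distance. As a quick arithmetic check: at a 10 cm GSD with one 3D point per matched pixel, a square meter contains (1 m / 0.10 m)² = 100 points.

```python
# Back-of-the-envelope density check: one matched 3D point per pixel at a
# given ground sampling distance (GSD).
gsd_m = 0.10                                  # 10 cm GSD
points_per_m2 = round((1.0 / gsd_m) ** 2)     # points per square meter
print(points_per_m2)                          # 100
```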
It is possible to produce a point cloud with a density of 100 points per square meter from stereo aerial images with a resolution of 10 cm (one matched point per pixel). Such a point cloud is denser than those of most LiDAR datasets in practical projects [29]. Nowadays, large cities comprise hundreds of thousands of buildings, and manual extraction of their geometries for city model generation purposes is very costly in terms of time and money. Considering the amount and variety of the data obtained with aerial photogrammetric methods over the decades, the possibility of using these datasets for automatic or semi-automatic 3D city model generation without additional data acquisition is discussed in this study. A new approach is introduced for this purpose and applied using the data of a photogrammetric flight mission carried out over Cesme town near Izmir in 2016. The speed, accuracy, and capacity of existing technologies have been investigated within the study to analyze the challenges and open issues in the overall city


model generation process. The generated model is then used to create a 3D GIS environment on a web platform. Queries, analyses, or different simulations can be performed on this model with the help of the semantic data. The building reconstruction approach has been validated using reference data, and the problems encountered during the processing and the potential improvements to the process have been investigated. The suitability of the basemaps and aerial imagery and the outputs have also been analyzed, and the results are presented here. In addition, using the proposed method, it is possible to reconstruct the cities of the past with little effort using the datasets obtained from earlier aerial photogrammetric missions.

2. Materials and Methods

2.1. Study Area and Dataset

Izmir is the largest metropolitan city in Turkey's Aegean Region. Cesme is a coastal town located 85 km west of Izmir and also a very popular holiday resort with several historical sites. A large majority of the structures in the Cesme area are in the Aegean architecture style, and most of the buildings are located on the coastline and are usually at low elevations. The highest building in Cesme has a height of approximately 45 m. A general view of the study area is given in Figure 1.

Figure 1. A general view of the study area in Cesme, Izmir on the Google Earth image.

The study area covers the whole Cesme region with a size of 260 km². Within the project, a total of 4468 aerial photos were taken in May 2016 with 80% forward overlap and 60% lateral overlap using the UltraCam Falcon large-format digital camera from Vexcel Imaging, Graz, Austria [34]. The photos have been taken from an average altitude of 1500 m above mean sea level and have an average GSD (ground sampling distance) of 8 cm and a footprint of 1400 m × 900 m. A total of 306 ground control points (GCPs) have been established prior to the flight with an interval of 1.5 km. Figure 2 shows the level of detail in the images and the ground signalization of one control point. The block configuration is provided in Figure 3. Orthometric heights have been measured by a leveling method, and planimetric coordinates have been measured using Global Navigation Satellite System (GNSS) devices. The GCPs have been used in the photogrammetric bundle block adjustment process. Manual feature extraction for basemap production from stereo aerial images has been done by photogrammetry operators. Contour lines (1 m, 2 m, 5 m, 10 m), slopes, and breaklines have been measured along with the generated photogrammetric height points, which have later been converted to a TIN (triangular irregular network) dataset and used for grid DTM generation. Lastly, orthophotos with 10 cm pixel size have been generated for the whole area.
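The flight parameters above allow a rough plausibility check of the photo count. The sketch below assumes the 900 m footprint side lies along-track and the 1400 m side across-track (the orientation is not stated in the text) and ignores cross strips and block margins, so it only gives a lower bound.

```python
# Rough geometric estimate of the minimum photo count from the stated
# footprint and overlaps. Assumed: 900 m side along-track, 1400 m side
# across-track; cross strips and margins ignored, so this is a lower bound.
area_km2 = 260.0
footprint_along_m, footprint_across_m = 900.0, 1400.0
forward_overlap, lateral_overlap = 0.80, 0.60

# New ground covered by each photo once overlaps are subtracted
new_ground_per_photo_m2 = (footprint_along_m * (1 - forward_overlap) *
                           footprint_across_m * (1 - lateral_overlap))
min_photos = area_km2 * 1e6 / new_ground_per_photo_m2
print(round(min_photos))   # ~2579; the mission flew 4468 photos in total
```

The gap between this lower bound and the actual 4468 photos is consistent with cross strips, turn margins, and coastline geometry.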


Figure 2. Level of detail in one image (left) and the ground signalization of one GCP (right).

Figure 3. Aerial photogrammetric image block (left) and the GCP distribution (right) over the study area in Cesme, Izmir.

2.2. The Overall Workflow

The overall workflow of the study involves photogrammetric data production and building reconstruction together with texturing, as depicted in Figure 4. The photogrammetric processing mainly includes flight planning, GCP signalization on the ground, image acquisition and triangulation, and manual feature extraction for basemap production in CAD (computer aided design) format. The buildings have been reconstructed automatically using the CAD data. A post-photogrammetric data collection process has been performed to generate a dense DSM of the area with a 30 cm grid interval using Agisoft Photoscan Pro, St. Petersburg, Russia [35]. All contour lines, breaklines, and photogrammetric height points measured in Microstation from Bentley Systems, Exton, PA, USA [36] and saved in CAD format (Microstation DGN) have been converted into TIN datasets and grid DTMs using FME Workbenches developed by Safe Software, Surrey, BC, Canada [37]. The buildings have been reconstructed using the BuildingReconstruction (BREC) software from VirtualCity Systems GmbH, Berlin, Germany [38], and the generated models have been converted into CityGML format as output. The texturing has been done automatically using the CityGRID Texturizer software from UVM Systems, Klosterneuburg, Austria [39]. As a last step, the textured models have been converted into the CityGML format to ensure data interoperability between different systems.


Figure 4. Overall workflow of 3D city model implementation.

3. Implementation of the 3D City Model

3.1. Image Georeferencing with a Photogrammetric Bundle Block Adjustment

Exterior orientation parameters (EOPs) of the aerial photos have already been measured during the photogrammetric flight mission using a GNSS antenna and an INS (inertial navigation system). In order to achieve sub-pixel accuracy, a photogrammetric triangulation process with self-calibration has been applied using GCPs and tie points, and the existing GNSS and INS measurements have been used as weighted observations in the adjustment. The project area has been split into seven sub-blocks for optimization of the bundle block adjustment process (Figure 5). The image measurements of the GCPs and initial tie points in strip and cross-strip directions have been done manually by operators, and further tie points have been extracted automatically with the Intergraph Z/I automatic triangulation module [40]. The bundle adjustment has been performed using the same software with self-calibration and blunder removal by statistical analysis. The systematic error modeling with additional parameters includes atmospheric refraction, Earth curvature, and camera distortion (radial and de-centering). The number of images in each block, the numbers and properties of the GCPs, the size of the block area, and the accuracy measures (sigma naught of the adjustment and the standard deviations of GCPs in image space) are shown in Table 1.
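The sigma naught values reported in Table 1 are the a posteriori standard deviations of unit weight of each adjustment. As a minimal sketch of how that statistic is formed from weighted residuals (with synthetic numbers, not the paper's adjustment): sigma_0 = sqrt(vᵀPv / r), where r is the redundancy (observations minus unknowns).

```python
# Sketch of the a posteriori sigma naught of a least-squares adjustment:
# sigma_0 = sqrt(v' P v / r). Residuals and redundancy here are synthetic
# toy values, not taken from the bundle adjustment in the paper.
import math

residuals_um = [1.2, -0.8, 0.5, -1.5, 0.9, -0.3, 1.1, -0.7]  # microns
weights = [1.0] * len(residuals_um)                           # unit weights

vtpv = sum(w * v * v for v, w in zip(residuals_um, weights))
redundancy = len(residuals_um) - 3     # e.g., 3 unknowns in this toy problem
sigma0 = math.sqrt(vtpv / redundancy)
print(round(sigma0, 2))
```

Values near the image measurement precision (here on the order of 1–2 microns, as in Table 1) indicate a consistent adjustment.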

Figure 5. Photogrammetric blocks generated for the area.


Table 1. Photogrammetric block properties and triangulation results.

Sub-Block No   No. of Images   GCPs Full (XYZ)   GCPs Height (Z)   Sigma Naught (microns)   GCP Image σx (microns)   GCP Image σy (microns)
1              424             34                -                 1.9                      1.3                      1.0
2              661             51                1                 1.7                      1.1                      0.8
3              287             22                -                 1.5                      0.9                      0.7
4              745             55                3                 1.6                      1.0                      0.8
5              751             61                -                 1.8                      1.1                      0.9
6              807             62                1                 1.2                      1.0                      0.8
7              793             61                1                 1.4                      0.8                      0.7

3.2. Building Footprint and Attribute Generation from Basemaps 3.2. Building Footprint and Attribute Generation from Basemaps Although aerial photographs contain almost all visible information related to the surface of the Although aerial photographs contain almost all visible information related to the surface of the project area, they cannot be compared to traditional vector maps before they are interpreted by humans project area, they cannot be compared to traditional vector maps before they are interpreted by or computers. Any good map user can hardly understand and interpret aerial photographs directly humans or computers. Any good map user can hardly understand and interpret aerial photographs and image interpretation for map production is still an expert task. A vector map is an abstraction directly and image interpretation for map production is still an expert task. A vector map is an of selected details of the scene/project area. Figure 6 shows an overlay of an aerial orthoimage and abstraction of selected details of the scene/project area. Figure 6 shows an overlay of an aerial manually extracted features stored as vectors. In addition, maps contain symbols and text as further orthoimage and manually extracted features stored as vectors. In addition, maps contain symbols information. Interpretation, analysis, implementation, and use of vector map information are easier and text as further information. Interpretation, analysis, implementation, and use of vector map than orthophotos. Details and attribute information in vector maps are gathered by models from information are easier than orthophotos. Details and attribute information in vector maps are the stereo analysis so that they are clearer, understandable, and interpretable. Feature points and gathered by models from the stereo analysis so that they are clearer, understandable, and lines showing the terrain characteristics (e.g., creeks, ridges, etc.) are also produced manually or interpretable. 
Feature points and lines showing the terrain characteristics (e.g., creeks, ridges, etc.) semi-automatically during the processing of aerial photos, such as contour lines, DTM grid points, are also produced manually or semi-automatically during the processing of aerial photos, such as break-lines and steep slopes. In addition, information stored in vector maps have been extracted to contour lines, DTM grid points, break-lines and steep slopes. In addition, information stored in vector create an attribute scheme for buildings in this study. maps have been extracted to create an attribute scheme for buildings in this study.

Figure 6. Manual feature extraction from stereo aerial images.

BuildingReconstruction, which is used for 3D city model generation in this study, requires building footprints and attributes as input in ESRI Shapefile (.shp) format. The basemaps produced in aerial photogrammetric projects are often stored in CAD formats as standard, and every mapping project generally contains more than 200 layers. The data and information stored in these layers are usually

ISPRS Int. J. Geo-Inf. 2018, 7, 339

adequate for extracting the building footprints and a few types of attributes. The building footprints used here can be considered as roof boundaries, since it is usually not possible to see the building footprints on the ground in aerial images. The buildings are collected and stored in different CAD layers, such as residential, commercial, public, school, etc., according to their use types. In addition to the buildings, other structures, such as sundries, lighting poles, or roads, are also measured by photogrammetry operators.

In BuildingReconstruction, it is required to store the building geometries and attributes in a single ESRI shapefile. These geometries and attributes are later converted into CityGML format while exporting the building geometry after the reconstruction is completed. The attributes can be used to implement queries on the reconstructed city model. However, the CAD files contain just vector data and do not include the attributes as text assigned to geometries.
While converting the CAD data into shapefile format, some implicit attributes of the buildings have been extracted using ETL (extract-transform-load) transformers of the FME software. As an example, the number of building floors is stored in a different CAD layer as text (also a vector unit), and their locations fall inside the building polygons, as can be seen in Figure 6. A spatial transformer (i.e., “SpatialFilter” of FME) is used to transform these numbers into the “number of floors” attribute of every building. The transformer compares two sets of features to see if their spatial relationships meet the selected test conditions (i.e., filter is within candidate), and if a number (i.e., text) is located inside a polygon, it is assigned as an attribute to that polygon (i.e., building). In addition, center coordinates of building footprints have been extracted and saved as new attributes using another spatial operation (i.e., CenterPointExtractor).
A total of eight attributes have been extracted from the CAD layers using the spatial ETLs and are ready to be input into BuildingReconstruction. A part of the project area with all CAD layers and the building footprints extracted thereof is given in Figure 7.
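The spatial-join logic described above can be sketched in a few lines. The following is a minimal stand-in for FME's “SpatialFilter” and “CenterPointExtractor” transformers; the footprints, labels, and attribute names are illustrative, not from the Cesme dataset:

```python
# Sketch of the attribute-extraction step: floor-count labels (text placed
# inside a footprint in the CAD file) are assigned as a "floors" attribute
# to the polygon that contains them, and the footprint centre is stored as
# two extra attributes. All data below is invented for illustration.

def point_in_polygon(pt, poly):
    """Ray-casting test: is point pt inside the polygon (list of (x, y))?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def centroid(poly):
    """Simple vertex-average centre point (sufficient for attribute tagging)."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def assign_attributes(footprints, floor_labels):
    """footprints: {id: polygon}; floor_labels: [(point, text)], e.g. ((3, 2), '4')."""
    records = []
    for fid, poly in footprints.items():
        rec = {"id": fid, "floors": None}
        rec["center_x"], rec["center_y"] = centroid(poly)
        for pt, text in floor_labels:
            if point_in_polygon(pt, poly):  # "SpatialFilter": label within polygon
                rec["floors"] = int(text)
        records.append(rec)
    return records

footprints = {"B1": [(0, 0), (10, 0), (10, 8), (0, 8)],
              "B2": [(20, 0), (28, 0), (28, 6), (20, 6)]}
labels = [((5, 4), "4"), ((24, 3), "2")]
print(assign_attributes(footprints, labels))
```

FME performs the same containment test per feature pair; the sketch simply makes the “is within” predicate and the centroid extraction explicit.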

Figure 7. A part of the basemap with all CAD layers (left) and the building footprints extracted from a set of them (right).

3.3. Digital Terrain Model (DTM) Extraction

DTMs are 3D representations of the terrain elevations on the bare Earth's surface. In this study, DTMs are required to make sure that buildings are located exactly on the terrain, not under or above it. The DTM is also used to calculate the base height of a building from the terrain. The resolution requirements for the DTMs are substantially lower than for the DSMs in the methodology proposed here. A grid resolution between 1 and 5 m is usually sufficient; a general recommendation from BuildingReconstruction is to use a grid interval of 1 m. The software works best with tile-based raster data, since raw point cloud data can be very massive and difficult to process. Figure 8 shows an overview of the manually extracted terrain features and a zoomed view.


Figure 8. General view of the manually extracted terrain features (left) and close view (right).

Within the photogrammetric flight project, contour lines with different intervals (i.e., 1 m, 2 m, 5 m, 10 m), breaklines, and grid DTM points have been manually measured using the Bentley MicroStation software by photogrammetry operators. More than 1.2 million height points and thousands of contour lines and breaklines have been merged and converted into a TIN dataset and a single DTM with 1-m grid interval (Figure 9).
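The conversion from a TIN to a regular grid rests on per-triangle interpolation: each 1-m grid node takes its height from the TIN facet that contains it. A minimal sketch of that grid-node step, with an illustrative TIN facet rather than the Cesme data, could look like this:

```python
# Sketch of TIN-to-grid interpolation: the height of a grid node is the
# barycentric-weighted combination of the three vertex heights of the
# containing TIN triangle. Triangle and point values are illustrative.

def interpolate_height(p, tri):
    """tri: three (x, y, z) TIN vertices; p: (x, y) grid node inside the triangle."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * z1 + w2 * z2 + w3 * z3

# One TIN facet (heights in metres) and a grid node on it:
tri = ((0.0, 0.0, 10.0), (10.0, 0.0, 14.0), (0.0, 10.0, 12.0))
print(interpolate_height((5.0, 5.0), tri))  # → 13.0
```

A production tool additionally locates the containing triangle for each node and honors breaklines as triangle edges; only the per-node arithmetic is shown here.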

Figure 9. Overview of the generated grid DTM from manually measured terrain data of the whole Cesme area (left) and a zoomed view (right).

3.4. Digital Surface Model (DSM) Generation

3.4. Digital Surface (DSM) aGeneration For the DSMModel generation, number of different software has been investigated to obtain an 3.4. Digital Surface Model (DSM) Generation optimum balance between the DSM quality density, accuracy) and the For the DSM generation, a number of(completeness, different software hasand been investigated to production obtain an For the DSM generation, a number of different software has been investigated to obtain an speed. Finally several high-density been generated using Agisoft Photoscan software, optimum balance between the DSM DSMs qualityhave (completeness, density, and accuracy) and thePro production optimum balance between the DSM quality (completeness, density, and accuracy) and the production since it allows parallel processing by allowing GPU (graphics processing unit) usage along with the speed. Finally several high-density DSMs have been generated using Agisoft Photoscan Pro software, speed. Finally several high-density DSMs have been generated using Agisoft Photoscan Pro software, CPU (central processing unit) during the DSM generation. The DSM of the project area has since it allows parallel processing by allowing GPU (graphics processing unit) usage along withbeen the since it allows parallel processing by allowing GPU (graphics processing unit) usage along with the generated in seven sub-blocks thesethe blocks have already been theproject triangulation CPU (central processing unit) as during DSM generation. The formed DSM offor the area has(Figure been CPU (central processing unit) during the DSM generation. The DSM of the project area has been 5). Since BuildingReconstruction can reconstruct maximum offormed a couple thousand buildings in a generated in seven sub-blocks as these blocks haveaalready been forofthe triangulation (Figure generated in seven sub-blocks as these blocks have already been formed for the triangulation (Figure 5). 
Since BuildingReconstruction can reconstruct a maximum of a couple of thousand buildings in a single process, the larger blocks have again been divided into more sub-blocks to decrease the processing area and time. While selecting the aerial photographs in every sub-block for the building reconstruction step, it has been taken into consideration that buildings near the outer boundary of the project area should be visible in at least six or more photographs for better reconstruction. Adjusted EOPs and camera calibration parameters have been used as input in Photoscan to provide the georeferencing of the images. However, the images have been re-oriented by generating tie points

and re-measuring GCPs due to software requirements. Based on the established camera orientations, the software generates a dense point cloud (i.e., DSM). Figure 10 depicts the tie points and the dense point cloud generated in one part.

Figure 10. Generated tie points (left) and dense point cloud (right) using Agisoft Photoscan.

Although a DSM with a resolution of 25 cm is already considered sufficient for the BuildingReconstruction software to generate building models with optimum accuracy, five point clouds have been generated in Agisoft Photoscan with different densities (at 1, 2, 4, 8, and 16 pixel intervals) for test purposes. The optimal results with BuildingReconstruction have been achieved using DSMs with 25–50 cm resolution for the LoD2 buildings. Considering this fact, the DSMs have been generated at a resolution of 30 cm for the whole project area. An overview and a zoomed view of the generated DSM from one sub-block are shown in Figure 11.

Figure 11. An overview (left) and a close view (right) of the generated DSM.

3.5. Automated Building Reconstruction

In this study, the BuildingReconstruction 2018 software has been used for the automatic reconstruction of buildings and their conversion into the CityGML format. The software enables automatic generation and manual editing of building models in LoD1 and LoD2 based on DSMs and building footprints. As described in [38], the DSM is analyzed for each building outline to detect the roof shape. The following algorithms are employed for this purpose:

• Rectangle intersection: Simple geometries are processed with rectangular intersections. Intersecting rectangles and squares from the basis of the footprint are created to construct the building in 3D.
The cell decomposition algorithm is used for buildings with multiple roof in 3D. forms and/or with different [8]. A built-in library of more than 32 main connecting • Cell decomposition. The cell heights decomposition algorithm is used for buildings withand multiple roof roof types is used to detect the best [8]. fitting roof type for each generated cell. forms and/or with different heights A built-in library of more than 32 main and connecting • roof Footprint extrusion: This algorithm is used totype create models of cell. large areas and also flat types is used to detect the best fitting roof forLoD1 each generated LoD2 buildings. • Footprint extrusion: This algorithm is used to create LoD1 models of large areas and also flat LoD2 buildings. The main advantage of this method is automatic reconstruction of the 3D city models along with


The main advantage of this method is the automatic reconstruction of the 3D city models along with the rich semantic information and thematic attributes of the buildings. It works best with tile-based input files. If a large region is to be reconstructed, the users need to generate tiles of 1 km² or smaller with the footprint data, laser data, and orthophotos. A model-driven approach is used in BuildingReconstruction to detect and reconstruct the general roof geometry and type using a library containing 32 types of roof geometry popularly used in urban areas around the world (Figure 12). For every polygon in the input footprint shapefile, a discrete 3D building geometry is reconstructed. The reconstructed geometry is ensured to be fully closed, without any holes or gaps between surfaces, and also to be fully compatible with the given building footprint. Along with the automated reconstruction, the software offers an interactive editor for manual editing of the building geometries.
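The model-driven fitting can be illustrated with a toy version: DSM heights sampled across a footprint are compared against parametric roof templates, and the template with the smallest residual wins. The real software fits a library of 32 roof types with estimated parameters; the sketch below knows only two hypothetical templates and uses invented sample data:

```python
# Toy model-driven roof classification: pick the parametric roof template
# whose predicted heights best match DSM samples across the footprint.
# Templates, parameters, and samples are illustrative only.

def flat_roof(x, eave, ridge):
    """Constant height (eave and ridge coincide for a truly flat roof)."""
    return ridge

def gable_roof(x, eave, ridge):
    """Height rises linearly from the eaves to a central ridge."""
    return eave + (ridge - eave) * (1.0 - abs(2.0 * x - 1.0))

def best_roof(samples, eave, ridge):
    """samples: (x, z) pairs with x normalized 0..1 across the footprint."""
    templates = {"flat": flat_roof, "gable": gable_roof}
    scores = {}
    for name, f in templates.items():
        # sum of squared residuals between DSM heights and the template
        scores[name] = sum((z - f(x, eave, ridge)) ** 2 for x, z in samples)
    return min(scores, key=scores.get)

# DSM heights along a cross-section of a gabled roof (eave 6 m, ridge 9 m):
samples = [(0.0, 6.0), (0.25, 7.5), (0.5, 9.0), (0.75, 7.5), (1.0, 6.0)]
print(best_roof(samples, eave=6.0, ridge=9.0))  # → gable
```

This also makes the failure mode discussed later plausible: a roof whose true shape is outside the template set is forced into the nearest available type.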

Figure 12. BuildingReconstruction 2018 software roof library [38].

The input data requirements for the BuildingReconstruction software are as follows:

• A grid DSM, optimally with a 25–50 cm grid interval
• A DTM, optimally with a 1-m grid interval
• 2D building footprints with attributes (each building polygon must have a unique ID)
• An orthophoto for visual checks

Figure 13 shows screenshots of the DSM and the DTM from one sub-block. The DSMs are used to determine the roof geometry and characteristics, whereas the DTMs are used to calculate the height and the ground geometry (footprint) of the building. Footprints are also used to determine the geometric structures of the walls in the building model. The software also supports processing without a DTM; if no terrain model is available, the base height of a building is determined from a buffer in which the lowest point around the building is found. However, this approach is successful only in areas where the building density is low.
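The no-DTM fallback described above, taking the lowest DSM value in a buffer around the footprint as the base height, can be sketched as follows; the grid, cell size, and bounding-box buffer are illustrative simplifications of however the software actually delimits the search area:

```python
# Sketch of base-height estimation without a DTM: scan DSM cells within a
# buffered bounding box of the footprint and take the minimum height.
# The DSM values and geometry below are invented for illustration.

def base_height_from_buffer(dsm, cell_size, footprint_bbox, buffer_m):
    """dsm: {(col, row): z}; footprint_bbox: (xmin, ymin, xmax, ymax) in metres."""
    xmin, ymin, xmax, ymax = footprint_bbox
    lows = []
    for (col, row), z in dsm.items():
        x, y = col * cell_size, row * cell_size
        if (xmin - buffer_m <= x <= xmax + buffer_m and
                ymin - buffer_m <= y <= ymax + buffer_m):
            lows.append(z)
    return min(lows)

# A single DSM row: terrain near 50 m, a building roof near 56 m.
dsm = {(0, 0): 50.2, (1, 0): 50.0, (2, 0): 56.4, (3, 0): 56.5, (4, 0): 50.1}
print(base_height_from_buffer(dsm, cell_size=1.0,
                              footprint_bbox=(2, 0, 3, 0), buffer_m=2.0))  # → 50.0
```

The limitation noted in the text is visible here: in dense areas the buffer would pick up a neighboring roof or courtyard rather than true ground, corrupting the base height.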



Figure 13. 25 cm resolution DSM and 1 m resolution DTM input for BuildingReconstruction software.


The buildings with simple roof types and roof heights can in general be reconstructed more accurately than the complex ones. This problem arises because the software cannot match the complex roof types with the roof models in its library. Additionally, small parts on the roofs are mostly ignored by the software. The main advantage of the model-driven method is reconstructing models without any gaps or holes between surfaces, with correct geometry. In addition, the production speed is slightly better than with other methods.
The main disadvantage is the requirement of a very comprehensive roof library for comparing the roof geometries generated from the given DSM. Figure 14 shows an example of the automatically generated buildings with the BuildingReconstruction 2018 software from the project area.


Figure 14. Automatically generated buildings with the BuildingReconstruction 2018 software in Cesme Town.


Figure 15 shows a part of the automatically generated roof geometries overlaid on the orthophoto with 10 cm GSD. The reconstruction accuracy has been analyzed by visual checks on 1000 buildings, and the results are provided in Table 2. During the visual assessments, roof structures smaller than the DSM grid interval have been excluded, and if the main roof structure is modeled correctly, it is considered as true.

Figure 15. Automatically generated roof geometries overlaid on the orthophoto with 10 cm resolution.

Table 2. Automatic building reconstruction accuracy based on visual analysis of 1000 buildings.


DSM Resolution    Processing Time    LoD1 Reconstruction Success    LoD2 Reconstruction Success
10 cm             6 min 50 s         100%                           73.8%
25 cm             1 min 54 s         100%                           72.7%
50 cm             1 min 2 s          100%                           65.4%
100 cm            39 s               100%                           55.1%

Based on the investigations performed in this study, the advantages and the disadvantages of the BuildingReconstruction software can be listed as follows:

+ Large-area reconstruction of LoD1 and LoD2 building models is possible.
+ Building geometries are suitable for texturing with nadir/oblique aerial images.
+ Direct export to CityGML format with automatic and flexible attribute mapping is possible.
+ Fast processing of 3D buildings with semantic information is possible.
+ High geometric accuracy can be obtained with a ground plan-based reconstruction.
− Reconstruction is limited to the existing roof library, and complex roof types cannot be reconstructed.
− Smaller roof parts are neglected during the reconstruction.
− The manual editing interface is not user-friendly.

3.6. Automatic Building Texturing

Existing 3D city models can be textured from oriented images to increase the attractiveness of the digital city models. Building and terrain models can be textured automatically if images of the models with accurate orientation parameters exist. Automatic texturing of the models generated

in this study has been carried out with the CityGRID Texturizer software by UVM Systems GmbH [39]. The automatic texturing workflow of CityGRID Texturizer is given in Figure 16.


ISPRS Int. J. Geo-Inf. 2018, 7, 339

14 of 20

texturing workflow of CityGRID Texturizer is given in Figure 16. Oriented aerial photographs can be used for the texturing of roofs and façades, as well as mobile mapping data for the high-resolution texturing of street-side façades in CityGRID Texturizer. The façade texture is assigned interactively or fully automatic from oriented images. Terrain texture is always calculated dynamically, whereas roof and façade textures need to be pre-calculated and saved for each building unit. Orthophotos of the façades can be generated by simple affine transformation in the texture tool. More details on the texturing process are elaborated in the following sub-sections.


Figure 16. CityGRID Texturizer automatic texturing workflow [39].

3.6.1. Image Acquisition Requirements and the Radiometric Pre-Processing for Texturing

There are several criteria for the acquisition of airborne images for automatic texturing. The optimal resolution for automatic texturing is usually 10 cm or better, and the flight altitude required to obtain this resolution with large-format cameras is usually between 800 and 2500 m above the terrain. These values can vary depending on different factors, such as airport restrictions in the area, building heights, terrain elevation, camera type (e.g., format), camera parameters (in particular focal length), image acquisition method (vertical or oblique), and other image GSD requirements. In this project, the images have been acquired from an altitude of 1500 m with 8 cm resolution for the production of basemaps for Cesme City. In this type of acquisition, the vividness of the colors is not considered an important aspect of the basemap generation process; however, in order to generate a fine texture, the sharpness of the images together with the vividness of the colors is of great importance. In order to obtain a more vivid and impressive texture, the images have been pre-processed to improve the realistic look of the color tones and the image sharpness. Figure 17 shows an example of the original and the enhanced images.
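The radiometric pre-processing described above can be illustrated by a simple per-channel percentile contrast stretch followed by a mild gamma correction. This NumPy sketch is an illustrative stand-in, not the actual enhancement procedure applied in the study; the function name and the percentile and gamma defaults are assumptions.

```python
import numpy as np

def enhance_for_texturing(img, low_pct=2, high_pct=98, gamma=0.9):
    """Radiometric enhancement sketch for texture images.

    Stretches each channel between its low/high percentiles (clipping the
    tails) to increase contrast and vividness, then applies gamma < 1 to
    brighten midtones. img: HxWx3 uint8 array; returns uint8 array.
    """
    out = np.empty_like(img, dtype=np.float64)
    for c in range(img.shape[2]):
        chan = img[:, :, c].astype(np.float64)
        lo, hi = np.percentile(chan, [low_pct, high_pct])
        # Normalize to [0, 1], clipping values outside the percentile range
        stretched = np.clip((chan - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
        out[:, :, c] = stretched ** gamma
    return (out * 255).round().astype(np.uint8)
```

In practice, suitable parameter values depend on the sensor and the lighting conditions of the flight.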

Figure 17. Raw aerial image (left) and pre-processed (color-enhanced) image (right) of Cesme Town.

3.6.2. Image and Model Orientation

A basic requirement of model texturing is to ensure that the model coordinates and the image EOPs are defined in the same reference coordinate system. In addition, the camera calibration data should be provided for the accurate orientation of the images. If no errors are detected in the orientation validation, the database setup and load is carried out to establish a project space, which contains the orthophotos, DTMs, building models, and aerial photographs of the project. The pre-defined project (texturing) area is enlarged with a buffer zone so that the buildings near the outer border can be included completely. After importing all data into a database, aerial photographs and adjusted exterior orientation parameters are matched with the buildings, DTMs, and orthophotos in the database. Consistency checks between the aerial photographs, the interior and exterior orientation parameters, and the buildings are performed, and the texturing process is started if there are no errors.
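As a small illustration of the buffer-zone step above, the sketch below selects the buildings whose footprint bounding boxes intersect the enlarged project area. The function name, the bounding-box representation, and the default buffer width are hypothetical; the actual database setup in CityGRID handles this internally.

```python
def buildings_in_buffered_area(area_bbox, buildings, buffer_m=50.0):
    """Select buildings intersecting the project area enlarged by a buffer,
    so that structures near the outer border are included completely.

    area_bbox and each footprint bbox: (xmin, ymin, xmax, ymax) in map units.
    buildings: dict mapping building id -> footprint bbox.
    """
    xmin, ymin, xmax, ymax = area_bbox
    # Enlarge the texturing area on all sides by the buffer width
    xmin, ymin = xmin - buffer_m, ymin - buffer_m
    xmax, ymax = xmax + buffer_m, ymax + buffer_m
    selected = []
    for name, (bxmin, bymin, bxmax, bymax) in buildings.items():
        # Bounding-box intersection test: keep unless completely outside
        if bxmax >= xmin and bxmin <= xmax and bymax >= ymin and bymin <= ymax:
            selected.append(name)
    return selected
```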
3.6.3. Texture Mapping Process

A pre-selection of the photos for the texturing area results in faster processing. To find the most suitable photos for a building, the algorithm splits the building into sub-parts and assigns each of them to a different class. The buildings are usually split into four different objects: roof, façade, base, and fringe. This object structure is similar to the object structure in the CityGML specification, and the software can directly use the CityGML object structure in the automatic texturing process. The parts of the buildings that will be used in the texturing process can also be selected by the user; for example, only the roofs can be textured in a project, and the other parts of the building can be left un-textured. The nadir (vertical) stereo images in this study have been taken with 80% forward and 60% lateral overlap, which is sufficient for texturing both the roofs and the façades.

The most important factor in texturing roofs is the selection of the best image for each roof, as every object appears in multiple photos due to the high overlap ratio. The central part of a photo has the best radiometric appearance for buildings, as no relief displacement occurs in this part. The best texture for each roof is therefore found in the photo where the building is located closest to the image center; in other words, the principal axis of the image should intersect the roof top perpendicularly, if possible. In the application dataset, for example, the same roof can be found in nine or ten different photos due to the high overlap. The software finds the most suitable photograph for texturing a roof by comparing the center point coordinates of the roof and the aerial photographs; after the comparison, the photo whose center is closest to the roof center point is used for texturing.
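The roof-image selection described above amounts to a nearest-center search over the candidate photos. A minimal sketch, with hypothetical names and a simple 2D ground-coordinate comparison:

```python
import math

def best_photo_for_roof(roof_center, photo_centers):
    """Pick the photo whose ground footprint center is nearest the roof
    center, reflecting that relief displacement is smallest near the image
    center (the principal axis is closest to vertical over the roof there).

    roof_center: (x, y) in ground coordinates.
    photo_centers: dict mapping photo id -> (x, y) ground center.
    """
    def dist(p):
        return math.hypot(p[0] - roof_center[0], p[1] - roof_center[1])
    # The photo with the minimum center-to-roof distance wins
    return min(photo_centers, key=lambda pid: dist(photo_centers[pid]))
```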

Façade texturing is done with a different approach from the roof texturing. This approach is flexible and is affected mainly by the camera system and the city structure. Other parameters are whether the camera is large- or medium-format, how high the buildings in the city are, how dense the vegetation cover is, whether the buildings have fringes, and the topographical characteristics. The image orientation angles along with the center coordinates are of great importance for the façade texturing. The façades appear better for texturing in the regions closer to the outer border of the photograph; in terms of the viewing angle, the building has a clearer outward appearance there, but the camera distortions should also be taken into account in this case.

The software cuts the chosen photos out for all façades and saves them in separate image files. While cutting out the aerial photos, a predefined buffer zone is kept at the peripherals; thus, if any editing needs to be done on the building geometry, gaps in the building envelope can be prevented. Each texture is then used in the corresponding part of the building according to the CityGML data structure. The extracted image is not directly textured onto the façade, but is related one-to-one based on file names. In other words, each texture part cut out from an aerial photograph is unique and can only be used for one façade belonging to one building; every texture image is used for only one building or façade. A part of the textured city model area is presented in Figure 18.
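The cut-out-with-buffer step can be sketched as a clamped crop in pixel space. The function name and buffer default are illustrative assumptions; the actual tool works on oriented images and stores one texture file per façade.

```python
import numpy as np

def crop_facade_texture(image, bbox_px, buffer_px=10):
    """Cut a façade texture patch out of an aerial photo.

    A buffer keeps extra context around the projected façade so that later
    edits to the building geometry do not leave gaps in the envelope.
    image: HxW(xC) array; bbox_px: (row0, col0, row1, col1) façade extent.
    """
    r0, c0, r1, c1 = bbox_px
    h, w = image.shape[:2]
    # Grow the window by the buffer, clamped to the image boundaries
    r0 = max(r0 - buffer_px, 0)
    c0 = max(c0 - buffer_px, 0)
    r1 = min(r1 + buffer_px, h)
    c1 = min(c1 + buffer_px, w)
    return image[r0:r1, c0:c1]
```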

Figure 18. Exported CityGML buildings with textures.


4. Conclusions

In this study, a workflow has been introduced for the production of 3D city models, which have now become a necessity for better management of cities. Large-format aerial imagery and vector maps manually produced from these data have been used for this purpose. As shown here, 3D city models in LoD2 and DSMs can be produced using existing aerial photogrammetric datasets and their project outputs without additional cost and effort. With the proposed method, it is possible to produce the model of Cesme Town (260 km²) within one week by one trained person using a desktop PC. Figure 19 shows views of the resulting product of this study. The created 3D model is automatically textured from aerial images and published online on the web at www.cesme3d.com.


Figure 19. Different parts of the resulting 3D city model of Cesme.

Some of the problems encountered during the work and suggestions for their solutions are as follows:

•	Excessive amount of preprocessing for file format conversion: Most of the data used in this study was not in the formats that the software requires and needed to be pre-processed. Most of the manual interaction has been performed for this purpose.
•	Incorrect building roof models: The automatic reconstruction of buildings is limited by the existing roof types in the library. In addition, occluded buildings which are partially covered by trees or other objects have been reconstructed incorrectly from the DSMs. In order to fix these problems, visual checks and manual editing have been required. Examples of the false roof reconstructions are provided in Figure 20.
•	Software limitations in processing the data of large regions: A significant example of this problem is the BuildingReconstruction software, which can only reconstruct buildings in areas smaller than 1 km². In this study, 43,158 buildings have been reconstructed automatically in an area of 270 km². In order to be able to perform an automatic reconstruction on such a large area, the working area was divided into small parts and organized into folders by producing the DSM, DTM, and building footprints for each part. The segments were separately processed and exported as CityGML, and these separate files have later been converted into a single CityGML file to cover the entire city.
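The tile-merging step at the end of the list can be sketched by appending the cityObjectMember elements of each per-tile CityGML file under the first tile's CityModel root. This is a simplified illustration using Python's standard library only; the tool actually used in the study is not specified here, and a production merge must also handle appearance data and namespace declarations.

```python
import xml.etree.ElementTree as ET

def merge_citygml(tile_files, out_file):
    """Merge per-tile CityGML exports into a single file.

    Parses the first tile and appends every cityObjectMember element of the
    remaining tiles to its CityModel root, then writes the merged document.
    Namespace handling is deliberately simplified for illustration.
    """
    merged_tree = ET.parse(tile_files[0])
    root = merged_tree.getroot()
    for path in tile_files[1:]:
        tile_root = ET.parse(path).getroot()
        for member in tile_root:
            # Tags carry a namespace prefix like '{...citygml...}cityObjectMember'
            if member.tag.endswith('cityObjectMember'):
                root.append(member)
    merged_tree.write(out_file, encoding='utf-8', xml_declaration=True)
```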

As a final recommendation, automatic quality control and quality assessment (QC and QA) methods should be implemented for 3D city models. Such methods are especially crucial for the QC and QA of large datasets. More research on the production of 3D city models should be conducted and the results should be discussed in the literature, since more results would help to open new horizons for higher-quality city models.

Figure 20. Examples of the false reconstructions of the roofs.

Author Contributions: This article is produced from the Master of Science (M.Sc.) thesis of Mehmet Buyukdemircioglu under the supervision of Sultan Kocaman (principal supervisor) and Umit Isikdag (co-supervisor).

Funding: This research received no external funding.

Acknowledgments: The authors gratefully acknowledge the support of Alpaslan Tutuneken for the provision of and training on the CityGRID Texturizer software.

Conflicts of Interest: The authors declare no conflicts of interest.

References

1. United Nations. World Urbanization Prospects Highlights; United Nations: New York, NY, USA, 2014.
2. Stadler, A.; Kolbe, T.H. Spatio-semantic coherence in the integration of 3D city models. In Proceedings of the 5th International Symposium on Spatial Data Quality, Enschede, The Netherlands, 13–15 June 2007.
3. Döllner, J.; Kolbe, T.H.; Liecke, F.; Sgouros, T.; Teichmann, K. The virtual 3D city model of Berlin—managing, integrating, and communicating complex urban information. In Proceedings of the 25th Urban Data Management Symposium UDMS, Aalborg, Denmark, 15–17 May 2006.
4. Kolbe, T.; Gröger, G.; Plümer, L. CityGML: Interoperable access to 3D city models. In Geo-Information for Disaster Management; Springer: Berlin, Germany, 2005; pp. 883–899.
5. Kada, M. The 3D Berlin project. In Photogrammetric Week '09; Wichmann Verlag: Heidelberg, Germany, 2009.
6. Agugiaro, G. First steps towards an integrated CityGML-based 3D model of Vienna. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III-4, 139–146. [CrossRef]
7. Ozerbil, T.; Gokten, E.; Onder, M.; Selcuk, O.; Sarilar, N.C.; Tekgul, A.; Yilmaz, E.; Tutuneken, A. Konya buyuksehir belediyesi egik (oblique) goruntu alimi, 3 boyutlu kent modeli ve 3 boyutlu kent rehberi projesi. In Proceedings of the V. Uzaktan Algılama ve Cografi Bilgi Sistemleri Sempozyumu (UZAL-CBS 2014), Istanbul, Turkey, 14–17 October 2014. Available online: http://uzalcbs.org/wp-content/uploads/2016/11/2014_013.pdf (accessed on 21 August 2018).
8. Kada, M.; McKinley, L. 3D building reconstruction from lidar based on a cell decomposition approach. Int. Arch. Photogramm. Remote Sens. 2009, 38, 47–52.
9. Haala, N.; Kada, M. An update on automatic 3D building reconstruction. ISPRS J. Photogramm. Remote Sens. 2010, 65, 570–580. [CrossRef]
10. Haala, N.; Rothermel, M.; Cavegn, S. Extracting 3D urban models from oblique aerial images. In Proceedings of the 2015 Joint Urban Remote Sensing Event (JURSE), Lausanne, Switzerland, 30 March–1 April 2015; pp. 1–4.
11. Kocaman, S.; Zhang, L.; Gruen, A.; Poli, D. 3D city modeling from high-resolution satellite images. In Proceedings of the ISPRS Workshop Topographic Mapping from Space (with Special Emphasis on Small Satellites), Ankara, Turkey, 14–16 February 2006.
12. Kraus, T.; Lehner, M.; Reinartz, P. Modeling of urban areas from high resolution stereo satellite images. In Proceedings of the ISPRS Hannover Workshop 2007 High-Resolution Earth Imaging for Geospatial Information, Hannover, Germany, 29 May–1 June 2007.
13. Flamanc, D.; Maillet, G.; Jibrini, H. 3D city models: An operational approach using aerial images and cadastral maps. In Proceedings of the Photogrammetric Image Analysis, Munich, Germany, 17–19 September 2003.
14. Kabolizade, M.; Ebadi, H.; Mohammadzadeh, A. Design and implementation of an algorithm for automatic 3D reconstruction of building models using genetic algorithm. Int. J. Appl. Earth Obs. Geoinf. 2012, 19, 104–114. [CrossRef]
15. El Garouani, A.; Alobeid, A.; El Garouani, S. Digital surface model based on aerial image stereo pairs for 3D building. Int. J. Sustain. Built Environ. 2014, 3, 119–126. [CrossRef]
16. Haala, N.; Brenner, C. Extraction of buildings and trees in urban environments. ISPRS J. Photogramm. Remote Sens. 1999, 54, 130–137. [CrossRef]
17. Suveg, I.; Vosselman, G. Reconstruction of 3D building models from aerial images and maps. ISPRS J. Photogramm. Remote Sens. 2004, 58, 202–224. [CrossRef]
18. Open Geospatial Consortium. City Geography Markup Language (CityGML) Encoding Standard Version 2.0.0. Available online: http://www.opengis.net/spec/citygml/2.0 (accessed on 21 August 2018).
19. Buyuksalih, I.; Isikdag, U.; Zlatanova, S. Exploring the processes of generating LoD (0-2) CityGML models in greater municipality of Istanbul. In Proceedings of the 8th 3DGeoInfo Conference & WG II/2 Workshop, Istanbul, Turkey, 27–29 November 2013.
20. Kolbe, T.H. Representing and exchanging 3D city models with CityGML. In 3D Geo-Information Sciences; Springer: Berlin, Germany, 2009; pp. 15–31.
21. Arefi, H.; Engels, J.; Hahn, M.; Mayer, H. Level of detail in 3D building reconstruction from lidar data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 485–490.
22. Gröger, G.; Plümer, L. CityGML—Interoperable semantic 3D city models. ISPRS J. Photogramm. Remote Sens. 2012, 71, 12–33. [CrossRef]
23. Dorninger, P.; Pfeifer, N. A comprehensive automated 3D approach for building extraction, reconstruction, and regularization from airborne laser scanning point clouds. Sensors 2008, 8, 7323–7343. [CrossRef] [PubMed]
24. Tarsha-Kurdi, F.; Landes, T.; Grussenmeyer, P.; Koehl, M. Model-driven and data-driven approaches using lidar data: Analysis and comparison. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2007, 36, 87–92.
25. Huang, H.; Brenner, C.; Sester, M. A generative statistical approach to automatic 3D building roof reconstruction from laser scanning data. ISPRS J. Photogramm. Remote Sens. 2013, 79, 29–43. [CrossRef]
26. Taillandier, F. Automatic building reconstruction from cadastral maps and aerial images. Int. Arch. Photogramm. Remote Sens. 2005, 36, 105–110.
27. Şengül, A. Extracting semantic building models from aerial stereo images and conversion to CityGML. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 41, 321–324. [CrossRef]
28. Kobayashi, Y. Photogrammetry and 3D city modeling. Proc. Digit. Archit. 2006, 90, 209.
29. Nex, F.; Remondino, F. Automatic roof outlines reconstruction from photogrammetric DSM. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, I-3, 257–262. [CrossRef]
30. Früh, C.; Sammon, R.; Zakhor, A. Automated texture mapping of 3D city models with oblique aerial imagery. In Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 3DPVT 2004, Thessaloniki, Greece, 6–9 September 2004; pp. 396–403.
31. Früh, C.; Zakhor, A. Constructing 3D city models by merging aerial and ground views. IEEE Comput. Soc. 2003, 23, 52–61. [CrossRef]
32. Kada, M.; Klinec, D.; Haala, N. Façade texturing for rendering 3D city models. In Proceedings of the ASPRS 2005 Annual Conference, Baltimore, MD, USA, 7–11 March 2005.
33. Novel, C.; Keriven, R.; Poux, E.; Graindorge, P. Comparing aerial photogrammetry and 3D laser scanning methods for creating 3D models of complex objects. In Proceedings of the Capturing Reality Forum, Bentley Systems, Salzburg, Austria, 2015; p. 15.
34. Vexcel Imaging UltraCam Falcon. Available online: http://www.vexcel-imaging.com/ultracam-falcon/ (accessed on 31 July 2018).
35. Agisoft PhotoScan Pro. Available online: http://www.agisoft.com/features/professional-edition/ (accessed on 31 July 2018).
36. Bentley MicroStation. Available online: https://www.bentley.com/en/products/product-line/modeling-and-visualization-software/microstation (accessed on 31 July 2018).
37. Safe Software FME. Available online: https://www.safe.com/ (accessed on 31 July 2018).
38. virtualcitySYSTEMS BuildingReconstruction. Available online: http://www.virtualcitysystems.de/en/products/buildingreconstruction (accessed on 31 July 2018).
39. UVM Systems GmbH. Available online: http://www.uvmsystems.com/index.php/en/software/soft-city (accessed on 31 July 2018).
40. Intergraph Corporation. Available online: http://www.intergraph.com/ (accessed on 31 July 2018).

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).