ISPRS International Journal of Geo-Information 2016, 5, 18; doi:10.3390/ijgi5020018

Article

Laser Scanning and Data Integration for Three-Dimensional Digital Recording of Complex Historical Structures: The Case of Mevlana Museum

Cihan Altuntas 1,*, Ferruh Yildiz 1 and Marco Scaioni 2

1 Department of Geomatics, Engineering Faculty, Selcuk University, Selcuklu, Konya 42075, Turkey; [email protected]
2 Department of Architecture, Built Environment and Construction Engineering, Politecnico di Milano, Milano 20132, Italy; [email protected]
* Correspondence: [email protected]; Tel.: +90-332-223-1894; Fax: +90-332-241-0635

Academic Editor: Wolfgang Kainz
Received: 3 November 2015; Accepted: 25 January 2016; Published: 18 February 2016

Abstract: The terrestrial laser scanning method is widely used in three-dimensional (3-D) modeling projects. Nevertheless, it usually requires measurement data from other sources for the full measurement of complex shapes. In this study, a 3-D model of the historical Mevlana Museum (Mevlana Mausoleum) in Konya, Turkey was created using state-of-the-art measurement techniques. The building was measured by a terrestrial laser scanner (TLS). In addition, some shapes of the indoor area were measured by a time-of-flight camera. Thus, a 3-D model of the building was created by combining the datasets of all measurements. The point cloud model was created with 2.3 cm and 2.4 cm accuracy for the outdoor and indoor measurements, respectively, and then it was registered to a georeferenced system. In addition, a 3-D virtual model was created by mapping texture onto a mesh derived from the point cloud.

Keywords: terrestrial laser scanner; time-of-flight camera; point cloud; registration; three-dimensional modeling; cultural heritage

1. Introduction

Cultural assets are extremely important structures to be transferred to the next generations as the common heritage of humanity [1]. Therefore, their documentation is an important facet of this process. In particular, recording their shape, dimensions, colors, and semantics may allow architects to restore their original forms and rebuild them if they are destroyed [2]. Any kind of geometric information about the object (volume, length, location, digital elevation model, cross-sections) can be retrieved from three-dimensional (3-D) digital models [3,4]. 3-D modeling has been practiced for many aims, such as virtual reality, heritage building information model (HBIM) applications [5,6], documentation of historical structures, and physical replication of artefacts [7], among others. It requires collecting and integrating high-density spatial data from the object surface [8]. In the last decade, with the impressive development of digital photogrammetric techniques, scanning, and 3-D imaging sensors, 3-D spatial data of architectural objects can be measured very quickly and precisely. Plenty of literature has been published where different approaches are analyzed and compared [3,9–12]. In reality, imaging and scanning techniques are quite complementary, and in many cases their integration is the optimal solution [8]. Terrestrial laser scanning should be preferred when the target is a complex structure that is difficult to model by means of geometric primitives [13]. It is also increasingly used for the digitization of cultural heritage [10]. Many studies about heritage documentation projects have been made by using the laser scanning technique singly, or together with image-based techniques [3,14,15].


Due to object size, shape, and occlusions, it is usually necessary to use multiple scans from different locations to cover every surface [10]. Multiple scan stations can be integrated in a relatively easy way by means of consolidated registration techniques [16–19]. On the other hand, photogrammetry is more suitable for modeling those surfaces that can be decomposed into regular shapes, even though the development of dense matching techniques [20] gave the opportunity to obtain results comparable to terrestrial laser scanners (TLS) in the reconstruction of free-form objects [21,22]. Photogrammetry also has the advantage of being operated by light sensors that can be carried onboard unmanned aerial vehicle (UAV) systems [23]; such platforms are useful for data acquisition over roofs and other non-accessible places. Very often a camera is integrated into a TLS instrument to allow the contemporary recording of 3-D shape and color information.

The emerging technology for 3-D measurement is range imaging [24], which can be operated by means of time-of-flight (ToF) cameras. Such sensors can instantly measure a set of 3-D points (point cloud) which represents the geometry of the imaged area. Due to their small size and weight, and the capability of directly recording 3-D information, ToF cameras have a great potential to devise a wide range of applications in the future. The measurement process is based on recording the response of a modulated laser light emitted by the sensor itself. The authors of [25] and [26] exploited a ToF camera and image-based modeling for 3-D visualization of cultural heritage. On the other hand, the influences of the measuring direction inclination and material types on distance measurement accuracy have been investigated in [27].

In this study, a 3-D digital model of the Mevlana Museum (Konya, Turkey) was created by combining TLS and ToF camera data. The outside of the building was solely measured by laser scanner, while indoor details were measured by using both techniques. Virtual models and orthophoto images were created for important details by texturing images onto the point cloud, after creating a mesh surface. Thus, the performance of ToF cameras for cultural heritage (CH) documentation has been evaluated in terms of possible new applications. In addition, the accuracy of the 3-D point cloud model was analyzed.

2. The Case Study: Mevlana Museum

Mevlana Museum is located in the city center of Konya, Turkey (Figure 1). The museum was set up around the mausoleum of Mevlana Jalaluddin Rumi, which was built during the Seljuq period (13th century). The location of the "Dervish Lodge", which is currently used as a museum, used to be a rose garden of the Palace of the Seljuq. The "Dervish Lodge" was gifted by Sultan Alaeddin Keykubat to Sultanul-Ulema Bahaeddin Veled, the father of Mevlana. The mausoleum where the Green Dome (Kubbe-i Hadra) is located was built with the permission of the son of Mevlana, Sultan Veled, by architect Tebrizli Bedrettin in 1274 after the death of Mevlana. From that time, the construction activities continued up to the end of the 19th century [28].

Figure 1. Mevlana Museum.



The "Mevlevi Dervish Lodge" and the mausoleum began to serve as a museum under the name "Konya Asar-ı Atika Museum" in 1926. In 1954 the exhibition and arrangement of the museum was revised and the name of the museum was changed to the Mevlana Museum [28]. The section where the mausoleum is located is 30.5 m × 30.5 m in size and the courtyard is 68 m × 84 m in size. The courtyard of the museum is entered from the Dervisan Gate. There are "Dervish Rooms" along the north and west sides of the courtyard. The south side, next to the Matbah and the Hurrem Pasha Mausoleum, ends with the Hamusan Gate opened to the Ucler Cemetery. In the east of the courtyard there is the main building where the Semahane, the Masjid, and the graves of Mevlana and his family members are located. The covered fountain and the "Seb-i Arus" pool give a novel touch to the courtyard [28].

The entrance to the mausoleum of Hazreti Mevlana is from the Chant Room (Figure 2). The Chant Room is a dome-covered square place. Next to it, beyond the silver gate, is the mausoleum hall (Huzur-ı Pir), which is roofed by three small domes. The third dome is called the Post Dome and it adjoins the Green Dome in the north. The mausoleum hall is surrounded by a high wall in the east, south, and north. The graves of Mevlana and his son, Sultan Veled, are located under the Green Dome [28].

Figure 2. Ground plan of Mevlana Museum.

The Semahane along with the Masjid was built by Suleiman the Magnificent in the 16th century. The naat chair in the Semahane, the Mutrib room where the musicians lived, and the gathering places belonging to men and women are preserved in their original state.

The Masjid is entered from the Cerag Gate. There are also transitions through a small door from the Semahane and the Huzur-ı Pir section where the graves are located. In this section, the Muezzin Mahvil and the Mesnevihan chair are kept in their original form.

3. Materials and Methods

Terrestrial laser scanning (TLS) surveying has been used for creating 3-D models of historical buildings. The main steps of the 3-D modeling process with this technique are point cloud registration, texture mapping, and geo-referencing, which necessitates a connection with the other related spatial data. The high-resolution spatial data measured, like the point cloud, depicts the shape of the object.


In addition, the texture data should be mapped onto the point cloud, after meshing, for a realistic visualization of the object. Terrestrial laser scanners were used for the measurement of the outside and inside details. Moreover, in the indoor area, complex details that cannot be completely measured with TLS were imaged by a ToF camera, which is an affordable tool to accomplish this task. The flowchart in Figure 3 shows the method followed in the measurement and creation of the 3-D model of the museum.

Figure 3. The flowchart of the surveying and 3-D modeling methodology.

3.1. Terrestrial Laser Scanning

In this study, laser scanning measurements were carried out with an Optech ILRIS-3D laser scanner. This device can measure 2500 points per second using the direct ToF method. The range measurement accuracy is 7 mm at 100 m. Its minimum sampling step (point-to-point spacing) is 0.001146° (20 µrad) and its beam divergence is 0.009740° (170 µrad). It records color data for the measured points on the basis of the integrated camera (six megapixel). The maximum measurement range of the instrument is 1300 m. It makes panoramic measurements with a 220° vertical and 360° horizontal field-of-view [29].

The ability to capture centimeter-accuracy data at high resolution and the rapid response capability afforded by TLS make it an ideal tool for the quantitative measurement of architectural buildings. TLS may use the direct ToF, phase-shift, or triangulation method to measure the distance from the instrument to the object points [30]. The maximum measurement ranges of phase-shift and time-of-flight laser scanners can reach, approximately, five hundred and a few thousand meters, respectively. They have been used for the modeling of buildings and outdoor objects [31–33]. The maximum range of instruments based on the triangulation method is about eight meters; thus, they have generally been used for the 3-D modeling of small objects.

Laser scanning data are useful for creating 3D drawings, virtual models, and orthophoto images of architectural structures. One of the limitations of laser scanning is the measurement from ground-based static instrument stations. Thus, multiple scans are frequently required in order to model large objects. In this regard, all of the point clouds should be registered into a common reference system. Another limitation of laser scanning is the equipment cost, which may reach several tens of thousands of U.S. dollars.
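For reference, the two time-of-flight principles mentioned above rely on the following standard range equations (textbook relations, not taken from the cited instrument specifications): a pulsed (direct ToF) scanner converts the measured round-trip time into range, while a phase-shift scanner derives the range from the phase difference of a continuously modulated signal and is therefore unambiguous only up to half a modulation wavelength.

```latex
% Direct (pulsed) time of flight: c = speed of light, \Delta t = round-trip time
R = \frac{c\,\Delta t}{2}

% Phase-shift (continuous-wave) ranging: f_{mod} = modulation frequency,
% \Delta\varphi = measured phase difference; the range is unambiguous only up to R_{max}
R = \frac{c}{4\pi f_{mod}}\,\Delta\varphi,
\qquad
R_{max} = \frac{c}{2 f_{mod}}
```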


3.2. Time-of-Flight (ToF) Imaging

A SwissRanger® SR4000 camera was used in this study [34,35]. It has a size of 65 mm × 65 mm × 68 mm and a weight of 470 g. The imaging sensor has a size of 144 × 176 pixels and the measurement error is 1 cm at its maximum measurement range of 5 m [36]. The SR4000 camera records 50 frames per second (fps). The coordinates (xyz), the amplitude, and confidence values are recorded into the measurement files. The origin of the coordinates is on the optical axis and overlaps the camera frontal plane (Figure 4). All frames acquired sequentially are recorded in separate measurement files. When the ToF camera is kept in a stationary position during data acquisition, some differences usually occur between the measurements (frames) recorded from the same scene. Such departures are due to various systematic errors that need to be modeled, and to random noise. Therefore, if more than one measurement of the same image area is recorded, an average image file can be generated in order to minimize the noise. According to the results obtained by other researchers, a number between 10 and 30 frames is generally adequate to create 3-D models [37]. The self-calibration of ToF cameras has been performed with various techniques [38]. The effect of distortion error in the coordinate measurement of the SR4000 camera was corrected by using the factory settings. The output of the ToF camera is a point cloud. On the other hand, the measurement can also be made in motion by handling the ToF camera and registering the different 3-D scenes into a single coordinate system [39].

Figure 4. Coordinate axis of the SwissRanger SR4000 camera.
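As a back-of-the-envelope illustration only (our inference from the continuous-wave relation given in Section 3.1, not a statement from the SR4000 documentation), the ambiguity-free range tied to the modulation frequency would be:

```latex
R_{max} = \frac{c}{2 f_{mod}} \approx
\frac{3\times10^{8}\ \mathrm{m/s}}{2\times 30\ \mathrm{MHz}} = 5\ \mathrm{m},
\qquad
\frac{3\times10^{8}\ \mathrm{m/s}}{2\times 15\ \mathrm{MHz}} = 10\ \mathrm{m}
```

which is consistent with the 5 m maximum range quoted above; lowering the modulation frequency (as with the 15 MHz setting used later in Section 5.2.1) extends the ambiguity-free range at the cost of range precision.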

ToF imaging techniques can be used in several domains. Driver assistance, interactive screens, and biomedical applications are reported in [40]. Even though these processes can be carried out by using laser scanning or photogrammetry, the use of a ToF camera makes data acquisition faster and economically sustainable. Hussmann et al. [34] compared photogrammetry and a ToF camera in terms of field of view, conjugate point detection, and measurement. These two methods were examined in the automotive industry for driver assistance, accident prevention, automatic brakes, and for determining the movements on the road. The ToF camera was found to be superior to the photogrammetric method. Boehm and Pattinson [41] investigated the exterior orientation of ToF point clouds in relation to a given reference TLS point cloud. In addition, ToF imaging has been used for object modeling [37,42,43], structural deflection measurement [44,45], and human motion detection [46]. Moreover, mobile measurement was carried out by ToF imaging for detecting the route of moving robots [39] or mapping indoor environments [47].


4. Data Acquisition Process

4.1. Measurement of the Outside Surfaces

The outside surfaces of the museum were measured by using an Optech ILRIS-3D terrestrial laser scanner (see Section 3.1). The selected spatial sampling resolution was about 1.5 cm on the object walls, ceilings, and floors. The doors, due to their fine details and embellishments, were measured with sampling spatial resolutions of 1 cm and 3 mm, respectively. The sampling resolution on the roof (outer surface) was 3 cm. Each scan overlapped with others by at least 30% to help co-registration. The laser scanning measurements of the outside surfaces of the museum were carried out from a distance of approximately 25 m. The roof was measured from the minarets of the Selimiye Mosque located near the museum, and the measuring distance was approximately 60 m. The sections of the roof that could not be measured from there were measured by climbing onto the roof. A total number of 110 laser scanning stations were set up for measuring the outside surfaces of the museum (Figure 5). In total, 2,980,000 laser points were recorded.

Figure 5. The outside of the museum was measured by laser scanner from 110 stations. Some details were scanned many times with overlapped measurements. The color legend on the right shows the repeated measurements (for example, red areas were measured 10 times).
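As a rough consistency check (our own arithmetic, not part of the original survey planning), the angular step δ needed to obtain a point spacing Δs at range R is approximately Δs/R:

```latex
\delta \approx \frac{\Delta s}{R}:\qquad
\frac{0.015\ \mathrm{m}}{25\ \mathrm{m}} = 6\times10^{-4}\ \mathrm{rad}\approx 0.034^{\circ},
\qquad
\frac{0.03\ \mathrm{m}}{60\ \mathrm{m}} = 5\times10^{-4}\ \mathrm{rad}\approx 0.029^{\circ}
```

Both values are far coarser than the 20 µrad minimum sampling step of the ILRIS-3D, so the selected resolutions were easily achievable from the chosen stand-off distances.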

4.2. Measurement of the Indoor Environment


The indoor areas of the museum were measured by TLS with a spatial sampling resolution of 1 cm. The same Optech ILRIS-3D instrument was applied here. Details were measured with a sampling resolution of 3 mm. The indoor environment was recorded from 72 stations and 5,835,000 points were measured. As the administration of the museum did not allow scanning of the place where the tomb of Mevlana is located, the measurement of the ceiling of this section could not be carried out. With the exception of this area, a SR4000 ToF camera was used to record any small details inside the museum. The light weight and the chance to handle the ToF camera helped the acquisition of details located in positions difficult to cover with TLS. Some details in Huzur-ı Pir and the Masjid were measured by the SR4000 camera from different stations in order to overlap. The point cloud registration and the integration of the TLS and ToF data are illustrated in Section 5.

4.3. Image Acquisition

Photorealistic texture mapping was needed to create a virtual-reality (VR) model of the museum [48]. Due to the low resolution of the imaging sensors embedded in the adopted TLS and ToF camera, independent images were acquired for the mere purpose of photo-texturing.

A single-lens reflex (SLR) Nikon D80 camera (pixel array 3872 × 2592, pixel size 6.1 µm, focal length 24 mm), which had been calibrated beforehand [49], was used for image acquisition. The details to be textured were photographed from a frontal position, in order to mitigate perspective distortions and irregularity in color and intensity. The brightness of the image depends upon the light source and the camera position. Thus, the images should be taken from proper positions [50,51].

5. Creating 3D Point Cloud Model

5.1. Registration of Laser Scanner Measurements

The registration of the laser scanner point clouds and the creation and editing of the triangulated (mesh) surface were undertaken with PolyWorks® software [52]. First of all, the measurement files recorded by the TLS were converted to the PIF file format with Optech Parser (ver. 4.2.7.2) software. The registration of the point clouds was operated in the PolyWorks® IMAlign® module with the Iterative Closest Point (ICP) method (see Pomerleau et al. [53] for a review). The point cloud featuring the largest overlap with respect to the other scans was selected as a reference. Scans that directly overlapped this one were registered to its coordinate system. After the registration of this group of scans to the reference, the process continued up to include all remaining scans.

In order to apply ICP, initial registration parameters were obtained from the manual measurement of at least three common points (from natural details such as corners, edges, or distinctive details). Then, the fine registration was applied with the ICP method. After registering all scans in this way, a global registration was accomplished to minimize the cumulative errors originating from consecutive registrations. In the global registration, the registrations of all the scans with respect to the reference data set were performed simultaneously. The maximum root mean square error (RMSE) of the scan registrations after the global registration was 2.5 cm. In the first stage, the TLS point clouds of the outdoor (Figure 6) and indoor (Figure 7) environments were independently registered. Then, these were combined in the georeferencing stage that is illustrated in Section 6.
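The coarse-to-fine procedure described above (a coarse alignment from at least three manually picked common points, refined by ICP) can be illustrated with a minimal NumPy/SciPy sketch. This is not the PolyWorks IMAlign implementation actually used in the project, and the arrays in the usage lines (scan_pts, ref_pts, scan_picked, ref_picked) are hypothetical placeholders:

```python
import numpy as np
from scipy.spatial import cKDTree


def rigid_transform_3d(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch/Horn)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t


def icp(src, ref, R, t, max_dist=0.10, iters=50, tol=1e-6):
    """Refine an initial (R, t) by iteratively matching nearest neighbours in ref."""
    tree = cKDTree(ref)
    prev_rmse = np.inf
    for _ in range(iters):
        moved = src @ R.T + t
        dist, idx = tree.query(moved)
        keep = dist < max_dist               # discard matches outside the overlap area
        R, t = rigid_transform_3d(src[keep], ref[idx[keep]])
        rmse = float(np.sqrt(np.mean(dist[keep] ** 2)))
        if abs(prev_rmse - rmse) < tol:
            break
        prev_rmse = rmse
    return R, t, rmse


# Coarse alignment from the manually picked corresponding points, then fine ICP.
# scan_pts, ref_pts are (N, 3) point clouds; scan_picked, ref_picked are the >= 3 picked pairs.
R0, t0 = rigid_transform_3d(scan_picked, ref_picked)
R, t, rmse = icp(scan_pts, ref_pts, R0, t0)
print(f"registration RMSE: {rmse:.3f} m")
```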

Figure 6. The outside 3-D point cloud model of the Mevlana Museum. The images are from the west (left) and north (right) sides.

Figure 7. The inside 3-D point cloud model with color.


5.2. Registration of ToF Camera Point Clouds

5.2.1. Case 1: Measurement of the Mihrab in Huzur-ı Pir Section

In the Huzur-ı Pir section, the Mihrab next to the silver sill was measured using a SR4000 ToF camera. Fifty frames were recorded from a distance of 4.5 m at two stations. This measurement distance resulted in a mean object sampling spatial resolution of 2.0 cm (angular resolution 0.24 degrees). The integration time was set to 30 (the unit is arbitrary) and the modulation frequency was set to 15 MHz. The average measurement files (Figure 8) of the raw frames from both stations were created using Matlab® code. ToF point clouds contain more erroneous points, especially close to the edge of the frame. Thus, the parts that have error-prone points on the edge were removed from the point clouds. Then, the second point cloud was registered to the coordinate system of the first (reference) point cloud with a 1.9 cm standard deviation by using ICP.
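A minimal sketch of the per-station pre-processing just described: averaging the repeated frames of the static scene and discarding the error-prone border pixels. The authors used their own Matlab® code for this step; the Python version below is illustrative only, and the frame-loading function is a hypothetical placeholder:

```python
import numpy as np


def average_station(frames, border=5):
    """Average repeated SR4000 frames of a static scene and drop noisy border pixels.

    frames: array-like of shape (n_frames, H, W, 3) holding per-pixel XYZ coordinates.
    Returns an (M, 3) point cloud.
    """
    frames = np.asarray(frames, dtype=float)
    mean_xyz = frames.mean(axis=0)                       # suppress random noise over 10-30+ frames
    cropped = mean_xyz[border:-border, border:-border]   # frame edges are the most error-prone
    return cropped.reshape(-1, 3)


# 'load_sr4000_frames' is a hypothetical reader for the recorded measurement files:
# cloud_station_1 = average_station(load_sr4000_frames("station1"), border=5)
```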

Figure 8. The Mihrab in Huzur-ı Pir section was measured with a SR4000 camera from two stations. Intensity images (top), and point clouds (below).

5.2.2. Case 2: Measurement of the Mihrab in the Masjid

The Mihrab of the Masjid was captured with a SR4000 ToF camera from three different stations. Forty images were recorded from each station (Figure 9). The distance between the camera stations and the measured Mihrab was approximately 4 m at the three points-of-view. The mean object spatial sampling resolution at this measurement range was 1.7 cm. The second and the third point clouds were registered using ICP with 1.6 cm and 1.7 cm standard deviations, respectively, by assuming the first scan as the reference.


Figure 9. The Mihrab in the Masjid was measured from three stations with a SR4000 camera. The registration of the measurements in relation to the first point cloud was applied by ICP. Intensity images (top), and combined point clouds (below).

5.3. Integration of ToF Camera and TLS Data

The gate between the mausoleum and the Semahane was measured with both the laser scanning and ToF imaging techniques. Thus, the measurements needed to be integrated. The laser scanning survey was accomplished with an average spatial resolution of 1.2 cm on the object surface, while ToF imaging provided a spatial resolution of approximately 1 cm. After creating the average image file from the multiple frames imaged by the ToF camera, both point clouds were integrated by using ICP (Figure 10). The registration resulted in an RMSE of 1.9 cm.

Figure 10. The point clouds of the ToF camera and TLS were combined with ICP.

6. Georeferencing and 3D Model Accuracy Evaluation

We grouped together the final georeferencing of all point clouds in a common reference frame and the assessment of the 3-D model accuracy, because both were based on the measurement of an independent set of control points (CP). These consisted of thirty-six control points that could be well distinguished in the TLS/ToF point clouds (Figure 11). A local geodetic 3-D network was set up to measure the CP coordinates. A Topcon GPT-3007N total station was employed for this purpose, which allowed for measuring the CP 3-D coordinates thanks to the integrated reflectorless rangefinder. Considering the intrinsic measurement precision of such an instrument, the error related to the determination of the network's stations, and the use of the reflectorless mode, the precision of the CPs has been evaluated to be better than ±5 mm in all directions [54]. This precision is superior to that of the 3D points from TLS and ToF imaging.

Figure 11. Control points were measured from traverse points. Georeferencing of the inside and outside point cloud models was executed by ground control points. The accuracy of the georeferencing was assessed by check points.

The indoor and outdoor point clouds were georeferenced and aligned by using 3-D rigid-body transformations on the basis of CPs, whose coordinates were identified in the 3-D model. After georeferencing, the accuracy assessment was carried out by using a set of check points (ChP) made up of those CPs that had not been adopted for georeferencing. The metric used for evaluating the accuracy was the RMSE of the 3-D residuals on the ChPs (Table 1). The average residuals of the ChP coordinates were less than one centimeter and their standard deviations were about 2 cm. These are very good results when compared to the average point spacing (ranging between 1 cm and 3 cm) of the laser scanning data. In addition, a rigorous quality assessment was carried out by comparing CP distances computed from the point cloud coordinates to the ones obtained from the geodetic measurements. Globally, two subsets of 16 and 20 CPs were selected for the quality assessment of the outdoor and indoor point clouds, respectively (Tables 2 and 3). The results showed the high accuracy of the 3-D modeling. According to these results, the accuracy of the obtained 3-D point cloud model was positively evaluated.

Table 1. The georeferencing results of the inside and outside point cloud models of the Mevlana Museum. dx, dy, dz are the average residuals of the check-point coordinates; σx, σy, σz are their standard deviations.

| Point Cloud | Control Points # | Check Points # | dx (m) | dy (m) | dz (m) | σx (m) | σy (m) | σz (m) |
|---|---|---|---|---|---|---|---|---|
| Outside | 6 | 10 | 0.005 | −0.002 | −0.004 | 0.022 | 0.013 | 0.007 |
| Inside | 9 | 11 | 0.014 | −0.004 | 0.007 | 0.020 | 0.026 | 0.026 |
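For completeness, the check-point statistics reported in Table 1 can be reproduced with a few lines of NumPy once the rigid-body georeferencing transform has been applied (for instance with a routine such as the rigid_transform_3d sketch in Section 5.1); the two arrays below are hypothetical placeholders:

```python
import numpy as np

# chp_model: (N, 3) check-point coordinates read from the georeferenced point cloud
# chp_geodetic: (N, 3) coordinates of the same points from the geodetic network
residuals = chp_geodetic - chp_model
mean_dxyz = residuals.mean(axis=0)          # average residuals dx, dy, dz (Table 1)
std_dxyz = residuals.std(axis=0, ddof=1)    # standard deviations sigma_x, sigma_y, sigma_z
rmse_3d = float(np.sqrt((residuals ** 2).sum(axis=1).mean()))  # overall 3-D RMSE on the ChPs
```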


Table 2. The quality assessment criteria (df) for the outside 3-D point cloud model of the Mevlana Museum.

| Control/Check Point Line | ITRF-2005 Distance (m) | TLS Distance (m) | Discrepancy (m) |
|---|---|---|---|
| 522–525 | 13.716 | 13.705 | −0.011 |
| 525–510 | 3.066 | 3.052 | −0.014 |
| 510–521 | 8.880 | 8.902 | 0.021 |
| 521–524 | 4.207 | 4.218 | 0.011 |
| 524–523 | 6.314 | 6.278 | −0.035 |
| 523–520 | 24.658 | 24.654 | −0.004 |
| 520–519 | 9.157 | 9.167 | 0.010 |
| 519–514 | 38.744 | 38.714 | −0.030 |
| 514–515 | 3.566 | 3.541 | −0.024 |
| 515–4519 | 34.237 | 34.255 | 0.018 |
| 4519–3512 | 2.535 | 2.565 | 0.030 |
| 3512–4514 | 9.661 | 9.647 | −0.014 |
| 4514–4511 | 1.611 | 1.573 | −0.038 |
| 4511–4520 | 8.293 | 8.332 | 0.038 |
| 4520–4518 | 1.974 | 1.965 | −0.008 |

df = 0.023 m

Table 3. The quality assessment criteria (df) for the inside 3-D point cloud model of the Mevlana Museum.

| Control/Check Point Line | ITRF-2005 Distance (m) | TLS Distance (m) | Discrepancy (m) |
|---|---|---|---|
| 707–708 | 1.277 | 1.295 | −0.018 |
| 708–722 | 15.951 | 15.982 | −0.030 |
| 722–723 | 1.161 | 1.148 | 0.013 |
| 723–721 | 3.089 | 3.054 | 0.035 |
| 721–715 | 21.408 | 21.425 | −0.017 |
| 715–716 | 2.440 | 2.445 | −0.004 |
| 716–724 | 17.087 | 17.060 | 0.027 |
| 724–725 | 1.252 | 1.222 | 0.030 |
| 725–706 | 18.098 | 18.106 | −0.008 |
| 706–726 | 13.440 | 13.411 | 0.029 |
| 726–727 | 0.680 | 0.694 | −0.015 |
| 727–710 | 15.200 | 15.169 | 0.031 |
| 710–700 | 25.912 | 25.912 | 0.001 |
| 700–701 | 1.296 | 1.316 | −0.020 |
| 701–704 | 2.328 | 2.365 | −0.037 |
| 704–717 | 5.136 | 5.117 | 0.019 |
| 717–719 | 3.217 | 3.249 | −0.032 |
| 719–712 | 23.744 | 23.754 | −0.010 |
| 712–714 | 2.559 | 2.594 | −0.035 |

df = 0.024 m
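The quality criterion df is not defined explicitly in the text; numerically, it matches the root mean square of the tabulated distance discrepancies (0.023 m over the 15 outdoor lines and 0.024 m over the 19 indoor lines), so it presumably corresponds to:

```latex
d_f = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(d_i^{\mathrm{TLS}} - d_i^{\mathrm{ITRF}}\right)^{2}}
```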

7. Texture Mapping

Although the 3-D point cloud model represents the shape of the measured object, it does not have texture data to render a realistic visualization.


When the texture data of the object is requested to be recorded in addition to the geometric data, a photorealistic texture has to be mapped onto the point cloud after meshing to obtain a continuous surface connecting the discrete points. Thus, both geometry and texture data can be visualized together. In order to map the texture, first the triangulated (mesh) model surface has to be derived from the raw point cloud [48]. The triangulated model depicts the model shape with triangles adjacent to each other over the object surface.

Texturing images over a mesh requires the knowledge of the camera interior orientation and the exterior orientation of all stations [49]. The former can be obtained from the preliminary camera calibration. The latter may be computed by following one of the photogrammetric procedures for image orientation. In this case, the presence of several images, which are not well connected in a photogrammetric network, led us to prefer the space resection method for computing the exterior orientation in an independent way for each camera station. This task, theoretically, requires at least three corresponding control points (CP) visible in both the image and the mesh. Alternatively, ground control points (GCP) can be used. Due to some numerical problems (space resection models are non-linear), the computational methods usually require more points, a fact that may also increase the redundancy of the observations.

The triangulated surface was created in PolyWorks® software. Unlike many software packages, in which the triangulation process is 2.5-D, in PolyWorks a real 3-D data triangulation can be done. After creating the mesh model, editing was required to fix some residual problems. The triangulated image of the dome belonging to the Semahane can be seen in Figure 12.

Figure 12. The point cloud (top-left), solid (top-right), and mesh (below) visualization of Semahane's dome.
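The space-resection step described above can be illustrated with OpenCV's PnP solver. This is a hedged sketch, not the PI-3000 workflow actually used in the project: obj_pts, img_pts, and mesh_vertices are hypothetical arrays (object-space control points measured on the mesh, their image coordinates, and the mesh vertices), while the interior-orientation values follow the Nikon D80 parameters given in Section 4.3 with the principal point assumed at the image centre.

```python
import numpy as np
import cv2


def orient_image(obj_pts, img_pts, image_size=(3872, 2592), focal_mm=24.0, pixel_um=6.1):
    """Exterior orientation of one calibrated photo by space resection (iterative PnP).

    Needs at least four well-distributed control points visible in both the image and the mesh.
    """
    f_px = focal_mm * 1e3 / pixel_um                   # focal length in pixels (~3934 for the D80)
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0  # assumed principal point
    K = np.array([[f_px, 0.0, cx],
                  [0.0, f_px, cy],
                  [0.0, 0.0, 1.0]])
    ok, rvec, tvec = cv2.solvePnP(np.asarray(obj_pts, np.float64),
                                  np.asarray(img_pts, np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)                         # rotation matrix of the camera station
    return K, R, rvec, tvec


# Once a station is oriented, mesh vertices can be projected into its photo to assign
# texture coordinates, e.g.:
#   K, R, rvec, tvec = orient_image(obj_pts, img_pts)
#   uv, _ = cv2.projectPoints(np.asarray(mesh_vertices, np.float64), rvec, tvec, K, None)
```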

PI-3000 software (ver. 3.21) was used to map the photo-texture onto the triangulated model of the museum. The triangulated model from PolyWorks was transferred to the PI-3000 software using the DXF format. The images captured by the Nikon D80 camera were also imported into the software. The photographs that would be used for texture mapping were selected interactively. They were registered to the object (point cloud) coordinate system with conjugate points chosen from the photograph and the point cloud. Then, 3-D virtual models and orthophoto images [55] were created by mapping photo-textures onto the triangulated model (Figure 13). Similarly, the triangulated and texture-mapped models of the Mihrabs in the Huzur-ı Pir section and the door of the Masjid were created (Figure 14).

Figure 13. Orthophoto and texture-mapped 3-D model of the dome of Semahane.

Figure 14. Texture-mapped 3-D models of the Mihrabs in Huzur-ı Pir section (left, middle) and the door of the Masjid (right).

8. 8. Conclusions Conclusions In 3-D model model of the Mevlana Mevlana Museum In this this study, study, aa 3-D of the Museum (Konya, (Konya, Turkey) Turkey) was was created created using using TLS TLS and and ToF imaging datasets. While the application of TLS for 3-D modeling of cultural heritage has been ToF imaging datasets. While the application of TLS for 3-D modeling of cultural heritage has been largely largely proved proved to to be be effective effective in in projects projects carried carried out out so so far, far, here, here, the the usability usability of of aa ToF ToF camera camera in in documentation documentation studies studies of of historical historical structures structures was was investigated. investigated. The The areas areas that that cannot cannot be be measured measured with camera and final unique unique point point cloud with laser laser scanning scanning were were measured measured by by aa ToF ToF camera and aa final cloud model model was was generated generated by by merging merging both both datasets. datasets.In Infact, fact,ToF ToF cameras cameras can can be be handled handled with with ease ease due due to to the the light light weight ofof recording images when in motion. However, the short measurement range weightand andthe thecapability capability recording images when in motion. However, the short measurement of the ToF cameras can be seen as a drawback of this technology, even though this limitation is going range of the ToF cameras can be seen as a drawback of this technology, even though this limitation to be overcome in future in sensors. addition, since the point of ToF camera does not have is going to be overcome futureIn sensors. In addition, since cloud the point cloud of ToF camera doesany not color, it is difficult to select details. These deficiencies limit the use of a ToF camera in full 3-D modeling have any color, it is difficult to select details. These deficiencies limit the use of a ToF camera in full studies. 3-D modeling studies. The distance from thethe camera is The object objectspatial spatialsampling samplingresolution resolutionatatthe themaximum maximummeasurement measurement distance from camera approximately 2 cm. The point clouds recorded by the ToF cameras can be combined using ICP is approximately 2 cm. The point clouds recorded by the ToF cameras can be combined using ICP registration. registration. Moreover, Moreover, integration integration of of point point clouds cloudsfrom from the the ToF ToF camera camera and and laser laser scanner scanner can can be be also also accomplished using ICP. accomplished using ICP. The The terrestrial terrestrial laser laser scanning scanning method, method, which which has has been been used used for for modeling modeling historical historical structures structures for for nearly nearly twenty twenty years, years, has has been been proved proved to to be be an an efficient efficient measurement measurement technique technique in in the the field field of of cultural heritage documentation. The laser scanning of indoor and outdoor surfaces were performed cultural heritage documentation. The laser scanning of indoor and outdoor surfaces were performed in in six six days. days. The Themeasurement measurement of ofthe theroof roofand andindoor indoorcomplex complexdetails detailswas wasespecially especiallytime timeconsuming. consuming. On capacity. 
The terrestrial laser scanning method, which has been used for modeling historical structures for nearly twenty years, has proved to be an efficient measurement technique in the field of cultural heritage documentation. The laser scanning of the indoor and outdoor surfaces was performed in six days; the measurement of the roof and of the complex indoor details was especially time consuming. On the other hand, large-scale data is a burden on computer capacity. The computer that was used to process the data had an Intel Core i5-2400 CPU and 8 GB RAM, and the processing required about five days for completion.


At the end of this project, all the details of the Mevlana Museum were measured and the 3-D virtual model was created. The quality assessment showed that the point cloud models were created with high accuracy despite the large number of measurements and the complex structure of the building. Furthermore, georeferencing of the measurements was carried out with high accuracy by using an adequate number of control points. In addition, texture mapping with digital images recorded by an independent SLR camera increased the information content of the final 3-D model.
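The sketch below illustrates the underlying principle of image-based texturing with a simple pinhole model: each mesh vertex is projected into one oriented, calibrated photograph and the colour at the projected pixel is sampled. It is not the texture-mapping procedure employed in this project; lens distortion, occlusions and multi-image blending are deliberately ignored, and the function name is hypothetical.

```python
import numpy as np

def vertex_colors_from_image(vertices, K, R, t, image):
    """Sample an RGB colour per mesh vertex from one oriented, calibrated photograph.

    vertices : (N, 3) vertex coordinates in the object system
    K        : (3, 3) interior orientation (calibration) matrix of the SLR camera
    R, t     : rotation (3, 3) and translation (3,) from the object to the camera frame
    image    : (H, W, 3) photograph as an array
    """
    cam = vertices @ R.T + t                 # object -> camera coordinates
    visible = cam[:, 2] > 0                  # crude test: keep points in front of the camera
    uvw = cam[visible] @ K.T                 # pinhole projection (homogeneous pixel coordinates)
    uv = uvw[:, :2] / uvw[:, 2:3]            # normalise by depth
    h, w = image.shape[:2]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    colors = np.zeros((len(vertices), 3), dtype=image.dtype)
    colors[visible] = image[v, u]            # nearest-neighbour colour sampling
    return colors, visible
```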

Acknowledgments: This study was funded by The Scientific and Technological Research Council of Turkey (TUBITAK) with project number 112Y025.

Author Contributions: Cihan Altuntas was the primary author of this article. The co-authors Ferruh Yildiz and Marco Scaioni contributed to the article on a roughly equal basis. All authors reviewed the final version of the article.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Seker, D.Z.; Alkan, M.; Büyüksalih, G.; Kutoglu, S.H.; Kahya, Y.; Akcin, H. Kültürel Mirasın Kaydı, Analizi, Korunması ve Yaşatılmasına Yönelik Bir Bilgi ve Yönetim Sisteminin Geliştirilmesi, Örnek Uygulama: Safranbolu Tarihi Kenti; The Scientific and Technological Research Council of Turkey (TUBITAK): Ankara, Turkey, 2011.
2. Gruen, A.; Remondino, F.; Zhang, L. Photogrammetric reconstruction of the Great Buddha of Bamiyan, Afghanistan. Photogramm. Rec. 2004, 19, 177–199.
3. Remondino, F. Heritage recording and 3D modeling with photogrammetry and 3D scanning. Remote Sens. 2011, 3, 1104–1138.
4. Remondino, F.; Boehm, J. Editorial: Theme section, terrestrial 3D modeling. ISPRS J. Photogramm. Remote Sens. 2013, 76, 31–32.
5. Murphy, M.; McGovern, E.; Pavia, S. Historic building information modelling—Adding intelligence to laser and image based surveys of European classical architecture. ISPRS J. Photogramm. Remote Sens. 2013, 76, 89–102.
6. Oreni, D.; Brumana, R.; Torre, S.D.; Banfi, F.; Barazzetti, L.; Previtali, M. Survey turned into HBIM: The restoration and the work involved concerning the Basilica di Collemaggio after the earthquake (L'Aquila). In Proceedings of the ISPRS Technical Commission V Symposium, Riva del Garda, Italy, 23–25 June 2014; pp. 267–273.
7. Scaioni, M.; Vassena, G.; Kludas, T.; Pfeil, J.U. Automatic DEM generation using digital system InduSCAN: An application to the artworks of Milano Cathedral finalized to realize physical marble copies. In Proceedings of the XVIIIth ISPRS Congress, Vienna, Austria, 9–19 July 1996; pp. 581–586.
8. Kedzierski, M.; Fryskowska, A. Methods of laser scanning point clouds integration in precise 3D building modelling. Measurement 2015, 74, 221–232.
9. Blais, F.; Beraldin, J.A. Recent developments in 3D multi-modal laser imaging applied to cultural heritage. Mach. Vis. Appl. 2006, 17, 395–409.
10. El-Hakim, S.; Gonzo, L.; Voltolini, F.; Girardi, S.; Rizzi, A.; Remondino, F.; Whiting, E. Detailed 3D modelling of castles. Int. J. Archit. Comput. 2007, 5, 200–220.
11. Barazzetti, L.; Binda, L.; Scaioni, M.; Taranto, P. Importance of the geometrical survey for structural analyses and design for intervention on C.H. buildings: Application to a My Son Temple in Vietnam. In Proceedings of the 13th International Conference on Repair, Conservation and Strengthening of Traditionally Erected Buildings and Historic Buildings, Wroclaw, Poland, 2–4 December 2009; pp. 135–146.
12. Alsadik, B.; Gerke, M.; Vosselman, G. Automated camera network design for 3D modeling of cultural heritage objects. J. Cult. Hérit. 2013, 14, 515–526.
13. Grussenmeyer, P.; Hanke, K. Cultural heritage applications. In Airborne and Terrestrial Laser Scanning; Vosselman, G., Maas, H.-G., Eds.; Whittles Publishing: Caithness, UK, 2010; pp. 271–290.
14. Akca, D.; Remondino, F.; Novàk, D.; Hanusch, T.; Schrotter, G.; Gruen, A. Recording and modeling of cultural heritage objects with coded structured light projection systems. In Proceedings of the 2nd International Conference on Remote Sensing in Archaeology, Rome, Italy, 4–7 December 2006; pp. 375–382.
15. Grussenmeyer, P.; Alby, E.; Assali, P.; Poitevin, V.; Hullo, J.F.; Smiciel, E. Accurate documentation in cultural heritage by merging TLS and high-resolution photogrammetric data. Proc. SPIE 2011.
16. Alba, M.; Scaioni, M. Comparison of techniques for terrestrial laser scanning data georeferencing applied to 3-D modelling of cultural heritage. In Proceedings of the 3D-ARCH, Zurich, Switzerland, 12–13 July 2007; pp. 1–8.
17. Salvi, J.; Matabosch, C.; Fofi, D.; Forest, J. A review of recent range image registration methods with accuracy evaluation. Image Vis. Comput. 2007, 25, 578–596.
18. Aquilera, D.G.; Gonzalvez, P.R.; Lahoz, J.G. An automatic procedure for co-registration of terrestrial laser scanners and digital cameras. ISPRS J. Photogramm. Remote Sens. 2009, 64, 308–316.
19. Altuntas, C. Pair-wise automatic registration of three-dimensional laser scanning data from historical building by created two-dimensional images. Opt. Eng. 2014, 53, 1–6.
20. Haala, N. The landscape of dense image matching algorithms. In Proceedings of the Photogrammetric Week 2013, Stuttgart, Germany, 9–13 September 2013; pp. 271–284.
21. Barazzetti, L.; Scaioni, M.; Remondino, F. Orientation and 3D modelling from markerless terrestrial images: Combining accuracy with automation. Photogramm. Rec. 2010, 25, 356–381.
22. Remondino, F.; Spera, M.G.; Nocerino, E.; Menna, F.; Nex, F. State of the art in high density image matching. Photogramm. Rec. 2014, 29, 144–166.
23. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97.
24. Remondino, F.; Stoppa, D. ToF Range-Imaging Cameras; Springer: Berlin, Germany, 2013.
25. Rinaudo, F.; Chiabrando, F.; Nex, F.; Piatti, D. New instruments and technologies for cultural heritage survey: Full integration between point clouds and digital photogrammetry. In Digital Heritage; Ioannides, M., Ed.; Springer: Berlin, Germany, 2010; pp. 56–70.
26. Chiabrando, F.; Rinaudo, F. ToF cameras for architectural surveys. In ToF Range-Imaging Cameras; Remondino, F., Stoppa, D., Eds.; Springer: Berlin, Germany, 2013; pp. 139–164.
27. Rinaudo, F.; Chiabrando, F. Calibrating and evaluating a range camera for cultural heritage metric survey. In Proceedings of the XXIV International CIPA Symposium, Strasbourg, France, 2–6 September 2013; pp. 271–276.
28. Republic of Turkey Ministry of Culture and Tourism. Available online: http://www.kulturvarliklari.gov.tr/TR,43870/konya—mevlana-muzesi.html (accessed on 3 December 2012).
29. GEO3D Optech Ilris-3D Specifications. Available online: http://www.geo3d.hr/download/ilris_3d/brochures/ilris_36d.pdf (accessed on 10 January 2013).
30. Vosselman, G.; Maas, H.-G. Airborne and Terrestrial Laser Scanning; Whittles Publishing: Caithness, UK, 2010.
31. Clark, J.; Robson, S. Accuracy of measurements made with a Cyrax 2500 laser scanner. Surv. Rev. 2004, 37, 626–638.
32. Vezočnik, R.; Ambrožič, T.; Sterle, O.; Bilban, G.; Pfeifer, N.; Stopar, B. Use of terrestrial laser scanning technology for long term high precision deformation monitoring. Sensors 2009, 9, 9873–9895.
33. Gikas, V. 3D terrestrial laser scanning for geometry documentation and construction management of highway tunnels during excavation. Sensors 2012, 12, 11249–11270.
34. Hussmann, S.; Ringbeck, T.; Hagebeuker, B. A performance review of 3D ToF vision systems in comparison to stereo vision systems. In Stereo Vision; Bhatti, A., Ed.; InTech: Rijeka, Croatia, 2008; pp. 103–120.
35. Piatti, D.; Rinaudo, F. SR-4000 and CamCube 3.0 Time of Flight (ToF) cameras: Tests and comparison. Remote Sens. 2012, 4, 1069–1089.
36. MesaImaging. SwissRanger SR4000 Overview. Available online: http://www.mesa-imaging.ch/products/sr4000/ (accessed on 9 March 2015).
37. Piatti, D. Time-of-Flight Cameras: Tests, Calibration and Multi Frame Registration for Automatic 3D Object Reconstruction. Ph.D. Thesis, Politecnico di Torino, Torino, Italy, 2010.
38. Lichti, D.D.; Qi, X.; Ahmed, T. Range camera self-calibration with scattering compensation. ISPRS J. Photogramm. Remote Sens. 2012, 74, 101–109.
39. Valverdea, S.A.; Castilloa, J.C.; Caballeroa, A.F. Mobile robot map building from time-of-flight camera. Expert Syst. Appl. 2012, 39, 8835–8843.
40. Oggier, T.; Büttgen, B.; Lustenberger, F.; Becker, G.; Rüegg, B.; Hodac, A. SwissRanger SR3000 and first experiences based on miniaturized 3D-ToF cameras. In Proceedings of the 1st Range Imaging Research Day, Zurich, Switzerland, 8–9 September 2005; pp. 97–108.
41. Boehm, J.; Pattinson, T. Accuracy of exterior orientation for a range camera. In Proceedings of the ISPRS Commission V Mid-Term Symposium, Newcastle upon Tyne, UK, 21–24 June 2010; pp. 103–108.
42. Cui, Y.; Schuan, S.; Chan, D.; Thrun, S.; Theobalt, C. 3D shape scanning with a time-of-flight camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 1173–1180.
43. Altuntas, C.; Yildiz, F. The registration of point cloud data from range imaging camera. Géod. Cartogr. 2013, 39, 106–112.
44. Lichti, D.D.; Jamtsho, S.; El-Halawany, S.I.; Lahamy, H.; Chow, J.; Chan, T.O.; El-Badry, M. Structural deflection measurement with a range camera. ASCE J. Surv. Eng. 2012, 138, 66–76.
45. Qi, X.; Lichti, D.D.; El-Badry, M.; Chan, T.O.; El-Halawany, S.I.; Lahamy, H.; Steward, J. Structural dynamic deflection measurement with range cameras. Photogramm. Rec. 2014, 29, 89–107.
46. Ganapathi, V.; Plagemann, C.; Koller, D.; Thrun, S. Real time motion capture using a single time-of-flight camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 755–762.
47. Clemente, L.A.; Davison, A.J.; Reid, I.D.; Neira, J.; Tardos, J.D. Mapping large loops with a single hand-held camera. In Proceedings of Robotics: Science and Systems, Atlanta, GA, USA, 27–30 June 2007.
48. Previtali, M.; Barazzetti, L.; Scaioni, M. An automated and accurate procedure for texture mapping from images. In Proceedings of the 18th International Conference on Virtual Systems and Multimedia (VSMM), Milan, Italy, 2–5 September 2012; pp. 591–594.
49. Luhmann, T.; Robson, S.; Kyle, S.; Böhm, J. Close Range Photogrammetry: 3D Imaging Techniques; Walter De Gruyter: Berlin, Germany, 2013; p. 702.
50. Yu, Y.; Debevec, P.; Malik, J.; Hawkins, T. Inverse global illumination: Recovering reflectance models of real scenes from photographs. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 8–13 August 1999; pp. 215–227.
51. Yang, H.; Welch, G.; Pollefeys, M. Illumination intensive model-based 3D object tracking and texture refinement. In Proceedings of the 3rd International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06), Chapel Hill, NC, USA, 14–16 June 2006; pp. 869–876.
52. PolyWorks Software, version 9.0; InnovMetric Software Inc.: Ville de Québec, QC, Canada, 2007.
53. Pomerleau, F.; Colas, F.; Siegwart, R.; Magnenat, S. Comparing ICP variants on real-world data sets. Auton. Robot. 2013, 34, 133–148.
54. Schofield, W.; Breach, M. Engineering Surveying, 6th ed.; Butterworth-Heinemann: Oxford, UK, 2007.
55. Yastikli, N. Documentation of cultural heritage using digital photogrammetry and laser scanning. J. Cult. Hérit. 2007, 8, 423–427.

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).