Application of Photogrammetry for Measuring Dip and Dip Direction and Creating 3D Models for Slopes and Faces of Underground Works

TrungThanh Dang, Hanoi University of Mining and Geology

Abstract: Manual fracture mapping in tunnels, slopes, caverns, mines and other underground spaces is a time-consuming and sometimes dangerous process. This paper therefore describes Surface Mapper 1.0, a program written in Visual Basic 6.0, and a photogrammetric method for automatically measuring dip and dip direction and creating 3D models of exposed rock faces in underground works and on slopes. In this method, the dip and dip direction of geologic structures, as well as the 3D geometry, are measured from digital images.

1. Introduction

The American Society for Photogrammetry and Remote Sensing (ASPRS) defines photogrammetry as "the art, science, and technology of obtaining reliable information about physical objects and the environment through the processes of recording, measuring, and interpreting photographic images and patterns of electromagnetic radiant energy and other phenomena". The development from classical to digital photogrammetry has simplified field and laboratory work. As a result, beyond the typical topographic mapping applications, photogrammetric techniques are now used in architecture, engineering, geology, archaeology, and medical science, to name a few.

An application of a laser-sensor profilometer for measuring the roughness of rock surfaces was introduced by Lee and Ahn (2004). Aside from the profilometer, a high-resolution Kodak DCS 420 digital camera has also been used for estimating rock surface roughness (Lee and Ahn, 2002). Another method, based on digital photogrammetry, was introduced by Poropat (2001) for mapping rock masses. Each of these methods has its advantages and disadvantages. To reduce some of the disadvantages, the purposes of this study are to develop and improve an algorithm for an automatic measurement system, offering a new technique for measuring geologic features and estimating tunnel and slope stability accordingly, and to improve the field mapping of excavated surfaces, based on the concepts of photogrammetry. This provides a new way of collecting measurements on exposed rock. The software was developed using Microsoft Visual Basic 6.0. It can measure the orientation of geological structures, model rock surfaces in 3D, and analyze failure criteria.

2. Concepts

2.1. General image matching process

To develop and improve an algorithm for the automatic measurement system, the concepts of photogrammetry are used.
The process is divided into three main stages. The first comprises three basic steps: image acquisition (1), lens distortion calibration (2), and conversion of red, green, and blue (RGB) colors to intensity values (3). The second is the image matching process, which covers the matching itself (4) and the calculation of overlapping areas (5). The last is the calculation of the 3D coordinates of the matched points (6) and the calculation of dip and dip direction using the plane equation (7). The detailed procedure for each step is described in the succeeding sections.


[Figure: two-panel flowchart. Left panel: basic workflow for digital photogrammetry. Right panel: steps for calculating dip and dip direction and creating the 3D model.]

Fig. 1. Flowchart for the image processing

2.2. The three basic steps

Three basic steps need to be completed before calculating plane orientations and generating the 3D model: first, acquiring stereopair images; second, calibrating the camera lens; and third, calculating the intensity values of the images.

2.2.1. Image acquisition

Stereopair images are required when using Surface Mapper (SM). The images of the target object are obtained from photographs taken from left and right camera positions using a Nikon D70S with a 50 mm or 105 mm focal-length lens (Fig. 2). The images of each stereopair should overlap; calculation of the overlap is discussed in detail in Section 2.3.2.

Fig. 2. Setting up the equipment in the field

2.2.2. Lens distortion calibration

Before processing, all images have to be corrected for lens distortion. There are two types of distortion, namely tangential and radial. For checking and correcting lens distortion, this study adopted a calibration method for radial distortion based on the invariability of the cross ratio under perspective projection (Zhang et al., 2003). The two effects of radial distortion are negative and positive distortion (Fig. 3). These distortions are corrected using the equations below to obtain an undistorted image.

For four collinear points a, b, c, d, the cross ratio computed from the corrected coordinates must equal the cross ratio of the ideal (undistorted) image coordinates:

{[x'a(1 + k1ra²) − x'c(1 + k1rc²)][x'b(1 + k1rb²) − x'd(1 + k1rd²)]} / {[x'b(1 + k1rb²) − x'c(1 + k1rc²)][x'a(1 + k1ra²) − x'd(1 + k1rd²)]} = [(xia − xic)(xib − xid)] / [(xib − xic)(xia − xid)]

{[y'a(1 + k2ra²) − y'c(1 + k2rc²)][y'b(1 + k2rb²) − y'd(1 + k2rd²)]} / {[y'b(1 + k2rb²) − y'c(1 + k2rc²)][y'a(1 + k2ra²) − y'd(1 + k2rd²)]} = [(yia − yic)(yib − yid)] / [(yib − yic)(yia − yid)] (1)

Where (x', y') are the distorted coordinates, (xi, yi) are the ideal image coordinates, r is the radial distance of a point from the image center, and k1, k2 are the radial distortion coefficients.
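A common way to apply the one-coefficient radial model underlying equation (1) is to scale each point's coordinates by (1 + k·r²). The sketch below is a minimal illustration, assuming the coefficients k1, k2 and the principal point are already known from the calibration step; it is not the paper's Surface Mapper code.

```python
import numpy as np

# Minimal sketch of one-coefficient radial distortion correction,
# following the model x_u = x_d * (1 + k * r^2) used in equation (1).
# k1, k2 and the principal point are assumed known from calibration.

def undistort_points(pts, k1, k2, center=(0.0, 0.0)):
    """Map distorted pixel coordinates to undistorted ones.

    pts: (N, 2) array of (x, y) measured on the distorted image.
    k1, k2: radial coefficients for x and y (a negative k models
    'negative' barrel distortion, a positive k pincushion distortion).
    """
    pts = np.asarray(pts, dtype=float)
    xy = pts - center                  # coordinates relative to the principal point
    r2 = (xy ** 2).sum(axis=1)         # squared radial distance r^2
    xu = xy[:, 0] * (1.0 + k1 * r2)
    yu = xy[:, 1] * (1.0 + k2 * r2)
    return np.stack([xu, yu], axis=1) + center

# A point at the principal point is unchanged; off-axis points move radially.
corrected = undistort_points([[0.0, 0.0], [100.0, 0.0]], k1=1e-6, k2=1e-6)
```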

Fig. 3. Effect of radial distortion: (a) undistorted grid; (b) negative distortion; (c) positive distortion

2.2.3. Conversion of RGB to intensity values

The RGB color space consists of the three additive primaries red (R), green (G), and blue (B). Spectral components of these colors combine additively to produce a resultant color. The RGB model simplifies the design of computer graphics systems but is not ideal for all applications: the R, G, and B components are highly correlated, which makes the model inefficient for some image processing algorithms. Many processing techniques, such as histogram equalization, work only on the intensity component of an image; these processes use the HSI (hue, saturation, and intensity) color model rather than the RGB model. In this paper, intensity values are used rather than RGB color components. To convert an RGB color to an intensity value, equation 2 is used:

Intensity value = 0.299R + 0.587G + 0.114B (2)

2.3. Image matching process

This section describes how to quickly find matching points on two images using a modified version of Sun's algorithm, how to obtain 3D coordinates from two stereo images using disparity values, and how to calculate the overlapping area.

2.3.1. Matching process

In the matching process, comparing pixel by pixel is not feasible, because a pixel on the left photo can have many candidate matches on the right photo. Therefore, a search region must be used. When a point is selected on the left photo, the surrounding pixels form the correlation window, and the matching point is searched for within a searching window in the right image (Fig. 4). The center coordinates of the searching window on the right image are defined by equation 3:

XR = XL + ShiftX (3)


YR = YL + ShiftY

Where: XL and YL are the coordinates of an arbitrary point on the left image, and ShiftX and ShiftY are the horizontal and vertical deviations between the center pixels of the two images, which are needed to find a matching point.
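The channel conversion of equation (2) above reduces each pixel to a single intensity value before matching. A minimal sketch (the weights are the standard ITU-R BT.601 luma coefficients quoted in the equation):

```python
# Sketch of equation (2): intensity value from one RGB pixel, using the
# 0.299 / 0.587 / 0.114 weights given above (ITU-R BT.601 luma weights).

def rgb_to_intensity(r, g, b):
    """Intensity value for channel values in the 0-255 range."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Because the three weights sum to 1, a gray pixel (equal channels) keeps its value, while green contributes most to the result, matching the eye's sensitivity.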

Fig. 4. Establishing the correlation and searching windows; K and L are the dimensions of the correlation window; v and d are the dimensions of the searching window

For the matching process, the intensity values of the pixels are compared. During this process, the correlation window on the right image moves around the searching window, as shown in Fig. 4b. The search stops once correlation coefficients have been calculated for the whole searching window; if a value exceeds a certain threshold, the corresponding point is taken as the matching point. However, this process is time-consuming if the searching window is large. To save time and obtain the best match between the two images, this study uses a modified version of Sun's algorithm (Sun, 2002) for quickly finding matching points over the whole image.

2.3.2. Approximating the overlapping area

Using the matching points discussed above, the overlapping area between the left and right images can be defined. The overlap region between two images can be determined simply from some seed points on the two images. Suppose the coordinates of a seed point are XL and YL on the left image and XR and YR on the right image (Fig. 5). The deviation between these two sets of coordinates is calculated as follows:

Fig. 5. Seed point on the left and right images and overlap area


ShiftX = XL - XR
ShiftY = YL - YR (4)

Where: ShiftX is the horizontal deviation between the two images, and ShiftY is the vertical deviation between the two images.

2.4. Determination of fracture orientation from recorded coordinates of points

2.4.1. Calculating 3D coordinates

Calculating the coordinates of the target points is the next step in measuring fracture orientations in rock masses. Fig. 6 shows a schematic illustration of the stereoscopic photogrammetric measurement of the 3D coordinates.
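The window-based search of Section 2.3.1 and the shift computation of equation (4) can be sketched together. Normalized cross-correlation is assumed here as the correlation coefficient, since the paper does not spell one out (the modified Sun's algorithm it uses is a faster variant of this brute-force search), and the window sizes are illustrative:

```python
import numpy as np

# Hedged sketch of the correlation-window search (Section 2.3.1) and the
# shift computation of equation (4). Normalized cross-correlation (NCC) is
# an assumed choice of correlation coefficient; window sizes are illustrative.

def ncc(a, b):
    """Normalized cross-correlation of two equally sized intensity windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def find_match(left, right, xl, yl, half=2, search=10):
    """Best-correlating center (xr, yr) in the right image for the
    correlation window centred at (xl, yl) in the left image."""
    tpl = left[yl - half:yl + half + 1, xl - half:xl + half + 1]
    best, best_xy = -2.0, (xl, yl)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            xr, yr = xl + dx, yl + dy
            win = right[yr - half:yr + half + 1, xr - half:xr + half + 1]
            if win.shape != tpl.shape:
                continue  # window falls outside the right image
            c = ncc(tpl, win)
            if c > best:
                best, best_xy = c, (xr, yr)
    return best_xy

# Synthetic stereopair: the right image is the left shifted by (5, 3) pixels.
rng = np.random.default_rng(0)
left = rng.random((60, 60))
right = np.roll(left, (-3, -5), axis=(0, 1))
xr, yr = find_match(left, right, 30, 30)
shift_x, shift_y = 30 - xr, 30 - yr   # equation (4): ShiftX = XL - XR, ShiftY = YL - YR
```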

Formulas to calculate the coordinates (X, Y, Z) from a matched point pair (x1, y1) on the left image and (x2, y2) on the right image, where C is the focal length, D is the distance between the two camera lenses, and (x1 - x2) is the disparity:

X = x1·D / (x1 - x2)
Y = y1·D / (x1 - x2)
Z = C·D / (x1 - x2) (5)

Fig. 6. Basic concept of photogrammetry

Consider two stereo photographs A and B taken by cameras of focal length C at positions O1 and O2, respectively. The straight lines connecting an index point (Pt1) on the plane with the corresponding points on the photographs (P1, P2) pass through O1 and O2 of the left and right cameras. If the straight line between the center of the right lens (O2) and the point on the photograph (P2) is translated to the left image as in Fig. 6b, triangle O1-P1-P2 can be formed, and it is similar to triangle Pt1-O1-O2. From the relationship between these two similar triangles, equation 5 can be established. Since C is known, as is the distance D between the two camera lenses, the 3D coordinates of the rock surface can easily be measured using equation 5 (Hwang, 2001).

2.4.2. Defining a plane to represent the fracture surface

To measure a plane, at least three points are selected on the target, and equation 6 is used to define the plane:

Z = a + bX + cY (6)

If more than three points are selected, the least squares method is used (http://www.efunda.com/math/leastsquares/lstsqrzrwtxyld.cfm).
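Equations (5) and (6) can be combined into a short sketch: triangulate 3D points from matched image coordinates, then fit the plane Z = a + bX + cY by least squares. The parallel-camera geometry of Fig. 6 and known C and D are assumed; function names are illustrative, not the paper's code.

```python
import numpy as np

# Sketch of equations (5) and (6): recover 3D points from matched image
# coordinates, then fit the plane Z = a + bX + cY by least squares.
# C (focal length) and D (camera baseline) are assumed known.

def to_3d(x1, y1, x2, C, D):
    """Equation (5): 3D coordinates of a matched pair; (x1 - x2) is the disparity."""
    d = x1 - x2
    return np.array([x1 * D / d, y1 * D / d, C * D / d])

def fit_plane(points):
    """Equation (6): fit Z = a + bX + cY to an (N, 3) array of XYZ points.

    Returns the coefficients (a, b, c) of the best-fitting plane.
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

# Synthetic check: points sampled from Z = 1 + 2X + 3Y are recovered exactly.
pts = [(0, 0, 1), (1, 0, 3), (0, 1, 4), (1, 1, 6), (2, 1, 8)]
a, b, c = fit_plane(pts)
```

With exactly three points the system is determined and `lstsq` returns the unique plane; with more points it minimizes the squared Z residuals, as the paper's cited least-squares reference does.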


2.4.3. Determining the parameters of fracture orientation

The orientation of a discontinuity is given by two angles, known as the dip angle (DA) and the dip direction (DD). DA is the angle between the horizontal plane and the discontinuity plane, whereas DD is the azimuth (clockwise with respect to North) of the projection of the discontinuity plane onto the horizontal plane (Fig. 7). Once the plane of a fracture surface is defined, its orientation parameters DA and DD can be readily determined.

Fig. 7. Dip and Dip direction angle definition

a. Dip angle (DA)

DA, which lies in the range 0° to 90°, is the angle between the plane of a fracture surface and a horizontal plane (Fig. 7). The fracture plane can be defined by at least three points captured on the exposed fracture surface.

Fig. 8. Determination of the DA of a fracture plane

The normal vector h of the fracture plane (Fig. 8), written as bX + cY - Z + a = 0, is h = (b, c, -1), and the normal vector of the horizontal plane (the X-Z plane, Y = 0) is hn = (0, 1, 0). The dot product of these two vectors is:

h · hn = |hn| |h| cos θ

(Råde and Westergren, 1998). Therefore, the angle θ between these two planes can be determined from:

cos θ = (hn · h) / (|hn| |h|) = c / √(b² + c² + 1)

Then, the Dip angle is given by:


DA = arccos[ c / √(b² + c² + 1) ] (7)

b. Dip direction (DD)

The orientation of the fracture plane is further defined by the dip direction (DD). DD is the positive angle from the North direction, measured clockwise to the horizontal projection of the fall line when looking downwards. DD is perpendicular to the strike (Fig. 9), which is the line formed by the intersection of the fracture surface with an imaginary horizontal plane. The normal vector can easily be converted to the strike of the plane by the following equation (Priest, 1983):

Strike = arctan(b/c) + Q (8)

Where: Q is an angle, in degrees, which ensures that the strike lies in the correct quadrant, in the range from 0° to 360°. This parameter depends on the signs of b and c, as listed in Table 1.

Table 1. Determining the value of Q

b | c | Q
0 | ≥0 | ≥0
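Since Table 1 enumerates quadrant cases sign by sign, an equivalent and more compact route in code is `math.atan2`, which resolves the quadrant from the signs of b and c directly. This is a hedged sketch of equation (8) under that substitution, not the paper's tabulated Q:

```python
import math

# Hedged sketch of equation (8): math.atan2 performs the same quadrant
# correction that the angle Q of Table 1 encodes, using the signs of b and c.

def strike_degrees(b, c):
    """Strike azimuth in [0, 360) from the coefficients of Z = a + bX + cY."""
    return math.degrees(math.atan2(b, c)) % 360.0

# DD is perpendicular to the strike; which of the two perpendicular
# directions is the dip direction depends on the down-dip sense of the plane.
```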