Three-dimensional image orientation through only one rotation applied to image processing in engineering

Jaime Rodríguez,1,* María T. Martín,1 José Herráez,2 and Pedro Arias3

1 Polytechnic High School, University of Santiago de Compostela, 27002 Lugo, Spain
2 Cartographic Engineering School, Polytechnic University of Valencia, 46022 Valencia, Spain
3 Mining School, University of Vigo, Rua Maxwell s/n, 36310 Vigo, Spain

*Corresponding author: [email protected]

Received 28 April 2008; revised 3 October 2008; accepted 23 October 2008; posted 3 November 2008 (Doc. ID 95541); published 8 December 2008
Photogrammetry is a science with many fields of application in civil engineering, where image processing is used for different purposes. In most cases, multiple images are used simultaneously for the reconstruction of 3D scenes. However, the use of isolated images is becoming more and more frequent, and for them it is necessary to calculate the orientation of the image with respect to the object space (exterior orientation), which is usually done through three rotations determined from known points in the object space (Euler angles). We describe the resolution of this problem by means of a single rotation through the vanishing line of the image space, completely external to the object, that is, without any contact with it. The results obtained appear to be optimal, and the procedure is simple and of great utility, since no points on the object are required, which is very useful in situations where access is difficult. © 2008 Optical Society of America

OCIS codes: 080.0080, 100.0100, 110.0110.
1. Introduction
An advantage of photogrammetry over other measuring techniques is its capacity to take measurements without any contact with the object, for which it is necessary to determine the relationship between the image and object spaces in terms of their exterior orientation and scale. Close range photogrammetry applications have been addressed in recent years by various commercial software packages, such as Elcovision [1], Iwitnessphoto [2], and Photomodeler [3] among others, as well as by researchers who aim to shorten the gap between photogrammetry and nonspecialized users in engineering applications. These efforts are mainly based on the use of low cost digital cameras and on dispensing with topographic methods and equipment for the measurement of ground control points. Ethrog presented a photogrammetric
method for determining the interior orientation parameters and the orientation angles using objects with parallel and perpendicular lines instead of control points [4]. The concept of linear feature was introduced to represent the formulation for photogrammetric observations, so that linear features can be combined in an adjustment procedure [5]. The use of a single image with geometric restrictions to rebuild objects was studied by Van Den Heuvel [6]; the restrictions are based on geometric relationships among straight lines, such as coplanarity, parallelism, perpendicularity, symmetry, and distance. A similar treatment was applied to obtain the camera interior orientation parameters from a photographic shot of unknown scale by introducing a distance in the object space [7]. Arias et al. used a simple close range photogrammetry method to document agroindustrial constructions in Galicia (Spain), based on the use of a calibrated conventional digital camera and on vertical plumb lines that allow the orientation of the generated models [8].
The applications of this research in engineering are very diverse, including fields such as construction and cultural heritage, where it is usual to measure objects through direct methods using tapes and plumb lines. The substitution of these methods by indirect ones based on close range photogrammetry can bring advantages such as improved measurement precision, time reduction, elimination of coarse errors, and the creation of a digital record of the photographed object, and is also valuable from the standpoint of personal safety, since it limits the operator's movement around buildings under construction. The relationship between the scales of the image and object spaces should therefore be set externally to the object. To that purpose, Arias et al. [8] introduced marks in the orientation plumb lines, setting at the same time the scale between the spaces for measuring the agroindustrial constructions. Tommaselli and Lopes Reiss presented a photogrammetric method based on the use of a camera and a handheld lasermeter, with which the dimensions of flat surfaces are scaled through a single picture and the measurement of the distance to the building surfaces [9]. More recently, a device for measuring topographic surfaces from photos and laser measurements, based on the same idea but including systems of mirrors to align the axes of the devices, has been developed [10].

This research focuses on the spatial orientation of the image without any contact with the object (no ground control points). The methodology, based on a single photographic image, conducts the exterior orientation between the image and object spaces through the vanishing line of the image space, that is, the intersection of the image plane with a plane parallel to the object plane passing through the optical center of the camera, by means of a single rotation obtained through a change of the coordinate system centered on the principal point of the photogram. This enables multiple applications in construction as well as in the fields of robotics and artificial vision, since only a single piece of dimensional data needs to be introduced subsequently to scale a composition of multiple isolated images.

2. Image Orientation
Usually, the 3D coordinate system defined by three axes centered on the principal point of the image is not parallel to the corresponding coordinate system of the object space. Determining the relationship between both systems is simple through the calculation of the three rotations between them (Euler angles), according to Eq. (1):
\[
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} =
\begin{pmatrix}
\cos\phi\cos\chi & \cos\omega\sin\chi + \sin\omega\sin\phi\cos\chi & \sin\omega\sin\chi - \cos\omega\sin\phi\cos\chi \\
-\cos\phi\sin\chi & \cos\omega\cos\chi - \sin\omega\sin\phi\sin\chi & \sin\omega\cos\chi + \cos\omega\sin\phi\sin\chi \\
\sin\phi & -\sin\omega\cos\phi & \cos\omega\cos\phi
\end{pmatrix}
\begin{pmatrix} x \\ y \\ c \end{pmatrix}, \qquad (1)
\]
where (X, Y, Z) are the coordinates of a point in the object space, (x, y, c) are its corresponding coordinates in the image space, and (ω, φ, χ) are the three rotations to be determined between both systems. The solution of the system relies on knowing the coordinates (X, Y, Z) of points in the object space, which is not always possible. To solve this problem we appeal to geometric constraints, such as the use of parallel and perpendicular straight lines in the object space [12], from which it is possible to calculate the vanishing line they generate in the image space, and through it the orientation can be made. In this way the relationship between the two spaces requires only a photograph and, subsequently, knowledge of a scaling factor (λ) if magnitudes on the object are to be measured. In our case, since the objective of the research is the exterior orientation between the image and object spaces applied to the measurement of magnitudes of elements of the object, independently of their absolute spatial position, we use a variant of the Van Den Heuvel method that restricts the rotation calculation, through the determination of the vanishing line in the image space, to a single rotation combined with a change of the image coordinate system. Thereby we obtain the orientation with a single rotation, whereas the algorithm of Van Den Heuvel requires the three Euler rotations for the orientation because it keeps the coordinate system fixed in the image and uses the geometric restrictions on the element.
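For reference, Eq. (1) can be written compactly in code. The following is a minimal sketch (Python with numpy); the function name and the way the angles are passed are choices of this sketch, not taken from the paper:

```python
import numpy as np

def euler_rotation_matrix(omega, phi, chi):
    """Rotation matrix of Eq. (1) relating image coordinates (x, y, c) to
    object-space directions; omega, phi, chi are the Euler angles in radians."""
    cw, sw = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    cc, sc = np.cos(chi), np.sin(chi)
    return np.array([
        [cp * cc,  cw * sc + sw * sp * cc,  sw * sc - cw * sp * cc],
        [-cp * sc, cw * cc - sw * sp * sc,  sw * cc + cw * sp * sc],
        [sp,       -sw * cp,                cw * cp],
    ])

# Direction of an object-space point, up to scale, as in Eq. (1):
# XYZ = euler_rotation_matrix(omega, phi, chi) @ np.array([x, y, c])
```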
A. Calculation of the Vanishing Lines
The determination of the vanishing line requires the calculation of the vanishing points resulting from the intersections generated by pairs of straight lines that are parallel in the object space, whose images in the photogram converge because of the conical projection. The vanishing line can be calculated either from two pairs of parallel lines, which yields a unique solution, or by a least-squares adjustment of a solution formed by numerous intersections, obtained from different pairs of lines of the same plane or even from parallel planes [13]. Given two pairs of straight lines defined by points 1, 2, 3, and 4 in Fig. 1, the equation of the vanishing line is
\[ (y - y_a) = \frac{(x - x_a)(y_b - y_a)}{x_b - x_a}, \qquad (2) \]
where (xa, ya) and (xb, yb) are the points resulting from the intersection of straight lines 1–4 and 2–3 for point a, and of straight lines 1–2 and 3–4 for point b, in the image space.
Fig. 1. Calculation of the vanishing line.

Fig. 2. Image and object coordinate systems. Geometric determination of the vanishing line.
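As an illustration of this construction, here is a minimal sketch in Python with numpy (the helper names and the sample coordinates are hypothetical, chosen only for the example): the two pairs of imaged parallel lines are intersected to obtain a and b, which define the vanishing line of Eq. (2).

```python
import numpy as np

def hline(p, q):
    """Homogeneous representation of the image line through points p and q."""
    return np.cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def meet(l1, l2):
    """Intersection of two homogeneous lines, returned as (x, y)."""
    x, y, w = np.cross(l1, l2)
    return np.array([x / w, y / w])  # w -> 0 if the imaged lines were parallel

# Image coordinates (in mm) of points 1-4 of Fig. 1; illustrative values only.
p1, p2, p3, p4 = map(np.array, [(-4.1, -3.0), (3.8, -2.6), (4.0, 2.9), (-4.3, 2.5)])

a = meet(hline(p1, p4), hline(p2, p3))   # vanishing point a: lines 1-4 and 2-3
b = meet(hline(p1, p2), hline(p3, p4))   # vanishing point b: lines 1-2 and 3-4
vanishing_line = hline(a, b)             # the line of Eq. (2), through a and b
```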
The calculation of a and b is done either directly, by marking the points, or through automatic edge extraction and recognition algorithms [14]. The first procedure suffers from errors caused by the accuracy of the point marking, while the second is affected by the lighting conditions and the radiometry of the object space (especially outdoors), so that on many occasions the recognition of edges turns out to be invalid. When proceeding through manual marking, a final check must be added in order to automate the calculation; it ensures that the calculation of the vanishing line is consistent with the order in which the vertex coordinates are entered. An error in the capture order (cross signaling) is easily detected through

\[ \delta = \frac{x - x_1}{x_2 - x_1} = \frac{y - y_1}{y_2 - y_1}, \qquad (3) \]

where, if the result is 0 ≤ δ ≤ 1, the calculated intersection point is estimated to lie between points 1 and 2 (in the same way as it would be calculated with points 3 and 4), in which case a mistake has been made and the process must be repeated.

B. Image Orientation

Since the vanishing line is the intersection of the image plane with a plane parallel to the object plane passing through the optical center of the camera (Fig. 2), the vanishing line is parallel to the object plane and, at the same time, parallel to the image plane (as it is contained in it). Therefore, using the vanishing line as the axis of rotation of the image plane, both spaces can be oriented with a single rotation.

To do so, it is only necessary to apply a single rotation ψ to the coordinate system centered on the principal point of the image, placing one of its axes parallel to the vanishing line and thereby creating a new image coordinate system [Fig. 3(a)]. This rotation does not affect the spatial position of the image: it is applied to the coordinate system, so the coordinates of each point in the image are recalculated with respect to the new system while the image itself remains invariant in space. If A and B are the two vanishing points, with image coordinates (xa, ya) and (xb, yb), respectively, obtained by intersecting in the image the straight lines that are parallel in the object space, the angle is given by

\[ \psi = \arctan\frac{x_a - x_b}{y_a - y_b}. \qquad (4) \]

Once the image coordinate system is positioned, the orientation of the image consists of a rotation ξ about the x′ axis parallel to the vanishing line:

\[ \xi = \arctan\frac{c}{d}, \qquad (5) \]

where c is the focal distance of the camera and d is the distance between the principal point and the vanishing line in the image space [Fig. 3(b)], given, as a function of the coordinates of a and b, by

\[ d = \frac{|y_a x_b - y_b x_a|}{\sqrt{(x_b - x_a)^2 + (y_b - y_a)^2}}. \qquad (6) \]

Fig. 3. Image orientation: (a) change of coordinate system in the image space, rotation ψ; (b) image spatial orientation, rotation ξ.
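A small sketch of Eqs. (4)–(6) under the same assumptions as the previous snippet (np.arctan2 is used instead of arctan purely for numerical robustness; a and b are the vanishing points computed above, and c is the calibrated focal distance in the same units as the image coordinates):

```python
import numpy as np

def orientation_from_vanishing_points(a, b, c):
    """Angles of Eqs. (4)-(6): psi aligns one image axis with the vanishing
    line; xi is the tilt of the image plane about that axis."""
    xa, ya = a
    xb, yb = b
    psi = np.arctan2(xa - xb, ya - yb)                       # Eq. (4)
    d = abs(ya * xb - yb * xa) / np.hypot(xb - xa, yb - ya)  # Eq. (6)
    xi = np.arctan2(c, d)                                    # Eq. (5)
    return psi, xi

# psi, xi = orientation_from_vanishing_points(a, b, c=20.2157)  # focal in mm
```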
C. Scaling Factor Between the Two Spaces
The orientation of the image calculated by this method is relative, since one rotation, the one that would provide the absolute orientation, has been omitted. However, that rotation is completely irrelevant in processes based on isolated photographs (without stereoscopy): any magnitude can still be measured, since magnitudes are given by differences of coordinates, regardless of the orientation of the object space. For the measurement of magnitudes it is only necessary to calculate a scaling factor λ between both spaces. The scaling can be obtained through any of the conventional methods based on a single known magnitude, by means of two or more control points of known coordinates on the object, or by using the new hybrid laser-based systems [9,10], through the reduced collinearity condition [15]:

\[ \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \lambda R_{\xi} \begin{pmatrix} x' \\ y' \\ c \end{pmatrix} = \lambda R_{\xi} R_{\psi} \begin{pmatrix} x \\ y \\ c \end{pmatrix}, \qquad (7) \]

where Rξ is the rotation matrix of angle ξ about the x′ axis (the spatial rotation of the image) and Rψ is the rotation matrix of angle ψ applied to the image coordinate system (a rotation of the coordinate system without spatial variation).
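A minimal sketch of how Eq. (7) could be applied to measure a magnitude from a single image (the sign conventions of the two rotation matrices and the helper names are assumptions of this sketch; the paper does not spell them out):

```python
import numpy as np

def Rz(t):
    """Rotation psi of the image coordinate system about the optical axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(t):
    """Rotation xi about the x' axis (the axis parallel to the vanishing line)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def image_to_object(pt, c, psi, xi, lam=1.0):
    """Eq. (7): object-space coordinates of image point pt = (x, y),
    up to the relative orientation discussed in the text."""
    return lam * Rx(xi) @ Rz(psi) @ np.array([pt[0], pt[1], c])

# The scale factor lam can be fixed from one known distance D between two
# marked image points p and q:
# lam = D / np.linalg.norm(image_to_object(p, c, psi, xi)
#                          - image_to_object(q, c, psi, xi))
```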
3. Results
To test the methodology it is necessary to employ an object space over which magnitudes and/or support points are known, so that the scale factor can be provided as data and spatial magnitudes can be calculated with both the implemented and the conventional methodology and the results compared. Numerous shots have been taken on a calibrated panel from different angles of inclination and distances. Any error or difference obtained will be due to the orientation of the image, since a common scale is the starting point in both cases.
Fig. 4. Photographic shot at a distance of approximately 3 m with a slope of approximately 40g over a rectangular panel of known dimensions.
To prove the suitability of the system, a series of tests was carried out using the following instruments:

1. Calibrated camera: Canon EOS 10D
Focal distance: 20.2157 mm
Format: 22.5203 mm × 15.0132 mm (3072 × 2048 pixels)
Principal point: (11.1601, 7.5245)
Distortion parameters: K1 = 4.034e−5 ± 2.2e−6 mm−1, K2 = −1.726e−5 ± 2.9e−6 mm−3

2. Calibrated panel
Dimensions: 0.798 × 0.399 m (forming a mesh of 9 × 5 marks)

In order to determine the precision of the implemented algorithm, several tests were made measuring over a rectangular panel of known dimensions placed on a wall. In the first place, a test conducted at a distance of approximately 3 m with a slope of approximately 40g is shown (Fig. 4). Once the image has been oriented, the four corners of the panel are marked on the image and their coordinates calculated, and then the two dimensions of the panel (height and width) are obtained. Table 1 shows the real length of the two sides of the panel and the lengths measured with the proposed method, taking as scale factor the known distance between two marks of the panel, together with the differences between them (marks are numbered consecutively beginning from the lower left corner).
Table 1. Comparison of the Dimensions of a Calibrated Panel with the Proposed Method for Different Definitions of the Vanishing Line in the Same Image (in Meters)

Vanishing Line (Marks) | Real (H) | Photograph (H) | Difference (H) | Real (V) | Photograph (V) | Difference (V)
1, 9, 19, 27           | 0.798    | 0.803          | 0.005          | 0.399    | 0.402          | 0.003
2, 8, 29, 35           | 0.798    | 0.804          | 0.006          | 0.399    | 0.397          | −0.002
10, 18, 37, 45         | 0.798    | 0.800          | 0.002          | 0.399    | 0.396          | −0.003
3, 7, 30, 34           | 0.798    | 0.801          | 0.003          | 0.399    | 0.403          | 0.004
1, 9, 37, 45           | 0.798    | 0.800          | 0.002          | 0.399    | 0.401          | 0.002

(H = horizontal dimension, V = vertical dimension.)
Fig. 5. Cumulative frequency of absolute error in panel measurement.
The maximum absolute error appears in the horizontal dimension (0.6 cm), whereas the maximum relative error appears in the vertical one (1%), which is coherent with the definition of the vanishing points as a function of the geometry of the element. Each value corresponds to a different definition of the vanishing line. In order to study the cumulative frequency distribution of the absolute errors, obtained by comparing the real dimensions with those given by the proposed method, a square panel of 0.8 × 0.8 m has been used (Fig. 5). A total of 50 measurements of the panel were calculated (hence, 50 values of the error were obtained), each one corresponding to a vanishing line derived from two different pairs of parallel lines. As can be seen, 90% of the errors are less than 5 mm, which amounts to a relative error of 0.6%.

Tests carried out at a distance of about 7 m from different positions are shown in Table 2 (Fig. 6). The results show a lower accuracy caused by the distance to the element, which means lower image resolution, so the marking of the points that lead to the calculation of the vanishing line is less precise. Still, the maximum error obtained at this distance is 3.2 cm in width (a relative error of 4%) and 2.2 cm in height (a relative error of 5.5%). In both cases the best results were obtained when the camera axis is not exactly perpendicular to the wall plane, since in that configuration the vanishing lines are defined more accurately.

The studies carried out from different positions can be grouped into three intervals for short distances to the element (between 1 and 5 m): the interval −70g < α < 70g yields very precise values, while in −100g < α < −70g and 70g < α < 100g the errors caused by the perspective produce a loss of precision that increases very rapidly, reaching inconsistent results in the determination of the vanishing line as α approaches 100g (Fig. 7). However, studies conducted at distances greater than 5 m show a drastic loss of precision outside the range −50g < α < 50g, because the perspective is then strongly affected by any minimal error in the marking of the points that define the vanishing line. In order to maintain the accuracy while increasing the distance, the focal length would have to increase in the same proportion. Likewise, in all cases there is a "black point" within the optimal interval, corresponding to the position exactly perpendicular to the object, for which the vanishing line forms at infinity.
4. Conclusions
The precision obtained in the measurement of elements is directly proportional to the accuracy with which the vanishing line is calculated, and therefore the angle of inclination with respect to the object plane is decisive. However, to analyze the angle of inclination it is first necessary to separate, as two independent operations, the calculation of the vanishing line in the image space and the measurement of elements in the object space, since the vanishing line can be formed from the element being measured or can be totally independent of it. In cases where the element to be measured itself determines the vanishing line, the accuracy obtained will depend on the suitability of that element, both in its geometry and in its position and size within the image. In contrast, in cases where the vanishing line is not determined by the object being measured, what will be crucial, besides its geometric suitability, is whether the selected element includes the element to be measured, the first case yielding greater precision.
Table 2. Comparison of the Dimensions of a Calibrated Panel with the Proposed Method for Different Definitions of the Vanishing Line from Different Inclination Angles (α) (in Meters)

Camera Position | α (approx.) | Real (H) | Photograph (H) | Difference (H) | Real (V) | Photograph (V) | Difference (V)
1               | −70g        | 0.798    | 0.830          | 0.032          | 0.399    | 0.421          | 0.022
2               | −50g        | 0.798    | 0.824          | 0.026          | 0.399    | 0.420          | 0.021
3               | −30g        | 0.798    | 0.823          | 0.025          | 0.399    | 0.417          | 0.018
4               | −10g        | 0.798    | 0.823          | 0.025          | 0.399    | 0.418          | 0.019
5               | 0g          | 0.798    | 0.822          | 0.024          | 0.399    | 0.419          | 0.020
6               | 10g         | 0.798    | 0.820          | 0.022          | 0.399    | 0.419          | 0.020
7               | 30g         | 0.798    | 0.820          | 0.022          | 0.399    | 0.418          | 0.019
8               | 50g         | 0.798    | 0.824          | 0.026          | 0.399    | 0.420          | 0.021
9               | 70g         | 0.798    | 0.826          | 0.028          | 0.399    | 0.420          | 0.021

(H = horizontal dimension, V = vertical dimension.)
Fig. 6. Photographic shots at a distance of approximately 7 m from different slopes over a rectangular panel of known dimensions.

Fig. 7. Optimal interval of inclination of the shots in a theoretical horizontal plane with respect to the perpendicular from the object (α decreases with distance).
Separating both concepts (the calculation of the vanishing line on one side and the measurement of the element on the other),
with respect to the determination of the vanishing line we can state that the more regular the object (very dissimilar errors are obtained in determining one of the vanishing points when the geometry is very different in one direction), the shorter the distance to the object, the larger the element in the image (the points defining the vanishing line are then much more accurate), and the more centered it is (the conical projection and the distortions have less effect), the better the vanishing line will be. With regard to the inclination, the displacement along a theoretical horizontal axis, from which the α intervals are obtained, must be combined with similar values of inclination in the vertical, so as never to obtain parallel straight lines in the photogram, which would place one of the points a or b that determine the vanishing line at infinity. Regarding the measurement of the elements, the results of the method are independent of their dimensions (provided they are not too small within the image and/or very off-center) and of the perspective (within the specified ranges in which the definition of the points is unambiguous). The developed methodology enables the measurement of any kind of regular or irregular flat geometry, as well as of an unlimited number of elements lying in the same object plane, as long as there are parallel lines in the image that allow the vanishing line of the plane to be determined with accuracy. Likewise, the exterior orientation can be calculated by determining the vanishing line iteratively across all elements that meet the requirements within the same image. This enables a more accurate estimation and at the same time reveals possible mistakes in the orientation through the residuals
of the adjustment. It is also essential in shots where the object represents a very small portion of the image (either because of its small size or because of its distance from the camera), allowing a better fit. In the same way, it should be noted that including equations for the rectangularity conditions of the objects (if any) gives more consistency to the adjustment by introducing restrictions through unit vectors. However, despite the satisfactory results obtained, the image resolution is a key factor in achieving sufficient accuracy, since the marking of the vanishing points, as well as the marking of the vertices of an element, is directly affected by the pixel size of the image. Also, regarding the data capture that determines the calculation of the vanishing lines, once the developed method has been verified manually, its automation through automatic edge extraction is of great interest. The advantage of these algorithms is that they are very fast, require no operator intervention, and calculate all the possible vanishing lines in the image from the detected straight lines, which, according to the statements above, represents a significant advantage. However, the problems arising from the lighting conditions and the radiometry, which lead to the selection of invalid straight lines, mean that the process should always be reviewed by an operator, allowing his or her interaction with the aim of eliminating invalid data.

References

1. Elcovision, product information at http://www.elcovison.com (accessed January 2008).
2. Iwitnessphoto, product information at http://www.iwitnessphoto.com (accessed January 2008).
3. Photomodeler, product information at http:// (accessed January 2008).
4. U. Ethrog, "Non-metric camera calibration and photo orientation using parallel and perpendicular lines of the photographed objects," Photogrammetria 39, 13–22 (1984).
5. D. C. Mulawa and E. M. Mikhail, "Photogrammetric treatment of linear features," in International Archives of Photogrammetry and Remote Sensing (International Society for Photogrammetry and Remote Sensing, 1988), pp. 383–393.
6. F. A. Van Den Heuvel, "3D reconstruction from a single image using geometric constraints," ISPRS J. Photogramm. Remote Sens. 53, 354–368 (1998).
7. R. M. Haralick, "Determining camera parameters from the perspective projection of a rectangle," Pattern Recogn. 22, 225–230 (1989).
8. P. Arias, C. Ordóñez, H. Lorenzo, and J. Herráez, "Documentation for the preservation of traditional agro-industrial buildings in N.W. Spain using simple close range photogrammetry," Surv. Rev. 38, 525–540 (2006).
9. A. M. G. Tommaselli and M. L. Lopes Reiss, "A photogrammetric method for single image orientation and measurement," Photogramm. Eng. Remote Sens. 71, 727–732 (2005).
10. T. Ohdake and H. Chikatsu, "Evaluation of image based integrated measurement system and its application to topographic survey," in International Archives of Photogrammetry and Remote Sensing (International Society for Photogrammetry and Remote Sensing, 2006).
11. P. R. Wolf, ed., Elements of Photogrammetry, with Air Photo Interpretation and Remote Sensing (McGraw-Hill, 1983).
12. F. A. Van Den Heuvel, "Exterior orientation using coplanar parallel lines," in Proceedings of the 10th Scandinavian Conference on Image Analysis (Lappeenranta, 1997), pp. 71–78.
13. A. Criminisi, I. Reid, and A. Zisserman, "Single view metrology," in Proceedings of the International Conference on Computer Vision (Kerkyra, 1999), pp. 434–441.
14. F. Schaffalitzky and A. Zisserman, "Planar grouping for automatic detection of vanishing lines and points," Image Vision Comput. 18, 647–658 (2000).
15. W. Zhizhuo, Principles of Photogrammetry (with Remote Sensing) (Press of Wuhan Technical University of Surveying and Mapping / Publishing House of Surveying and Mapping, 1991).