
Line-Based Modified Iterated Hough Transform for Autonomous Single-Photo Resection

Ayman F. Habib, Hsiang Tseng Lin, and Michel F. Morgan

Photogrammetric Engineering & Remote Sensing, Vol. 69, No. 12, December 2003, pp. 1351–1357. © 2003 American Society for Photogrammetry and Remote Sensing.

Abstract
Automatic single photo resection (SPR) remains one of the challenging problems in digital photogrammetry. Earlier attempts to automate the space resection task were mainly point-based, where image-point primitives are first extracted and matched with their object counterparts. The matched primitives are then used to estimate the exterior orientation parameters (EOP). However, visibility and uniqueness of distinct control points in the input imagery limit robust automation of the pose estimation procedure. Recent advances in digital photogrammetry mandate adopting higher-level primitives such as control linear features replacing traditional control points. Linear features can be automatically extracted in image space. On the other hand, object-space control linear features can be extracted from an existing GIS layer containing 3D vector data such as a road network and/or terrestrial mobile mapping systems (MMS). In this paper, we present a line-based approach for simultaneously determining the position and attitude of the involved imagery as well as establishing the correspondence between image- and object-space features. This approach is motivated by the fact that captured imagery over a man-made environment is rich in straight-line segments. Moreover, free-form linear features can be reliably represented with sufficient accuracy by a sequence of straight-line segments (i.e., polylines). The suggested methodology starts by establishing a general mathematical model for relating conjugate straight-line segments to the EOP of the image under consideration. Then, a Modified Iterated Hough Transform (MIHT) strategy is adopted to derive the correspondence between image and object primitives as well as the position and the attitude of the involved imagery. This approach does not necessitate having one-to-one correspondence between image- and object-space primitives, which makes it robust against changes and/or discrepancies between the primitives. The parameter estimation and matching processes follow an optimum sequential procedure, which depends on the sensitivity of the mathematical model, relating corresponding primitives with different orientation at various image regions, to incremental changes in the EOP. Experimental results using real data proved the feasibility and robustness of the proposed approach even in the presence of a large percentage of outliers and/or discrepancies between the image- and object-space features.

The authors were with the Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, 470 Hitchcock Hall, 2070 Neil Avenue, Columbus, OH 43210-1275. A.F. Habib and M.F. Morgan are currently with the Department of Geomatics Engineering, University of Calgary, 2500 University Drive NW, Calgary, Alberta, T2N 1N4, Canada ([email protected]; [email protected]). H.T. Lin is currently with the Map Compilation Section, CLC, 7F-3, No. 181, First Kung-Hsueh Street, Taichung, Taiwan, 402 R.O.C. ([email protected]).

Introduction
Pose estimation (space resection) is a prerequisite for a variety of tasks in the fields of photogrammetry and computer vision such as map compilation, surface reconstruction, orthophoto generation, and object recognition. In traditional photogrammetric operations, the single photo resection (SPR) problem is solved through a point-based approach, where at least three non-collinear conjugate points, i.e., targeted control points, are used in a least-squares adjustment based on the well-known collinearity equations. The automation of the SPR using these targets continues to be a challenge due to the radiometric variation of the signal, poor contrast between the signal and the surrounding image content, and the small signal size in pixel units (Gülch, 1994).

Mikhail et al. (1984) attempted to automatically find conjugate points by matching radiometric models of ground control points with their conjugate radiometric patterns in the images. One of the main problems with this approach is that it requires a very good approximation of the control point location in the image, because the pull-in range of the matching is small. Another problem is caused by the difficulty of creating a radiometric model for control points in the images, because gray values depend on light and weather conditions. In addition, radiometric patterns of control points in image space are usually contaminated by noise, which might influence the matching process. Other approaches (Haala and Vosselman, 1992; Drewniok and Rohr, 1996; Drewniok and Rohr, 1997) employed relational matching of points in both object and image space. One could argue that relations between points are not as well defined as relations between linear or higher-level features.

Recent advances in computer technology mandate the use of more general methods, accommodating higher-level primitives such as linear features. Utilizing linear features in various photogrammetric operations is attractive for the following reasons:

• It is easier to automatically extract linear features from imagery rather than points (Kubik, 1991).
• The reliability of the extracted features, which are a collection of sub-entities, is another important factor to be considered. For example, a linear feature consists of a set of connected points along that feature. In other words, such features would increase the system's redundancy and consequently increase the geometric strength and robustness in terms of the ability to detect blunders. Therefore, ambiguities can be resolved, occluded areas can be easily predicted, and/or changes can be detected.
• Higher-level features possess more semantic information about the object space, an important factor which can help in additional processes such as object recognition. For example, certain features could be grouped and their organizations detected for that purpose.




For the SPR problem in particular, one could argue that
• object-space linear features can be easily derived from existing maps, GIS databases, and/or terrestrial mobile mapping systems; and
• automatic SPR is rarely achieved using distinct points.

There has been a substantial body of work dealing with the use of analytical linear features (e.g., straight lines and conic curves) in photogrammetric orientation (Mulawa and Mikhail, 1988; Mikhail, 1993; Habib, 1999; Habib et al., 2000b). On the other hand, very few papers have addressed the use of free-form linear features (Habib and Novak, 1994; Zalmanson, 2000). However, the approaches suggested by these authors assume the knowledge of the correspondence between the object- and image-space features.

Habib and Kelley (2001a) presented a new approach for autonomous SPR using free-form control linear features. Their methodology is based on the MIHT for robust parameter estimation. The MIHT can simultaneously estimate the EOP of the image under consideration and establish the correspondence between image- and object-space features. Linear features were represented by a sequence of 2D and 3D points along the features in the image and object space, respectively. Habib et al. (2001a) expanded this approach to detect changes/discrepancies between corresponding features. Even though Habib and Kelley (2001a) and Habib et al. (2001a) dealt with linear features, the mathematical model remained point-based because it relates object-space points to their counterparts along the image-space features using the collinearity equations. The point-based MIHT strategy for solving the SPR problem is time consuming because the image-space linear features have to be digitized at a high frequency to ensure the availability of conjugate points between the image- and object-space primitives.

In this paper, the single photo resection problem is solved using control straight-line segments in the image and object space together with the MIHT strategy. Using straight-line segments is attractive because aerial images over urban areas are rich in straight lines arising from man-made objects. Moreover, free-form linear features can be reliably represented with sufficient accuracy through a sequence of straight-line segments (i.e., polylines). Finally, using straight lines leads to fewer primitives, which makes the suggested approach much faster when compared to the point-based approach developed by Habib and Kelley (2001a). The suggested methodology starts by introducing a general mathematical model that relates conjugate straight-line segments to the EOP of the involved imagery. The mathematical model is then implemented in a framework based on the MIHT for robust parameter estimation, where we simultaneously estimate the exterior orientation parameters and establish the correspondence between image- and object-space primitives.

In the following section, a brief review of the MIHT as a robust parameter estimation technique is given. Then, the line-based MIHT for single photo resection, including the optimum sequence for parameter estimation, is explained. The implementation of the closest line segment within the line-based MIHT is outlined afterwards. The last two sections present experimental results using real data as well as conclusions and recommendations for future research.

Modified Iterated Hough Transform (MIHT)
Hough (1962) introduced a method for parameter estimation by way of a voting scheme. The basic principle behind this approach was to switch the roles of parameters and spatial variables. The Hough method is usually implemented through an accumulator array, which is an n-dimensional discrete


space, where n is the number of parameters under consideration. In the accumulator array, the cell with the maximum number of hits yields the parameters we are looking for. The variables contributing to the peak in the accumulator array can be tracked and identified.

In a similar manner, the Modified Hough Transform can be used to estimate the parameters of a mathematical model relating conjugate entities of two data sets. In this approach, we assume no knowledge of the correspondence and do not require complete matching between conjugate entities. As a result of the parameter estimation, the correspondence is implicitly determined. The method starts by generating a hypothesis that an entity in the first data set corresponds to an entity in the second data set. The correspondence between conjugate entities of the data sets is expressed by a mathematical function. Using the hypothesized match, this mathematical function yields observation equation(s). The parameters of the mathematical relation can be estimated simultaneously or sequentially, depending on the number of hypothesized matches simultaneously considered. All possible entity matches are evaluated, and the results (parameter estimates) are represented in an accumulator array, which will exhibit a peak at the location of the correct parameter solution. By tracking the matched entities that contributed to the peak, the correspondence is determined.

The number of parameters being simultaneously solved for determines the dimension of the accumulator array. In order to solve for n parameters simultaneously, one must use the number of hypothesized entity matches needed to generate the required n equations. However, this approach is not practical. Simultaneous evaluation of all permutations of entities leads to combinatorial explosion: if there are x entities in data set one and y entities in data set two, solving for n parameters simultaneously would lead to (xy)!/[(xy − n)! n!] combinations (assuming that each matching hypothesis yields one equation). In addition, the memory requirements of an n-dimensional accumulator array create another problem.

Alternatively, the MIHT solves for the parameters sequentially in an iterative manner (starting from some initial/approximate values), updating the approximations after each step. Consequently, the accumulator array becomes one-dimensional and the memory problem disappears. Also, if there are x elements in data set one and y elements in data set two, the total number of evaluated entity matches becomes xy, reducing the computational complexity of the problem. After each iteration, the approximations are updated and the cell size of the accumulator array can be reduced to reflect the improvement in the quality of the approximate values of the unknown parameters. In this manner, the parameters can be estimated with high accuracy. The convergence of this approach depends on the correlation among the parameters and the quality of the initial/approximate values for the unknown parameters; poor approximations require more iterations.

The basic steps for implementing the MIHT for parameter estimation can be summarized as follows:
(1) A mathematical model that relates corresponding entities of the two data sets is established. The relation between the data sets can be described as a function of its parameters.
(2) An accumulator array is formed for the parameters. The accumulator array is a discrete tessellation of the range of expected parameter solutions. The dimension of this array depends on the number of parameters to be simultaneously solved for, which is related to the number of entity pairings simultaneously considered as well as the number of equations provided by a single matching hypothesis.
(3) Approximations are assumed for parameters that have not yet been determined. The cell size of the accumulator array depends on the quality of the initial approximations; poor approximations require larger cell sizes.


(4) Every possible match between individual entities of the two data sets is evaluated, incrementing the accumulator array at the location of the resulting solution.
(5) After all possible matches have been considered, the peak in the accumulator array will indicate the correct solution of the parameter(s). Only one peak is expected for a given accumulator array.
(6) After each parameter is determined (in a sequential manner), the approximations are updated. For the next iteration, the accumulator array cell size is decreased, and steps 2 through 6 are repeated.
(7) By tracking the hypothesized matches that contributed towards the peak in the last iteration, one can determine the correspondence between conjugate entities. These matches are then used in a simultaneous least-squares adjustment to derive a stochastic estimate of the involved parameters.
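To make the sequential voting concrete, the following is a minimal sketch (in Python) of a single MIHT voting step for one parameter. It is not the authors' implementation; the helper pairs_solver (which would invert the mathematical model for one hypothesized match while holding the remaining parameters at their current approximations), the parameter range, and the cell size are illustrative assumptions.

import numpy as np

def miht_vote_one_parameter(pairs_solver, entities_1, entities_2, param_range, cell_size):
    # One MIHT voting step for a single parameter (illustrative sketch).
    # pairs_solver(e1, e2) returns the parameter value implied by hypothesizing
    # that e1 (data set one) matches e2 (data set two), or None if undefined.
    lo, hi = param_range
    n_cells = int(np.ceil((hi - lo) / cell_size))
    accumulator = np.zeros(n_cells, dtype=int)        # one-dimensional accumulator array
    contributors = [[] for _ in range(n_cells)]       # hypothesized matches behind each cell

    for e1 in entities_1:
        for e2 in entities_2:                         # evaluate all x*y pairings
            value = pairs_solver(e1, e2)
            if value is None or not (lo <= value < hi):
                continue
            cell = int((value - lo) / cell_size)
            accumulator[cell] += 1
            contributors[cell].append((e1, e2))

    peak = int(np.argmax(accumulator))                # the peak indicates the solution
    estimate = lo + (peak + 0.5) * cell_size
    return estimate, contributors[peak]

In the full procedure, the returned estimate would update the current approximation, the cell size would be reduced, and the same voting would be repeated for the next parameter.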

A detailed explanation of the MIHT can be found in Habib et al. (2000a), Habib et al. (2001a), and Habib and Kelley (2001a). The MIHT has been successfully implemented in the automatic relative orientation of large-scale imagery over urban areas (Habib and Kelley, 2001b) and automatic surface matching (Habib et al., 2001b).

Line-Based Modified Iterated Hough Transform
As mentioned before, straight-line segments will be used as the basic unit for establishing the correspondence between image- and object-space features. In this section, we briefly explain the concept, the mathematical model, and the methodology of the straight-line-based MIHT for autonomous single photo resection. First, we present the mathematical model that relates conjugate straight-line segments to the EOP of the image under consideration. Afterwards, the methodology for establishing the optimum sequence for parameter estimation is explained.

Mathematical Model
Before explaining the mathematical model, one should settle the issue of representing the image- and object-space straight-line segments. Alternative representations have been analyzed in the literature, and it has been established that the most convenient representation, from a photogrammetric point of view, is the one using two points along the line segments (Habib, 1999). This argument is supported by the following facts:

• Using two points to represent a straight-line segment is more helpful because we define well-localized line segments rather than infinite lines,
• This representation is capable of representing any line segment in space (i.e., it does not have any singularities),
• It will allow us to reliably represent free-form linear features with sufficient accuracy as a sequence of straight-line segments (i.e., polylines), and
• It will lead to a straightforward model for establishing the perspective transformation between image- and object-space line segments.

Therefore, a pair of 2D and 3D distinct points will be required to represent the line segment in the image and object space, respectively (e.g., c, d, A, and B; Figure 1). It has to be noted that the point pairs representing corresponding image- and object-space segments need not be conjugate. In other words, c and d do not necessarily correspond to A and B, respectively, in Figure 1. The mathematical model should force the coplanarity of the perspective center, the image line, and the corresponding object line. A triple product, as in Equation 1, could express such a coplanarity condition:

(v_A \times v_B) \cdot v_c = 0    (1)

where vA and vB are vectors from the perspective center to two points along the object-space straight-line segment (e.g., the two end points defining that segment). On the other hand, vc is a vector from the perspective center to a point along the same line in the image space (e.g., one of the end points defining the image-space line segment; see Figure 1). These three vectors should be expressed with respect to the same reference frame (e.g., the object coordinate system).
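As a numerical illustration only (a sketch, not code from the paper), the constraint can be evaluated as a scalar triple product; the rotation-matrix convention and the symbols used for the interior orientation (principal point xp, yp and principal distance) are assumptions made for this example.

import numpy as np

def rotation_matrix(omega, phi, kappa):
    # Rotation built from omega, phi, kappa (radians), under one common photogrammetric convention.
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    r_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    r_phi = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    r_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return r_omega @ r_phi @ r_kappa

def coplanarity_residual(eop, iop, obj_a, obj_b, img_pt):
    # Scalar triple product (vA x vB) . vc of Equation 1; close to zero for a correct
    # match evaluated with the correct exterior orientation parameters.
    x0, y0, z0, omega, phi, kappa = eop
    xp, yp, principal_distance = iop
    pc = np.array([x0, y0, z0])                    # perspective centre
    v_a = np.asarray(obj_a, dtype=float) - pc      # to the first object point
    v_b = np.asarray(obj_b, dtype=float) - pc      # to the second object point
    x, y = img_pt
    # image vector rotated into the object (ground) reference frame
    v_c = rotation_matrix(omega, phi, kappa) @ np.array([x - xp, y - yp, -principal_distance])
    return float(np.dot(np.cross(v_a, v_b), v_c))

Each end point of a hypothesized image segment would contribute one such constraint, which is consistent with the two independent constraints per line pair noted below.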


Figure 1. Mathematical model relating corresponding straight-line segments.

In addition to the points defining the image- and object-line segments, the coplanarity constraint incorporates the EOP of the image under consideration as well as the interior orientation parameters (IOP) of the involved camera. There are only two independent coplanarity constraints that can be introduced for one pair of corresponding line segments. Therefore, a minimum of three non-parallel conjugate line pairs is needed to estimate the six EOP of the image under consideration.

Optimum Sequence for Parameter Estimation
As mentioned earlier, the MIHT determines the parameters and correspondences between two data sets sequentially in an iterative manner. This section deals with the optimum sequence for the MIHT strategy as it relates to the SPR problem. By analyzing the sensitivity of the mathematical model (Equation 1) to incremental changes in the EOP for line segments with different orientations at different image locations, the optimum sequence for parameter estimation can be established. The sensitivity of the coplanarity constraint to incremental changes in the EOP can be established through the respective partial derivatives. In other words, one can show that some parameters have a low influence in some regions while having a larger influence in other regions. Therefore, a certain region in the image space is useful for estimating a parameter if that parameter has a large influence in that region while the other parameters have a minor or almost no influence in the same region. Moreover, the optimum sequence should not affect previously considered regions/parameters. Conceptually, the optimum sequential parameter estimation follows the same rules as empirical relative orientation on analog plotters (Slama, 1980). For this purpose, we have divided the image into nine regions labeled from 1 to 9, as shown in Figure 2. Regions 2, 5, and 8 have small x-coordinate values (i.e., x2 ≈ x5 ≈ x8 ≈ 0), while regions 4, 5, and 6 have small y-coordinate values (i.e., y4 ≈ y5 ≈ y6 ≈ 0).




Figure 2. Optimum sequence for SPR using straight-line segments.

To illustrate the concept of the optimum sequence, let us consider the partial derivative of the coplanarity constraint (Equation 1) with respect to Z0:

\partial f / \partial Z_0 = (Y_B - Y_A)\,X_c + (X_A - X_B)\,Y_c    (2)

In Equation 2, (XA, YA) and (XB, YB) represent the planimetric coordinates of points A and B defining the object-space line segment. Xc and Yc, on the other hand, represent the planimetric components of the vector connecting the perspective center with one of the image points defining the image-space line segment (given relative to the ground coordinate system). Assuming small ω, φ, and κ rotation angles, one could reach the following conclusions:

• For line segments close to the center of the image (i.e., in region 5), Xc and Yc are very small (i.e., Xc ≈ Yc ≈ 0), which reduces the partial derivative in Equation 2 to almost zero. Therefore, line segments in region 5, regardless of their orientation, are not useful for the estimation of Z0.
• For horizontal line segments (i.e., YA ≈ YB) in regions 4 and 6, where Yc ≈ 0, the partial derivative in Equation 2 reduces to almost zero. Therefore, horizontal line segments in regions 4 and 6 are not useful for the estimation of Z0.
• For vertical line segments (i.e., XA ≈ XB) in regions 2 and 8, where Xc ≈ 0, the partial derivative in Equation 2 reduces to almost zero. Therefore, vertical line segments in regions 2 and 8 are not useful for the estimation of Z0.
• For vertical line segments (i.e., XA ≈ XB) in regions 4 and 6, (YB − YA)·Xc ≠ 0. Therefore, vertical line segments in regions 4 and 6 are useful for Z0 estimation.
• For horizontal line segments (i.e., YA ≈ YB) in regions 2 and 8, (XA − XB)·Yc ≠ 0. Therefore, horizontal line segments in regions 2 and 8 are useful for Z0 estimation.
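These conclusions can be checked numerically; the small helper below evaluates Equation 2 directly, and the coordinate values are purely illustrative.

def dZ0_sensitivity(obj_a, obj_b, img_vec_xy):
    # Partial derivative of the coplanarity constraint with respect to Z0 (Equation 2).
    (xa, ya), (xb, yb) = obj_a, obj_b        # planimetric object-space coordinates of A and B
    xc, yc = img_vec_xy                      # planimetric components of the image vector
    return (yb - ya) * xc + (xa - xb) * yc

# A vertical segment (XA = XB) paired with an image vector from region 4 or 6 (large Xc, small Yc):
print(dZ0_sensitivity((10.0, 0.0), (10.0, 40.0), (50.0, 0.0)))    # 2000.0, strong sensitivity
# The same segment paired with an image vector near the image centre (Xc and Yc small):
print(dZ0_sensitivity((10.0, 0.0), (10.0, 40.0), (0.5, 0.3)))     # 20.0, two orders of magnitude smaller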

In a similar fashion, by considering predominantly horizontal and vertical line segments in the nine image regions, one can determine the optimum sequence, according to the strategy outlined in the previous paragraphs, to be as follows:

• Horizontal lines in area 5 are used to solve for Y0;
• Vertical lines in area 5 are used to solve for X0;
• For κ estimation, horizontal lines in areas 4 and 6 and vertical lines in areas 2 and 8 are suitable;
• To solve for Z0, horizontal lines in areas 2 and 8 and vertical lines in areas 4 and 6 are used;
• Concerning ω, horizontal lines in areas 1, 3, 7, and 9 are suitable; and
• For φ estimation, vertical lines in areas 1, 3, 7, and 9 should be considered.
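One possible encoding of this sequence is sketched below; the region numbering (row-wise over the 3 by 3 grid of Figure 2), the thresholds, and the helper names are assumptions made for illustration rather than the authors' implementation.

# parameter -> (regions, orientation) combinations that are allowed to vote for it
OPTIMUM_SEQUENCE = [
    ("Y0",    [({5}, "horizontal")]),
    ("X0",    [({5}, "vertical")]),
    ("kappa", [({4, 6}, "horizontal"), ({2, 8}, "vertical")]),
    ("Z0",    [({2, 8}, "horizontal"), ({4, 6}, "vertical")]),
    ("omega", [({1, 3, 7, 9}, "horizontal")]),
    ("phi",   [({1, 3, 7, 9}, "vertical")]),
]

def region_of(x, y, half_width, half_height):
    # Region 1..9 of an image point, numbered row-wise so that regions 2, 5, and 8
    # have small x coordinates and regions 4, 5, and 6 have small y coordinates.
    col = 0 if x < -half_width / 3 else (1 if x <= half_width / 3 else 2)
    row = 0 if y > half_height / 3 else (1 if y >= -half_height / 3 else 2)
    return 3 * row + col + 1

def orientation_of(p1, p2):
    # Dominant orientation of an image segment from its two end points.
    dx, dy = abs(p2[0] - p1[0]), abs(p2[1] - p1[1])
    return "horizontal" if dx >= dy else "vertical"

def usable_for(parameter, p1, p2, half_width, half_height):
    # True if the segment should vote for this parameter in the current step.
    mid = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
    reg = region_of(mid[0], mid[1], half_width, half_height)
    orient = orientation_of(p1, p2)
    combos = dict(OPTIMUM_SEQUENCE)[parameter]
    return any(reg in regions and orient == o for regions, o in combos)

Under this reading, only the segments flagged as usable for the parameter currently being estimated would vote in its one-dimensional accumulator.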

It should be noted that the six EOP are individually solved for, one after another. For each parameter, a one-dimensional accumulator array is created. During the recursive parameter estimation process, the optimum sequence is repeated after updating the initial values of the parameters with the estimated ones and reducing the cell size. After convergence, the matched line segments contributing to the peaks in the last iteration are used in a simultaneous least-squares adjustment to derive a stochastic estimate of the EOP.

Before moving to the next section, we need to comment on the assumption of small ω, φ, and κ rotation angles. Because the majority of aerial imagery is almost vertical, the assumption of small ω and φ rotation angles is quite valid. On the other hand, small κ rotation angles cannot always be guaranteed. In this case, the control linear features need to be rotated by an angle corresponding to the heading of the aircraft, which is usually available. One should emphasize that the heading need not be very accurate. As will be seen in the experiments section, deviations of up to 15° can be tolerated. Larger errors can be compensated for with more iterations until convergence is achieved.

Closest Line Segment
As mentioned in the previous section, matches that contributed to the peaks in the last iteration are used in a least-squares adjustment to simultaneously solve for the six EOP. However, the coplanarity constraint (Equation 1) does not guarantee that the elements of the line pair under consideration are conjugate. Rather, it only ensures the coplanarity of the perspective center, the image line segment, and the object line segment. For example, the two line segments ab and cd in Figure 3 might yield the same EOP when they are considered to be conjugate with the line segment 12 in the object space. Therefore, we might have ambiguities in the matches established by the MIHT.

Figure 3. Ambiguity in the coplanarity constraint.

To resolve these ambiguities, we incorporate the Closest Line Segment (CLS) methodology. Within the CLS strategy, object-space line segments are back-projected into the image space using the estimated EOP from the MIHT. A search space is established in the vicinity of the projected line segment to look for candidate matches. For instance, the image-space segments ab, cd, and ef are possible matching candidates for the projected object-space segment 12 (Figure 4). One can effectively resolve ambiguities as well as filter out outliers according to the following criteria:

• Differences in the orientation between the projected segment and the candidate matches.
• Overlap between the projected segment and the candidate matches. The overlap is simply checked by projecting the end points of both the projected object-space segment and the candidate image-space segment onto each other. If the two segments overlap, the two projected end points will lie within the segments (see Figure 4, segments 12 and ab).
• Distances/closeness between the projected segment and the candidate matches.

Figure 4. Closest line segment.

For the example depicted in Figure 4, the image-space segment ef will be disregarded according to the first criterion. Moreover, the image-space segment cd, although it might be closer to 12 than ab, will be ignored because it does not satisfy the overlap criterion with the projected object-space segment. In other words, the overlap criterion has precedence over the distance constraint. The CLS is implemented to determine the matches (i.e., resolve the ambiguities) that are eventually used for the simultaneous estimation of the EOP in the least-squares adjustment. Within the implementation of the MIHT, the CLS has also been implemented to limit the number of possible candidate matches used to populate the accumulator array. In this way, the CLS ensures faster and more robust convergence of the MIHT.
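A simplified two-dimensional sketch of this screening is given below; it assumes the object segment has already been back-projected into the image with the current EOP, and the angular threshold as well as the function names are illustrative assumptions rather than values from the paper.

import numpy as np

def closest_line_segment(projected, candidates, max_angle_deg=10.0):
    # Pick the matching image segment for a back-projected object segment.
    # 'projected' and every candidate are ((x1, y1), (x2, y2)) in image space.
    p1, p2 = (np.asarray(p, dtype=float) for p in projected)
    d_proj = p2 - p1
    best, best_dist = None, None

    for cand in candidates:
        q1, q2 = (np.asarray(q, dtype=float) for q in cand)
        d_cand = q2 - q1
        # criterion 1: orientation difference between the two segments
        cos_angle = abs(np.dot(d_proj, d_cand)) / (np.linalg.norm(d_proj) * np.linalg.norm(d_cand))
        if np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) > max_angle_deg:
            continue
        # criterion 2: overlap, simplified here to an interval test of the candidate
        # end points projected onto the direction of the projected segment
        t = [np.dot(q - p1, d_proj) / np.dot(d_proj, d_proj) for q in (q1, q2)]
        if max(t) < 0.0 or min(t) > 1.0:
            continue
        # criterion 3: perpendicular distance of the candidate mid-point to the projected line
        mid = 0.5 * (q1 + q2)
        u = np.dot(mid - p1, d_proj) / np.dot(d_proj, d_proj)
        dist = np.linalg.norm(mid - (p1 + u * d_proj))
        if best is None or dist < best_dist:
            best, best_dist = cand, dist

    return best    # None if no candidate passes the orientation and overlap tests

Orientation and overlap act here as hard filters, and the remaining candidates are compared on closeness, mirroring the precedence of the overlap criterion over the distance constraint described above.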

Experimental Results

To test the feasibility and the performance of the developed methodology, experiments have been conducted using real data. For these experiments, one should have

• Linear features in the object space (ground control features),
• Linear features in the image space, and
• The interior orientation parameters (IOP) of the camera.

In the area covered by the involved aerial image, there exist a number of primary and secondary roads. Several data sets in two scenarios have been collected to test various aspects of the developed system. The first scenario deals with extended free-form linear features represented by polylines.


Figure 5. Datasets for the first experimental scenario. (a) Fifteen object polylines (252 lines). (b) Five object polylines (129 lines). (c) Fifteen image polylines without digitization errors (602 lines). (d) Fifteen image polylines with digitization errors (898 lines).


TABLE 1. ESTIMATED EOP USING FREE-FORM LINEAR FEATURES REPRESENTED BY POLYLINES (SCENARIO 1)

                                    X0 (m)    Y0 (m)    Z0 (m)      ω (°)       φ (°)       κ (°)       σ̂0 (m)
Manual                              600.00    26.781    1014.894    0.584667    0.867300    1.191474    —
Approximations                      400.0     200.0     800.0       12.0        12.0        14.0        —
Experiment 1 (Figures 5a and 5c)    599.848   26.502    1014.807    0.565729    0.867144    1.188572    0.089
Experiment 2 (Figures 5b and 5c)    600.085   26.261    1014.736    0.555897    0.854155    1.177883    0.087
Experiment 3 (Figures 5a and 5d)    599.689   26.054    1014.713    0.541970    0.872434    1.200153    0.090
Experiment 4 (Figures 5b and 5d)    599.338   27.287    1015.001    0.610370    0.888836    1.184411    0.101

The second scenario comprises a collection of non-connected short straight-line segments. In a digital environment, the extraction process can be established by applying a dedicated operator (e.g., the Canny operator or any other operator for road network extraction). In this work, however, the 2D image features have been manually digitized. On the other hand, object-space control features have been collected from a stereo model over the area of interest.

First Scenario
Digitized object-space polylines are shown in Figures 5a (252 straight-line segments along 15 roads) and 5b (129 straight-line segments along five roads). On the other hand, image-space features can be seen in Figures 5c and 5d, where we collected 602 and 898 line segments, respectively, along 15 roads. The data set in Figure 5d was collected after intentionally introducing some digitization errors, for the purpose of testing the robustness of the suggested approach against changes/discrepancies between the image- and object-space features. Combining the four datasets in Figure 5, one can carry out four experiments. In Experiment 1, the object- and image-space data sets shown in Figures 5a and 5c, respectively, are used, while the data sets shown in Figures 5b and 5c are used in Experiment 2. In Experiment 3, the object- and image-space data sets shown in Figures 5a and 5d, respectively, are used, while the data sets shown in Figures 5b and 5d are used in Experiment 4. Table 1 summarizes the estimated parameters from these experiments using the line-based MIHT strategy, including the initial values for the EOP. In addition, the EOP derived from a manual SPR procedure can also be seen in Table 1. By analyzing these results, one can see that the estimated parameters are not significantly different from those derived through


manual operations, even in the most extreme case, Experiment 4, where we are dealing with five object-space control linear features (Figure 5b) together with 15 image-space linear features (Figure 5d) containing some digitization errors.

Second Scenario
The objective of this scenario is to test the performance of the suggested methodology when dealing with relatively short, non-connected straight-line segments. This might be the case when dealing with large-scale imagery over urban areas. These features might correspond to building boundaries, driveways, and markings along the road surface. Two data sets in the object space can be seen in Figures 6a and 6b. On the other hand, image-space features are shown in Figure 6c. Two experiments can be conducted using the various datasets in Figure 6: Experiment I utilizes the data sets shown in Figures 6a and 6c, while Experiment II involves the two data sets shown in Figures 6b and 6c. Approximate as well as estimated EOP for this scenario are summarized in Table 2. Once again, the estimated parameters are very close to the manually estimated EOP as well as to those derived from the datasets in the first scenario. It should be noted that we had to use closer approximate values for Experiment II due to the high percentage of outliers (roughly 80 percent).

Based on the results shown in Tables 1 and 2, one can verify the feasibility and robustness of the suggested algorithm. By tracking the established matches between the object- and image-space features, one can highlight any discrepancies between these datasets. Moreover, non-matched image-space features can be used for map and GIS database updating purposes.


Figure 6. Datasets for the second experimental scenario. (a) 1299 object straight lines. (b) 224 object straight lines. (c) 1331 image straight lines.


TABLE 2. ESTIMATED EOP USING SHORT UNCONNECTED LINE SEGMENTS (SCENARIO 2)

                                     X0 (m)    Y0 (m)    Z0 (m)      ω (°)       φ (°)       κ (°)       σ̂0 (m)
Approximations                       400.0     200.0     800.0       12.0        12.0        14.0        —
Experiment I (Figures 6a and 6c)     599.944   26.482    1014.888    0.572517    0.868934    1.184028    0.057
Approximations                       530.0     70.00     930.00      3.0         3.0         3.0         —
Experiment II (Figures 6b and 6c)    600.134   26.751    1015.043    0.595975    0.859108    1.200991    0.092

Conclusions and Recommendations
The MIHT robust parameter estimation technique has been used to perform autonomous single-photo resection on real data using straight-line segments, without knowing the correspondence between image- and object-space primitives. The proposed technique proved its robustness against discrepancies and/or changes between the image- and object-space features. The parameters are estimated using features common to both data sets, while non-corresponding entities are filtered out prior to the parameter estimation. An optimum sequence for parameter estimation and the associated image regions have been established and implemented.

Experiments with real data in two scenarios have been conducted to test the performance of the overall suggested strategy. The first scenario deals with datasets composed of extended free-form linear features represented by polylines. The other scenario incorporated a set of non-connected, relatively short straight-line segments. The estimated EOP from both scenarios proved the feasibility and robustness of the suggested approach even in the presence of a high percentage of blunders and/or outliers.

The proposed system has the capability of integrating aerial imagery with GIS data or terrestrial mobile mapping systems for decision-making purposes (e.g., re-mapping of a road network). In this way, newly acquired aerial imagery can undergo an autonomous single-photo resection using available control information from a terrestrial mobile mapping system, previous imagery, a GIS database, or line maps. The proposed line-based strategy proved to be superior to a point-based MIHT in terms of robustness against poor approximations as well as faster convergence; however, we did not report those results because of space limitations, and a detailed discussion of this comparison is being presented in another publication.

Currently, we are analyzing the optimum cell size of the accumulator array corresponding to different parameters at various iterations. We are also interested in developing more comprehensive algorithms for autonomous change detection and database updating activities. In addition, generating rectified ortho-images using matched control linear features will be investigated.

References
Drewniok, C., and K. Rohr, 1996. Automatic exterior orientation of aerial images in urban environment, International Archives of Photogrammetry and Remote Sensing, 9–19 July 1996, Vienna, Austria, 31(Part B3):146–152.
Drewniok, C., and K. Rohr, 1997. Exterior orientation—An automatic approach based on fitting analytic landmark models, ISPRS Journal of Photogrammetry and Remote Sensing, 52:132–145.
Gülch, E., 1994. Using feature extraction to prepare the automated measurement of control points in digital aerial triangulation, International Archives of Photogrammetry and Remote Sensing, 5–9 September 1994, München, Germany, 30(3/1):333–340.
Haala, N., and G. Vosselman, 1992. Recognition of road and river patterns by relational matching, International Archives of Photogrammetry and Remote Sensing, 2–14 August 1992, Washington, D.C., 29(Part B3):969–975.
Habib, A., 1999. Aerial triangulation using point and linear features, International Archives of Photogrammetry and Remote Sensing, 6–10 September 1999, München, Germany, 32(Part 3-2W5):137–141.
Habib, A., and K. Novak, 1994. GPS controlled triangulation of single flight lines, The Symposium of ISPRS Commission II, Ottawa, Canada, 30(Part 2):203–209.
Habib, A., A. Asmamaw, and D. Kelley, 2000a. New approach to solving matching problems in photogrammetry, International Archives of Photogrammetry and Remote Sensing, 16–23 July 2000, Amsterdam, The Netherlands, 33(Part B2):257–264.
Habib, A., A. Asmamaw, D. Kelley, and M. May, 2000b. Linear Features in Photogrammetry, Report No. 450, Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, Columbus, Ohio, 80 p.
Habib, A., and D. Kelley, 2001a. Single photo resection using the modified Hough transform, Photogrammetric Engineering & Remote Sensing, 67(8):909–914.
Habib, A., and D. Kelley, 2001b. Automatic relative orientation of large scale imagery over urban areas using modified iterated Hough transform, International Journal of Photogrammetry and Remote Sensing, 56(1):29–41.
Habib, A., M. Morgan, and Y. Lee, 2001a. Integrating data from terrestrial mobile mapping systems and aerial imagery for change detection purposes, The 3rd International Symposium on Mobile Mapping Technology, 3–5 January, Cairo, Egypt, unpaginated CD-ROM.
Habib, A., Y. Lee, and M. Morgan, 2001b. Surface matching and change detection using the modified Hough transform for robust parameter estimation, Photogrammetric Record, 17(98):303–315.
Hough, P., 1962. Methods and Means for Recognizing Complex Patterns, U.S. Patent 3,069,654.
Kubik, K., 1991. Relative and absolute orientation based on linear features, ISPRS Journal of Photogrammetry and Remote Sensing, 46:199–204.
Mikhail, E., 1993. Linear features for photogrammetric restitution and object completion, Integrating Photogrammetric Techniques with Scene Analysis and Machine Vision, SPIE Proceedings, 14–15 April, Orlando, Florida, 1944:16–30.
Mikhail, E., M. Akey, and O. Mitchell, 1984. Detection and sub-pixel location of photogrammetric targets in digital images, Photogrammetria, 39:63–83.
Mulawa, D., and E. Mikhail, 1988. Photogrammetric treatment of linear features, International Archives of Photogrammetry and Remote Sensing, 1–10 July 1988, Kyoto, Japan, 27(Part B10):383–393.
Slama, C. (editor), 1980. Manual of Photogrammetry, Fourth Edition, American Society of Photogrammetry, Falls Church, Virginia, 1056 p.
Zalmanson, G., 2000. Hierarchical Recovery of Exterior Orientation from Parametric and Natural 3-D Scenes, Ph.D. Thesis, Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, Columbus, Ohio, 129 p.

(Received 08 July 2002; accepted 03 December 2002; revised 03 January 2003)
