Recent Advances on Terrain Database Correlation Testing

Milton T. Sakude(a), Guy A. Schiavone(b), Hector Morelos-Borja(c), Glenn Martin(d), and Art Cortes(e)

Institute for Simulation and Training, University of Central Florida, 3280 Progress Drive, Orlando, FL 32826

ABSTRACT

Terrain database correlation is a major requirement for interoperability in distributed simulation. There are numerous situations in which terrain database correlation problems can occur that, in turn, lead to a lack of interoperability in distributed training simulations. Examples are the use of different run-time terrain databases derived from inconsistent source data, the use of different resolutions, and the use of different data models between databases for both terrain and culture data.

IST has been developing a suite of software tools, named ZCAP (Z-Correlation Analysis Program), to address terrain database interoperability issues. In this paper we discuss recent enhancements made to this suite, including improved algorithms for sampling and calculating line-of-sight, an improved method for measuring terrain roughness, and the application of a sparse matrix method to the terrain remediation solution developed at the Visual Systems Lab of the Institute for Simulation and Training (IST). We review the application of some of these new algorithms to the terrain correlation measurement processes. The application of these new algorithms improves our support for very large terrain databases, and provides the capability for performing test replications to estimate the sampling error of the tests. With this set of tools, a user can quantitatively assess the degree of correlation between large terrain databases.

Keywords: Simulation interoperability, terrain database testing, correlation and statistics.

1. INTRODUCTION

The suite of software tools known as ZCAP originated as a proposed solution to the problems first observed at the 1992 Interservice/Industry Training, Simulation and Education Conference (I/ITSEC) demonstration, held in San Antonio, Texas, of the then-emerging IEEE Standard 1278 Distributed Interactive Simulation (DIS) protocols and communication standards. At this demonstration, problems of terrain database correlation were manifested by visual anomalies such as tanks that floated in mid-air and aircraft that flew below the ground. To address these problems, researchers at IST began to develop statistical approaches and measurement techniques to identify terrain database correlation problems prior to the follow-on 1993 and 1994 I/ITSEC DIS demonstrations [1, 2, 3]. Software tools arising from these efforts formed the core of the Z-elevation Correlation Analysis Program (ZCAP V1.0). Since this initial release, a number of tools have been added, including tools that provide capabilities to measure correlation of culture feature location and extents, line-of-sight correlation, and 3D model similarity, as well as tools to perform map and datum transformations, shift detection, triangulation and interpolation, terrain visualization, and segmentation of rendered images. References [2, 4, 5, 6, 7] describe the underlying mathematical basis used in many of these tools.

There are numerous situations in which terrain database correlation problems can occur that, in turn, lead to a lack of interoperability in distributed training simulations. Examples are the use of different run-time terrain databases derived from inconsistent source data, the use of different resolutions, and the use of different data models for terrain and culture data. For visual simulation applications, a terrain surface is generally produced by downsampling from the elevation posts contained in a digital elevation model such as Digital Terrain Elevation Data (DTED) [8, 9]. The National Imagery and Mapping Agency (NIMA) [10] produces DTED at a resolution of 3 arc seconds ("Level I") and 1 arc second ("Level II").

(a) M.T.S. Email: [email protected]; Telephone: 407-658-5000; Fax: 407-658-5059
(b) G.A.S. Email: [email protected]
(c) H.M-B. Email: [email protected]
(d) G.M. Email: [email protected]
(e) A.C. Email: [email protected]


Part of the SPIE Conference on Enabling Technology for Simulation Science II, Orlando, Florida, April 1998. SPIE Vol. 3369, 0277-786X/98/$10.00.

Typical current image generation hardware, with a polygonal throughput capability of some tens of thousands of polygons per frame at 30 or 60 Hz, demands an economical polygonal terrain representation, generally obtained by downsampling the DTED using some error-minimization approach prior to the polygonization step. This limitation also requires that a terrain be represented by several levels of detail (LODs), so that a fine-resolution terrain representation is rendered for areas close to the observer and a lower-resolution terrain representation is rendered at greater distances from the observer. Thus, in distributed simulation a user can experience terrain correlation problems even with the use of identical source data, due to the use of different LOD generation schemes.

The problem of assuring terrain database correlation for interoperability in distributed simulation was not encountered in earlier systems that employed homogeneous simulators, but quickly became apparent in DIS exercises involving heterogeneous simulators with dissimilar functions and capabilities. The importance of assuring terrain database correlation in distributed simulations employing heterogeneous simulators is unabated by the introduction of DIS++/HLA (High-Level Architecture).

Our most recent work in this area has concentrated on optimization of the tools to facilitate analysis of very large terrain databases, enhancements to usability, and the implementation of new tools to measure and classify terrain roughness. In Section 2, we describe an improved approach to measuring terrain roughness that overcomes a deficiency of the usual method based on sigma-t. We also compare the developed method to another method, the roughness index. Section 3 describes the grid method sampling adopted for use in all relevant ZCAP tools; the grid method sampling was developed to support large terrain databases, replication of the correlation tests, and easy user interaction. Section 4 summarizes the terrain elevation correlation test and presents a result of replications of the test. Section 5 describes the stratified random sampling scheme developed for the culture correlation test. Section 6 describes a line-of-sight intervisibility algorithm used in the line-of-sight correlation test. Section 7 describes improvements in terrain elevation sampling and in the solution of overdetermined linear systems of equations using a sparse matrix approach. Section 8 discusses the use of the OpenFlight API to support large databases and to expand ZCAP capabilities. Section 9 comments on some usability improvements to ZCAP.

2. TERRAIN ROUGHNESS

Our purpose for measuring terrain roughness is to classify and select portions of a terrain for terrain correlation analysis. In addition, measurement of terrain roughness is often used as a criterion for downsampling prior to terrain skin polygonization, and is important in the formulation of non-uniform stratified sampling schemes. Methods for classifying terrain roughness are also of interest to the tactical terrain analyst, and in the analysis and comparison of different approaches to digital terrain representation. One measure of roughness that is often used in optics and electromagnetic scattering theory is the correlation length of the surface at some particular scale. Three other measures of roughness that have been used are the sigma-t, the "roughness index", and the fractal dimension. The sigma-t [11, 12] is the standard deviation of the terrain height. The "roughness index" is a finite difference estimate of the average rate of change in slope. The fractal dimension is a real number that indicates how close a fractal is to a dimension [13].

The idea of measuring terrain surface roughness as the standard deviation of the height (sigma-t) comes from the measurement of surface microroughness, in which surface height variations are measured from a mean surface level by using profiling instruments [14]; this is analogous to the calculation of the standard deviation. The problem with the use of the standard deviation to classify terrain roughness is that the slope contributes to the variation in height; that is, a smooth but sloped terrain can have a large standard deviation, and thus may be classified as rough. Table 2.1 presents terrain roughness categories related to standard deviation, originally used in the cruise missile program. The terrain roughness classification is subjective, and depends on visual analysis for correctness. It also depends on the measured extents: over a relatively small area, a terrain surface can be classified as smooth, yet over a relatively large area including the same smooth region, the terrain may be classified as rough.

The process of determining the sigma-t value involves sampling terrain elevation values and calculating their standard deviation. A flat, sloped terrain can be classified as non-smooth by this process. Consider an inclined planar area (a square with a side parallel to the x axis). Uniform random sampling on this plane gives a uniformly distributed data set of elevations between, say, a and b (the minimum and maximum elevations, respectively). The standard deviation is then (b - a)/12^(1/2) [15]. Depending on the values of a and b, the terrain roughness classification can be anything from smooth to very rough, and it does not depend on the area of the region. Therefore, the standard deviation of terrain elevation is not always appropriate for measuring terrain roughness.
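As a quick numeric illustration (a minimal sketch; the 10% slope, patch size, and sample count are our own arbitrary choices, not values from the paper), sampling a perfectly smooth but inclined plane yields a large sigma-t:

    # Sigma-t on a perfectly smooth, inclined plane (illustrative values only).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    x = rng.uniform(0.0, 1000.0, n)        # 1 km square patch
    y = rng.uniform(0.0, 1000.0, n)
    z = 0.1 * x                            # flat plane with a 10% slope, zero roughness

    print(z.std(ddof=1))                   # ~28.9 m: a "rough" sigma-t for a smooth plane
    print((z.max() - z.min()) / 12**0.5)   # agrees with the (b - a)/12^(1/2) formula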


Table 2.1. Terrain roughness categories: sigma-t (standard deviation, in meters) versus roughness category.

Unlike the sigma-t, the fractal dimension is invariant with scale [13]. The fractal dimension is a real number that indicates how close a fractal is to a dimension. For example, a straight line has dimension 1, a polygonal line that almost fills a square has dimension close to 2, a plane has dimension 2, and a very rough surface that almost fills its bounding volume has dimension close to 3. A fractal has the property of preserving shape similarity under scale transformation (zoom); its form is dynamically updated under such transformations. Although fractals have been used to model terrain [16], to our best knowledge, a study mapping fractal dimension to the existing roughness classification based on sigma-t has not been done. Figure 2.1 shows a small region of the CCTT Primary II database: the terrain of 7.68 km x 15.36 km is subdivided into small square areas (1.92 km x 1.92 km). Figure 2.1.b shows the corresponding sigma-t classifications. Regions that look flat are misclassified as moderate instead of smooth.

Figure 2.1.a. Shaded image of a portion of CCTT Primary II terrain. b. Sigma-t classification based on the standard deviation. (Legend: S for Smooth, M for Moderate, and R for Rough.)

To overcome this kind of misclassification, we propose an improved method that fits a plane to the terrain elevation sample points and then calculates the standard deviation about the fit as the measure of sigma-t. The method uses multiple linear regression to fit a plane to the set of sample points. From the plane equation ax + by + cz + d = 0, the fitting equation is given by z = Ax + By + D, where A = -a/c, B = -b/c, and D = -d/c, for c != 0. Multiple linear regression is the application of the least squares method and is formulated in matrix terms as follows:

    Z = Xb                                            (1)

The vector b is estimated by:

    b = (X^T X)^{-1} X^T Z                            (2)

where

    b = [D  A  B]^T,    Z = [z_1  z_2  ...  z_n]^T,

    X = [ 1  x_1  y_1 ]
        [ 1  x_2  y_2 ]
        [ ...         ]
        [ 1  x_n  y_n ]

The standard deviation of the fit is the square root of the Mean Square Error (MSE). The MSE is given by:

    MSE = SSE / (n - 3)                               (3)

where SSE is the Sum of Squared Errors, given by:

    SSE = Z^T Z - b^T X^T Z                           (4)

The MSE is the expected value of the regression variance, and its square root is an estimate of the standard deviation. Therefore, this method preserves the statistical significance of the sigma-t (the standard deviation). Figure 2.2 shows the result of the application of the improved method on the piece of terrain displayed in Figure 2.1. It shows a better match between the roughness classification and the terrain's visual appearance. The original sigma-t method fails to properly classify this example because portions of the terrain have non-zero slope; there is a significant reduction of elevation variation with the "removal" of the slope by the plane fitting.
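A minimal sketch of this detrended sigma-t in NumPy (np.linalg.lstsq stands in for the normal-equations form of equation (2); the function name and test values are ours):

    # Detrended sigma-t: fit a plane z = A*x + B*y + D by least squares,
    # then report the standard deviation of the residuals (sqrt of the MSE).
    import numpy as np

    def detrended_sigma_t(x, y, z):
        X = np.column_stack([np.ones_like(x), x, y])   # design matrix rows [1, x_i, y_i]
        b, *_ = np.linalg.lstsq(X, z, rcond=None)      # solves equation (2) stably
        residuals = z - X @ b
        sse = residuals @ residuals                    # equation (4)
        mse = sse / (len(z) - 3)                       # equation (3): n samples, 3 parameters
        return np.sqrt(mse)

    # The inclined plane from the earlier example now classifies as smooth:
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1000, 10_000)
    y = rng.uniform(0, 1000, 10_000)
    print(detrended_sigma_t(x, y, 0.1 * x))            # ~0: slope no longer inflates sigma-t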

The tool implemented in ZCAP for assessing terrain roughness subdivides the terrain into square regions. It also recursively subdivides each region into four square sub-regions, providing different levels of roughness classification. The top-level roughness could be a classification from a global view (pilot view) and the bottom-level one could be a classification from a local view (dismounted infantry view). The algorithm used to efficiently sample the terrain elevation is based on the grid method described in Section 3. Figure 2.3 shows the roughness classification of more than half of the CCTT Primary II database (89 km x 100 km). In Figure 2.3.a, the terrain is subdivided into regions of 3.84 km x 3.84 km. In Figure 2.3.b, each previous region is further subdivided into four subregions. An example of the differences that can occur between views at different scales can be observed in that the total area classified as smooth is larger at the smaller scale (Figure 2.3.b).

Figure 2.2. Terrain roughness classification: a. terrain roughness based on the fitting method; b. terrain roughness index based on slope variation. (Legend: S for Smooth, M for Moderate, and R for Rough.)

The roughness index is another approach that overcomes the problem of misclassification due to overall slope. The roughness index (RI) measure is based on an average of a norm of the second-order gradient of the terrain elevation:

    ∇²e = ∂²e/∂x² + ∂²e/∂y²                           (5)

Assuming a grid of nc columns and nr rows with spacing Δx and Δy, and using a centered finite difference approximation for the partial derivatives, the roughness index can be given by:

    RI = (1/(nr nc)) Σ_{i,j} [ |2e_{i,j} - (e_{i-1,j} + e_{i+1,j})| / (Δx)²
                             + |2e_{i,j} - (e_{i,j-1} + e_{i,j+1})| / (Δy)² ]    (6)


In practice, the value of RI is very small. We multiply RI by a factor, on the order of 10,000, to obtain values compatible with those of Table 2.1. Figure 2.4 shows a classification similar to that of Figure 2.3 using the roughness index; the grid spacing was 60 meters. The roughness index is less subject to the scale-of-view problem; however, the RI value obtained varies with the grid spacing, converging only when the grid spacing is small. For practical purposes, it therefore requires a larger amount of data and processing for accurate calculation. The sigma-t and the proposed MSE method do not present as much variation with sampling grid spacing, except for small variations due to the sampling process. Figure 2.2.b shows the roughness index classification for the same terrain piece of Figure 2.1, using 30-meter grid spacing. Some regions that appear almost flat were classified as moderate (RI of 19, 20). Because the roughness value varies with the grid spacing, the RI multiplication factor or the category ranges should be changed to obtain more consistent results.
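A compact NumPy sketch of equation (6), under our reading that the norm is the absolute value of each centered second difference (the function name and the default factor of 10,000 follow the discussion above):

    # Roughness index via centered second differences on a regular elevation grid.
    import numpy as np

    def roughness_index(e, dx, dy, factor=10_000.0):
        # Second differences exist only at interior grid points; boundaries are excluded.
        d2x = np.abs(2 * e[1:-1, :] - e[:-2, :] - e[2:, :]) / dx**2
        d2y = np.abs(2 * e[:, 1:-1] - e[:, :-2] - e[:, 2:]) / dy**2
        return factor * (d2x[:, 1:-1] + d2y[1:-1, :]).mean()

    # e.g. for elevations sampled on the 60-meter grid used in Figure 2.4:
    # ri = roughness_index(elevation_grid, 60.0, 60.0)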

Figure 2.3. CCTT Primary II terrain roughness classification: a. subdivision into 3.84 km x 3.84 km cells; b. subdivision into 1.92 km x 1.92 km cells. (Legend: light grey for Smooth, grey for Moderate, and dark grey for Rough.)

Figure 2.4. CCTT Primary II terrain roughness index classification, an analogous representation to Figure 2.3.

3. GRID METHOD SAMPLING

The efficiency of an algorithm that deals with a large amount of data generally depends on how fast it accesses the needed data. For search efficiency, data are organized by sorting and by using efficient data structures such as binary trees, AVL trees, quadtrees, and hashing tables. Terrain elevation sampling is a geometric searching problem. The usual approach is to search for the polygon that contains a sample point in the terrain database; in this case the terrain database needs to be organized by using, for example, the slab method, the chain method, the quadtree method, the K-D tree method, or the grid method [19, 20]. The approach adopted in this work is instead to organize the sample points using the grid method, and to perform range searching [20] while traversing the database once. The range searching returns all points within a rectangular domain. The grid method subdivides the point domain into small square cells (Figure 3.1). It uses a matrix of pointers (a hashing table) to lists of data indices as its basic data structure: a hashing function takes the X and Y values of a point and transforms them into an index into the pointer matrix, which provides access to the data in the corresponding list. The algorithm's efficiency depends on keeping a relatively low number of elements in the lists and all lists occupied. The grid method is ideal for uniformly distributed data, which is what all ZCAP correlation tests use, because in practice a search of N points runs in O(N) time on average [19]. The terrain elevation sampling algorithm works as follows: (1) build the grid data structure from the sample points; (2) traverse the terrain database, visiting each polygon once; (3) for each polygon, perform range searching for points within the polygon's bounding box; and (4) calculate the elevation value for the points that fall inside the polygon.
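A minimal sketch of the grid data structure and its range search (the class name, the dictionary-based pointer matrix, and the cell-size handling are our own choices; the paper does not give implementation details):

    # Grid method: hash sample points into square cells for O(1) average access.
    from collections import defaultdict

    class PointGrid:
        def __init__(self, points, cell_size):
            self.points = points
            self.cell = cell_size
            self.table = defaultdict(list)        # (ix, iy) cell -> list of point indices
            for i, (x, y) in enumerate(points):
                self.table[int(x // cell_size), int(y // cell_size)].append(i)

        def range_search(self, xmin, ymin, xmax, ymax):
            """Return indices of all points inside the axis-aligned rectangle."""
            found = []
            for ix in range(int(xmin // self.cell), int(xmax // self.cell) + 1):
                for iy in range(int(ymin // self.cell), int(ymax // self.cell) + 1):
                    for i in self.table.get((ix, iy), []):
                        x, y = self.points[i]
                        if xmin <= x <= xmax and ymin <= y <= ymax:
                            found.append(i)
            return found

Elevation sampling then traverses the database once: for each polygon, query range_search with the polygon's bounding box, keep the points that fall inside the polygon, and interpolate z from the polygon's plane.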

Figure 3.1. Grid subdivision, pointer matrix, and index list.

Let N be the number of sample points and M the number of polygons in the database. Assuming low cell occupancy and uniformly distributed data, the pre-processing time for building the grid structure is O(N) [19], the database traversal takes O(M), and the range searching and elevation calculation take O(N). Thus, the overall time complexity of the algorithm is O(N+M). Because it takes O(M) pre-processing time to organize a terrain database with M polygons using any efficient data structure, such as a quadtree [20], and the sampling is performed once, the theoretical overall performance of the proposed algorithm is optimal.

The higher the ratio between the number of sample points and the number of polygons (N/M), the better the relative performance of the proposed algorithm, because the range searching is more effective; that is, it returns more points per query. For a high ratio N/M, the proposed algorithm can outperform an algorithm that uses a quadtree or a K-D tree. A point search in a quadtree or K-D tree structure with D levels requires O(D) time [19, 20]. For simplicity, let us consider 2^D = M, that is, D = log M. Sampling N points then requires O(N log M) time. Without considering pre-processing time for the quadtree (because the database can be stored in that format), and taking the time constants to be equal, the break-even point between the methods is given by N = M/(log M - 1), because T(N + M) = T(N log M). The region above the curve in Figure 3.2 represents points where the proposed algorithm theoretically obtains superior performance.

The grid method sampling is appropriate for all ZCAP tools. Terrain roughness determination requires a high N/M sampling rate to account for the variability in surface elevation; the default tool setup is approximately one sample per two polygons. In the terrain elevation and culture correlation tests, the replications of the test lead to a sufficiently high N/M rate. The terrain remediation sampling process requires several points per polygon. The proposed algorithm has other advantages:

1. It uses less memory space (O(N)), because it organizes the sample points; organizing the terrain database by using, for example, a quadtree or a K-D tree requires an additional O(M) space.

2. It is relatively easy to implement and can be used for sampling any terrain database format without conversion, provided a polygon range searching or database traversal function is available.

3. It is suitable for large terrain databases; the terrain database does not need to be stored in memory, because the algorithm needs only one polygon at a time.

Figure 3.2. Break-even curve of the grid method algorithm vs. the quadtree, plotting N (number of samples) against M (number of polygons); the grid method obtains superior performance above the curve.

4. TERRAIN CORRELATION TEST

The terrain correlation test computes statistics on elevation differences between two terrain surfaces. It involves:

1. Uniform random sample generation;
2. Baseline terrain database sampling (z value calculation);
3. Subject terrain database sampling (using the same sample points);
4. Elevation difference calculation and statistics computation.

Statistics include the mean, median, variance, standard deviation, skewness, kurtosis, magnitude of the maximum elevation difference, and the critical value of the acceptance sampling test based on the elevation differences. Because computer-generated random numbers are used to generate sample points, performing replications of the test is important for evaluating the variation of the results; the variation analysis may indicate the need to increase the sample size. IST implemented the capability to replicate the terrain correlation test, taking advantage of the efficiency of the grid method sampling algorithm. The sample points of all replications are grouped and processed as one large sample by the grid sampling algorithm; the statistics for each trial are computed after ungrouping.
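A sketch of the grouped-then-ungrouped replication bookkeeping (names are ours; the 95th percentile of the absolute differences is used here as a simplified stand-in for the acceptance sampling critical value described in reference [2]):

    # Replicated terrain correlation statistics: all replications are sampled as
    # one large batch, then split back into trials for per-trial statistics.
    import numpy as np

    def replicated_stats(dz_all, n_trials):
        """dz_all: elevation differences for the grouped sample of all trials."""
        stats = []
        for dz in np.array_split(dz_all, n_trials):          # ungroup into trials
            stats.append({
                "mean": dz.mean(),
                "std": dz.std(ddof=1),
                "max_abs": np.abs(dz).max(),
                "critical": np.percentile(np.abs(dz), 95),   # simplified critical value
            })
        return stats

    # e.g. 30 trials of 1000 points each, as in Table 4.1:
    # trials = replicated_stats(dz_all, 30)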

Table 4.1 shows an example of the statistics of a terrain elevation correlation test with replication. The test compares two LODs of the MRTDB 2 database; the two LODs are correlated. The statistic known as the "critical value", shown in Table 4.1, is interpreted as the value such that 95% of the elevation differences are below it (4 meters) at the 95% significance level [2]. In other words, for a threshold greater than 4 meters, the databases pass the acceptance sampling test at the 95% confidence level.

STATISTICS        Average    95% Confidence Interval    Std. Dev.    Minimum    Maximum
Mean              -0.035     -0.055 to -0.014           0.0581       -0.167      0.076
Median             1.053      1.037 to  1.068           0.0432        0.974      1.162
Variance           3.361      3.293 to  3.427           0.1889        3.063      3.717
Std. Deviation     1.833      1.814 to  1.851           0.0514        1.750      1.928
Skewness           0.055     -0.012 to  0.121           0.1861       -0.368      0.433
Kurtosis           1.695      1.452 to  1.938           0.6797        0.629      3.080
Maximum            8.706      8.206 to  9.206           1.3991        6.530      12.01
Critical value     4.022      3.972 to  4.072           0.1404        3.758      4.366

Table 4.1. Terrain correlation test statistics on terrain elevation differences over 30 replications. Each trial has 1000 sample points.


5. CULTURE CORRELATION TEST

The ZCAP culture correlation test compares the agreement in feature location between two terrain databases by using the Kappa statistic [5]. The Kappa statistic formula is:

    K = (P_o - P_e) / (1 - P_e)                       (7)

where P_o is the overall proportion of agreement and P_e is the adjustment due to chance-expected agreement (see reference [5] for more details).
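A short sketch of the Kappa computation from a contingency table (the function name and the example counts are ours, for illustration only):

    # Kappa statistic from an agreement (contingency) table.
    import numpy as np

    def kappa(table):
        """table[i][j]: samples classified as category i in one database, j in the other."""
        t = np.asarray(table, dtype=float)
        n = t.sum()
        p_o = np.trace(t) / n                                # observed agreement
        p_e = (t.sum(axis=0) * t.sum(axis=1)).sum() / n**2   # chance-expected agreement
        return (p_o - p_e) / (1 - p_e)

    # e.g. feature present/absent agreement between two databases:
    print(kappa([[420, 60], [40, 480]]))                     # ~0.80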

The requirements of the sampling algorithm for the culture correlation test are: (1) sample every cultural feature; (2) generate stratified random locations; and (3) achieve a desired number of samples per feature. This must be accomplished using a stratified sampling procedure that prevents missing important features, such as airports, buildings, and small targets, that represent relatively small areas. To satisfy these requirements with efficient performance, an algorithm based on the polygon pattern-filling algorithm was developed (sketched below). The terrain database is traversed once, "filling" each polygon with a random pattern. The algorithm is similar to the terrain elevation sampling, except for three aspects: the use of a random point pattern, a scale transformation, and random placement of the polygon. The random pattern is a set of normalized points (values between 0 and 1) (Figure 5.1). To obtain the desired number of sample points per feature, the random pattern is scaled appropriately for each feature. The scale transformation is a function of the total area of all polygons of a feature and the desired number of sample points per feature (this algorithm handles only areal features). Instead of scaling the random pattern, the polygon is scaled and placed randomly in the random pattern (Figure 5.1). Points inside the polygon, obtained by the grid method range searching, are transformed to the terrain database coordinate system and stored in a list of sample points. The time complexity of this algorithm is O(N+M), because it traverses the terrain database once and uses the grid method efficiently.
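The following is a rough sketch of the pattern-filling idea for a single feature polygon (the helper names, the ray-casting point-in-polygon test, and the placement rule are our assumptions; it also assumes the scaled polygon fits within the unit square, while the scale rule follows the description above):

    # Stratified culture sampling: place a scaled polygon randomly on a fixed
    # random pattern of normalized points and keep the pattern points inside it.
    import math
    import random

    def point_in_polygon(px, py, poly):
        inside = False
        for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
            if (y1 > py) != (y2 > py) and px < x1 + (py - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
        return inside

    def sample_feature_polygon(poly, pattern, feature_area, n_wanted):
        # Scale so the expected pattern hits over the whole feature equal n_wanted:
        # s^2 * feature_area * len(pattern) = n_wanted.
        s = math.sqrt(n_wanted / (len(pattern) * feature_area))
        xs = [x * s for x, _ in poly]
        ys = [y * s for _, y in poly]
        ox = random.uniform(-min(xs), 1.0 - max(xs))   # random placement that keeps
        oy = random.uniform(-min(ys), 1.0 - max(ys))   # the scaled polygon in [0, 1]^2
        scaled = [(x + ox, y + oy) for x, y in zip(xs, ys)]
        hits = [p for p in pattern if point_in_polygon(p[0], p[1], scaled)]
        # Map the hits back to the database coordinate system.
        return [((px - ox) / s, (py - oy) / s) for px, py in hits]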

The sampling of the feature locations in the second terrain database is performed with an algorithm similar to the elevation sampling algorithm. The overall time complexity is again proportional to the number of sample points plus the number of polygons in the terrain database (O(N+M)). After sampling the second database, the agreement in feature location is analyzed by building a contingency table and calculating the Kappa statistic. For replicating the culture correlation test, the sampling is done considering the aggregated number of sample points. Because the sample points are generated during the database traversal, they are ordered according to the traversal order; the list of sample points is therefore randomized before being distributed to the replication sets.

Figure 5.1. Random polygon placement on the random pattern. Figure 5.2. Kappa statistic variation versus sample size.

371

Figure 5.2 shows a plot of the variation of the Kappa statistic as a function of sample size, resulting from culture correlation tests. One database is a 10 km x 10 km portion of the Fort Hunter-Liggett terrain in OpenFlight format; the other is the same portion in SIMNET S1000 format. The OpenFlight format database is generally used for flight simulation, while the S1000 format database is used for ground battlefield simulation, which requires a finer terrain representation. Each box plot graphically represents 30 trials: minimum, maximum, and a two-standard-deviation box. The average Kappa statistic is around 0.74, below the adopted critical value of 0.80 for the agreement criterion. Replication of the correlation test assures a more accurate conclusion; a conclusion based on the result of a single test may be wrong. In the above example, some tests had Kappa statistic values greater than 0.80, from which one could wrongly conclude that the two databases are correlated in cultural feature location at the 0.80 level.

6. LINE OF SIGHT CORRELATION TEST

Like the culture correlation test, the Line of Sight Correlation Test (LOS test) uses the Kappa statistic [5]. Instead of generating random sample points, the LOS test computes the agreement of LOS blockage by culture features or the terrain skin. This test involves generation of sample points, determination of LOS intervisibility, and calculation of the Kappa statistic. Equal-length LOS segments are generated such that they are randomly distributed over the terrain; the end points of the segments are at a specified height above the terrain surface. The grid method sampling is used to calculate the projection of the LOS end points on the terrain surface. The most time-consuming process of this test is the determination of LOS intervisibility, which is the calculation of the intersections between LOS segments and terrain polygons. Several efficient algorithms have been developed using the approach of organizing the terrain database for efficient querying. For a terrain database organized with the grid method, a Bresenham-like 2D digital differential analyzer (DDA) traversal of the terrain proved to be very efficient [21, 22]. Pandary [23] extended the DDA algorithm to handle quadtree terrain representations. Petty et al. [22] developed two algorithms: one based on doubly-connected edge list (DCEL) traversal and the other based on the slab traversal method [20]. Although they can outperform the DDA-based algorithm [22], they use an additional amount of memory to organize the terrain database.

Our goal is not to develop the most efficient LOS algorithm, but one fast enough to allow easy user interaction. Similar to the previous approach, the grid method is used to efficiently organize the LOS data, because the LOS sample data generally occupy less memory space than the terrain data. The algorithm determines all LOS intervisibility in one terrain database traversal. For each polygon in the database, the range searching returns the LOS segments that potentially intersect the polygon. Before calculating the polygon-line intersection, two nested trivial rejection tests are performed: a 3D bounding box rejection test and a 2D circle rejection test. The first test checks whether the LOS segment and polygon bounding boxes overlap. The second checks whether the circle that circumscribes the polygon bounding box intersects the LOS, by comparing the distance from the LOS to the circle center against the circle radius. The LOS intersection point is the intersection of a line segment and a plane:

    P = P_1 + t (P_2 - P_1)                           (8)

with

    t = N . (P_0 - P_1) / (N . (P_2 - P_1))

where N is the plane normal, P_0 a point in the plane, P_1 and P_2 the segment end points, and . denotes the dot product. Considering that the number of LOS segments in the cells of the grid structure is relatively small, the overall time complexity of the algorithm is O(LN+M), where L is the length of the LOS segment.
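A minimal sketch of the intersection test of equation (8) (names are ours; in the full test this runs only after the two rejection tests, and a hit must additionally fall inside the polygon):

    # Segment-plane intersection for LOS blockage (equation 8).
    import numpy as np

    def segment_plane_intersection(p1, p2, n, p0, eps=1e-12):
        """Intersection of segment p1->p2 with the plane through p0 with normal n,
        or None if the segment does not cross the plane."""
        p1, p2, n, p0 = (np.asarray(v, dtype=float) for v in (p1, p2, n, p0))
        denom = n @ (p2 - p1)
        if abs(denom) < eps:              # segment parallel to the plane
            return None
        t = n @ (p0 - p1) / denom
        if not 0.0 <= t <= 1.0:           # crossing lies outside the segment
            return None
        return p1 + t * (p2 - p1)

    print(segment_plane_intersection((0, 0, 2), (0, 0, -2), (0, 0, 1), (0, 0, 0)))  # [0. 0. 0.]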

Table 6.1 shows the result of the LOS correlation test applied to the databases referred to in the previous section. It represents the result of 30 replications with 1000 LOS segments, each 1500 meters long.

Statistics          Average    95% Confidence Interval    Std. Dev.    Minimum    Maximum
Kappa statistic     0.445      0.436 to 0.453             0.0230       0.406      0.494

Table 6.1. Statistics of the Kappa statistic values of 30 replicated LOS tests.

7. TERRAIN REMEDIATION

Terrain remediation is an important process for alleviating miscorrelations. Schiavone and Graniela [24] developed an automated tool to address this issue. Their algorithm adjusts the polygon vertices of a terrain database by using a constrained least squares fitting method. It works only on terrain skin represented by triangles and maintains the triangulation, changing only the z values. The basic idea is to minimize the squared error given the constraints of a triangulation in an existing reference terrain database. It is similar to our improved roughness fitting, but instead of finding the plane equation coefficients, it determines the triangle vertex z values of the subject terrain database (Figure 7.1). Since surface continuity must be maintained, the linear systems of equations for the individual triangles are assembled into a sparse overdetermined linear system of size 3T x V, where T is the number of triangles and V the number of vertices in the subject database (see reference [24] for more details). This terrain remediation processing was time consuming for two reasons: the sampling process used an O(NT) algorithm, and the full-matrix least squares solution method that was used runs in O(V^3) time with O(TV) memory requirements. The sparse matrix technique reduces these requirements to linear complexity in T for both memory and run time.
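A minimal sketch of solving such a system with a sparse solver (scipy's lsqr is our stand-in for the paper's sparse matrix technique, and the tiny system below is illustrative, not the actual remediation formulation of reference [24]):

    # Sparse overdetermined least squares: each row constrains the z values of
    # one triangle's vertices at a reference sample point.
    import numpy as np
    from scipy.sparse import coo_matrix
    from scipy.sparse.linalg import lsqr

    # Four reference samples constraining the three z values of one triangle.
    rows = np.repeat(np.arange(4), 3)                   # sample (constraint) index
    cols = np.tile(np.arange(3), 4)                     # vertex index
    vals = np.array([0.2, 0.3, 0.5,
                     0.6, 0.1, 0.3,
                     0.1, 0.8, 0.1,
                     0.4, 0.4, 0.2])                    # barycentric weights per sample
    A = coo_matrix((vals, (rows, cols)), shape=(4, 3)).tocsr()
    b = np.array([101.5, 98.2, 99.7, 100.4])            # reference elevations at samples

    z = lsqr(A, b)[0]                                   # least squares vertex z values
    print(z)

Memory and run time stay linear in the number of triangles because only the nonzero weights are stored.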

It is important to have a sampling algorithm that accounts for the shape of the reference surface (Figure 7.1). A certain number of random sample points per triangle is desirable for good results.

Figure 7.1. Subject database triangle (shaded) and reference database polygons.

An algorithm that generates random points inside a triangle's bounding box may waste many points, particularly if the triangle has a small area relative to its bounding box. An efficient way to generate a random point inside a triangle is to use a parametric equation of the plane [25]:

    P = t P_1 + u P_2 + v P_3                         (9)

with 0 <= t, u, v <= 1 and t + u + v = 1.
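A short sketch of the standard construction based on equation (9): draw t and u uniformly and reflect when t + u > 1, so that the resulting (t, u, v) are uniform over the triangle (the function name is ours):

    # Uniform random point inside a triangle via the parametric form of eq. (9).
    import random

    def random_point_in_triangle(p1, p2, p3):
        t, u = random.random(), random.random()
        if t + u > 1.0:                        # reflect into the region t + u <= 1
            t, u = 1.0 - t, 1.0 - u
        v = 1.0 - t - u
        return tuple(t * a + u * b + v * c for a, b, c in zip(p1, p2, p3))

    print(random_point_in_triangle((0.0, 0.0), (10.0, 0.0), (0.0, 10.0)))

No sample point is wasted, in contrast to rejection sampling over the triangle's bounding box.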