Direct Segmentation for Reverse Engineering

M. Vančo (TU Chemnitz: [email protected])

Guido Brunnett (TU Chemnitz: [email protected])

November 18, 2002

Abstract

In Reverse Engineering a physical object is digitally reconstructed from a set of boundary points. In the segmentation phase these points are grouped into subsets to facilitate consecutive steps such as surface fitting. In this paper we present a two step segmentation method with subsequent classification of simple algebraic surfaces. Our method is direct in the sense that it operates directly on the point set, in contrast to other approaches that are based on a triangulation of the data set. The segmentation process involves a fast algorithm for k-nearest neighbors search and an estimation of first and second order surface properties. The first order segmentation, which is based on normal vectors, provides an initial subdivision of the surface and detects sharp edges as well as flat or highly curved areas. One of the main features of our method is to proceed by alternating the steps of segmentation and normal vector estimation. The second order segmentation subdivides the surface according to the principal curvatures and provides a sufficient foundation for the classification of simple algebraic surfaces. If the boundary of the original object contains such surfaces, the segmentation is optimized based on the result of a surface fitting procedure.


1 Introduction

The problem of building a 3D model from an unstructured set of points measured from the surface of a given physical object appears in many areas including computer graphics, computer vision and reverse engineering [4]. Building such a model is a problem of growing importance since efficient 3D scanning technologies such as laser range scanning become more and more available. In order to sample 3D objects adequately, multiple scans have to be taken. Merging the measurements of each scan results in a large set of unstructured data points. Now, the goal is to derive a surface model from the measured points automatically.

Before surfaces can be fitted to the data points, it is necessary to group these points into appropriate subsets, a process which is referred to as segmentation [9, 10]. It is obvious that such a segmentation can be based on surface properties estimated from the data points; to implement an efficient and reliable segmentation, however, is a real challenge. This paper gives detailed information on various problems that had to be solved in realizing our segmentation method. Strategies and tools are presented that are fundamental for a successful performance of a segmentation algorithm. One fundamental idea is to avoid the computation of a triangulation of the data set. Instead of a triangulation we use a more general data structure which can be efficiently computed even for large data sets: the neighborhood graph. Based on neighborhood information we estimate surface properties up to second order (namely normal vectors and principal curvatures) and perform the segmentation in two consecutive steps. In the first step, which is based on normal vectors, sharp or nearly sharp edges (i.e. edges with a very high variation of normal vectors) in the object are detected. In the second step principal curvatures are used to subdivide the data set further. Here, tangent continuous but curvature discontinuous edges are detected, and segments that lie on the same simple algebraic surface (such as planes, cones, cylinders or spheres) are joined.

A lot of papers have been published recently on the topic of reverse engineering. Most of them are concerned with the problem of polyhedral reconstruction, while some present tools for certain aspects of the reverse engineering process such as surface fitting or curvature estimation. We are aware of only one publication that is purely devoted to the segmentation process in reverse engineering [10]. Our work differs in the following respects from the segmentation procedure published in [10]:

First, the method in [10] is based on first order information only. Our first order segmentation has the following completeness property that is helpful for a reliable curvature estimation: any piece of a sharp or nearly sharp edge belongs to two different segments.



Second, no information is given in [10] on how the detection of sharp edges is made precise, while in our paper this point plays a crucial role. Furthermore, no information is given on how segmentation flaws are removed by cleanup procedures.

The layout of this paper is as follows: in section 2 we briefly summarize the necessary prerequisites for segmentation: k-nearest neighbors computation, normal vector estimation and the approximation of the principal curvatures. Section 3 is devoted to first order segmentation. Normals are estimated based on neighborhoods that may cross sharp edges; therefore it is impossible to detect such edges precisely. In many cases sharp edges cannot be detected at all because the estimated normals vary smoothly across the sharp edge. We solve this chicken-and-egg situation in the following way. First, we make sure that we create segments in the vicinity of sharp edges. This is done by limiting the allowed variance of normal vectors within a segment by a variance threshold. Now, a segment boundary close to a sharp edge is created either because of a detected normal discontinuity or because of an exceeded normal variance. Then we re-estimate normal vectors based on neighborhoods that are truncated at segment boundaries. By repeating both steps, the normal vector estimation and thus the detection of sharp edges is significantly improved. Our experiments have shown that already after three repetitions a sufficient accuracy is achieved. Another important aspect of our first order segmentation method is the application of several cleanup procedures that take care of segmentation flaws appearing as segments with a very small number of points. As the result of these procedures, small segments are joined with neighboring larger ones until the remaining small segments correspond to characteristic features of the surface such as sharp edges or regions of high curvature.

In section 4 we describe our second order segmentation, which is based on principal curvatures. In the vicinity of curvature discontinuous edges we observed a phenomenon similar to the one described for the first order segmentation close to sharp edges. Therefore, we again use an extended set of segmentation criteria based both on curvature values and on the variance of curvatures within a segment. However, we do not recompute curvatures and segmentations; instead we locate curvature discontinuous edges via a classification of the surfaces adjacent to the edge. This is possible because we are especially interested in objects modeled with simple algebraic surfaces such as planes, cones, cylinders and spheres. Consequently, the postprocessing of the second order segmentation is based on the recognition of the surface types. The postprocessing consists of the steps of segment classification, segment extension and segment joining. Segment extension is applied to all types of classified segments, while segment joining is only applied to the special case of cones. Cones are treated in a special way because we know that they appear in several slices of a thickness related to one segmentation parameter. Therefore it is reasonable to join these slices into one segment before a conical surface is fitted to the data. Our method is extremely fast (the whole segmentation is done within a few seconds for a data set of 100 thousand points), which allows the user to fine-tune the necessary thresholds interactively. Besides speed, the quality and robustness of the segmentation are remarkable. For noisy data sets with noise in the range of 0.2% of the dimension of the data, satisfactory segmentation results could be obtained.

2 Prerequisites for segmentation

Any segmentation process is based on geometric properties of the surface to be reconstructed. In this section we will summarize our experiences with different methods to estimate normals and curvatures from the point set. Since the computation of these properties requires the knowledge of the local neighborhood of each surface point, we consider this problem first.

2.1 k-nearest neighbors computation

In our approach we approximate the neighborhood of a point P by the set of k-nearest neighbors of P. This set provides sufficient information about the local behavior of the surface. The authors have developed an efficient method for k-nearest neighbors computation that is based on a hashing strategy. The fundamental steps of this algorithm are as follows:

Data structuring: The bounding box of the point set is subdivided with respect to the x, y and z coordinates such that non-overlapping boxes result. During the subdivision a binary tree is created whose leaves contain the regions with points. For the subdivision the median value is used in order to guarantee that every region contains the same number of points (up to one point). For every region a hash table is allocated and all points of the region are projected into this table. For the hashing the hash function

Index = ((x + y + z) − MinSum) · TABLESIZE / (MaxSum − MinSum)

MaxSum = max_i (x_i + y_i + z_i),   MinSum = min_i (x_i + y_i + z_i)



is used for the distribution of the points. Here 'Index' denotes the index of the point in the hash table, x, y and z are the coordinates of the current point, 'MinSum' and 'MaxSum' are the minimum and maximum coordinate sums taken over the points of the cluster, and 'TABLESIZE' is the size of the hash table.

Searching: In order to determine the k-nearest neighbors of P, the following operations are performed: Find in the binary tree the region which contains the point P. Search in the corresponding hash table for the k-nearest neighbors and use the k-th neighbor to determine the searching sphere. If the searching sphere intersects or contains adjacent regions, search also in the hash tables of these regions.

Figure 1: 2-nearest neighbors searching in 2D (R_i: regions after median subdivision; C1: searching sphere with centre P1 and radius d(P1, P3))

A detailed description of this algorithm can be found in [5, 6]. Note that the performance of this method is superior to a KD-tree based approach (see [2]).
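To make the data structure concrete, the following minimal Python sketch hashes points by their coordinate sum with the Index function above and grows the scanned bucket window until the sphere through the k-th candidate is guaranteed to lie inside it. It simplifies the published algorithm: a single global hash table replaces the per-region tables of the median subdivision, and all names (build_hash_table, k_nearest) are ours, not from [5, 6].

```python
import numpy as np

def build_hash_table(points, table_size):
    """Distribute point indices into buckets by coordinate sum:
    Index = (x + y + z - MinSum) * TABLESIZE / (MaxSum - MinSum)."""
    sums = points.sum(axis=1)                        # x_i + y_i + z_i
    min_sum, max_sum = float(sums.min()), float(sums.max())
    scale = table_size / (max_sum - min_sum)         # assumes max_sum > min_sum
    buckets = [[] for _ in range(table_size)]
    for i, s in enumerate(sums):
        buckets[min(int((s - min_sum) * scale), table_size - 1)].append(i)
    return buckets, min_sum, scale

def k_nearest(points, buckets, min_sum, scale, p, k):
    """Scan the bucket of point p and adjacent buckets until the
    searching sphere of the k-th candidate fits into the scanned slab
    (assumes the point set contains more than k points)."""
    table_size = len(buckets)
    s_p = float(points[p].sum())
    b = min(int((s_p - min_sum) * scale), table_size - 1)
    for radius in range(table_size):
        lo, hi = max(0, b - radius), min(table_size, b + radius + 1)
        cand = [i for bucket in buckets[lo:hi] for i in bucket if i != p]
        if len(cand) < k:
            continue
        dist = np.linalg.norm(points[cand] - points[p], axis=1)
        order = np.argsort(dist)
        # A point with coordinate sum s lies on the plane x + y + z = s,
        # i.e. at Euclidean distance |s - s_p| / sqrt(3) from p, so the
        # k nearest are certain once the k-th sphere fits in the slab.
        slab = min(s_p - (min_sum + lo / scale),
                   (min_sum + hi / scale) - s_p) / np.sqrt(3.0)
        if dist[order[k - 1]] <= slab or (lo == 0 and hi == table_size):
            return [cand[j] for j in order[:k]]
```

Querying every point once with k_nearest yields the neighborhood graph on which all subsequent estimation and segmentation steps operate.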

2.2 Approximation of the normal vectors

In [7] we presented a detailed description and comparison of three different methods in order to establish a reliable procedure for normal vector estimation. Our studies showed that the best results could be obtained with a two step method that consists of a regression plane fit followed by a fit with a higher order surface. For the approximation with an analytic surface of degree two, a first estimate of the normal vector N_s in P is needed, which is obtained from the regression plane fit. We define a new local orthogonal coordinate system such that P is the origin and N_s coincides with the z-axis of this coordinate system. The neighborhood of P and P itself are transformed from the global coordinate system into this new coordinate system and approximated by a quadratic or cubic surface z = f(x, y) using the least squares method. The estimated normal in P is the inverse transformation of the normal vector of the surface at P.

Our tests have shown that the appropriate neighborhood size for point sets with noiseless sampling is in the range of k = 10. For this neighborhood size the estimation was incorrect only on sharp edges. Increasing the neighborhood size results in a smoothing effect on the normal vectors in the vicinity of the edges, but does not improve them on smooth surfaces. For noisy data sets the neighborhood size has to be increased in order to obtain satisfactory results. However, these large neighborhoods result in a strong smoothing close to the edges. Extensive tests showed that the optimal choice of the estimation method should be made depending on the surface shape. In general the neighborhood size should be chosen in the range of 20–25 neighbors.

The subsequent segmentation step needs consistently oriented normal vectors. For this we employ the orientation propagation method suggested by Hoppe et al. [1]. The direction of one normal vector is chosen and this direction is propagated to adjacent points by traversing the weighted minimal spanning tree of the points, where the weights correspond to the angles between the normal vectors of adjacent points. Note that this method works satisfactorily for data of good and average sampling quality, but our tests have shown that it may fail for highly complex data sets. In this case it is necessary to allow different neighborhood sizes for different parts of the point set.
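To illustrate the two step estimation, here is a small Python sketch under our own naming (the paper's implementation is described in [7]): the regression plane normal is the direction of smallest variance of the neighborhood, and the refined normal comes from a quadratic least squares fit in the local frame.

```python
import numpy as np

def regression_plane_normal(neighbors):
    """First step: normal of the least squares plane through the
    neighborhood (singular vector of the smallest singular value)."""
    centered = neighbors - neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def refined_normal(point, neighbors):
    """Second step: fit z = f(x, y) = ax^2 + bxy + cy^2 + dx + ey + f
    in a local frame whose z-axis is the regression plane normal and
    transform the surface normal at the origin back."""
    n = regression_plane_normal(neighbors)
    u = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:                 # n parallel to the x-axis
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    frame = np.column_stack([u, v, n])           # orthonormal basis
    x, y, z = ((neighbors - point) @ frame).T    # local coordinates
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    a, b, c, d, e, f = coef
    n_local = np.array([-d, -e, 1.0])            # normal of the graph at (0, 0)
    n_world = frame @ n_local
    return n_world / np.linalg.norm(n_world), coef
```

Returning the fit coefficients as well lets the curvature computation of section 2.3 reuse the same quadric.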

2.3 Computation of principal curvatures

For second order segmentation the knowledge of the principal curvatures is of fundamental importance. To compute them we use the approximation method suggested by Martin et al. [3]. We take the computed normal vector N_P at a point P and fix a new coordinate system with the origin at P and the z-axis pointing in the direction of N_P. In this coordinate system we approximate the neighbor points of P with a quadratic surface f(x, y) = ax^2 + bxy + cy^2 + dx + ey + f using the least squares method, i.e. by minimizing ∑_{i=0}^{k} (f(x_i, y_i) − z_i)^2.(1) The normal vector in the modified coordinate system then has the form N_P^m = (−∂f/∂x, −∂f/∂y, 1). After computation of the coefficients g_ij and h_ij of the first and second fundamental forms, we obtain the principal curvatures as the solutions of the quadratic equation det(h_ij − λ g_ij) = 0.

The accuracy of the principal curvatures estimated by this method obviously depends on the quality of the approximated normal vectors. Furthermore, the curvatures depend on the size of the chosen neighborhood. In general, the neighborhood for curvature estimation has to be larger than the one for normal estimation, and it has to be enlarged with increasing noise in the data. For noisy data sets we use a default neighborhood size of 30.

(1) We also tested the approximation with a cubic surface, but the quadratic surface turned out to be more stable for noisy data.
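A compact sketch of this computation, using the standard first and second fundamental forms of a graph surface z = f(x, y) (these closed formulas are textbook material; the paper itself refers to [3]). It reuses the fit coefficients returned by refined_normal above.

```python
import numpy as np

def principal_curvatures(coef):
    """Principal curvatures at the origin of the local frame for the
    fitted quadric f(x, y) = ax^2 + bxy + cy^2 + dx + ey + f."""
    a, b, c, d, e, _ = coef
    fx, fy = d, e                                # gradient at (0, 0)
    fxx, fxy, fyy = 2.0 * a, b, 2.0 * c          # Hessian entries
    w = np.sqrt(1.0 + fx * fx + fy * fy)
    g = np.array([[1.0 + fx * fx, fx * fy],      # first fundamental form
                  [fx * fy, 1.0 + fy * fy]])
    h = np.array([[fxx, fxy],                    # second fundamental form
                  [fxy, fyy]]) / w
    # det(h - lambda * g) = 0  <=>  lambda is an eigenvalue of g^{-1} h
    lam = np.linalg.eigvals(np.linalg.solve(g, h))
    k_min, k_max = sorted(lam.real)
    return k_min, k_max
```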

3 First order segmentation

The purpose of the first order segmentation is to detect sharp edges, regions of high curvature and flat areas in the object to be reconstructed. Since sharp edges have to be located before a reliable curvature estimation is possible, first order segmentation is a prerequisite for higher order segmentation. One of the main features of our method is to proceed by alternating the steps of segmentation and normal estimation as described below. Each segmentation makes it possible to improve the normal estimation, which in turn can be the basis for a refined segmentation.

3.1 The initial segmentation

Our segmentation method uses two angles α and β to subdivide the surface into point clusters such that the following angle criteria are fulfilled:

- The angle between the normal vectors N_r, N_i of two adjacent points P_r, P_i within one segment (P_i ∈ N(P_r)) has to be smaller than α: ∠(N_r, N_i) < α.

- The angle between the normal vector N_i of a point P_i, which is a candidate to be added to an existing segment, and the reference vector N_ref of the segment has to be smaller than β: ∠(N_i, N_ref) < β. Here, the reference vector N_ref of a segment is the normalized sum of the normal vectors of all points in the segment.
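These two criteria translate directly into a region growing loop. The sketch below is our own illustration, not the paper's code; it assumes consistently oriented unit normals, a precomputed k-nearest neighbor list per point, and angles given in radians.

```python
import numpy as np

def angle(n1, n2):
    """Angle between two unit vectors."""
    return float(np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))

def first_order_segmentation(normals, neighbors, alpha, beta):
    """Grow segments so that adjacent normals differ by less than alpha
    and every normal stays within beta of the segment's reference
    vector (the normalized sum of the normals collected so far)."""
    label = [-1] * len(normals)
    current = 0
    for seed in range(len(normals)):
        if label[seed] != -1:
            continue
        ref = normals[seed].copy()
        label[seed] = current
        stack = [seed]
        while stack:
            p = stack.pop()
            for q in neighbors[p]:
                if label[q] != -1:
                    continue
                ref_dir = ref / np.linalg.norm(ref)
                if angle(normals[p], normals[q]) < alpha and \
                        angle(normals[q], ref_dir) < beta:
                    label[q] = current
                    ref += normals[q]            # update reference vector
                    stack.append(q)
        current += 1
    return label
```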

The meaning of the angles α and β can be described as follows: α controls the edge split by specifying the maximum acceptable angle between adjacent normal vectors. If the object is expected to include sharp edges with interior angles δ1, ..., δn, α has to be chosen such that α < min(δ1, ..., δn, π − δ1, ..., π − δn). β controls the flatness of the segments. For smooth surfaces the segmentation is due to this angle only. In the vicinity of sharp edges the initial normal estimation may provide incorrect normals that perform a "smooth" transition across the edge (see figure 4); in this case β provides the segment split.

The initial segmentation has the following undesirable property: besides the larger segments (of a size mainly controlled by β) that provide a reasonable segmentation of the data set, a huge number of small segments (with fewer than about 20 points) is produced that are certainly unwanted. In order to reduce the number of small segments we implemented three procedures to clean up the segmentation. The first of these procedures joins a small segment with a neighboring larger one if this process violates the β-criterion only by a prescribed tolerance ε. More precisely we proceed as follows:

Cleaning-up, pass 1: For all small segments S_f do:

- Consider the set NP of all neighbors of all edge points of S_f. Store for each segment the number of points of NP that it contains; we call this entry the neighborhood index.
- Examine all segments with a neighborhood index bigger than a specified threshold (e.g. 40% of all neighbors). Among these, find the segment S with the smallest angle deviation η between its reference normal and the reference normal of S_f.
- If η < β + ε, join S_f and S and update the reference normal of S.
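Pass 1 can be sketched as follows (again our own illustration; angle() is the helper from the previous sketch, and the values small = 20 and share = 0.4 mirror the examples given in the text). For brevity the neighbors of all points of a small segment stand in for the neighbors of its edge points.

```python
import numpy as np
from collections import Counter

def reference_normal(normals, members):
    ref = normals[list(members)].sum(axis=0)
    return ref / np.linalg.norm(ref)

def cleanup_pass1(label, normals, neighbors, beta, eps, small=20, share=0.4):
    """Join every small segment with a suitable neighboring larger one."""
    segments = {}
    for i, s in enumerate(label):
        segments.setdefault(s, set()).add(i)
    for sf in [s for s, m in segments.items() if len(m) < small]:
        members = segments[sf]
        outside = [q for p in members for q in neighbors[p] if label[q] != sf]
        if not outside:
            continue
        counts = Counter(label[q] for q in outside)   # neighborhood index
        ref_f = reference_normal(normals, members)
        best, best_eta = None, None
        for s, c in counts.items():
            if c < share * len(outside) or len(segments[s]) < small:
                continue
            eta = angle(reference_normal(normals, segments[s]), ref_f)
            if best is None or eta < best_eta:
                best, best_eta = s, eta
        if best is not None and best_eta < beta + eps:
            for p in members:                          # join S_f into S
                label[p] = best
            segments[best] |= members
            del segments[sf]
    return label
```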

Remaining small segments are processed by a second procedure that intends to reduce the size of a small segment by repeatedly extracting its edge points.

Cleaning-up, pass 2: Let S1 denote a small segment.

- Search for an edge point P_i of S1 with a neighbor Q_ij that belongs to a large (regular) segment S2 with reference normal N_ref.
- If such a point exists and the relations ∠(N_Pi, N_Qij) < α and ∠(N_Pi, N_ref) < β + ε hold, add P_i to S2 and update the list of edge points of S1.
- Repeat until the list of edge points of S1 is empty or all of its points are marked as not extractable.

Small segments that still remain after execution of the second cleaning-up procedure belong in general to one of the following categories:



a) they describe regions with high curvature or bad sampling, where the normal vector estimation procedure failed even for extended neighborhoods.

b) they occur along sharp edges of the object.

The third procedure intends to simplify the segmentation by joining the remaining small segments. The segment that results from joining all small segments into one segment is called the connected segment; all other segments are referred to as regular ones. Figure 2 illustrates the first order segmentation displaying the regular segments only; figure 3 shows the connected segment.

Figure 2: First order segmentation of an artificially generated object consisting of 22,211 points. 41 segments result, containing 21,112 points

Figure 3: The connected segment of the object shown in figure 2

3.1.1 Iterating segmentation and normal estimation

We already mentioned that the normal vectors close to the edges are estimated inaccurately because the neighborhoods contain points on both sides of the edge. This is especially the case if the data set results from a poor sampling that requires the use of large neighborhoods. These normal vectors often disturb the segmentation process and cause a large number of small segments. Therefore we implemented a procedure for normal vector recomputation, which searches for the points in the vicinity of sharp edges and improves the estimation of their normal vectors by modifying the point neighborhoods. Since the current segmentation provides rough information about the object's surface, we can approximately detect edges or regions of high curvature. Therefore the neighborhood of a point P in a regular segment can be temporarily adjusted so that N(P) does not contain neighbors beyond sharp edges.

For every segment a list of its segment neighbors is created and the normal recomputation procedure proceeds as follows:

- Take a point P_i of a segment S_j.
- Mark all segment neighbors S_m of S_j with the property ∠(N_ref^m, N_ref^j) < δ, where N_ref^m and N_ref^j are the reference vectors of the segments S_m and S_j respectively and δ is a threshold.
- Check whether all neighbors of P_i belong to the marked segments. If not, remove the offending points from N(P_i). If the neighborhood was not changed, continue with the next point. Otherwise extend N(P_i) to a full neighborhood as follows: take the neighbors of the points in N(P_i) which belong to marked segments and have not yet been included into N(P_i).
- Estimate the normal vector of P_i from the modified neighborhood using one of the described estimation methods and discard the modified neighborhood.

The procedure recomputes only normal vectors of regular segments (normal vectors in connected segments can be recomputed e.g. by increasing the neighborhood size). The whole process (segmentation with subsequent recomputation) can be repeated, but as our tests have shown, more than two repetitions yield only very small improvements. The recomputation is very fast (the worst case complexity is O(n), but in general only normal vectors of edge points are recomputed) and for an object with many sharp edges it provides a good improvement of the normal vector estimation.

If the transition between two segments is smooth, then the normal vectors of the boundary points of these segments were already well estimated by the initial estimation procedure and it is undesirable to recompute them. It is the role of the angle δ to avoid the modification of the neighborhoods in this situation. Figure 4 shows the effect of the normal vector recomputation for an object with a hole and many sharp edges.

Our tests have shown that many objects can be reliably segmented with a fixed choice of α and β. We use α = 8°, β = 20° as default values.
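The neighborhood truncation at the heart of this procedure can be sketched as follows. This is our illustration only: estimate_normal stands for the two step method of section 2.2 (e.g. a wrapper around refined_normal returning just the normal), seg_neighbors for a precomputed segment adjacency, and angle() is the helper used before.

```python
def recompute_normals(points, normals, label, neighbors,
                      ref_normals, seg_neighbors, delta, estimate_normal):
    """Re-estimate normals from neighborhoods truncated at segment
    boundaries whose reference normals deviate by delta or more."""
    for p in range(len(points)):
        sj = label[p]
        # segments whose transition to S_j is smooth stay usable
        marked = {sm for sm in seg_neighbors[sj]
                  if angle(ref_normals[sm], ref_normals[sj]) < delta}
        marked.add(sj)
        kept = [q for q in neighbors[p] if label[q] in marked]
        if len(kept) == len(neighbors[p]):
            continue                         # neighborhood unchanged
        # refill to a full neighborhood from the marked segments only
        ring = [r for q in kept for r in neighbors[q]
                if label[r] in marked and r not in kept and r != p]
        kept.extend(ring[:len(neighbors[p]) - len(kept)])
        if kept:
            normals[p] = estimate_normal(points[p], points[kept])
    return normals
```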



Figure 4: First step normal estimation (upper image) and the normal vectors after recomputation (lower image)

4 Second order segmentation

In the previous section we presented a reliable method for the detection of sharp (i.e. tangent discontinuous) or nearly sharp edges (i.e. tangent continuous ones with a very high variation of normals). Our first order segmentation involves the following completeness property: any sharp or nearly sharp edge belongs to two different segments. This property assures that in the process of normal recomputation the neighborhoods are correctly truncated to one side of a critical edge. The completeness property is obtained by introducing the second control angle β, which refers to the deviation of a point normal from the reference vector of a segment. However, it is this angle that leads to an unwanted tessellation of surfaces (e.g. surfaces of revolution) into several strips. This disadvantage could be overcome by computing a final segmentation (with highly accurate normals due to the previous steps) that involves only the angle α. However, it is advisable to perform this additional pass of the segmentation based on second order information, since this allows tangent continuous but curvature discontinuous edges to be included into the model.

In this section we describe our second order segmentation strategy, which is based on the notion of principal curvatures. Consider two neighboring points P_1, P_2 on a discretized piecewise C^2 surface S and assume that these points are not separated by a sharp edge. For each point P_i there exist two directions v_min(P_i), v_max(P_i) of minimum and maximum normal curvature κ_min(P_i), κ_max(P_i) (the principal curvatures). If the second fundamental form of S (which comprises all curvature information of the surface) changes continuously from P_1 to P_2, the closeness of these points suggests that the differences |κ_min(P_1) − κ_min(P_2)| and |κ_max(P_1) − κ_max(P_2)| will be small. We therefore prescribe two thresholds Δκ_min and Δκ_max that correspond to the maximum deviation of the principal curvatures for two neighboring points to be accepted as points of one curvature continuous segment.

In section 2.3 the equations for computing the principal curvatures have been given. Since curvatures are more sensitive to noise in the data than normals, it is necessary to choose larger neighborhoods for curvature estimation than for normal estimation. Note that these neighborhoods are correctly truncated to lie completely on one side of a sharp edge. However, the first order segmentation cannot locate tangent continuous but curvature discontinuous edges (e.g. between a cylinder and a sphere). Therefore a neighborhood for curvature estimation can cross such an edge, which leads to wrongly estimated curvatures that vary smoothly across the edge. In analogy to our approach in the first order case we introduce two additional parameters into the segmentation procedure. First we define the reference minimum and reference maximum curvature of a segment as the averages of the values κ_min(P) and κ_max(P) within that segment:

κ_min^ref = (1/n_s) ∑_{i=1}^{n_s} κ_min(P_i),   κ_max^ref = (1/n_s) ∑_{i=1}^{n_s} κ_max(P_i).

Then we accept a point P_i to be included into a segment only if the differences |κ_min^ref − κ_min(P_i)| and |κ_max^ref − κ_max(P_i)| stay smaller than the prescribed thresholds Δκ_min^ref and Δκ_max^ref respectively.

The segmentation proceeds by initializing a segment with a single point and recursively checking the neighbors of any point in the segment. If a neighboring point is found that satisfies the four segmentation criteria described above, the point is included in the segment. If no such point can be found, the segment is considered to be complete and a new segment is initialized. The starting point for any new segment is taken from a sorted point list. This list contains all data points sorted in ascending lexicographical order of the absolute values of both curvatures κ_min and κ_max, i.e. Index(P) < Index(Q) iff |κ_min(P)| < |κ_min(Q)|, or |κ_min(P)| = |κ_min(Q)| and |κ_max(P)| < |κ_max(Q)|. Within this list any point that already belongs to a segment is flagged. The first unflagged entry is chosen as the starting point for a new segment. This choice assures that the segmentation proceeds from simple to more complex geometries, a fact that is advantageous in many situations.
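The four criteria and the sorted seed list combine into the following region growing sketch (our illustration; kmin and kmax hold the per-point principal curvatures of section 2.3, thr the four thresholds):

```python
def second_order_segmentation(kmin, kmax, neighbors, thr):
    """Curvature based region growing with seeds sorted by
    (|kmin|, |kmax|), so flat regions are segmented first."""
    d_min, d_max, d_min_ref, d_max_ref = thr
    n = len(kmin)
    seeds = sorted(range(n), key=lambda p: (abs(kmin[p]), abs(kmax[p])))
    label = [-1] * n
    current = 0
    for seed in seeds:
        if label[seed] != -1:
            continue
        label[seed] = current
        size, sum_min, sum_max = 1, kmin[seed], kmax[seed]
        stack = [seed]
        while stack:
            p = stack.pop()
            for q in neighbors[p]:
                if label[q] != -1:
                    continue
                ref_min, ref_max = sum_min / size, sum_max / size
                ok = (abs(kmin[p] - kmin[q]) < d_min and      # neighbor test
                      abs(kmax[p] - kmax[q]) < d_max and
                      abs(ref_min - kmin[q]) < d_min_ref and  # reference test
                      abs(ref_max - kmax[q]) < d_max_ref)
                if ok:
                    label[q] = current
                    size += 1
                    sum_min += kmin[q]
                    sum_max += kmax[q]
                    stack.append(q)
        current += 1
    return label
```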

Consider two adjacent tangent continuous surfaces of different curvature, e.g. a plane and a cylinder. Our choice guarantees that the segmentation will be initialized in the interior of the plane, while an arbitrary choice might have initialized a segment at the boundary between the surfaces. The first choice results in a mainly planar segment that only slightly enters into the cylinder; the second choice would create a small segment that would have to be resolved in the subsequent postprocessing steps. The segmentation terminates when all entries in the sorted point list are flagged.

In contrast to the first order segmentation, which provides very good results on a vast class of objects for a fixed choice of the values α, β (see section 3), the quality of the second order segmentation highly depends on the choice of the four thresholds Δκ_min, Δκ_max, Δκ_min^ref, Δκ_max^ref. For this reason we have developed a semi-automatic procedure to determine these parameters, see [8]. The only user input required is a vague statement about the estimated percentage of noise in the data (e.g. low, medium, high). Following the creation of the initial curvature based segmentation, a major postprocessing is performed that is too involved to be explained in detail in this paper. The main elements of the postprocessing are:

- Segment classification. This is based on fitting procedures for the basic surface types: planes, cylinders, cones and spheres.

- Segment extension. We try to extend or join segments based on the knowledge of the surface types and their algebraic description.

- Iteration between the steps of classification and extension. For most test data 3 iterations are sufficient to provide a satisfactory segmentation.
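As a flavor of the classification step, the sketch below fits two of the basic types and accepts the first whose root mean square residual is below a tolerance. It is a strongly simplified stand-in for the paper's procedure: cylinders and cones, which need non-linear fitting, are omitted, and all names are ours.

```python
import numpy as np

def fit_plane(pts):
    """Least squares plane; rms residual and (normal, offset)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c, full_matrices=False)
    n = vt[-1]
    res = (pts - c) @ n
    return float(np.sqrt((res ** 2).mean())), (n, -float(n @ c))

def fit_sphere(pts):
    """Algebraic least squares sphere: |p|^2 = 2 c.p + (r^2 - |c|^2)."""
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = sol[:3]
    r = float(np.sqrt(max(sol[3] + c @ c, 0.0)))
    res = np.linalg.norm(pts - c, axis=1) - r
    return float(np.sqrt((res ** 2).mean())), (c, r)

def classify_segment(pts, tol):
    """Try the surface types from simple to complex."""
    for name, fit in (("plane", fit_plane), ("sphere", fit_sphere)):
        rms, params = fit(pts)
        if rms < tol:
            return name, params
    return "unclassified", None
```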

Figure 5: The point set of an artificially generated (without noise) mechanical object containing 54,854 points

The figures 5–8 illustrate the whole segmentation process, starting with the normal based segmentation through the curvature based segmentation and the subsequent postprocessing. The figures 9, 10 and 11 show the final segmentation for objects reconstructed from noisy data sets. The data sets used for these figures were contaminated by random noise in the range of 0.1% (fig. 9), 0.1% (fig. 10) and 0.2% (fig. 11) of the length of the diagonal of the objects' bounding boxes.

The whole curvature based segmentation procedure (with subsequent postprocessing and classification) is time optimized in order to allow many iterations and tuning of the curvature segmentation parameters. Table 1 shows the time consumption of the segmentation of a few objects (run on a Pentium 4 1.7 GHz system with 1 GB memory). The first three objects are artificially created objects consisting only of algebraic surfaces. The last two are very complex scanned real objects containing algebraic surfaces as well as free form surfaces. The time consumption depends mainly on the number of geometric surfaces in the object rather than on the number of points. Note that if an object consists only of free form surfaces, the postprocessing can be switched off by the user.

Figure 6: First step: normal vector based segmentation, consisting of 65 segments containing 54,169 points


Figure 7: Second step: curvature based segmentation, consisting of 25 segments containing 48,361 points

Figure 8: After postprocessing: classification and extension of the recognized segments - 11 segments containing 53,740 points


# of points   Curv. estim. neighborhood   Curv. estim. time   Seg. time   Postproc. time
26,045        20                          2.61                0.26        0.74
54,854        20                          5.74                0.60        0.79
90,974        20                          9.65                1.43        0.66
236,381       25                          24.73               8.44        11.39
294,923       25                          35.42               24.72       7.34

Table 1: Curvature segmentation and postprocessing (including the classification) time consumption in seconds for some real and artificial objects

Figure 9: Curved Box: 18 segments, 27,371 points (of 27,792)


Figure 10: Object of revolution: 12 segments, 54,660 points


Figure 11: MechPart: 23 segments, 19,949 points (of 22,211)


References

[1] Hoppe, H., DeRose, T., Duchamp, T., McDonald, J. and Stuetzle, W.: Surface Reconstruction from Unorganized Points, Proceedings SIGGRAPH '93, 1993, 19–26

[2] Jensen, H.W.: Global Illumination Using Photon Maps, Proceedings Seventh Eurographics Workshop on Rendering, June 1996

[3] Martin, R.R. and Várady, T.: Estimation of Principal Curvatures from Range Data, RECCAD Deliverable Documents 2 and 3, Copernicus Project No. 1068, Report GML 1997/5, Computer and Automation Institute, Hungarian Academy of Sciences, Budapest, 1997

[4] Mencl, R. and Müller, H.: Interpolation and Approximation of Surfaces from Three-Dimensional Scattered Data Points, Research Report No. 662/1997, December 1997

[5] Vančo, M., Brunnett, G. and Schreiber, Th.: A Hashing Strategy for Efficient k-Nearest Neighbors Computation, Proceedings CGI 1999, 1999, 120–128

[6] Vančo, M., Brunnett, G. and Schreiber, Th.: A Direct Approach Towards Automatic Surface Segmentation of Unorganized 3D Points, Proceedings SCCG 2000, 2000

[7] Vančo, M. and Brunnett, G.: Consistent Orientation of Segmented Models Recovered from Digitized Data, Proceedings WSCG 2001, Pilsen, 2001

[8] Vančo, M. and Brunnett, G.: Semi-automatic Curvature Based Segmentation of Digitized Data, preprint, University of Technology Chemnitz, 2002

[9] Várady, T., Martin, R.R. and Cox, J.: Reverse Engineering of Geometric Models - An Introduction, Computer-Aided Design, Vol. 29, April 1997, 255–268

[10] Várady, T., Kós, G. and Benkő, P.: Reverse Engineering Regular Objects: Simple Segmentation and Surface Fitting Procedures, International Journal of Shape Modeling (Procs. of CAGD: New Trends and Applications, 1997), Vol. 4, 1998, 127–142

