Optimal Landmark Distributions for Statistical Shape Model Construction

Tobias Heimann, Ivo Wolf and Hans-Peter Meinzer
Div. Medical and Biological Informatics, German Cancer Research Center
Im Neuenheimer Feld 280, Heidelberg, Germany
[email protected]

ABSTRACT
Minimizing the description length (MDL) is one of the most promising methods to automatically generate 3D statistical shape models. By modifying an initial landmark distribution according to the MDL cost function, points across the different training shapes are brought into correspondence. A drawback of the current approach is that the user has no influence on the final landmark positions, which often do not represent the modeled shape adequately. We extend an existing remeshing technique to work with statistical shape models and show how the landmark distribution can be modified at any time during the model construction phase. This procedure is guided by a control map in parameter space that can be set up to produce any desired point distribution, e.g. equally spaced landmarks. To compare our remeshed models with the original approach, we generalize the established generalization and specificity measures to be independent of the underlying landmark distribution. This is accomplished by switching the internal metric from landmark distances to the Tanimoto coefficient, a volumetric overlap measure. In a concluding evaluation, we generate models for two medical datasets with and without landmark redistribution. As the outcome reveals, redistributing landmarks to an equally spaced distribution during the model construction phase improves the quality of the resulting models significantly if the shapes feature prominent bulges or other complex geometry.

Keywords: Statistical Shape Models, Shape Correspondence, Remeshing, Performance Characterization
1. INTRODUCTION
Since their introduction by Cootes et al.,1 statistical shape models have become popular tools for automatic segmentation of medical images. The main challenge of the approach is the point correspondence problem in the model construction phase: On every training sample for the model, landmarks have to be placed in a consistent manner. While manual labeling is a time-consuming but feasible solution for 2D models with a limited number of landmarks, it is highly impractical in the 3D domain: Not only is the required number of landmarks dramatically higher than in the 2D case, but it also becomes increasingly difficult to identify and pinpoint corresponding points, even for experts. Several automated methods to find the correspondences in 3D have been proposed. Brett and Taylor use a pairwise corresponder based on a symmetric version of the ICP algorithm.2 All training shapes are decimated to generate sparse polyhedral approximations and then merged in a binary tree, which is used to propagate landmark positions. Shelton measures correspondence between surfaces in arbitrary dimensions by a cost function which is composed of three parts representing Euclidean distance, surface deformation and prior information.3 The function is minimized using a multi-resolution approach that matches highly decimated versions of the meshes first and iteratively refines the results. Frangi et al. build a shape model of the heart using volumetric nonrigid registration.4 Employing multi-resolution B-spline deformation fields, they match all training shapes to an atlas
and propagate a set of landmarks back to the individual shapes using the respective inverse transforms. Paulsen and Hilger match a decimated template mesh to all training shapes using thin plate spline warping controlled by a small set of manually placed anatomic landmarks.5 The resulting meshes are relaxed to fit the training shapes by a Markov random field regularization. Another approach based on matching templates is presented by Zhao and Teoh:6 They employ an adaptive-focus deformable model to match each training shape to all others without the need for manually placed landmarks. The shape yielding the best overall results in this process is subsequently used to determine point correspondences, enhanced by a “bridge-over” procedure for outliers. A common characteristic of these methods is that they base their notion of correspondence on general geometric properties, e.g. minimum Euclidean distance and low distortion of surfaces or volumes. A different approach is presented by Davies et al.,7 who propose to minimize a cost function based on the minimum description length of the resulting statistical shape model. This procedure has been shown to deliver superior performance in comparison with a selection of other approaches.8 In our last article,9 we presented an improved minimization scheme for this method which is based on a gradient descent optimizer with a simplified cost function, providing both better results and faster performance. In this work, we focus on the landmark quality of models generated by the automatic optimization of correspondence.
2. PRELIMINARIES
2.1. Statistical Shape Models
The input to our algorithm for statistical shape model generation is a collection of 3D meshes of our training shapes. To describe these shapes in the statistical model, we employ the popular point distribution models1 (PDMs). In a PDM, each input mesh is described by a set of n landmarks and stored as a single vector x, where the coordinates for landmark i can be found at (x_i, x_{i+n}, x_{i+2n}).∗ The vectors of all training samples form the columns of the landmark configuration matrix L. Applying principal component analysis (PCA) to this matrix delivers the principal modes of variation p_m in the training data and the corresponding eigenvalues λ_m. Restricting the model to the first c modes, all valid shapes can be approximated by the mean shape \bar{x} and a linear combination of displacement vectors:

x = \bar{x} + \sum_{m=1}^{c} y_m p_m    (1)

∗ In statistical shape model literature, the term “landmark” is frequently used to describe the points of the PDM. This differs from the common definition of a landmark as an especially characteristic point, e.g. at a salient feature.
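As an illustration of how such a model can be assembled (a minimal numpy sketch under our own naming conventions, not the implementation used for the experiments), the mean shape, the modes p_m and the eigenvalues λ_m can be obtained from the landmark configuration matrix L as follows:

```python
import numpy as np

def build_pdm(L, c):
    """Build a point distribution model from the landmark configuration
    matrix L (shape 3n x s, one training sample per column).
    Returns the mean shape, the first c modes p_m and their eigenvalues."""
    x_bar = L.mean(axis=1, keepdims=True)
    X = L - x_bar                                    # center the training vectors
    U, S, _ = np.linalg.svd(X, full_matrices=False)  # PCA via SVD; columns of U are modes
    eigvals = (S ** 2) / (L.shape[1] - 1)            # eigenvalues of the sample covariance
    return x_bar[:, 0], U[:, :c], eigvals[:c]

def synthesize(x_bar, P, y):
    """Approximate a shape as in Eq. (1): x = x_bar + sum_m y_m * p_m."""
    return x_bar + P @ y
```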
2.2. Quantifying Correspondence
In order to describe the modeled shape and its variations correctly, landmarks on all training samples have to be located at corresponding positions. In their work on automatic landmark generation,10 Davies et al. use the minimum description length of the shape model to quantify this correspondence. In practice, less complex cost functions based on the eigenvalues λ_m, such as the ones proposed by Kotcheff and Taylor11 or Thodberg,12 produce very similar results. As we need manageable derivatives for our gradient descent optimization, we use the simplified version of the MDL function described in Ref. 12 to quantify correspondence.
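As a rough illustration only (the exact formulation is given in Ref. 12; the cut-off value lambda_cut below is a hypothetical parameter modeling the noise floor), a simplified description-length cost of this kind penalizes large eigenvalues logarithmically and small ones linearly:

```python
import numpy as np

def simplified_mdl_cost(eigvals, lambda_cut):
    """Illustrative simplified description-length cost over the model eigenvalues:
    modes above the (hypothetical) noise cut-off lambda_cut contribute
    1 + log(lambda / lambda_cut), smaller modes contribute lambda / lambda_cut."""
    lam = np.maximum(np.asarray(eigvals, dtype=float), 1e-12)  # avoid log(0)
    terms = np.where(lam >= lambda_cut,
                     1.0 + np.log(lam / lambda_cut),
                     lam / lambda_cut)
    return terms.sum()
```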
2.3. Mesh Parameterization
To define an initial set of correspondences and a means of manipulating them efficiently, we need a convenient parameter domain for our training shapes. For reasons of simplicity, we will restrict the discussion to shapes of genus 0, which are topologically equivalent to a sphere; most shapes encountered in medical imaging (e.g. kidneys, liver, heart ventricles, brain surface, mandible) fall into this category. Thus, we are looking for a one-to-one mapping which assigns every point on the surface of a mesh a unique position on the unit sphere, described by the two parameters longitude θ ∈ [0..2π] and latitude φ ∈ [0..π]. In practice, this mapping is specified by a spherical mesh of the same topology as the input mesh.
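For reference, the mapping from the two parameters to a point on the unit sphere can be written as a one-line helper (a trivial sketch, assuming φ is measured from the pole; it is not part of the parameterization method itself):

```python
import numpy as np

def sphere_point(theta, phi):
    """Map the parameters (theta in [0, 2*pi), phi in [0, pi]) to a point on the
    unit sphere; phi is assumed to be measured from the pole."""
    return np.array([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)])
```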
Figure 1. Kernel configurations for different levels of detail. Red colors mark regions with large vertex movements, blue ones those with no modification.
To generate a parameterization for a given shape, a number of different approaches exist, all attempting to minimize the inevitable distortion. Typically, a method preserves either local angles or facet areas while trying to minimize distortions in the other. An overview of recent work on this topic can be found in Ref. 13. For an initial parameterization, Davies uses diffusion mapping, a simplified version of the method described by Brechbühler14 which is neither angle- nor area-preserving. Due to our optimization strategy (Sect. 2.4), our focus lies on preserving angles, a behavior guaranteed by conformal mapping functions. Therefore, we employ the variational method presented in Refs. 15, 16 to generate an initial conformal parameterization Ω_i for each training sample i.
2.4. Optimizing Correspondence
With an initial parameterization Ω_i for each training sample, we can acquire the necessary landmarks by mapping a set of spherical coordinates to each shape. To optimize the point correspondences with the cost function, we maintain a fixed set of global landmarks Ψ and modify the individual parameterizations Ω_i. This enables us to alter the number and placement of landmarks on the unit sphere at any stage of the optimization, as we will show in Sec. 3. In the following, we briefly summarize the employed optimization method. For a more detailed description of the procedure, we refer the reader to Ref. 9. Correspondences for a shape are modified locally by applying a warp to all vertices of the spherical parameterization mesh inside a specific kernel region. The modification consists of a change of the spherical coordinates (θ, φ) that is restricted by a Gaussian envelope function. The optimal direction and amplitude of the move are estimated by calculating the derivatives of the employed cost function for each landmark and averaging the individual values over the kernel. Since the spherical landmark set Ψ is left undisturbed, moving the parameterization changes the relative landmark locations and leads to new positions for all landmarks in a local area of the training shape. Because of the strict local limits, it is possible to optimize several kernels at different positions of the sphere concurrently. The process of calculating derivatives and updating the parameterizations for all training shapes is conducted iteratively in a gradient descent fashion. After each iteration, kernel positions are rotated randomly to ensure an equal treatment of all landmarks. At the beginning, large kernels are used to optimize correspondences of coarse features first. When the cost function does not decrease sufficiently anymore, we switch to the next level of detail with a larger set of smaller kernels and reduced step sizes. All in all, we have four levels of detail; three example configurations are displayed in Fig. 1. After termination of the last level, the optimization is considered converged.
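The following sketch illustrates the basic form of such a kernel update. It is our own simplified formulation with hypothetical parameter names, not the actual optimizer code: vertices of the spherical parameterization are displaced in (θ, φ), attenuated by a Gaussian of their geodesic distance to the kernel center.

```python
import numpy as np

def gaussian_kernel_warp(theta, phi, center, delta, sigma):
    """Shift the (theta, phi) coordinates of parameterization vertices by
    delta = (d_theta, d_phi), attenuated by a Gaussian envelope around the
    kernel center = (theta_c, phi_c); all angles in radians, phi from the pole."""
    # geodesic distance of every vertex to the kernel center on the unit sphere
    cos_d = (np.sin(phi) * np.sin(center[1]) * np.cos(theta - center[0])
             + np.cos(phi) * np.cos(center[1]))
    d = np.arccos(np.clip(cos_d, -1.0, 1.0))
    w = np.exp(-0.5 * (d / sigma) ** 2)   # Gaussian envelope, 1 at the kernel center
    return theta + w * delta[0], phi + w * delta[1]
```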
3. LANDMARK RECONFIGURATION
As described in Sec. 2.3, our initial parameterization functions are angle-preserving to provide the best conditions for the presented optimization scheme. However, since the parameterizations are modified in the course of the optimization, the resulting landmarks will generally not feature any specific properties. In this section, we present methods to reconfigure landmarks after or during the optimization process in order to achieve certain desirable properties for our final model.

Figure 2. Landmarks and corresponding triangulations on a training shape for the liver model (displayed transparently). For the shape on the left, the correspondence optimization was started with conformal parameterizations, on the right with diffusion mapping. Both versions exhibit considerable variation in landmark density, leading to deficient shape representations. The two-dimensional display evokes the impression of vertices positioned inside the volume, but all landmarks are located on the surface of the training shape.
3.1. Distributing Landmarks Equally
Depending on the shapes of the training samples, parameterizations can exhibit considerable area deformation. Although landmarks are distributed equally over the surface of the spherical parameterization mesh, the density of the mapped positions on the training shapes will vary widely. This issue becomes critical when landmarks are spread so sparsely in certain regions that the triangulation no longer represents the shape accurately (see Fig. 2). A solution to this problem is to change the global landmark set Ψ: Since the point correspondences are specified by the individual parameterizations, reconfiguring landmarks on the sphere will maintain these correspondences while at the same time generating a new triangulation.
3.2. Remeshing Surfaces
The problem of finding new landmark positions for a shape model is closely related to the problem of remeshing a polygonal surface, i.e. finding new vertex positions and a new triangulation for a single shape. Since this is an active field of research in the computer graphics community, there exists a multitude of different approaches for remeshing (see e.g. Refs. 17–19). In order to utilize one of these methods for reconfiguring the landmarks of our shape model, we need a way to incorporate the shape variation into the process. In our case, we accomplish this by using the entire collection of training meshes as input (from which all the variance information is derived). The natural links between these meshes are the parameterizations, and conveniently, there are some remeshing approaches which work in parameter space. One of those is the work by Alliez et al.,20 which we briefly summarize in the following. First, the input mesh is converted into a collection of two-dimensional maps in parameter space. These maps define the desired vertex densities for the entire surface and can be set up to any desired distribution, e.g. the most economic representation (i.e. a sparser net of vertices in regions with lower curvature) or a uniform vertex distribution. The higher the values in a certain area of the map, the denser the vertices will be placed in the corresponding surface area of the mesh. In order to reach an equal distribution of vertices, a distortion ratio is computed for each triangle of the mesh, defined as the area of the triangle on the input mesh divided by the area on the 2D map. The triangle is then drawn on the map with the value of this ratio as color, leading to a map of the area distortion on the shape (see Fig. 3 on the left). Subsequently, this map is dithered with a predefined number of points using an error diffusion algorithm such as the one presented in Ref. 21. The positions of all points are mapped back to the mesh and give a first solution for an equal distribution of vertices. The new vertex connections are determined by a Delaunay triangulation on the 2D maps (see Fig. 3 on the right; a short code sketch of these steps follows below). In the case of multiple maps, the vertices of all parts have to be re-connected for the new mesh. This is accomplished by a separate, one-dimensional dithering process along the boundary of all maps. Each resulting pixel is inserted into both maps adjacent to the respective boundary. These constrained vertices are connected to the existing points inside the map, forming a convex triangulation (the outer circle in Fig. 3 on the right). When mapping the vertices back to the mesh, corresponding boundary points end up in the same location and are stitched together.

Figure 3. Area distortion and resulting landmark positions for one half-sphere. The image on the left shows the ratio between area on the model surface and area in parameter space. After dithering the map, the resulting white pixels are connected by a two-dimensional Delaunay triangulation. When this geometry is mapped back to the sphere and joined with the other half-sphere, it forms the prototype for the new landmark configuration.
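As a rough sketch of the dithering and triangulation steps, the following uses a simplified Floyd-Steinberg diffusion as a stand-in for the algorithm of Ref. 21 and assumes the control map is already rasterized as a numpy array, with far fewer samples requested than there are pixels:

```python
import numpy as np
from scipy.spatial import Delaunay

def dither_control_map(density, n_points):
    """Place roughly n_points samples on a 2D control map by error diffusion:
    denser map regions receive proportionally more samples. Assumes at most
    one sample per pixel (n_points small relative to the number of pixels)."""
    err = density / density.sum() * n_points      # expected samples per pixel
    h, w = err.shape
    samples = []
    for y in range(h):
        for x in range(w):
            v = 1.0 if err[y, x] >= 0.5 else 0.0
            if v:
                samples.append((x + 0.5, y + 0.5))    # sample at the pixel center
            e = err[y, x] - v                         # quantization error
            # diffuse the error to the yet unvisited neighbours (Floyd-Steinberg weights)
            if x + 1 < w:                 err[y, x + 1]     += e * 7 / 16
            if y + 1 < h and x > 0:       err[y + 1, x - 1] += e * 3 / 16
            if y + 1 < h:                 err[y + 1, x]     += e * 5 / 16
            if y + 1 < h and x + 1 < w:   err[y + 1, x + 1] += e * 1 / 16
    pts = np.asarray(samples)
    return pts, Delaunay(pts)   # new vertex positions and their 2D triangulation
```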
3.3. Adaptation to Shape Models
The adaptation of this remeshing approach to statistical shape models is straightforward: Instead of a single mesh that has to be remeshed, we have an entire collection of training shapes that has to be triangulated with the same topology. Conveniently, we already have parameterizations for all these meshes, albeit of spherical topology. To obtain the required two-dimensional images, we separate each parameterization into two half-spheres and use stereographic projection to create two disk-like patches for each mesh. Since the spherical parameterizations correspond over all shapes, the two-dimensional maps match as well. Thus, we can average all patches representing the same side of the sphere and end up with two control maps on which the dithering is performed. The resulting landmark densities will be optimal for representing the entire collection of training shapes, which means that they will also represent the shape model in an optimal way. An example of a successfully reconfigured training shape is displayed in Fig. 4.
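A minimal sketch of these two steps, assuming one possible stereographic convention (projection from the south pole onto the equatorial plane) and control maps that are already rasterized on a common grid:

```python
import numpy as np

def stereographic_to_disk(xyz):
    """Project points of one half-sphere (z >= 0) of the unit-sphere
    parameterization onto a 2D disk via stereographic projection from the
    south pole (one of several possible conventions)."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    return np.stack([x / (1.0 + z), y / (1.0 + z)], axis=1)

def average_control_maps(maps):
    """Average the per-shape area-distortion maps (rasterized on a common grid)
    into a single control map used for the dithering step."""
    return np.mean(np.stack(maps, axis=0), axis=0)
```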
4. PERFORMANCE MEASURES
In order to estimate the impact of the presented landmark reconfiguration, we need an objective performance index for evaluation. In his thesis,22 Davies describes the measures of generalization ability and specificity to compare the performance of different models.†

† Davies also introduces the measure of compactness, which we do not use because it is not as general as the other two measures.9

Figure 4. The same shape as in Fig. 2 with a reconfigured landmark set. All landmarks are equally spread over the surface, leading to an accurate representation of the shape data.
Generalization ability quantifies the capability of the model to represent new shapes. It can be estimated by performing a series of leave-one-out tests on the training set, measuring the difference of the omitted training shape to the closest match of the reduced model. Specificity describes the validity of the shapes the model produces. The value is estimated by generating random parameter values from a normal distribution with zero mean and the respective standard deviation from the PCA. The difference of the generated shape to the closest match of the training set is averaged over 10,000 runs. Both measures are defined as functions of the number of modes or parameters used by the model and displayed as piecewise linear graphs. Smaller values indicate better models. To quantify the difference between two shapes x and y with landmark i located at the positions l_x(i) and l_y(i), respectively, originally22 the sum of squares error was used:

d_{sse}(x, y) = \sum_{i=1}^{n} |l_x(i) - l_y(i)|^2    (2)

Later on,8 this metric was replaced by the mean absolute distance, making it stable with regard to changes of the number of landmarks n:

d_{mad}(x, y) = \frac{1}{n} \sum_{i=1}^{n} |l_x(i) - l_y(i)|    (3)
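Both landmark-based metrics are straightforward to express in code (a trivial numpy sketch; lx and ly are assumed to hold the n landmark positions of the two shapes as (n, 3) arrays):

```python
import numpy as np

def d_sse(lx, ly):
    """Sum of squares error between corresponding landmarks, Eq. (2)."""
    return np.sum((lx - ly) ** 2)

def d_mad(lx, ly):
    """Mean absolute landmark distance, Eq. (3); stable w.r.t. the number of landmarks."""
    return np.mean(np.linalg.norm(lx - ly, axis=1))
```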
4.1. Pitfalls of Model Evaluation
When using these measures to compare the performance of different shape models, one has to assure that the evaluation is valid and unbiased. In the following, we list some of the common pitfalls that are prone to causing invalid results and deserve special attention:
Figure 5. Color-coded standard deviation for all landmarks on the surface of the liver model. The values range from 1.3 (dark blue) to 5.2 (bright red). The inhomogeneous concentration of high-variance areas at protruding bulges is clearly visible.
4.1.1. Scale
Since current evaluation measures are based on distances between landmarks, the models to be compared must have exactly the same size. But how can we define a standard size for the evaluation if all training samples are scaled with individual factors before building the landmark matrix L? To maintain model size at a stable level, one possibility is to scale the mean shape to feature an RMS radius of r = 1/\sqrt{n}.12 In this case, changing the number of landmarks obviously leads to a different model size, but varying only landmark positions does as well (since both the shape center and the average distance to it will change, leading to a different RMS radius). A safer way is to leave the training samples at their original sizes and let the model scale for a best fit. However, this procedure is not suited for inputs of different sizes, as all errors will be weighted with the input scale. For our evaluation, we scale all models to a common size determined by the input meshes. Since we want to allow variations of the individual scale factors s_i, we constrain their average to a stable level: \frac{1}{n} \sum_{i=1}^{n} s_i = 1.

4.1.2. Number of landmarks
When measuring the distance between shapes with the sum of squares error, it is obvious that both models must have the same number of landmarks. Using the mean absolute distance instead seems to remedy this problem. However, models with a larger number of landmarks naturally capture more detail and thus more variance of the shape. Thus, they face a disadvantage when it comes to a comparison with a coarser model. To sum up, only models with the same number of landmarks should be compared using the current evaluation methods.

4.1.3. Landmark distribution
The most dangerous (because most unexpected) problem is the one of different landmark distributions. Any two non-identical models will have their landmarks located at different positions. In addition to the consequences listed in Sec. 4.1.1, one has to keep in mind that the variance of a shape model varies considerably over the surface. An example of this can be seen in Fig. 5. If two shape models with similar mean shapes and displacement vectors are compared, the model with a larger share of landmarks in low-variance areas will always produce better performance values. In the evaluation performed in Ref. 9, we solved this problem by adjusting the distribution of one model to match that of the second one. While this method does work, it entails a slight disadvantage for the model with the changed distribution, since its landmarks were not optimized for these specific locations.
4.2. A New Metric
The depicted problems of evaluating shape models originate in the way the difference between two model instances is measured: Both the sum of squares error and the mean absolute distance are metrics focused on landmarks and landmark distances. However, when using shape models in image analysis, we are usually more interested in the resulting segmentation (i.e. the volume encompassed by the model) than in the exact landmark locations. Looking at performance evaluation in image segmentation,23 there are several metrics that seem better suited for the task of shape comparison, e.g.:

• Hausdorff distance. This metric measures the maximum distance between two surfaces. If d(x, y) is the distance between two points, the Hausdorff distance between two surfaces X and Y is defined as:

D_{HD}(X, Y) = \max(h(X, Y), h(Y, X)) \quad \text{with} \quad h(X, Y) = \max_{x \in X} \min_{y \in Y} d(x, y)    (4)

• Average squared surface distance. Since the Hausdorff distance is very sensitive to outliers, an alternative is to average the squared distances over both surfaces:

D_{ASD}(X, Y) = \frac{1}{2} \left( \sum_{x \in X} \min_{y \in Y} d^2(x, y) + \sum_{y \in Y} \min_{x \in X} d^2(y, x) \right)    (5)

• Volumetric overlap. There are several measures to quantify the similarity between two volumes A and B, a popular one being the Tanimoto coefficient24 (also known as Jaccard coefficient):

C_T = \frac{|A \cap B|}{|A \cup B|}    (6)
The Tanimoto coefficient yields 1 if both shapes are identical and 0 for no overlap at all. To transform the measure into a distance metric, the obvious way is to use D_T = 1 − C_T. From a computational point of view, measuring surface distances is more complex and time-consuming than measuring overlap, even considering that the surface of the model has to be converted into a binary volume first. Especially for the evaluation of specificity (which involves hundreds of thousands of shape comparisons), computational requirements have to be taken into account. For this reason, we opted for the Tanimoto coefficient as a volumetric overlap measure. For the computation of generalization ability and specificity, the generated instances of the model are compared to the original input training shapes, not to their model approximations. This procedure ensures that the metric measures the unbiased performance and is sensitive to deficient shape representations as shown in Fig. 2.
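A minimal sketch of this volumetric error on rasterized shapes (assuming both the model instance and the training shape have already been converted to binary volumes on the same grid):

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto (Jaccard) coefficient of two binary volumes, Eq. (6)."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def volumetric_error(a, b):
    """Distance used for generalization and specificity: D_T = 1 - C_T
    (reported as 100 * (1 - C_T) in Fig. 7)."""
    return 1.0 - tanimoto(a, b)
```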
5. EXPERIMENTS AND RESULTS
In order to test the proposed landmark reconfiguration scheme, three shape models were built for each test dataset. The first model was optimized with the standard gradient descent approach presented in Ref. 9. The resulting parameterizations were then forwarded to the landmark reconfiguration algorithm, which spread the landmarks equally on all shapes without changing correspondences, delivering the second model. For the generation of the third model, the optimization was interrupted every 10 iterations to reconfigure the landmarks. Since each reconfiguration introduces a small change in the cost function value, this procedure interferes with the convergence criterion of the optimizer. To provide the same conditions for all models, the optimization was conducted for the same number of iterations as the first model needed.
Figure 6. Three of 18 training shapes for the statistical model of the right lung.
5.1. Datasets
For the experiments, we used two medical datasets from clinical routine imaging. The first dataset is based on 18 segmentations of the same right lung at different times during the breathing cycle. The original image data is an MRI 3D FLASH sequence with a 77×128 matrix size and 52 partitions, resulting in a voxel size of 3.75×3.75×3.8 mm. All segmentations were filtered, triangulated and smoothed, producing meshes containing between 6,500 and 10,000 triangles. Three example shapes can be seen in Fig. 6. The shape model was built using 642 landmarks. The second training set includes 32 shapes of different livers. The original images are contrast-agent-enhanced CT scans which were rescaled to isotropic voxels of 1 mm size. The 3D meshes created from the segmentations were decimated to contain around 3,000 to 4,000 triangles. The shape model was built using 2562 landmarks. Since the liver is an organ with protruding bulges and enormous variability, it should be a very interesting candidate for landmark reconfiguration.
5.2. Results
For both datasets, all three models were evaluated with the landmark-independent generalization and specificity measures described in Sec. 4.2. Because of the high resolution of the liver dataset, the binary volumes for the comparison of this model were generated at half the original resolution. The test machine was a 3.0 GHz Intel Pentium 4 with Windows XP and 512 MB of memory. Calculation of generalization ability took less than 10 minutes for both datasets, but the specificity evaluation needed considerably longer: For ten modes of variance with 10,000 generated random shapes per mode, the liver model required a computation time of approximately 18.5 hours at half the original resolution. Due to the low resolution and smaller training size of the lung model, it needed only about an hour for the same task. The results of the calculations are displayed in Fig. 7. While the generalization ability already improves noticeably for both datasets if the landmarks are reconfigured once after convergence, the improvements become significant (i.e. with non-overlapping error bars) only when the reconfiguration is executed continuously during the optimization. Since the specificity measure generates much smaller standard errors (due to the higher number of shape comparisons involved), the improvements are already significant for a single reconfiguration. However, a continuous reconfiguration delivers the best results here as well.
Figure 7. Generalization and specificity values for both datasets (panels: lung generalization ability, lung specificity, liver generalization ability, liver specificity; each panel plots the volumetric error over the number of used modes for standard optimization, reconfiguration after convergence, and continuous reconfiguration). The volumetric error is specified as 100(1 − C_T) with C_T as the Tanimoto coefficient. In both cases, the displayed 10 modes account for 90 percent of the total variance.

6. DISCUSSION
The distribution of landmarks on the training shapes is - beyond the correspondence issue - a crucial point in model construction. As our results indicate, equally distributed landmarks improve the quality of the correspondence optimization in terms of generalization and specificity, which should also lead to a better performance of the model in the final clinical application. One explanation for this outcome is that the equally distributed landmarks produce triangulations that better represent the input shapes, which is obviously an advantage when the generated shapes are compared to the original binary volumes. In addition to this, it seems natural that an automatic correspondence algorithm should weight all areas of the surface equally. However, the original optimization method favors correspondence for areas with dense landmarks, while protruding edges or bulges - the intrinsic shape features - are often only sparsely sampled and thus not incorporated as much as would be necessary. When the reconfiguration is performed repeatedly during correspondence optimization, the balance between different parts of the shape is preserved. The price for this is that the cost function fluctuates with each reconfiguration, which makes it harder to determine the best point to raise the level of detail in the optimization or to identify the final convergence of the algorithm. The presented volumetric metric for generalization and specificity is more application-oriented than the two common metrics that only measure distances between landmarks: While there may be exceptions where exact landmark locations are required, for common segmentation tasks the enclosed volume is the result the user will see from the shape model, and thus it is the variable that should be evaluated. The Tanimoto coefficient is only one possibility to accomplish this task; the Hausdorff distance and the average squared surface distance are interesting alternatives which we are considering for an enhancement of our evaluation procedure. However, the computation time will be considerably longer for these measures, so they might only be a viable solution for the less demanding evaluation of generalization ability.
ACKNOWLEDGMENTS
We would like to thank Tomos Williams from the Div. of Imaging Science, University of Manchester, UK for the data of the diffusion mapping parameterization in Fig. 2. The original MR lung data was kindly provided by Christian Plathow; the 3D meshes were created by Max Schöbinger. The CT liver data and segmentations were supplied by Matthias Thorn and Jan-Oliver Neumann.
REFERENCES
1. T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, “Active shape models – their training and application,” Computer Vision and Image Understanding 61(1), pp. 38–59, 1995.
2. A. D. Brett and C. J. Taylor, “A method of automated landmark generation for automated 3D PDM construction,” Image and Vision Computing 18(9), pp. 739–748, 2000.
3. C. R. Shelton, “Morphable surface models,” Int. Journal of Computer Vision 38(1), pp. 75–91, 2000.
4. A. F. Frangi, D. Rueckert, J. A. Schnabel, and W. J. Niessen, “Automatic construction of multiple-object three-dimensional statistical shape models: Application to cardiac modeling,” IEEE Trans. Medical Imaging 21(9), pp. 1151–1166, 2002.
5. R. R. Paulsen and K. B. Hilger, “Shape modelling using Markov random field restoration of point correspondences,” in Proc. IPMI, pp. 1–12, 2003.
6. Z. Zhao and E. K. Teoh, “A novel framework for automated 3D PDM construction using deformable models,” in Proc. SPIE Medical Imaging, 5747, 2005.
7. R. H. Davies, C. J. Twining, T. F. Cootes, J. C. Waterton, and C. J. Taylor, “3D statistical shape models using direct optimisation of description length,” in Proc. European Conference on Computer Vision, Part III, pp. 3–20, Springer, 2002.
8. M. Styner, K. T. Rajamani, L.-P. Nolte, G. Zsemlye, G. Székely, C. J. Taylor, and R. H. Davies, “Evaluation of 3D correspondence methods for model building,” in Proc. IPMI, pp. 63–75, 2003.
9. T. Heimann, I. Wolf, T. G. Williams, and H.-P. Meinzer, “3D active shape models using gradient descent optimization of description length,” in Proc. IPMI, pp. 566–577, Springer, 2005.
10. R. H. Davies, C. J. Twining, T. F. Cootes, J. C. Waterton, and C. J. Taylor, “A minimum description length approach to statistical shape modelling,” IEEE Trans. Medical Imaging 21(5), pp. 525–537, 2002.
11. A. C. Kotcheff and C. J. Taylor, “Automatic construction of eigenshape models by direct optimization,” Medical Image Analysis 2(4), pp. 303–314, 1998.
12. H. H. Thodberg, “Minimum description length shape and appearance models,” in Proc. IPMI, pp. 51–62, 2003.
13. M. S. Floater and K. Hormann, “Surface parameterization: a tutorial and survey,” in Advances in Multiresolution for Geometric Modelling, N. A. Dodgson, M. S. Floater, and M. A. Sabin, eds., Mathematics and Visualization, pp. 157–186, Springer, Berlin, Heidelberg, 2005.
14. C. Brechbühler, G. Gerig, and O. Kübler, “Parametrization of closed surfaces for 3-D shape description,” Computer Vision and Image Understanding 61, pp. 154–170, March 1995.
15. X. Gu, Y. Wang, T. F. Chan, P. M. Thompson, and S.-T. Yau, “Genus zero surface conformal mapping and its application to brain surface mapping,” in Proc. IPMI, pp. 172–184, 2003.
16. Y. Wang, X. Gu, T. F. Chan, P. M. Thompson, and S. Yau, “Intrinsic brain surface conformal mapping using a variational method,” in Proc. SPIE Medical Imaging, 5370, pp. 241–252, May 2004.
17. G. Peyre and L. D. Cohen, “Geodesic re-meshing and parameterization using front propagation,” in Proc. of VLSM, pp. 33–40, 2003.
18. V. Surazhsky and C. Gotsman, “Explicit surface remeshing,” in Proc. of Eurographics Symposium on Geometry Processing, pp. 17–28, (Aachen, Germany), 2003.
19. J. Vorsatz, C. Rössl, and H.-P. Seidel, “Dynamic remeshing and applications,” Journal of Computing and Information Science in Engineering 3(4), pp. 338–344, 2003.
20. P. Alliez, M. Meyer, and M. Desbrun, “Interactive geometry remeshing,” ACM Transactions on Graphics (special issue for SIGGRAPH conference) 21(3), pp. 347–354, 2002.
21. V. Ostromoukhov, “A simple and efficient error-diffusion algorithm,” in Proc. SIGGRAPH, pp. 567–572, ACM Press, 2001.
22. R. H. Davies, Learning Shape: Optimal Models for Analysing Shape Variability. PhD thesis, University of Manchester, Manchester, UK, 2002.
23. W. J. Niessen, C. J. Bouma, K. L. Vincken, and M. A. Viergever, “Error metrics for quantitative evaluation of medical image segmentation,” in Performance Characterization in Computer Vision, pp. 275–284, Kluwer Academic, 2000.
24. T. T. Tanimoto, “An elementary mathematical theory of classification and prediction,” tech. rep., IBM Research, 1958.