IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 25, NO. 9, SEPTEMBER 2006
Learning-Based Deformable Registration of MR Brain Images

Guorong Wu, Feihu Qi, and Dinggang Shen*

Abstract—This paper presents a learning-based method for deformable registration of magnetic resonance (MR) brain images. There are two novelties in the proposed registration method. First, a set of best-scale geometric features is selected for each point in the brain, in order to facilitate correspondence detection during the registration procedure. This is achieved by optimizing an energy function that requires the best-scale geometric features of each point to be consistent over the corresponding points in the training samples and, at the same time, distinctive from those of nearby points in the neighborhood. Second, the active points used to drive the brain registration are hierarchically selected during the registration procedure, based on their saliency and consistency measures. That is, the image points with salient and consistent features (across different individuals) are considered for the initial registration of two images, while other, less salient and less consistent points join the registration procedure later. By incorporating these two novel strategies into the framework of the HAMMER registration algorithm, the registration accuracy is improved on simulated brain data, and visible improvement is observed, particularly in the cortical regions, on real brain data.

Index Terms—Best features, best scale selection, consistency measurement, deformable registration, feature-based registration, hierarchical registration, learning-based method, saliency measurement.

I. INTRODUCTION

Deformable registration is a very important preprocessing step for medical image analysis. So far, various methods have been proposed [1]–[22], [37]–[42], which fall into three categories, i.e., landmark-based, intensity-based, and feature-based registration methods. Each category has its advantages and disadvantages. Landmark-based methods use prior knowledge of anatomical structures and thus are computationally fast. However, it is time consuming to manually place a sufficient number of landmarks for accurate registration. Intensity-based methods aim to maximize the intensity similarity of two images and can be fully automated. However, high intensity similarity does not necessarily imply anatomical similarity. Feature-based registration methods formulate the image registration as a feature matching and optimization problem [8]–[22], by defining a feature vector as a morphological signature for each point in the image.

Manuscript received March 21, 2006; revised May 10, 2006. Asterisk indicates corresponding author. G. Wu and F. Qi are with the Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200030, China (e-mail: [email protected]; [email protected]). *D. Shen is with the Section of Biomedical Image Analysis, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104 USA (e-mail: [email protected]). Color versions of Figs. 1–10 are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TMI.2006.879320

However, these methods typically use the same kind of features for all image regions, which might be appropriate for some regions but not for others. Therefore, the resulting registration methods are able to accurately register some, but not all, regions. In this way, the image registration results might be biased, i.e., better registration may be achieved only for the regions whose correspondences can be easily established by these features.

The HAMMER registration algorithm [8] was proposed to integrate the advantages of various methods and, at the same time, to partly overcome their limitations. There are two novel strategies used in the HAMMER registration algorithm. First, it uses an attribute vector as a signature of each point, to reduce the ambiguity in correspondence matching during the image registration procedure. Each attribute vector includes image intensity, edge type, and a number of rotation-invariant geometric moment invariants (GMIs) that are calculated in neighborhoods of multiple scales, to reflect the anatomy around each point at multiple resolutions. For each scale, there are thirteen GMIs that can be calculated from the zeroth-order, second-order, and third-order three-dimensional regular moments [8], [49]. In particular, four GMIs are formulated from the zeroth-order and the second-order moments, and the other nine GMIs are formulated from the third-order moments and from both the second-order and the third-order moments [49]. If attribute vectors can be designed to be as distinctive as possible, the correspondences across individual brains can be automatically determined. Note that detecting correspondences individually might lead to false matches. Fortunately, by requiring the deformation fields to be smooth, false matches at isolated locations can potentially be corrected by the correct correspondences detected in their neighborhoods. An advanced method for correcting false matches can be found in [44].

Second, since some parts of the brain can be identified more reliably than others, e.g., the roots of sulci, the crowns of gyri, and the corners of ventricles, a hierarchical deformation mechanism is also proposed in the HAMMER registration algorithm, to avoid being trapped in local minima. In particular, image points with more distinctive attribute vectors are first selected as active points to drive the image registration in the initial stages, while other points just follow the deformations of active points in their neighborhoods. As the image registration progresses, more and more image points are gradually added as active points, starting to drive the image registration. Thus, by hierarchically matching attribute vectors, the HAMMER registration algorithm produces relatively accurate registration results for magnetic resonance (MR) brains.
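As a concrete illustration of the raw ingredients behind the GMIs, the sketch below computes the zeroth- through third-order three-dimensional regular moments of a tissue map inside a spherical neighborhood of a given point. The thirteen rotation-invariant combinations themselves are the polynomial expressions of Lo and Don [49] and are not reproduced here; this is a minimal sketch under our own naming, not the authors' implementation.

```python
import numpy as np
from itertools import product

def regular_moments_3d(tissue_map, center, radius):
    """Zeroth- through third-order 3-D regular moments of a tissue map (e.g., a
    binary WM mask), taken about `center` inside a spherical neighborhood of
    `radius` voxels. The GMIs used by HAMMER are rotation-invariant polynomial
    combinations of such moments (Lo and Don [49]), not reproduced here."""
    cz, cy, cx = center
    r = int(radius)
    # Bounding box of the sphere, clipped to the image.
    zs = slice(max(cz - r, 0), min(cz + r + 1, tissue_map.shape[0]))
    ys = slice(max(cy - r, 0), min(cy + r + 1, tissue_map.shape[1]))
    xs = slice(max(cx - r, 0), min(cx + r + 1, tissue_map.shape[2]))
    z, y, x = np.mgrid[zs, ys, xs]
    inside = (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 <= r ** 2
    f = tissue_map[zs, ys, xs] * inside          # restrict to the sphere
    dz, dy, dx = z - cz, y - cy, x - cx
    moments = {}
    for p, q, s in product(range(4), repeat=3):  # all exponent triples
        if p + q + s <= 3:                       # keep orders 0..3 only
            moments[(p, q, s)] = float(np.sum((dx ** p) * (dy ** q) * (dz ** s) * f))
    return moments                               # moments[(0,0,0)] is the tissue volume
```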

However, in order to further improve the image registration results, the two above-mentioned strategies, i.e., the design of an attribute vector for each point and the hierarchical selection of active points, should be refined, as explained next.

First, the best geometric features should be separately designed for each point in the brain, in order to better detect its correspondences. In the HAMMER registration algorithm, GMIs are calculated from a neighborhood of fixed size around each point at each resolution, regardless of whether this point is located in the complicated cortical regions or in the simple uniform regions. Therefore, it is difficult to obtain distinctive GMIs for each point in the image, as demonstrated in Section II. Although the best features have been studied for active shape models [23], [24], to our knowledge, no previous nonrigid registration method has considered the relationship between features and their corresponding scales, or used this relationship to guide the image matching and correspondence detection during the registration procedure. Recently, Kadir and Brady [25] studied the implicit relationship between scale and saliency, and found that scale is intimately related to the problem of determining saliency and extracting relevant descriptions. They also proposed an effective method to detect the most salient regions in an image, by considering the entropy of local image regions over a range of scales and selecting the regions with the highest saliency in both spatial and scale spaces. Based on [25], Huang et al. [26] proposed a hybrid linear registration method to align images under arbitrary poses. In their method, a small number of scale-invariant salient regions are first extracted, and then the correspondences of those regions are determined individually. To eliminate false matches, the final transformation between two images is estimated by jointly detecting the correspondences between multiple pairs of salient regions. Inspired by this idea of best-scale determination and saliency measurement, we propose to learn the best scales at which to compute the GMIs, and to use the best-scale GMIs to distinguish each point from others during the registration procedure.

Second, the selection of active points for adaptive image registration should be directly related to the saliency and consistency measures of image points. In the HAMMER registration algorithm, active points are intuitively selected according to a priori knowledge, i.e., the roots of sulci have more white matter (WM) volume, while the crowns of gyri have less WM volume but more cerebrospinal fluid (CSF) and background volume. Therefore, the selection of active points is simply performed by thresholding the zeroth-order GMIs that correspond to the volumes of different brain tissues. However, in our view, the selection of active points should be based on both the saliency and the consistency of geometric features such as GMIs, which are directly related to the performance of establishing good correspondences.

In this paper, we design a learning-based method for selecting the best scales to calculate GMIs and also the active points to adaptively drive image registration. First, we present a generalized learning method to compute GMIs from the best scales, for significantly reducing the ambiguity in feature matching during the image registration.
Each image location will finally have its own best scale from which to calculate its geometric features. In particular, for each point, we require its GMIs computed from its best-scale neighborhood to be consistent over the corresponding points, but different from those of nearby points, in all training samples. The entropy used in [25] is adopted here to quantitatively measure this requirement, and the best scales are obtained by solving an energy minimization problem. Note that the proposed learning method can be easily extended to the selection of other best features [27] for image registration. Second, active points are hierarchically selected according to an integrated saliency and consistency measure defined for each point. Thus, the initially selected active points are the most salient points when compared with other brain points in the space, and their features are also consistent across different individuals. Therefore, the initial registration of brains will be steered by those most distinctive and reliable points, which effectively increases the robustness as well as the accuracy of our registration algorithm.

The proposed learning-based registration method has been tested on both real and simulated MR brain images. For real MR brain images of elderly subjects, the experimental results show a visual improvement of registration with our method, especially in the cortical regions. For simulated images, the registration accuracy is improved by our method, not only on segmenting the regions of interest, but also on estimating the simulated brain deformations. In particular, compared to the HAMMER registration algorithm, the average deformation estimation error is reduced by 12.6% when our method is trained only on the template image itself, and by 30.5% when our method is trained on more samples.

II. METHODS

As briefly mentioned in Section I, it is necessary to compute the best-scale GMIs for each point, in order to improve the distinctiveness of GMIs and the ultimate accuracy of image registration. This idea will be made much clearer in this section. Section II-A investigates the relationship between saliency and scale in MR brain images, indicating the necessity of using the best scales to compute GMIs for better registration. Section II-B provides a learning-based method for determining the best scales for different brain locations. The advantages of using the best-scale GMIs for image matching are demonstrated in Section II-C. Section II-D provides a novel algorithm for hierarchically determining the active points based on an integrated saliency and consistency measure defined for each point. Finally, our whole registration algorithm is summarized in Section II-E.

A. Saliency and Scale

In the deformable registration of brains, it is important to make each brain point as salient as possible, in order to distinguish this point from others and thus facilitate the correspondence detection. As addressed in [25] and [28], the regional saliency is strongly related to the scale, i.e., the size of the image region. Actually, as demonstrated in the latter part of this section, the distinctiveness of GMIs is also intimately related to the scale of the region used for calculating GMIs.

Fig. 1. Salient regions in MR brain images. The thirty most salient regions are shown in each of three selected slices (a)–(c). Dots denote the centers of the detected salient regions, and the sizes of the circles denote the scales of the salient regions. Circles with radii of 4–8 mm, 9–15 mm, and over 16 mm are drawn as solid, densely dashed, and sparsely dashed, respectively. This example shows that different regions need different scales in order to be as salient as possible.

In the next two paragraphs, the methods for determining the best scales and measuring the regional saliency are first briefly summarized according to [25]; a similar idea will be used in Section II-B to determine the best scales for the calculation of GMIs. Afterwards, the distinctiveness of GMIs with respect to scale is demonstrated at the end of this subsection.

Saliency is defined in [25] based on local image complexity, measured by entropy (unpredictability). However, if a local image region exhibits self-similarity over a large range of scales, it is nonsalient. Therefore, in the saliency definition, both the local image complexity and its self-dissimilarity in scale space should be considered, as in [25]. Basically, a point whose local image is complex over a narrow range of scales is regarded as a perfectly salient point. On the other hand, a point can be considered nonsalient if only a small region around this point is evaluated, e.g., the point indicated by a black arrow in Fig. 1(b); however, it can become salient once a larger region is evaluated. This exactly shows the importance of incorporating scale into the saliency definition.

The best scale $s_x$ for a region centered at a point $x$ can be determined by analyzing the entropy of local regions of different sizes [25]. In particular, for each point $x$, the probability distribution of intensity, $p(i; s, x)$, is first estimated in a spherical region of radius $s$ centered at $x$. Then, the local entropy $H(s, x)$ is calculated from $p(i; s, x)$, i.e.,

$$H(s, x) = -\sum_i p(i; s, x)\,\log_2 p(i; s, x).$$

The best scale $s_x$ for the region centered at point $x$ is selected as the one that maximizes the local entropy $H(s, x)$, thus making the local image region as distinctive/complex as possible [25]. Since a large local image difference across scales is preferred, the regional saliency value of a point $x$, $A(x)$, is defined by the maximal local entropy value $H(s_x, x)$, weighted by a self-dissimilarity measure in the scale space [25] at the best scale $s_x$:

$$A(x) = H(s_x, x)\,W(s_x, x), \qquad W(s_x, x) = k \sum_i \bigl| p(i; s_x, x) - p(i; s_x - 1, x) \bigr| \qquad (1)$$

where $k$ is a constant. Note that an image region with large entropy can still be regarded as nonsalient if its self-dissimilarity measure is small.

Fig. 1 shows the thirty most salient regions in each selected slice of an MR brain image, according to the best-scale and saliency definitions given above. The dots in Fig. 1 are the centers of the corresponding salient regions, and the circles denote the scales of the regions. It can be observed that most salient regions are located at the prominent parts of the brain, such as the cortex and the ventricular corners, which can act as important landmarks to guide the deformable registration because of their uniqueness and distinctiveness. Thus, the saliency of image features can be borrowed as a criterion for the selection of active points for hierarchical image registration, as proposed in Section II-D. It is worth noting that a region regarded as salient according to one image might become nonsalient when more training samples are considered, if the features in this region are inconsistent over the corresponding regions of different training samples.

Similarly, the GMIs should be calculated from the best scale for each brain point in order to make the point as salient as possible. To demonstrate this idea, a WM point, indicated by a red cross in Fig. 2(a), is first selected, and its GMIs are compared with those of all other points in the brain. To evaluate the capabilities of GMIs calculated from different scales in distinguishing this WM point from other brain points, four different scales, i.e., 8, 16, 30, and 40 mm, are used, as shown by the solid, densely dashed, dot-dashed, and sparsely dashed circles in Fig. 2(b). Four color-coded similarity maps, corresponding to the four scales used, are given in Fig. 2(c)–(f), respectively, where dark red denotes the highest similarity. When a small neighborhood, i.e., a scale of 8 mm, is used, this WM point is similar to many other points in the brain [Fig. 2(c)]. As the scale increases, more nearby structures are considered to distinguish this WM point from others, and thus the GMIs become more and more distinctive [Fig. 2(d)–(f)]. In particular, at a scale of 30 mm, this WM point is similar only to its very close neighboring points [Fig. 2(e)], i.e., the peak of similarity is very sharp around the desired location. However, a larger scale does not always provide better distinctiveness for this WM point. For example, when the scale is increased to 40 mm, the peak of similarity becomes much flatter than that obtained at a scale of 30 mm.
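To make the scale-entropy computation of [25] summarized above concrete, the sketch below estimates the intensity histogram in spherical neighborhoods of increasing radius around one point, selects the entropy-maximizing radius as the best scale, and weights that entropy by an inter-scale histogram difference as the self-dissimilarity term of (1). This is a simplified, single-point sketch under the reconstruction of (1) given above and with our own function names; it is not the implementation used in the paper.

```python
import numpy as np

def local_entropy(image, center, radius, nbins=32):
    """Entropy of the intensity distribution inside a spherical region."""
    cz, cy, cx = center
    r = int(radius)
    zs = slice(max(cz - r, 0), min(cz + r + 1, image.shape[0]))
    ys = slice(max(cy - r, 0), min(cy + r + 1, image.shape[1]))
    xs = slice(max(cx - r, 0), min(cx + r + 1, image.shape[2]))
    z, y, x = np.mgrid[zs, ys, xs]
    mask = (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 <= r ** 2
    hist, _ = np.histogram(image[zs, ys, xs][mask], bins=nbins,
                           range=(float(image.min()), float(image.max())))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p))), hist

def best_scale_and_saliency(image, center, radii=range(2, 17), k=1.0):
    """Best scale = entropy-maximizing radius; saliency = that entropy weighted
    by a self-dissimilarity term (histogram change between adjacent scales)."""
    results = [local_entropy(image, center, r) for r in radii]
    entropies = [e for e, _ in results]
    i = int(np.argmax(entropies))
    best_radius = list(radii)[i]
    if i == 0:
        w = 0.0                      # no smaller scale to compare against
    else:
        h1 = results[i][1] / max(results[i][1].sum(), 1)
        h0 = results[i - 1][1] / max(results[i - 1][1].sum(), 1)
        w = k * float(np.sum(np.abs(h1 - h0)))
    return best_radius, entropies[i] * w
```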

Fig. 2. The importance of using best scales for computing GMIs, in order to best distinguish each point from others, such as the WM point indicated by a red cross in (a). Four different scales, i.e., 8, 16, 30, and 40 mm, are used, respectively, to distinguish that WM point from others, as shown in (c)–(f). For visual comparison, these four scales are represented by solid, densely dashed, dot-dashed, and sparsely dashed circles in (b), respectively. For each scale, the similarities of GMIs between that WM point and all others are computed and color-coded as a similarity map, as shown in (c)–(f). Dark red denotes the highest similarity. It can be observed that GMIs computed at a scale of 30 mm make that WM point more distinctive than the other scales do, since the peak of similarity is sharper around the desired position than in any other case.

Therefore, it is necessary to determine a best scale for each brain location, e.g., a scale of 30 mm for this WM point, to compute the most distinctive GMIs for driving the image registration. In this study, we design a learning-based method, using an entropy that measures both the distinctiveness of each point relative to its nearby points and the consistency of its GMIs over the corresponding points in all training samples, to obtain a smooth map of best scales for computing GMIs.

B. Selection of Best Scales for Computing GMIs

In this study, the brain registration is formulated as a problem of hierarchically matching the attribute vectors of the points in the brains. The attribute vector of each point in a brain image should be designed to be as distinctive as possible, in order to distinguish this point from others in its neighborhood. In the HAMMER registration algorithm [8], GMIs are calculated from a spherical neighborhood around each point, with a radius that is identical at all image locations. As demonstrated in Fig. 1, different brain regions need different features, computed from their own best scales, in order to distinguish themselves from others [25]. For example, a point in the cortical region requires a different best scale to compute its distinctive GMIs when compared with the points in the uniform regions.

Therefore, it is important to obtain a best scale for each point in the brain, based on the local anatomy in its neighborhood.

A learning-based method is proposed to select the best scale for each point in the template, in order to capture the most distinctive attributes for robust correspondence detection during the brain registration. Three criteria are used to select the best scales. First, the GMIs of a point, computed from its best-scale neighborhood, should be different from those of nearby points in its neighborhood, in order to distinguish this point from the nearby points. This is actually the main idea of the HAMMER registration algorithm. Note that the size of this neighborhood is directly determined by the size of the search neighborhood used in the registration algorithm. Second, the resulting GMIs of the point should be statistically similar to the GMIs of its corresponding points in the training samples, i.e., consistent across individuals, if a set of training samples is available. In this way, correspondence detection across different brains will become relatively easy. Note that in the HAMMER registration algorithm, the saliency of each point is determined by the template brain only; thus, a point determined as salient by the template brain might still become nonsalient when more brain samples are considered, if the GMIs of this point vary dramatically across different brain samples.
Third, the selected best scales should be spatially smooth, for stable estimation.

The entropy of the GMIs is used to mathematically formulate the above criteria, by following the idea of using entropy to measure feature saliency in [25], as briefly mentioned in Section II-A. Let $F(s, x)$ denote the GMI vector of point $x$ computed at scale $s$. The first criterion requires that the entropy of the GMIs in the neighborhood $n(x)$, $H_{n(x)}(s, x)$, be maximized, thus making the GMIs of point $x$ as distinctive as possible from those of other points in the neighborhood $n(x)$. For simplicity, a histogram is created independently for each element of the GMI vector $F(s, x)$ over the neighborhood $n(x)$; therefore, $H_{n(x)}(s, x)$ can be mathematically defined in a way similar to the definition of $H(s, x)$ given in Section II-A. The second criterion requires that the entropy of the GMIs over the corresponding points in the training samples, $H_{T}(s, x)$, be minimized, so that the GMIs of corresponding points are made as consistent as possible. The mathematical definition of $H_{T}(s, x)$ is similar to that of $H_{n(x)}(s, x)$. The third criterion requires that, for a point $x$, the difference between its best scale $s_x$ and the best scale $s_z$ of each neighboring point $z$ be minimized in a small neighborhood $N(x)$, i.e., $\sum_{z \in N(x)} (s_x - s_z)^2$ should be small. Therefore, we can obtain the best scales jointly for all image points by minimizing an integrated energy function via a gradient-based algorithm:

$$E = \sum_x \Bigl( -H_{n(x)}(s_x, x) + \alpha\, H_{T}(s_x, x) + \beta \sum_{z \in N(x)} (s_x - s_z)^2 \Bigr) \qquad (2)$$

where $\alpha$ and $\beta$ are two weights. The selection of $\alpha$ and $\beta$ depends on the particular application, as demonstrated in Section III. Notably, if there are no training samples other than the template, then we just use the best-scale selection method of [25], with a spatial smoothness constraint, to compute the best scales based on the template image itself. In this way, the resulting best scales will be similar to the ones given in Fig. 1, except that the best scales will be made spatially smooth.

The learning-based method for selecting the best scales can be summarized as follows.

1) Select a set of brain samples, such as the 18 brains we used. (These brains are typical brains with various ventricle sizes and brain shapes, selected from the Baltimore Longitudinal Study of Aging (BLSA) project [30]. Each brain has been skull-stripped by a semiautomatic method and further tissue-segmented by a fuzzy segmentation method [43], before being affinely and nonrigidly aligned to the template in steps 2) and 3), respectively.)

2) Use an affine registration algorithm [29] to align those samples to a selected template, thereby obtaining affinely aligned brain samples. (Since GMIs are only invariant to rotation, not to affine transformations, it is important to affinely align those samples before calculating the GMIs from them in step 4). Here, an individual brain with median ventricle size is simply selected as the template, in order to make the registration of other samples to this selected template relatively easy, although many efforts have been made for the creation of human brain atlases [45]–[48]. Finally, it is worth noting that GMIs are not theoretically invariant to deformations, but they are reasonably invariant to small-scale deformations.)
3) Use the HAMMER registration algorithm [8] to register the template with each affinely aligned brain sample, thereby obtaining the correspondences of each template point in all brain samples. (Since the established correspondences will be used to determine the best scales from the training samples, it is important to achieve accurate correspondences. Currently, the HAMMER registration algorithm is used to establish the correspondences on our carefully preprocessed brain images. More accurate correspondences could be generated by first extensively labeling and landmarking a number of images [5], [11], and then applying a high-dimensional warping algorithm adequately constrained by these manual labels and landmarks. In addition, the correspondences established by different registration algorithms could be used jointly, thus allowing our learning-based method to integrate the merits of different registration algorithms. It is worth noting that one important goal of this paper is to demonstrate a framework showing the effectiveness of using a learning-based technique for brain image registration.)

4) For each template point and its corresponding points in the training samples, compute the GMIs at different scales from the affinely aligned brains.

5) Determine the best scales for all template points jointly by minimizing the energy function in (2).

To increase the robustness of registration, most registration algorithms are implemented in a multiresolution fashion [8]. Thus, we need to select the best scales separately for each resolution, by performing the same best-scale selection method at each resolution. Three maps of best scales are obtained and shown in Fig. 3, for the high, middle, and low resolutions, respectively. The best scales range from 4 to 24 voxels. The resulting best scales are actually adaptive to the brain anatomy. For example, small scales were selected in edge-rich regions such as the cortex, and scales increase gradually from the exterior to the interior brain regions, with the largest best scales selected for uniform regions such as the WM region. Notably, at the low resolution, even a small best scale on the cortex corresponds to a large region at the high resolution (Fig. 4), thereby providing the possibility of distinguishing between the precentral and postcentral gyri. Also, since the registration algorithm is implemented in a multiresolution fashion, the registration results obtained at the low and middle resolutions will approximately align the two images, so that local features at the high resolution, calculated from the small best scales, can be used to refine the registration, such as within the cortex, during the high-resolution registration stage.

Fig. 4 shows the best scales selected for seven points on ventricular corners, sulcal roots, gyral crowns, and the putamen boundary, at the three different resolutions. For convenience, both the low- and middle-resolution images have been upsampled to the same size as the high-resolution image. The size of each circle denotes the value of the best scale: best scales ranging from 4 to 8 voxels are displayed by solid circles, best scales ranging from 8 to 15 voxels are displayed by densely dashed circles, and best scales over 15 voxels are displayed by sparsely dashed circles.
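To make the scale-selection step above concrete, here is a minimal sketch that, given precomputed per-point and per-candidate-scale distinctiveness entropies and consistency entropies together with a spatial neighbor list, minimizes an energy of the form of (2) by coordinate descent over a discrete set of candidate scales. The paper uses a gradient-based optimizer; the discrete update rule and all names below are our simplification, not the authors' code.

```python
import numpy as np

def select_best_scales(H_n, H_T, neighbors, alpha=1.0, beta=1.0, n_iters=20):
    """Discrete coordinate-descent minimization of an energy of the form (2).

    H_n, H_T  : arrays of shape (n_points, n_scales). H_n[i, s] is the entropy of
                the GMIs of point i over its spatial neighborhood at candidate
                scale s (to be maximized); H_T[i, s] is the entropy of its GMIs
                over the corresponding points in the training samples (to be
                minimized).
    neighbors : list of integer index arrays; neighbors[i] holds the points in
                the small neighborhood N(i) used by the smoothness term.
    Returns the index of the selected candidate scale for every point.
    """
    n_points, n_scales = H_n.shape
    # Initialization: the most distinctive scale for each point.
    best = np.argmax(H_n, axis=1).astype(float)
    for _ in range(n_iters):
        for i in range(n_points):
            nb = best[neighbors[i]]
            n_nb = max(len(nb), 1)
            # Unary terms of (2) for every candidate scale of point i.
            unary = -H_n[i] + alpha * H_T[i]
            # Smoothness term: squared scale difference to the neighbors.
            smooth = np.array([np.sum((s - nb) ** 2) / n_nb for s in range(n_scales)])
            best[i] = np.argmin(unary + beta * smooth)
    return best.astype(int)
```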

Fig. 3. Best scales selected for the image at three different resolutions, and further color-coded according to the color bar on the right.

Fig. 4. Best scales of seven selected points at three different resolutions. For convenience, the middle and low resolution images [(b) and (c)] were zoomed to the same size as the original image. Here, best scales ranging from 4 to 8 voxels are displayed by solid circles, best scales ranging from 8 to 15 voxels are displayed by densely-dashed circles, and best scales over 15 voxels are displayed by sparsely-dashed circles. (a) High resolution. (b) Middle resolution. (c) Low resolution.

C. Advantages of Using Best-Scale GMIs

By employing the learning-based best-scale selection method described above, we are able to use an adaptive scale to compute the GMIs of each point in the brain, thus making it distinctive from its neighboring points and also similar to its corresponding points in the other brains. For example, a template point on a sulcal root, indicated by the cross in Fig. 5(a), is similar to both the true correspondence indicated by the asterisk and the false correspondence indicated by the dot in the subject [Fig. 5(b)], if only local images are compared. Therefore, by measuring the similarities of this template point with all points in the subject image [Fig. 5(b)] via attribute vectors computed from neighborhoods of fixed scales (e.g., 4 and 7 for the middle- and high-resolution images, respectively) in the HAMMER registration algorithm, it is not easy to establish correct correspondences, since there exist multiple peaks in the similarity map, as color-coded and shown in Fig. 5(c). Red represents the most similar points, which include the false correspondence indicated by the dot in Fig. 5(b). Importantly, by using our learning-based best-scale selection method, we can determine the best scales for this template point at the three different image resolutions. Note that the best scales selected at the low and middle resolutions actually correspond to large regions around this template point at the high resolution, such as the large circled images in the right panel of Fig. 5. In this way, we can use the selected best scales to calculate GMIs from all three resolution images for this template point, and confidently distinguish this template point from the two candidate points in Fig. 5(b) by using those best-scale GMIs. This is clearly demonstrated by the color-coded similarity map in Fig. 5(d).

Although for many brain points it is possible to identify correspondences using only the GMIs with fixed scales, the resulting matches are less distinctive than those obtained with our method of using the learned best scales. Fig. 6 shows an example of detecting correspondences in the subject image [Fig. 6(b)] for a template point in Fig. 6(a). According to the color-coded similarity map in Fig. 6(c), the method of using fixed scales to compute GMIs can identify the correspondence, but it is less distinctive than our method of using the learned best scales, as indicated by the color-coded similarity map in Fig. 6(d).

D. Hierarchical Selection of Active Points to Drive Image Registration

The registration algorithm should hierarchically allow a number of selected points to seek their correspondences during different registration phases. Those selected points are called active points, while the others are called passive points, since they only follow the deformations of the active points. The hierarchical selection of active points in different registration phases can be briefly described as follows.

Fig. 5. Advantages of using best scales to compute GMIs for correspondence detection. The similarity between a template point, indicated by the cross in (a), and every point in the subject (b) is computed, respectively using GMIs with fixed scales (c) and with learned best scales (d). The color-coded similarity map in (d) indicates a distinctive correspondence, compared to the similarity map in (c), which has multiple peaks, with one peak corresponding to the false correspondence indicated by the dot in (b).

Fig. 6. Performance of using fixed scales and learned best scales in distinguishing particular brain points, such as ventricular corners. The point in (b), indicated by a black cross, is the correspondence of the point in (a) detected by comparing the GMIs of either fixed scales or learned best scales. In (c) and (d), red denotes high similarity and blue denotes no similarity. Although the correspondences detected by fixed scales and by learned best scales look similar, the correspondence is less distinctive in the similarity map (c) obtained with fixed scales.

• During the initial registration phases, the most salient points should be selected as active points to look for their correspondences, since it is relatively easy for them to identify their correspondences among a small number of candidate points. The other points passively follow the deformations of the active points in their neighborhoods.
• As the deformable registration progresses, the less salient points are brought close to their corresponding positions. They can then be added as active points to reliably drive the image registration, leading to a refinement of the registration results.
• Finally, all points are considered as active points for image registration.
Accordingly, the selection of active points for driving image registration is important, and it should be based on both saliency and consistency measures of each point, as detailed next.

• Saliency measure: As briefly described in Section II-A, the saliency of each point in an image can be defined by the maximal local entropy value, weighted by a self-dissimilarity measure in scale space. We use a similar definition to compute the saliency of the GMIs of each point in an image, by replacing the entropy of the intensity distribution with the entropy of the GMI distribution. Notably, since multiple images are used as training samples in our case, for each point in the template we need to average the saliency measures across its corresponding points in the training samples, and use this average as its overall saliency measure. Thus, for a point $x$ in the template, its overall saliency measure is computed at the best scale $s_x$ obtained by the optimization of (2).
• Consistency measure: In addition, besides requiring a point to be salient in order to be selected as an active point, we also require the GMIs of its corresponding points to be consistent across the different training samples.

Fig. 7. Subsequent selection of active points according to the integrated saliency and consistency measure. For clarity, only the active points on the grey/white matter interface are shown. Images (a)–(d) show the active points selected at four subsequent deformable registration phases. Most of the initially-selected active points, shown as red in (a), are located at ventricles, sulcal roots, and gyral crowns. Green points in (b), yellow points in (c), and blue points in (d) are added as active points in three subsequent registration stages.

Otherwise, even if a point is considered salient in the template brain, it might still not be a good candidate for selection as an active point if its GMIs vary dramatically across the different training samples. Therefore, we also require the entropy of the GMIs over the corresponding points, i.e., the value of $H_{T}(s_x, x)$ in (2), to be small. This is called the consistency measure of the point.

We integrate these two items, i.e., the overall saliency measure and the consistency measure, into a single measure that jointly characterizes the saliency and consistency of the GMIs of each point in the template, and we use it as the criterion for the hierarchical selection of active points. In particular, for selecting the active points in the template, we 1) first use the best scale $s_x$ obtained from (2) to calculate both the overall saliency measure and the consistency measure, and 2) then integrate them into a single saliency-and-consistency measure of the template point, to be used for active point selection. It should be mentioned that, for the subject brain, the selection of its active points has to be based entirely on the saliency of its own GMIs, since before aligning the subject brain to the template, no learned information can be used to guide the selection of active points in the subject brain.

Fig. 7 demonstrates the procedure of progressively selecting active points in the template during a deformable registration procedure at the fine resolution. The selection of the initial set of active points is based on two requirements: 1) active points should be selected from all important structures, such as the ventricles, sulci, and gyri, since they are morphologically significant for characterizing the shape of the brain (in particular, the selection of active points from the ventricles is important for successful registration of elderly subjects' brain images, which often have large ventricles, onto a template with median ventricles); 2) active points should be relatively uniformly distributed over the whole brain. In our experiments, about 11% of the total brain points are selected as initial active points, as shown by the red points in Fig. 7(a). Most of them are located at sulcal roots and gyral crowns, which are the most distinctive regions. As the image registration progresses, more and more points are added as active points, such as the green points in Fig. 7(b), the yellow points in Fig. 7(c), and the blue points in Fig. 7(d). For clarity, only the active points on the grey/white matter interface are shown. It is worth noting that these requirements for selecting active points are similarly applied to all resolutions used in our multiresolution implementation, and about 11% of the total brain points are selected as initial active points at each resolution.
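The hierarchical selection described above can be sketched as ranking the template points by an integrated saliency-and-consistency score and releasing larger and larger top fractions of that ranking at successive registration phases, starting from roughly 11% as stated above. The particular combination rule below (saliency divided by one plus the consistency entropy) and the intermediate fractions are our assumptions; the paper only states that the two measures are integrated into a single measure.

```python
import numpy as np

def integrated_measure(avg_saliency, consistency_entropy):
    """Combine the overall saliency (averaged over training samples, larger is
    better) with the consistency entropy H_T at the best scale (smaller is
    better). The exact combination rule is our assumption."""
    return avg_saliency / (1.0 + consistency_entropy)

def active_point_schedule(avg_saliency, consistency_entropy,
                          fractions=(0.11, 0.30, 0.60, 1.00)):
    """For each registration phase, return the indices of the template points
    acting as active points: the top `fraction` of points ranked by the
    integrated measure. Only the initial 11% comes from the paper; the later
    fractions are placeholders."""
    score = integrated_measure(np.asarray(avg_saliency),
                               np.asarray(consistency_entropy))
    order = np.argsort(-score)                # highest score first
    n = score.shape[0]
    return [order[: max(1, int(round(f * n)))] for f in fractions]
```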

E. Summary of Learning-Based Deformable Registration Algorithm

All image registration strategies developed in the HAMMER registration algorithm [8], such as the definitions of the attribute-vector similarity and the energy function, are adopted by our registration algorithm, except that the fixed-scale GMIs are replaced by the best-scale GMIs for image matching. Also, the ad hoc active-point selection method in the HAMMER registration algorithm is replaced by our active-point selection method. Our registration algorithm is implemented in a multiresolution fashion, and it starts with about 11% of the brain voxels as initial active points at each resolution.

The best scale for each location is determined in the template space. Thus, it is straightforward to compute the best-scale GMIs for each template point by using the precalculated best scales. However, for the subject image, it is not possible to directly compute the best-scale GMIs, since the subject sits in its own space. Even if best scales could be calculated separately for the subject from its own image, the obtained best scales would not necessarily be consistent with the corresponding ones calculated in the template space from all training images, and thus they would ultimately not be useful. To overcome this problem, we first align the subject to the template space by an affine registration algorithm [29], and then compute, in advance, the GMIs at all possible best scales (used in the template) for each subject point. Thus, when matching a template point and a subject point during the deformable registration procedure, we take the GMIs of the corresponding scale, according to the best scale used for the template point, as the current attributes of the subject point for measuring the similarity of these two points. In other words, the subject point does not determine which GMIs should be used for the similarity measure; this is completely decided by the corresponding template point where the subject point is currently sitting. To save time in the computation of the GMIs at all possible best scales for each subject point, we limit the number of possible best scales by selecting them from a small set of candidate scales, since small differences in the scales will not significantly affect the saliency and consistency of the calculated GMIs.

Our learning-based brain registration algorithm can be summarized as follows; the same steps are applied at each resolution used in our multiresolution implementation.

Step 1) Affinely align the subject brain to the template space.

Step 2) Compute the attribute vector for each point in the template, according to the best scales obtained in the training stage.

Step 3) For each point in the subject brain, compute its attribute vector, with GMIs calculated at all possible best scales used in the template.

Step 4) Determine a set of active points in the subject brain, based only on the saliency of their own GMIs. The number of active points selected in the subject is fixed throughout the whole registration procedure.
Step 5) Hierarchically select active points in the template brain, according to the integrated saliency and consistency measure of the best-scale GMIs. Initially, a small number of active points are selected; gradually, more and more points are added as active points, until all points in the template are selected as active points to drive the brain registration.

Step 6) Register the two brains by hierarchically matching their active points.
• Determining forces from the subject active points: For each subject active point, search in its neighborhood for a warped template active point with the highest similarity of attribute vectors, where the warping is given by the current deformation field from the template to the subject. If the degree of this similarity is above a certain threshold, a force is created in the direction from that warped template active point to the subject active point. As mentioned at the beginning of this subsection, the definition of the similarity of two attribute vectors is adopted from [8], and the threshold is set to 0.8 as in [8].
• Deforming the template by subvolume matching: For each template active point, if there exist forces from the subject active points (generated above), the subvolume around this template active point will be deformed directly by those forces in a Gaussian fashion [8]. Otherwise, we first search around the warped template active point for all candidate subject points with similar attribute vectors. Then, we tentatively deform the subvolume of the template active point to the location of each candidate subject point, to measure the overall similarity between the corresponding subvolumes in the template and in the subject. Finally, we deform the subvolume of the template active point to the location of the candidate subject point with the highest overall similarity, if the degree of this similarity is above a certain threshold.
• Smoothing the deformation field: Smooth the deformation field according to a Laplacian smoothness constraint [8]. It is worth noting that other, more advanced smoothness constraints, e.g., the Jacobian-based constraint [50], can also be applied to constrain the smoothness of the deformation field.

Step 7) If the warping procedure has converged, i.e., the maximal change of the deformation field during the last iteration is less than a certain threshold (such as 0.1 mm), then stop. Otherwise, go to Step 5.
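Steps 2, 3, and 6 rely on the subject's GMIs being precomputed at every candidate best scale, with the scale actually used for a comparison always being the one assigned to the template point. The following sketch shows that lookup-and-compare step; the similarity function is a placeholder (the paper adopts the attribute-vector similarity of [8]), and all names are ours.

```python
import numpy as np

def similarity(a, b):
    """Placeholder similarity between two attribute vectors in (0, 1]; the
    paper adopts the attribute-vector similarity of the HAMMER algorithm [8]."""
    return float(np.exp(-np.linalg.norm(np.asarray(a) - np.asarray(b))))

def match_template_point(template_attr, best_scale_index, subject_attr_by_scale,
                         candidate_indices, threshold=0.8):
    """Compare a template point's best-scale attribute vector with the subject
    candidates found in the current search neighborhood.

    template_attr        : attribute vector of the template point at its best scale.
    best_scale_index     : index of that best scale within the candidate-scale set.
    subject_attr_by_scale: array of shape (n_scales, n_subject_points, attr_dim)
                           holding the subject attributes precomputed at every
                           candidate scale (Step 3).
    candidate_indices    : indices of the subject points searched in the
                           neighborhood of the warped template point.
    Returns (best candidate index, similarity), or (None, 0.0) if no candidate
    exceeds the threshold (0.8, as in [8]).
    """
    best_idx, best_sim = None, 0.0
    for j in candidate_indices:
        s = similarity(template_attr, subject_attr_by_scale[best_scale_index, j])
        if s > best_sim:
            best_idx, best_sim = j, s
    if best_sim < threshold:
        return None, 0.0
    return best_idx, best_sim
```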

Fig. 8. Visual improvement in registering brain images by the proposed method, particularly in the cortical regions circled. The template image, affine registration result, nonrigid registration result by HAMMER, and nonrigid registration result by our proposed method are displayed from the left column to the right column. It is observable that our registration method achieves more reasonable warping results compared to the HAMMER registration algorithm, especially in the regions superimposed with red ellipses. (a) Template; (b) by affine registration; (c) by HAMMER; (d) by proposed method.

III. RESULTS

The proposed learning-based registration method has been evaluated on both real and simulated MR brain images, and its performance is compared with that of the HAMMER registration algorithm [8]. All experiments are performed on the volumetric brain images, using the same set of parameters, on a PC (Pentium 4, 3.0 GHz), although most results are shown in cross-sectional views. In (2), the value of $\alpha$ is set to 1 when training samples are available, and to 0 when no training sample except the template is available. The value of $\beta$ is always set to 1 in all of our experiments, based on our experience. Our method needs about 2 h to complete the registration of two brain images, which is about half an hour more than that required by the HAMMER registration algorithm.
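For reference, the parameter settings stated in the text can be collected as a small configuration block; the variable names are ours, not the authors', and the candidate-scale range is simply the 4-24 voxel range reported for the learned best-scale maps.

```python
# Parameter settings gathered from the text (names are ours, not the authors').
PARAMS = {
    "alpha": 1.0,                         # weight of the training-consistency term in (2);
                                          # set to 0.0 if only the template is available
    "beta": 1.0,                          # weight of the spatial-smoothness term in (2)
    "best_scale_range_voxels": (4, 24),   # range of learned best scales (Fig. 3)
    "initial_active_fraction": 0.11,      # about 11% of brain points per resolution
    "similarity_threshold": 0.8,          # attribute-vector similarity threshold [8]
    "convergence_threshold_mm": 0.1,      # maximal deformation change per iteration
}
```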

A. Real MR Brain Images

The experiments in this subsection demonstrate the performance of our method in registering new testing brains that were not included in the training of our registration method. In particular, our method is used to register new elderly brain images randomly selected from the BLSA project [30], based on what has been learned from the 18 training brains. We found visual improvement by our method in areas such as the cortical regions, although our method and the HAMMER registration algorithm perform similarly in most brain regions. Fig. 8 shows some examples, comparing the template with the results obtained, respectively, by an affine registration method, the HAMMER registration algorithm, and our method. These results indicate that our method can align cortical regions more accurately, such as the regions highlighted by red ellipses.

B. Simulated MR Brain Images

Simulated data are used to quantitatively evaluate the performance of our registration method. The simulated data are created by a brain deformation simulator [31], which produces relatively realistic brain deformations. Fig. 9 displays a brain in (a1), used as the template, and its deformed versions with various simulated deformations in (b1)–(b8). In addition, the hippocampal region was manually labeled in the template brain, so that the label of this region can be warped by the same deformation fields during the simulation procedure. By using our proposed registration method, we can estimate the deformations between the template brain and each of its deformed versions, i.e., the simulated brains, and further bring the warped labels in the simulated brains back to the original space of the template brain. Thus, we can measure not only the errors between the simulated and the estimated deformations, but also the overlay degree of the labeled hippocampal regions in the template space.

We compare the registration performance of four approaches on the simulated data; the four approaches are described next. The first approach is the original HAMMER registration algorithm, denoted as Approach 1. In order to evaluate the importance of selecting active points based on the integrated saliency and consistency measure for hierarchical image registration, we replace only the ad hoc active-point selection technique in the original HAMMER registration algorithm by our proposed active-point selection criterion, and thus obtain the second registration algorithm, denoted as Approach 2.

Fig. 9. Simulated brains for quantitatively evaluating the performances of four different registration approaches. A selected brain (used as template) and its eight deformed brains are displayed in (a1) and (b1)–(b8), respectively. The best scales, computed using the template as the only sample, are shown in (a2), where dark blue denotes the smallest scales selected and bright white denotes the largest scales selected, according to the color bar shown on the right.

TABLE I AVERAGE DEFORMATION ESTIMATION ERROR FOR EACH OF FOUR REGISTRATION METHODS (MILLIMETERS)

TABLE II OVERLAY DEGREE AND VOLUME ERROR ON LABELED HIPPOCAMPAL REGIONS
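Table I and Table II report the average deformation estimation error and the overlay degree / volume error of the labeled hippocampal regions; only the table captions survive here. As a reference for how such metrics can be computed, the sketch below evaluates a known simulated deformation field against an estimated one and compares two binary labels. The paper does not spell out the overlap definition, so a Dice-style overlap is assumed, and all names are ours.

```python
import numpy as np

def mean_deformation_error(simulated, estimated, brain_mask):
    """Average Euclidean error between the simulated and estimated deformation
    fields over all brain voxels. Both fields have shape (Z, Y, X, 3) and are
    expressed in millimeters."""
    err = np.linalg.norm(simulated - estimated, axis=-1)
    return float(err[brain_mask].mean())

def overlap_and_volume_error(label_ref, label_warped):
    """Overlay degree (assumed here to be the Dice overlap) and relative volume
    error between the reference label and a label warped back to that space."""
    a = np.asarray(label_ref, dtype=bool)
    b = np.asarray(label_warped, dtype=bool)
    dice = 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)
    vol_err = abs(int(b.sum()) - int(a.sum())) / max(int(a.sum()), 1)
    return float(dice), float(vol_err)
```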

As mentioned before, our learning-based method is able to compute best scales from a single template image or from multiple training samples. In order to test whether the registration performance can be improved by using more training samples, we created Approach 3 and Approach 4 to denote, respectively, our registration method trained on the template alone and our registration method trained on multiple samples including the template (i.e., our full registration method). The difference between Approach 2 and Approach 3 is that in Approach 3 the best scales learned from the template are used to compute the GMIs, while in Approach 2 fixed scales are used. In Approach 3, only the template brain is available as a training sample for obtaining the best-scale map, and thus the parameter $\alpha$ in (2) is set to 0. For the template brain in Fig. 9(a1), the resulting best-scale map at the high resolution is shown in Fig. 9(a2). The color coding is the same as in Fig. 3, i.e., dark blue is used for the smallest scale (four voxels) and bright white for the largest scale (24 voxels). The learned best scales are quite reasonable even when only a single training sample, i.e., the template brain, is used. That is, small scales are selected for the complex cortical regions, and best scales increase gradually from the exterior to the interior brain regions, with the largest best scales assigned to uniform regions such as the WM region.

In addition, best scales are smaller at the ventricular corners than at their neighboring points, since the corners are distinctive even at small scales.

The average deformation estimation error and the overlay degree of the labeled hippocampal regions are evaluated independently for each simulated brain and each of the four approaches, as given in Table I and Table II, respectively. The average deformation estimation error is computed by averaging the errors between the simulated deformations and our estimated deformations over all points in the simulated brain. The performance of Approach 2 is better than that of Approach 1, indicating the effectiveness of using the integrated saliency and consistency measure as a criterion for active-point selection in the hierarchical image registration procedure. The performance of Approach 3 (i.e., our proposed registration method using the template as the only training sample) is the best among the first three approaches. In particular, the average deformation estimation error is 0.95 mm by Approach 1, 0.92 mm by Approach 2, and 0.84 mm by Approach 3.

That is, Approach 3 achieved a 12.6% error reduction compared to Approach 1 (i.e., the HAMMER registration algorithm). Importantly, Approach 4, which is trained on the template and four simulated data sets (A–D), has an average deformation error of only 0.66 mm on the four testing simulated data sets (E–H). This indicates a 30.5% error reduction by Approach 4 compared to Approach 1 (i.e., the HAMMER registration algorithm). The histogram of deformation estimation errors is calculated for each registration method and displayed in Fig. 10. Also, based on the labeling results summarized in Table II, Approach 4 (i.e., our full registration method with additional training samples besides the template) achieves the best results, i.e., 93.7% for the overlay degree and 1.2% for the volume error. This experiment on simulated data verifies the advantages of simultaneously using the best-scale GMIs, saliency-based active-point selection, and training samples besides the template, for the hierarchical registration of brains.

Fig. 10. Performance of estimating the simulated deformations by the four registration approaches. The average deformation estimation error is 0.95 mm by Approach 1, 0.92 mm by Approach 2, 0.84 mm by Approach 3 (i.e., our registration method using the template as the only sample), and 0.66 mm by Approach 4 (i.e., our registration method using both the template and other training samples). Importantly, a 30.5% error reduction has been achieved by our registration method (Approach 4), compared to the original HAMMER registration algorithm (Approach 1).

IV. CONCLUSION

We have developed a learning-based method, with two advanced strategies, for deformable registration of MR brain images. First, the best-scale GMIs are learned for each brain point, in order to better distinguish this point during the deformable registration procedure, where the brain registration is formulated as a feature matching and correspondence detection problem. To obtain distinctive best-scale GMIs for each brain point, our learning-based method requires both the consistency of its GMIs over the corresponding points across all training samples and the difference of its GMIs from the GMIs of its nearby points. It further requires spatial smoothness of the best-scale map. All of these requirements are integrated into a single entropy-based energy function, which is solved by an energy optimization method. Second, the active points used to drive the image registration are hierarchically selected during the deformable registration procedure, according to the integrated saliency and consistency measures learned from the training samples. Thus, by incorporating both the learned best-scale GMIs and a criterion for selecting active points (based on saliency and consistency measures) into the framework of the HAMMER registration algorithm, we achieved higher registration accuracy on both real and simulated data. In the future, we plan to apply our learning-based method to other features, e.g., wavelets [32], [33] and Gabor features [34], thereby obtaining an integrated set of different types of best features [35], [36] for more robust and accurate image registration.

ACKNOWLEDGMENT

The authors would like to thank Dr. S. Resnick and the BLSA for providing the datasets.

REFERENCES

[1] Y. Wang and L. H. Staib, “Elastic model-based non-rigid registration incorporating statistical shape information,” in Proc. MICCAI’98, 1998, vol. 1496, pp. 1162–1173.
[2] J. C. Gee, M. Reivich, and R. Bajcsy, “Elastically deforming 3D atlas to match anatomical brain images,” J. Comput. Assist. Tomogr., vol. 17, pp. 225–236, 1993.
[3] G. E. Christensen and H. J. Johnson, “Consistent image registration,” IEEE Trans. Med. Imag., vol. 20, no. 7, pp. 568–582, Jul. 2001.
[4] B. M. Dawant, S. L. Hartmann, and S. Gadamsetty, “Brain atlas deformation in the presence of large space-occupying tumours,” in Proc. MICCAI’99, 1999, vol. 1679, pp. 589–596.
[5] P. Thompson and A. W. Toga, “A surface-based technique for warping three-dimensional images of the brain,” IEEE Trans. Med. Imag., vol. 15, no. 4, pp. 402–417, Aug. 1996.
[6] W. M. Wells, III, P. Viola, H. Atsumi, S. Nakajima, and R. Kikinis, “Multi-modal volume registration by maximization of mutual information,” Med. Image Anal., vol. 1, pp. 35–51, 1996.
[7] C. Studholme, D. L. G. Hill, and D. J. Hawkes, “Multiresolution voxel similarity measures for MR-PET registration,” in Proc. IPMI, Ile de Berder, France, 1995, pp. 287–298.
[8] D. Shen and C. Davatzikos, “HAMMER: Hierarchical attribute matching mechanism for elastic registration,” IEEE Trans. Med. Imag., vol. 21, no. 11, pp. 1421–1439, Nov. 2002.
[9] C. A. Pelizzari, G. Chen, and D. Spelbring, “Accurate three dimensional registration of CT, PET, and MR images of the brain,” J. Comput. Assist. Tomogr., vol. 13, no. 1, pp. 20–26, 1989.
[10] C. V. Stewart, C.-L. Tsai, and B. Roysam, “The dual-bootstrap iterative closest point algorithm with application to retinal image registration,” IEEE Trans. Med. Imag., vol. 22, no. 11, pp. 1379–1394, Nov. 2003.
[11] K. Boesen, J. Huang, J. Germann, J. Stern, D. L. Collins, A. C. Evans, and D. A. Rottenberg, “Inter-rater reproducibility of 3D cortical and subcortical landmark points,” presented at the 11th Annu. Meeting Organization Human Brain Mapping, Toronto, ON, Canada, 2005.
[12] A. C. Evans, W. Dai, L. Collins, P. Neelin, and S. Marrett, “Warping of a computerized 3D atlas to match brain image volumes for quantitative neuroanatomical and functional analysis,” Proc. SPIE Image Process. Algorithms Techniques, vol. 1445, pp. 236–246, 1991.
[13] S. Pizer, D. Fritsch, P. Yushkevich, V. Johnson, and E. Chaney, “Segmentation, registration, and measurement of shape variation via image object shape,” IEEE Trans. Med. Imag., vol. 18, no. 10, pp. 851–865, Oct. 1999.
[14] C. Davatzikos and J. L. Prince, “Brain image registration based on curve mapping,” in Proc. IEEE Workshop Biomed. Image Anal., Jun. 1994, pp. 245–254.
[15] F. Bookstein, “Shape and the information in medical images: A decade of the morphometric synthesis,” Comput. Vision Image Understand., vol. 66, no. 2, pp. 97–118, 1997.
[16] P. A. Freeborough and N. C. Fox, “Modeling brain deformations in Alzheimer’s disease by fluid registration of serial 3D MR images,” J. Comput. Assist. Tomogr., vol. 22, pp. 838–843, 1998.
[17] C. Davatzikos, “Spatial transformation and registration of brain images using elastically deformable models,” Comput. Vision Image Understand., vol. 66, no. 2, pp. 207–222, 1997.
[18] A. Kelemen, G. Szekely, and G. Gerig, “Elastic model-based segmentation of 3-D neuroradiological data sets,” IEEE Trans. Med. Imag., vol. 18, no. 10, pp. 828–839, Oct. 1999.
[19] S. Sandor and R. Leahy, “Surface based labeling of cortical anatomy using a deformable atlas,” IEEE Trans. Med. Imag., vol. 16, no. 1, pp. 41–54, Feb. 1997.
[20] M. Miller, S. Joshi, and G. Christensen, Large Deformation Fluid Diffeomorphisms for Landmark and Image Matching. San Diego, CA: Academic, 1999, pp. 115–131.
[21] G. Subsol, J. P. Thirion, and N. Ayache, “A scheme for automatically building 3D morphometric anatomical atlases: Application to a skull atlas,” Med. Image Anal., vol. 2, no. 1, pp. 37–60, 1998.
[22] N. Duta and M. Sonka, “Segmentation and interpretation of MR brain images: An improved active shape model,” IEEE Trans. Med. Imag., vol. 17, no. 6, pp. 1049–1062, Dec. 1998.
[23] B. V. Ginneken, A. F. Frangi, J. J. Staal, B. M. T. H. Romeny, and M. A. Viergever, “Active shape model segmentation with optimal features,” IEEE Trans. Med. Imag., vol. 21, no. 8, pp. 924–933, Aug. 2002.
[24] S. Li, L. Zhu, and T. Jiang, “Active shape model segmentation using local edge structures and AdaBoost,” in Proc. MIAR, 2004, pp. 121–128.
[25] T. Kadir and M. Brady, “Saliency, scale and image description,” Int. J. Comput. Vision, vol. 45, no. 2, pp. 83–105, 2001.
[26] X. Huang, Y. Sun, D. Metaxas, F. Sauer, and C. Xu, “Hybrid image registration based on configural matching of scale-invariant salient region features,” in Proc. IEEE Workshop Image Video Registration, Washington, DC, Jul. 2004, p. 167.
[27] L. Teverovskiy and Y. Liu, “Learning-based neuroimage registration,” Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, Tech. Rep. CMU-RI-TR-04-59, Oct. 2004.
[28] T. Kadir, A. Zisserman, and M. Brady, “An affine invariant salient region detector,” in Proc. Eur. Conf. Comput. Vision, 2004, pp. 228–241.
[29] M. Jenkinson, P. R. Bannister, J. M. Brady, and S. M. Smith, “Improved optimisation for the robust and accurate linear registration and motion correction of brain images,” NeuroImage, vol. 17, pp. 825–841, 2002.
[30] S. M. Resnick, A. F. Goldszal, C. Davatzikos, S. Golski, M. A. Kraut, E. J. Metter, R. N. Bryan, and A. B. Zonderman, “One-year age changes in MRI brain volumes in older adults,” Cerebral Cortex, vol. 10, pp. 464–472, 2000.
[31] Z. Xue, D. Shen, B. Karacali, and C. Davatzikos, “Statistical representation and simulation of high-dimensional deformations: Application to synthesizing brain deformations,” presented at the MICCAI Conf., Palm Springs, CA, Oct. 26–29, 2005.
[32] Z. Xue, D. Shen, and C. Davatzikos, “Determining correspondence in 3D MR brain images using attribute vectors as morphological signatures of voxels,” IEEE Trans. Med. Imag., vol. 23, no. 10, pp. 1276–1291, Oct. 2004.
[33] S. Mallat, “A theory for multiresolution signal decomposition: The wavelet representation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, no. 7, pp. 674–693, Jul. 1989.
[34] D. Shen, Y. Zhan, and C. Davatzikos, “Segmentation of prostate boundaries from ultrasound images using statistical shape model,” IEEE Trans. Med. Imag., vol. 22, no. 4, pp. 539–551, Apr. 2003.
[35] R. R. Coifman and M. V. Wickerhauser, “Entropy-based algorithms for best basis selection,” IEEE Trans. Inf. Theory, vol. 38, no. 2, pp. 713–718, Mar. 1992.
[36] A. Mohamed and C. Davatzikos, “Shape representation via best orthogonal basis selection,” presented at the 7th Int. Conf. Med. Image Computing Computer Assisted Intervention (MICCAI), St. Malo, France, Sep. 26–30, 2004.
[37] D. Rueckert, L. I. Sonoda, C. Hayes, D. L. G. Hill, M. O. Leach, and D. J. Hawkes, “Non-rigid registration using free-form deformations: Application to breast MR images,” IEEE Trans. Med. Imag., vol. 18, no. 8, pp. 712–721, Aug. 1999.
[38] J. P. Thirion, “Non-rigid matching using demons,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1996, pp. 245–251.
[39] C. V. Stewart, C.-L. Tsai, and B. Roysam, “The dual-bootstrap iterative closest point algorithm with application to retinal image registration,” IEEE Trans. Med. Imag., vol. 22, no. 11, pp. 1379–1394, Nov. 2003.
[40] P. Lorenzen, M. Prastawa, B. Davis, G. Gerig, E. Bullitt, and S. Joshi, “Multi-modal image set registration and atlas formation,” Med. Image Anal., to be published.
[41] C. J. Twining, T. Cootes, S. Marsland, V. Petrovic, R. Schestowitz, and C. J. Taylor, “A unified information-theoretic approach to groupwise non-rigid registration and model building,” presented at the 19th Int. Conf. Inf. Process. Med. Imag., Glenwood Springs, CO, Jul. 10–15, 2005.
[42] L. Zollei, E. Learned-Miller, E. Grimson, and W. Wells, “Efficient population registration of 3D data,” presented at the 10th Int. Conf. Comput. Vision (ICCV) Workshop Computer Vision for Biomedical Image Applications: Current Techniques and Future Trends, Beijing, China, Oct. 15–21, 2005.
[43] A. F. Goldszal, C. Davatzikos, D. L. Pham, M. X. H. Yan, R. N. Bryan, and S. M. Resnick, “An image-processing system for qualitative and quantitative volumetric analysis of brain images,” J. Comput. Assist. Tomogr., vol. 22, no. 5, pp. 827–837, Sep.–Oct. 1998.
[44] H. L. Chui and A. Rangarajan, “A new point matching algorithm for non-rigid registration,” Comput. Vision Image Understand., vol. 89, pp. 114–141, 2003.
[45] B. B. Avants and J. C. Gee, “Shape averaging with diffeomorphic flows for atlas creation,” in Proc. IEEE Int. Symp. Biomed. Imag., Arlington, VA, Apr. 15–18, 2004, pp. 595–598.
[46] P. E. Roland, C. J. Graufelds, J. Wåhlin, L. Ingelman, M. Andersson, A. Ledberg, J. Pedersen, S. Åkerman, A. Dabringhaus, and K. Zilles, “Human brain atlas: For high-resolution functional and anatomical mapping,” Human Brain Mapp., vol. 1, pp. 173–184, 1994.
[47] R. Kikinis, M. E. Shenton, D. V. Iosifescu, R. W. McCarley, P. Saiviroonporn, H. H. Hokama, A. Robatino, D. Metcalf, C. G. Wible, C. M. Portas, R. Donnino, and F. Jolesz, “A digital brain atlas for surgical planning, model-driven segmentation, and teaching,” IEEE Trans. Visual. Comput. Graphics, vol. 2, no. 3, pp. 232–241, Sep. 1996.
[48] A. W. Toga and P. M. Thompson, “Brain atlases of normal and diseased populations,” Int. Rev. Neurobiol. (Neuroimaging, Pt. A), vol. 66, pp. 1–54, Sep. 2005.
[49] C. H. Lo and H. S. Don, “3-D moment forms: Their construction and application to object identification and positioning,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, no. 10, pp. 1053–1064, Oct. 1989.
[50] B. Karacali and C. Davatzikos, “Estimating topology preserving and smooth displacement fields,” IEEE Trans. Med. Imag., vol. 23, no. 7, pp. 868–880, Jul. 2004.
