NON-RIGID REGISTRATION FOR AUTOMATIC FRACTURE SEGMENTATION

Johanna Pettersson, Hans Knutsson, and Magnus Borga

Department of Biomedical Engineering and Center for Medical Image Science and Visualization, Linköping University, Sweden

ABSTRACT

Automatic segmentation of anatomical structures is often performed using model-based non-rigid registration methods. These algorithms work well when the images do not contain any large deviations from the normal anatomy. We have previously used such a method to generate patient specific models of hip bones for surgery simulation. The method that was used, the morphon method, registers two- or three-dimensional images using a multi-resolution deformation scheme. A prototype image is iteratively registered to a target image using quadrature filter phase difference to estimate the local displacement. In this work the morphon method has been extended to deal with automatic segmentation of fractured bones. Two features have been added. First, the method is modified such that multiple prototypes (in this case two) can be used. Second, normalised convolution is utilised for the displacement estimation, to guide the registration of the second prototype based on the result of the registration of the first one.

Index Terms— Registration, Segmentation, Computed tomography, Biomedical image processing

1. INTRODUCTION

Numerous registration methods have been presented throughout the years for medical image registration and segmentation [1]. They differ in the type of image features they work on, the way they measure similarity between the image features, the optimisation scheme used to maximise image similarity, and the type of deformations they allow. The degrees of freedom of the deformation can be low, allowing only rigid transformations. For non-rigid registration methods the degrees of freedom are significantly higher. These types of deformations are often needed in medical registration applications to handle the large variation of the anatomical structures.

An automatic segmentation of a certain structure can be obtained by registering a labelled model, typically generated in a manual segmentation process, to another dataset containing the structure of interest. We have previously used this technique for automatic segmentation of hip bones to create patient specific models for a hip surgery simulator system [2]. The registration is based on the morphon method [3], which is an iterative non-rigid registration technique that uses local phase difference to estimate the displacement. A model of the anatomical structure, referred to as the prototype in the morphon method, is repeatedly deformed, from coarse to fine resolution scale, to match the target volume. Quadrature filter phase difference is used to estimate a local displacement field in each iteration. This local displacement estimation is known as finding the optical flow and is used in e.g. the Demons algorithm [4], where the local forces are calculated from the image gradients. One method that uses phase instead of intensities to perform registration is presented by Mellor and Brady in [5]. In their method, the local phase in the images is found using a set of filters. Thereafter, they maximise the mutual information with respect to the phase representations of the images, instead of the ordinary intensity images. The morphon method utilises local phase information to estimate the optical flow, which is proportional to the local phase difference between the prototype and the target image.

In this paper we present an extension of the morphon method to perform automatic segmentation of fractured hip bones. The challenge when dealing with fractures is that the parts of the fractured bone have been relocated to some degree, and thereby the anatomy in the region has been altered. Thus, it is difficult to create a single prototype that can manage the varying shapes and the topological changes of the fractured bones. The morphon method has therefore been modified such that it works with two different prototypes, one for each part of the fractured bone. These prototypes are registered to the target data sequentially. Furthermore, normalised convolution has been applied to make it possible to use information from the registration of the first prototype to guide the registration of the second prototype.

Thanks to the Swedish Agency for Innovation Systems (VINNOVA), the Swedish Foundation for Strategic Research (SSF) and Melerit Medical AB for funding.

2. METHOD

This section contains a general description of the morphon method in 2.1, followed by a description of the contributions of this work in 2.2. There is not room in this paper for a detailed explanation of each step in the algorithm; the morphon method has previously been presented in e.g. [3, 6], and the reader is referred to these papers for more details.

2.1. Standard Morphon Registration

The morphon method handles registration of two- or three-dimensional images based on local displacement estimates that indicate how each pixel in the prototype image should be displaced to better match the target image. The registration is an iterative process performed on multiple resolution scales to catch large global deformations as well as smaller and more local displacements. The algorithm can be divided into a few main steps, which are performed in each iteration:

• Displacement estimation. The displacement between the images is estimated using the local phase differences, which makes the method robust to intensity variations. Local phase is a description of the local structure in the image. The registration is, hence, based on matching structural similarities in the images rather than intensities.

• Deformation field accumulation. A displacement field is found for each iteration and scale. These displacement estimates are accumulated into a deformation field that indicates how the original prototype should be deformed to match the target image.

• Deformation field regularisation. Using the accumulated local displacement estimates straight off would most likely tear the image apart. To avoid this, and to obtain a smoothly varying deformation field, a regularisation step is necessary.

• Deformation of the prototype. The final step in each iteration is to deform the original prototype according to the accumulated and regularised deformation field.

The parameters that affect the outcome of the registration are the number of iterations and scales, together with the amount of regularisation of the deformation field. The number of resolution scales depends on how large the distance is between the corresponding objects in the prototype and target images; the morphon process must be initiated on a resolution scale coarse enough to catch that displacement. The number of iterations for each scale depends mainly on the complexity of the structures: more complex objects require more iterations for the prototype to adjust to the target. The regularisation parameter is the standard deviation of the regularising kernel used when smoothing the deformation field. Since a large standard deviation smooths over a larger region, it decreases the degrees of freedom of the deformation.
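As an illustration, the iterative scheme outlined above can be sketched in a few lines of Python/NumPy. This is a minimal sketch, not the actual implementation: the displacement estimation and accumulation steps are passed in as functions (they are detailed in sections 2.1.2 and 2.1.3), the handling of resolution scales is simplified, and the parameter names and defaults are our own.

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def deform(image, field):
        """Warp the image with a dense deformation field (linear interpolation)."""
        grid = np.indices(image.shape).astype(float)
        return map_coordinates(image, grid + field, order=1, mode='nearest')

    def morphon_register(prototype, target, estimate_displacement, accumulate,
                         scales=(4, 2, 1), iters_per_scale=5, reg_sigma=2.0):
        """Coarse-to-fine morphon loop (sketch).

        estimate_displacement(deformed, target, scale) -> (d_k, c_k)   # eq. (1)
        accumulate(d_a, c_a, d_k, c_k)                 -> (d_a, c_a)   # eq. (2)
        """
        ndim = prototype.ndim
        d_acc = np.zeros((ndim,) + prototype.shape)   # accumulated deformation field
        c_acc = np.zeros(prototype.shape)             # and its certainty

        for scale in scales:                          # coarse to fine resolution
            for _ in range(iters_per_scale):
                deformed = deform(prototype, d_acc)   # always warp the original prototype
                d_k, c_k = estimate_displacement(deformed, target, scale)
                d_acc, c_acc = accumulate(d_acc, c_acc, d_k, c_k)
                # regularisation: smooth each component of the deformation field
                d_acc = gaussian_filter(d_acc, sigma=(0,) + (reg_sigma,) * ndim)
        return deform(prototype, d_acc), d_acc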

2.1.1. The prototype

The prototype is the image that is being deformed during the registration process. Hence, it should contain the objects that we want to find in the target image. However, since the similarity measure is based on comparing local phase, the prototype should mainly be designed to match the structure of the target objects, while intensity information is less important.

2.1.2. Displacement estimation

The initial step in each iteration is to estimate a field describing the current local displacements between the images, which is equivalent to finding the optical flow. There are many techniques for computing the optical flow [7]. The morphon works with a method based on measuring the difference in quadrature phase [8, 9]. A set of quadrature filters (six for 3D data), each one sensitive to structures in a certain direction, is applied to the prototype and target images. The local displacement estimate in a certain direction is proportional to the local phase difference of the filter responses in that direction. The estimates for the different directions are combined using the least-squares formulation in equation (1):

    \min_{\mathbf{d}} \sum_i \left[ w_i \left( \hat{\mathbf{n}}_i^T \mathbf{d} - d_i \right) \right]^2          (1)

In the above equation, d is the sought displacement field and n̂_i is the direction of filter i. The variable d_i is the local phase difference estimate, i.e. the displacement estimate, associated with filter i. Finally, w_i is a certainty measure, which is derived from the magnitude of the phase difference. This magnitude is high when the corresponding filter outputs have high magnitude, thus giving us reason to trust these estimates more. The output from this step is a field with a vector in each pixel position, which describes how the corresponding pixel in the prototype should be moved to decrease the difference in quadrature phase between the target image and the prototype image.
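A direct way to solve equation (1) in every pixel is to form the corresponding normal equations. The sketch below is one possible NumPy implementation, assuming that the filter directions, the per-filter phase difference estimates and the certainties are already available as arrays; all names, and the small numerical safeguard, are our own.

    import numpy as np

    def solve_displacement(directions, phase_diffs, certainties, eps=1e-6):
        """Per-pixel weighted least-squares solution of eq. (1).

        directions  : (K, ndim) unit direction n_i of each quadrature filter
        phase_diffs : (K, ...)  local displacement estimate d_i along n_i
        certainties : (K, ...)  certainty w_i of each estimate

        Returns a field of shape (ndim, ...) minimising
        sum_i [ w_i (n_i^T d - d_i) ]^2 in every pixel.
        """
        ndim = directions.shape[1]
        w2 = certainties ** 2
        # Normal equations A d = b with
        #   A = sum_i w_i^2 n_i n_i^T   and   b = sum_i w_i^2 d_i n_i
        A = np.einsum('k...,ka,kb->...ab', w2, directions, directions)
        b = np.einsum('k...,k...,ka->...a', w2, phase_diffs, directions)
        A = A + eps * np.eye(ndim)                    # numerical safeguard (not in eq. 1)
        d = np.linalg.solve(A, b[..., None])[..., 0]  # per-pixel 2x2 or 3x3 solve
        return np.moveaxis(d, -1, 0)                  # displacement field, (ndim, ...)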

2.1.3. Accumulation

When a field describing the displacement estimates for the current iteration has been found, it is added to an accumulated deformation field. The accumulation is necessary to obtain a field that describes how the original prototype should be deformed in each iteration, and thereby to avoid a continuous deformation of the prototype. A continuous deformation of the prototype would mean that the prototype is interpolated once in each iteration, which would introduce a blurring effect that grows until the loop ends. Therefore we generate an accumulated field that describes the deformation of the original prototype, and update it with the current displacement estimates in each iteration. This field is then used to deform the original prototype at the end of each iteration. The accumulation is done using the following formula:

    \mathbf{d}_a' = \frac{\mathbf{d}_a c_a + (\mathbf{d}_a + \mathbf{d}_k)\, c_k}{c_a + c_k}          (2)

where d_a' is the updated accumulated deformation field, d_a the accumulated field from the previous iteration, and d_k the displacement estimate derived in the current iteration. c_a and c_k are certainty estimates associated with the accumulated deformation field and the current displacement estimates, respectively.

2.1.4. Regularisation

Since the displacement estimates are found by looking only locally in the image, adjacent estimates can be very divergent. To avoid tearing the image apart when deforming it, the field must vary smoothly over the image. This means that the displacement estimates should not be used straight off; first, a regularisation of the field is required. This is where the degrees of freedom of the deformation are defined. If the field is fitted to a global, rigid transformation model, only rotations and translations are allowed, while a more elastic field is obtained by fitting the estimates to some local function. The local, non-rigid model is necessary for the hip models and is implemented by local averaging of the field with a Gaussian low-pass filter and normalised averaging [9].

2.1.5. Prototype deformation

The final step in each iteration is to deform the original prototype image according to the accumulated and regularised field. This is done using conventional linear interpolation techniques.

2.2. Extended Morphon for Fracture Segmentation

The process in section 2.1 is sufficient when working on datasets where no fractures are present in the target objects. To use the morphon method to automatically segment fractures, the algorithm has been extended on two points. First, the registration of a single prototype has been modified to handle multiple prototypes (in this case two), as described in 2.2.1. Second, the displacement estimation has been changed to utilise normalised convolution, so that information from the first prototype registration can be used to guide the second one, as described in 2.2.2.

2.2.1. Multiple prototypes

Since a fracture implies a topological change in a certain part of the anatomical region, the prototype must be able to deform considerably in this region, while still not losing its shape in the other areas. These properties are quite difficult to combine in a single prototype.

To overcome this problem we have introduced the possibility to use multiple prototypes in the registration scheme. For the hip fracture application two prototypes are sufficient, one for each part of the fractured object. The two prototypes are registered sequentially to the target dataset. This approach, in combination with the technique described in 2.2.2, gives a registration scheme where the target image is gradually being “occupied” by the different prototypes. When the first prototype has found its corresponding structure in the target image, the second prototype searches for its corresponding structure in the rest of the target image. This feature is handled using normalised convolution.

2.2.2. Normalised convolution

Normalised convolution is a method for performing convolution of incomplete and uncertain data. The method was first presented by Knutsson and Westin in [10]. For a more thorough description of the mathematical concepts, the reader is referred to [11]. Normalised convolution replaces standard convolution in the displacement estimation step of the registration process. By using normalised convolution it is possible to define a region in the target image as uncertain or not relevant, i.e. the quadrature filters can be made insensitive to some parts of the image. Note that this is not the same as setting one part of the image to zero. Setting regions of the image to zero affects the content of the image by introducing edges between the zero regions and the rest of the image. With normalised convolution, a region is instead defined as not relevant by including, in the convolution process, a mask that defines the relevance of each pixel in the image. This means that structures in regions that have been specified as less relevant, i.e. uncertain, are ignored when comparing the prototype and the target image.

The following explanation of normalised convolution describes the convolution as a pointwise operation on the signal. The filter response is found as the scalar product between the signal vector f and the filter vector b in each point. Ordinary convolution is thus written as x̃ = b^T f, where x̃ is the filter response. For a set of filters, we can place each filter vector as a column in a matrix that we call B, changing the previous expression to x̃ = B^T f. The filter vectors in the matrix B can be described as basis functions for the filters. A basis function is, however, not necessarily defined only in the spatial neighbourhood of the filter. It can have unlimited extent, although the values outside the specified region are not relevant. To limit the basis function, a so called applicability function is included, denoted W_a. This is a diagonal matrix with the values of the applicability function in the diagonal elements. Furthermore, a matrix containing the certainty values of the signal is added to the expression. This matrix is denoted W_c and is also diagonal. Filtering a signal using normalised convolution can therefore be written as x̃ = B^T W_a W_c f. The filter outputs in x̃ are the coordinates of the filters in the dual basis of B. To obtain the filter coordinates in the basis B, the filter outputs are transformed with the matrix (B^T W_a W_c B)^{-1}, resulting in the final expression for normalised convolution:

    \mathbf{x} = \left( \mathbf{B}^T \mathbf{W}_a \mathbf{W}_c \mathbf{B} \right)^{-1} \mathbf{B}^T \mathbf{W}_a \mathbf{W}_c \mathbf{f}          (3)

In order to perform normalised convolution with a quadrature filter, two basis functions are used, one real and one complex. The first basis function is real and equal to one, b_1 = 1. The second basis function is, together with the applicability function, exactly the complex filter itself, W_a b_2 = q. The applicability function is equal to the envelope of the complex filter.
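To make the expression concrete, the sketch below evaluates equation (3) for one neighbourhood and sets up the two basis functions described above for a toy 1D complex filter. The filter, the Gaussian envelope and all variable names are illustrative only, not the filters used in this work; for complex basis functions the conjugate transpose is used, which reduces to B^T in the real-valued case.

    import numpy as np

    def normalised_convolution_point(B, applicability, certainty, f):
        """Normalised convolution, eq. (3), evaluated in one neighbourhood.

        B             : (N, K) matrix with the basis functions as columns
        applicability : (N,) diagonal of W_a (the filter envelope)
        certainty     : (N,) diagonal of W_c (signal certainty, 0 = not relevant)
        f             : (N,) signal values in the neighbourhood
        """
        W = applicability * certainty                  # diagonal of W_a W_c
        G = B.conj().T @ (W[:, None] * B)              # B^T W_a W_c B
        h = B.conj().T @ (W * f)                       # B^T W_a W_c f
        return np.linalg.solve(G, h)                   # coordinates in the basis B

    # Quadrature-filter case: b_1 = 1 and W_a b_2 = q, with W_a the filter envelope.
    n = np.arange(-4, 5)
    envelope = np.exp(-0.5 * (n / 2.0) ** 2)           # toy Gaussian envelope (W_a)
    q = envelope * np.exp(1j * n)                      # toy complex filter
    B = np.stack([np.ones_like(n, dtype=complex),      # b_1 = 1
                  q / envelope], axis=1)               # b_2 such that W_a b_2 = q
    certainty = np.ones(n.size)                        # full certainty everywhere
    signal = np.random.default_rng(0).standard_normal(n.size)
    coords = normalised_convolution_point(B, envelope, certainty, signal)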

3. RESULTS

The proposed registration scheme has been used to segment fractured hip bones from CT volumes. A 256³ section around the fracture, including the femur and part of the pelvis, has been cut from the full CT volume and used in the process. The fractures we are working with are cervical hip fractures, which means that the femoral neck is broken. This implies that the pelvis has not been affected and the femoral head is still located inside its socket, while the lower part of the femur has been dislocated. Since the fracture is located in approximately the same region in the CT volumes, one prototype can be used to find the femoral head and socket, and another prototype can be used to locate the rest of the femoral bone. The prototype that we have previously used for bones without fractures [2] is a manual segmentation of the hip bone. If this prototype is cut over the femoral neck, which is where the fracture is located, we obtain two prototypes that can move independently of each other and thereby be registered to the target data separately, see fig. 1.


Fig. 1. Isosurface plots of the prototypes. (a) The prototype used for bones without fractures. The colours indicate where this dataset has been split to create the prototypes used for the fractured bones, shown in (b) and (c).

The fracture segmentation process is initiated by registering the prototype for the femoral head and socket, fig. 1(c). It is natural to begin with these parts of the bone since they have not been dislocated in the CT volume. The first step is thus to register this prototype to the target data using the standard morphon registration scheme. Secondly, the prototype for the femoral bone, fig. 1(b), is registered. An intermediate step is added to define which parts of the target image have been reserved by the prototype in the first registration step. Based on the result of the deformation of the first prototype, the algorithm creates a mask that covers the part of the target image the first prototype has been matched to. This mask is then used during the registration of the second prototype, where it “hides” the corresponding parts of the target image. This way it is possible to prevent the two prototypes from being registered to the same structures in the target image.

Fig. 2 shows 2D slices from registering the prototypes to a dataset containing a fracture. Axial (top row) and coronal (bottom row) slices are shown. The first column shows slices of the target image and the second column shows the prototypes before registration. The black line indicates the position at which the prototype has been split into two. These two prototypes have been registered sequentially, with the femoral head and socket prototype first, followed by the registration of the femur. The third column shows the prototypes after registration to the target image. It can be seen that they have been able to move separately from each other, while still not being registered to the same structures. The last column shows the deformed prototypes on top of the target data. No quantitative analysis is included due to lack of ground truth data.
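For clarity, the two-stage procedure described above can be summarised in the following sketch. The function register() stands for one complete morphon registration in which the relevance mask enters the normalised convolution of the target image; the threshold and the dilation of the occupied region are illustrative choices of our own, not taken from this work.

    import numpy as np
    from scipy.ndimage import binary_dilation

    def segment_fracture(target, proto_head_socket, proto_femur, register):
        """Sequential registration of two prototypes to a fractured hip (sketch).

        register(prototype, target, relevance) -> deformed prototype, where
        relevance marks which target voxels the quadrature filters may respond to.
        """
        relevance = np.ones(target.shape)              # initially the whole target is relevant

        # 1. Register the femoral head / socket prototype (fig. 1(c)).
        deformed_head = register(proto_head_socket, target, relevance)

        # 2. Mask out the target region occupied by the first prototype.
        occupied = deformed_head > 0.5                 # assuming a (roughly) binary prototype
        occupied = binary_dilation(occupied, iterations=2)
        relevance = np.where(occupied, 0.0, 1.0)

        # 3. Register the femur prototype (fig. 1(b)) to the remaining region.
        deformed_femur = register(proto_femur, target, relevance)
        return deformed_head, deformed_femur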

Fig. 2. 2D slices showing the result from registering the prototypes to a dataset containing a fracture. The top row shows axial slices and the bottom row shows coronal slices. The first column shows slices of the target dataset. The second column shows the undeformed prototypes; the black line in this image marks the position at which the prototype has been split into two, as described in 2.2.1. The third column shows the two prototypes after deformation and the last column shows the deformed prototypes outlined on top of the target data.

4. CONCLUSIONS

This work presents an extension of non-rigid morphon registration to perform automatic segmentation of fractured bones. In the extended method it is possible to use multiple prototypes and to incorporate information about the target dataset to guide the prototypes during the registration. This information is used in the local displacement estimation step by utilising normalised convolution instead of regular convolution of the images. For this application two prototypes are enough to segment the parts of the fractured bones. However, generalising the method to work with more prototypes, in the case of more complex fractures, would probably be quite straightforward. Furthermore, it would be interesting to evaluate the method on other applications containing large topological changes in the structures.

5. REFERENCES

[1] D. Hill, P. Batchelor, M. Holden, and D. Hawkes, “Medical image registration,” Physics in Medicine and Biology, vol. 46, no. 3, pp. R1–R45, March 2001.

[2] J. Pettersson, H. Knutsson, P. Nordqvist, and M. Borga, “A hip surgery simulator based on patient specific models generated by automatic segmentation,” in Proceedings of the Medicine Meets Virtual Reality Conference (MMVR’06), Long Beach, California, USA, Jan 2006.

[3] H. Knutsson and M. Andersson, “Morphons: Segmentation using elastic canvas and paint on priors,” in IEEE International Conference on Image Processing (ICIP’05), Genova, Italy, September 2005.

[4] J.-P. Thirion, “Non-rigid matching using demons,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 1996, pp. 245–251.

[5] M. Mellor and M. Brady, “Non-rigid multimodal image registration using local phase,” in Proceedings of MICCAI’04, C. Barillot, D. R. Haynor, and P. Hellier, Eds., 2004, pp. 789–796, Springer-Verlag.

[6] A. Wrangsjö, J. Pettersson, and H. Knutsson, “Non-rigid registration using morphons,” in Proceedings of the 14th Scandinavian Conference on Image Analysis (SCIA’05), Joensuu, June 2005.

[7] J. L. Barron, D. J. Fleet, and S. S. Beauchemin, “Performance of optical flow techniques,” Int. J. of Computer Vision, vol. 12, no. 1, pp. 43–77, 1994.

[8] D. J. Fleet and A. D. Jepson, “Computation of component image velocity from local phase information,” Int. Journal of Computer Vision, vol. 5, no. 1, pp. 77–104, 1990.

[9] G. H. Granlund and H. Knutsson, Signal Processing for Computer Vision, Kluwer Academic Publishers, 1995, ISBN 0-7923-9530-1.

[10] H. Knutsson and C.-F. Westin, “Normalized and differential convolution: Methods for interpolation and filtering of incomplete and uncertain data,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 1993, pp. 515–523.

[11] C.-F. Westin, A Tensor Framework for Multidimensional Signal Processing, Ph.D. thesis, Linköping University, SE-581 83 Linköping, Sweden, 1994, Dissertation No. 348, ISBN 91-7871-421-4.