Prostate boundary segmentation from 3D ultrasound images

Ning Hu
Imaging Research Laboratories, The John P. Robarts Research Institute, London, Ontario N6H 5C1, Canada and Department of Electrical and Computer Engineering, University of Western Ontario, London, Ontario N6H 5C1, Canada

Dónal B. Downey
Imaging Research Laboratories, The John P. Robarts Research Institute, London, Ontario N6H 5C1, Canada, Department of Medical Biophysics, University of Western Ontario, London, Ontario N6H 5C1, Canada, and Department of Radiology, London Health Sciences Centre, London, Ontario N6H 5C1, Canada

Aaron Fenster
Imaging Research Laboratories, The John P. Robarts Research Institute, London, Ontario N6H 5C1, Canada, Department of Radiology, London Health Sciences Centre, London, Ontario N6H 5C1, Canada, and Department of Medical Biophysics, University of Western Ontario, London, Ontario N6H 5C1, Canada

Hanif M. Ladak a)
Imaging Research Laboratories, The John P. Robarts Research Institute, London, Ontario N6H 5C1, Canada and Department of Medical Biophysics and Department of Electrical and Computer Engineering, University of Western Ontario, London, Ontario N6H 5C1, Canada

(Received 8 July 2002; accepted for publication 5 May 2003; published 20 June 2003)

Segmenting, or outlining, the prostate boundary is an important task in the management of patients with prostate cancer. In this paper, an algorithm is described for semiautomatic segmentation of the prostate from 3D ultrasound images. The algorithm uses model-based initialization and mesh refinement using an efficient deformable model. Initialization requires the user to select only six points, from which the outline of the prostate is estimated using shape information. The estimated outline is then automatically deformed to better fit the prostate boundary. An editing tool allows the user to edit the boundary in problematic regions and then deform the model again to improve the final results. The algorithm requires less than 1 min on a Pentium III 400 MHz PC. The accuracy of the algorithm was assessed by comparing the algorithm results, obtained from both local and global analysis, to manual segmentations of six prostates. The local difference was mapped on the surface of the algorithm boundary to produce a visual representation. Global error analysis showed that the average difference between manual and algorithm boundaries was -0.20 ± 0.28 mm, the average absolute difference was 1.19 ± 0.14 mm, the average maximum difference was 7.01 ± 1.04 mm, and the average volume difference was 7.16% ± 3.45%. Variability in manual and algorithm segmentation was also assessed: visual representations of local variability were generated by mapping variability on the segmentation mesh. The mean variability in manual segmentation was 0.98 mm and in algorithm segmentation was 0.63 mm, and the differences between about 51.5% of the points comprising the average algorithm boundary and the corresponding points on the average manual boundary were not statistically significant (P ≤ 0.01). © 2003 American Association of Physicists in Medicine. [DOI: 10.1118/1.1586267]

Key words: segmentation, three-dimensional ultrasound image, initialization, deformation, mesh

I. INTRODUCTION

Prostate cancer is the most commonly diagnosed malignancy in men over the age of 50, and is found at autopsy in 30% of men at the age of 50, 40% at age 60, and almost 90% at age 90.1 Worldwide, it is the second leading cause of death due to cancer in men, accounting for between 2.1% and 15.2% of all cancer deaths.2 When prostate cancer is diagnosed at an early stage, it is curable,3 and even at later stages treatment can be effective. Therefore, early diagnosis and precise staging of prostate cancer are of primary importance. Accurate and reproducible segmentation, or outlining, of the prostate boundary from 3D ultrasound images is an important first step when using images to support activities such as diagnosis and monitoring of prostate cancer as well as treatment planning and delivery.

Manual contouring of sequential cross-sectional prostate images is time-consuming and tedious, and hence a number of algorithms have been developed to segment the prostate boundary either automatically or semiautomatically. Most currently available algorithms focus on segmentation of 2D images. Prater and Richard4 used neural networks to segment the prostate; however, this approach requires extensive training sets, is slow, and makes the addition of user-specified information difficult.



Later, Richard et al.5 used the Laplacian-of-Gaussian edge operator followed by an edge-selection algorithm, which requires the user to select several initial points to form a closed curve. Their method correctly identified most of the boundary in a 2D ultrasound image of the prostate. An edge-based approach using nonlinear Laplace filtering was also reported by Aarnink et al.6,7 Pathak et al.8 used edge guidance for deformable contour fitting in a 2D ultrasound image and statistically demonstrated a reduction in the variability of prostate segmentation. Richard et al.9 segmented the prostate boundary by using a texture-based segmentation method, based on four texture energy measures associated with each pixel in the image. An automated clustering procedure was used to label each pixel in the image with the label of its most probable class. Although good results were obtained for 2D ultrasound images of the prostate, the algorithm was computationally intensive, requiring about 16 min to segment the boundary of a prostate in 2D on a 90 MHz SUN SPARCstation.

Liu et al.10 presented an algorithm based on their radial bas-relief (RBR) edge detector. First the edges in the image were highlighted by RBR, and then binary processing and area labeling were used to segment the boundary. Their results showed that RBR performed well with a good-quality image, and the result was marginally satisfactory for poor-quality images. The RBR was able to extract a skeletonized image from an ultrasound image automatically; however, there were many spurious branches that created too much ambiguity to define the actual prostate boundary. Kwoh et al.11 extended the RBR technique by fitting a Fourier boundary representation to the detected edges, resulting in a smooth boundary. Careful tuning of algorithm parameters is required when using this approach. Knoll et al.12 proposed a technique for elastic deformation of closed planar curves restricted to particular object shapes using localized multiscale contour parametrization based on the wavelet transform. The algorithm extracted only important edges at multiple resolutions and ignored other information caused by noise or insignificant structures. This step was followed by a template matching procedure to obtain an initial guess of the contour. This wavelet-based method constrained the shape of the contour to predefined models during deformation. They reported that this method provided a stable and accurate fully automatic segmentation of 2D objects in ultrasound and CT images. Shape information has also been used in the form of point distribution models to segment the prostate.13

In our previous work,14 we reported on the development of an algorithm to fit the prostate boundary in a 2D image using the discrete dynamic contour with model-based initialization. A cubic interpolation function was used to estimate the initial 2D contour from four user-selected points, which was then deformed automatically to better fit the prostate boundary. Unlike other 2D segmentation techniques, in our approach the user was able to edit the contour to improve its match to the prostate boundary.

However, diagnosis and therapy planning of prostate cancer typically require the volume of the prostate and its shape in 3D. Constructing a 3D prostate boundary from a sequence of 2D contours when using manual outlining or the 2D techniques is tedious and time-consuming. Recently, Ghanei et al.15 used a deformable model to segment the prostate from 3D ultrasound images. Their approach required the user to initialize the model by outlining the prostate in 40%–70% of the 2D slices of each prostate, using six to eight vertices for each 2D contour; an initial 3D surface was then generated. The running time of the algorithm was about 30 s on a SUN Ultra 20, but it was not clear whether the running time included outlining time. They compared algorithm and manual segmentation results by computing the ratio of the common pixels that were marked as prostate by both methods. The results showed an accuracy of nearly 89% and a three- to sixfold reduction in time compared to totally manual outlining. No editing of the boundary was possible to improve the results.

In this paper, our objectives are to describe an alternative semiautomatic 3D prostate segmentation approach and compare it to manual slice-by-slice outlining. Our 3D algorithm uses model-based initialization and a deformable model for boundary refinement. The algorithm requires the user to select only six points on the prostate boundary to initialize a 3D model, and then a combination of internal and external forces drives the model to the prostate boundary. Our approach also allows the user to edit the boundary in problematic regions and then deform the model again to improve the final result. We have used our algorithm on six clinical cases, and the accuracy and variability have been assessed by comparing our semiautomatic algorithm results, obtained from both local and global analysis, to manually segmented boundaries. In the following sections, the segmentation technique is described in detail and its accuracy and variability are examined.

II. METHODS

A. Segmentation algorithm

The algorithm is based on a deformable model that is represented by a 3D mesh of triangles.16,17 The operation of the algorithm involves three major steps: (1) initialization of the 3D mesh; (2) automatic deformation to localize the prostate boundary; and (3) interactive editing of the deformed mesh.

1. Initialization

The user selects six control points (x_ci, y_ci, z_ci), i = 1, 2, ..., 6, on the "extremities" of the prostate in order to initialize the mesh. Typically, we used the following control points: four points in an approximately central transverse 2D slice (two points at the lateral extremes and two points at the top and bottom of the prostate on the central axis), one point at the prostate apex, and one point at the base. A typical 3D ultrasound image with user-selected control points is shown in Fig. 1(a). An ellipsoid18 is estimated from the control points, and is described by


$$\mathbf{r}(\eta,\omega)=\begin{bmatrix} x \\ y \\ z \end{bmatrix}=\begin{bmatrix} a\cos(\eta)\cos(\omega)+x_0 \\ b\cos(\eta)\sin(\omega)+y_0 \\ c\sin(\eta)+z_0 \end{bmatrix},\qquad -\frac{\pi}{2}\le\eta\le\frac{\pi}{2},\;\; -\pi\le\omega\le\pi, \tag{1}$$

where (x_0, y_0, z_0) is the center of the ellipsoid and a, b, and c are the lengths of the semimajor axes in the x, y, and z directions, respectively. This assumes that the length, width, and height of the prostate are approximately oriented along the x, y, and z axes of the 3D image. The value of x_0 is taken to be the average of the x coordinates of the two control points with extreme x values. Similarly, y_0 and z_0 are taken to be the averages of the two control points with extreme y and z values, respectively. The parameters a, b, and c are computed as half the distance between the two control points with extreme x, y, and z values, respectively. The vector r(η,ω) gives the position of a point on the ellipsoid, where ω is the angle between the x axis and the projection of vector r in the x–y plane, and η is the angle between r and its projection on the x–y plane. The vector originates from the center of the ellipsoid (x_0, y_0, z_0) and sweeps out the surface of the ellipsoid as the two independent parameters η and ω change in the given intervals. The ellipsoid's surface is then represented by a mesh of triangles connecting these points.19

The ellipsoid estimated in this manner usually does not pass exactly through the six control points, nor does it follow the prostate boundary well. In order to obtain a better fit to act as the initial mesh for the deformation step, the ellipsoid is warped using the thin-plate spline transformation, which is described in the Appendix. The transformation maps the six ends of the semimajor axes of the ellipsoid onto the corresponding control points. Figure 1(b) shows the initial mesh corresponding to the 3D prostate image shown in Fig. 1(a).

FIG. 1. Operation of the 3D deformable model segmentation algorithm. (a) 3D ultrasound image with five of the six user-selected control points shown as white spheres. (b) Initial mesh. (c) Final deformed mesh.
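For concreteness, the initialization of Eq. (1) can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions, not the authors' implementation; the function name and the use of half the axis-aligned extent for the semimajor axes are ours.

```python
import numpy as np

def initial_ellipsoid(control_points, n_eta=16, n_omega=32):
    """Sample the ellipsoid of Eq. (1) estimated from six extreme points.

    control_points: (6, 3) array of user-selected (x, y, z) extremes.
    Returns an (n_eta, n_omega, 3) grid of surface points; connecting
    neighboring grid points with triangles yields the initial mesh.
    """
    p = np.asarray(control_points, dtype=float)
    center = (p.max(axis=0) + p.min(axis=0)) / 2.0   # (x0, y0, z0)
    # Semimajor axes a, b, c, approximated by half the extent per axis.
    abc = (p.max(axis=0) - p.min(axis=0)) / 2.0

    eta = np.linspace(-np.pi / 2, np.pi / 2, n_eta)   # elevation angle
    omega = np.linspace(-np.pi, np.pi, n_omega)       # azimuthal angle
    eta, omega = np.meshgrid(eta, omega, indexing="ij")

    # Eq. (1): parametric surface of the ellipsoid.
    x = abc[0] * np.cos(eta) * np.cos(omega) + center[0]
    y = abc[1] * np.cos(eta) * np.sin(omega) + center[1]
    z = abc[2] * np.sin(eta) + center[2]
    return np.stack([x, y, z], axis=-1)
```

The sampled surface would then be warped by the thin-plate spline of the Appendix so that the six axis endpoints land on the six control points.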

2. Deformation

a. Dynamics. Our 3D deformable model is an extension of the 2D model described by Lobregt and Viergever.20 The position of vertex i on the mesh at time t + Δt is calculated from its position at time t using the following finite-difference equations:

$$\mathbf{p}_i(t+\Delta t)=\mathbf{p}_i(t)+\mathbf{v}_i(t)\Delta t, \tag{2a}$$

$$\mathbf{v}_i(t+\Delta t)=\mathbf{v}_i(t)+\mathbf{a}_i(t)\Delta t, \tag{2b}$$

$$\mathbf{a}_i(t+\Delta t)=\frac{1}{m_i}\,\mathbf{f}_i(t+\Delta t), \tag{2c}$$

where p_i = (x_i, y_i, z_i)^T is the vertex's position and v_i and a_i are its velocity and acceleration, respectively; m_i is its mass and Δt is the time step. For simplicity, the mass of each vertex and the time step are taken to be unity. The initial position of each vertex is specified by the user-defined initial mesh, and the initial velocity and acceleration are set to zero. f_i is a weighted combination of internal (f_i^int), external (f_i^ext), and damping (f_i^d) forces applied to each vertex i of the mesh:20

$$\mathbf{f}_i=w_i^{int}\mathbf{f}_i^{int}+w_i^{ext}\mathbf{f}_i^{ext}+\mathbf{f}_i^{d}. \tag{3}$$

The position, velocity, and acceleration of each vertex are then iteratively updated using Eqs. (2) and (3). Iterations continue until the mesh reaches equilibrium, which occurs when the velocity and acceleration of each vertex become approximately zero, i.e., when ||v_i|| ≤ ε_1 and ||a_i|| ≤ ε_2, where ε_1 and ε_2 are two small positive constants close to zero. Both ε_1 and ε_2 were set to 0.01. In cases where equilibrium cannot be reached, deformation continues until a user-specified maximum number of iterations is reached. The maximum number was set to 40 in this study. For all six prostates segmented in this work (see Sec. II B), equilibrium could not be reached after 40 iterations. The number of iterations required to reach equilibrium depends critically on the choice of ε_1 and ε_2: more iterations are required when smaller values are used; however, the resulting mesh configuration satisfies the requirements for equilibrium more closely. We found that increasing the maximum to 200 iterations did lead to convergence within 150 iterations in all cases, at the expense of increased computation time and diminishing gains in accuracy. For instance, the difference in volume enclosed by the surface at 40 iterations and after convergence was only 0.4% on average, and the maximum difference in the position of points on the mesh was only 0.2%. The results after 40 iterations are adequate, and we chose to terminate the deformation procedure after 40 iterations.

b. Forces. External forces drive vertices toward edges in the ultrasound image and are defined in terms of a 3D potential energy function derived from the image:21

$$E(x,y,z)=\|\nabla(G_\sigma * I(x,y,z))\|, \tag{4}$$

where E represents the energy associated with a pixel in the image having coordinates (x, y, z), G_σ is a Gaussian smoothing filter with a characteristic width of σ, and I is the image. The "*" operator represents convolution, "∇" is the gradient operator, and ||·|| is the magnitude operator. Image-based forces can be computed from the potential energy function using

$$\mathbf{f}^{ext}(x,y,z)=\frac{\nabla E(x,y,z)}{\max\|\nabla E(x,y,z)\|}. \tag{5}$$

The energy has a local maximum at an edge, and the force computed from the energy serves to localize this edge. The denominator in Eq. (5) normalizes the external forces to the range [0, 1], which is the same as the range of the internal forces. The external force at each vertex with coordinates (x_i, y_i, z_i) is sampled from the image-based force field using bilinear interpolation:

$$\mathbf{f}_i^{ext}=\mathbf{f}^{ext}(x_i,y_i,z_i). \tag{6}$$

The external forces f_i^ext are vector quantities, having both magnitude and direction. Only the component of the force normal to the surface is applied at the vertex, since the tangential component can cause vertices to bunch up during deformation:

$$\mathbf{f}_i^{ext}=(\mathbf{f}_i^{ext}\cdot\mathbf{r}_i)\,\mathbf{r}_i, \tag{7}$$

where "·" denotes the vector dot product and r_i is the normal vector at vertex i.

The internal force is the resultant surface tension at vertex i. It keeps the model smooth in the presence of noise and is simulated by treating each edge of the model as a spring. Surface tension at vertex i is defined as the vector sum of the normalized edge vectors connected to the vertex:

$$\mathbf{f}_i^{int}=-\frac{1}{M}\sum_{j=1}^{M}\frac{\mathbf{e}_{ij}}{\|\mathbf{e}_{ij}\|}, \tag{8}$$

where e_ij = p_i - p_j is a vector representing the edge connecting vertex i with coordinate p_i to an adjacent vertex j with coordinate p_j, and M is the number of edges connected to vertex i, as shown in Fig. 2. Again, only the normal component is applied:

$$\mathbf{f}_i^{int}=(\mathbf{f}_i^{int}\cdot\mathbf{r}_i)\,\mathbf{r}_i. \tag{9}$$

FIG. 2. Schematic diagram showing the mesh geometry used to calculate the surface tension at vertex i with coordinates p_i on the mesh. Bold lines indicate all the edges attached to vertex i. This geometry is used to calculate the internal force at vertex i (f_i^int), which is given by the normal component of the vector sum of the normalized edge vectors connected to the vertex.

To prevent a vertex from oscillating during deformation, a damping force proportional to the velocity v_i is applied at vertex i:

$$\mathbf{f}_i^{d}=w_i^{d}\mathbf{v}_i, \tag{10}$$

where w_i^d is a negative weighting factor. For simplicity, identical weighting factors were assigned to every vertex for all segmentations. The values of w_i^img, w_i^int, and w_i^d were set to 1.0, 0.2, and -0.5, respectively. The values were selected empirically; however, as a general rule, a value of w_i^img that is large compared to w_i^int favors deformation toward edges rather than smoothing due to internal forces. For noisier images, w_i^int should be increased relative to w_i^img. The choice of w_i^d appears to be less critical, and a small negative value was found to provide good stability. Figure 1(c) shows the resulting mesh after the initial mesh has been deformed to fit the 3D image of the prostate.
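The force computation and time stepping of Eqs. (2)–(10) are compact enough to sketch directly. The following NumPy/SciPy fragment is an illustrative reimplementation under our own assumptions; the function names, the neighbor bookkeeping, and the use of scipy.ndimage for the Gaussian gradient are ours, not the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def image_force_field(image, sigma=1.0):
    """Potential energy E of Eq. (4) and its normalized gradient, Eq. (5)."""
    grad = np.gradient(gaussian_filter(image.astype(float), sigma))
    energy = np.sqrt(sum(g ** 2 for g in grad))              # Eq. (4)
    grad_e = np.gradient(energy)
    scale = max(np.sqrt(sum(g ** 2 for g in grad_e)).max(), 1e-12)
    return np.stack(grad_e) / scale                          # Eq. (5), shape (3, X, Y, Z)

def deform_step(verts, vel, normals, neighbors, field,
                w_img=1.0, w_int=0.2, w_damp=-0.5, dt=1.0):
    """One iteration of Eqs. (2), (3), (7), (9), and (10) with unit mass.

    verts, vel, normals: (N, 3) positions, velocities, unit normals.
    neighbors: list of index arrays, the vertices adjacent to each vertex.
    field: image force field returned by image_force_field().
    """
    # External image force sampled at the vertices, Eq. (6).
    f_ext = np.stack([map_coordinates(field[c], verts.T, order=1)
                      for c in range(3)], axis=1)
    f_ext = (f_ext * normals).sum(axis=1, keepdims=True) * normals   # Eq. (7)

    # Internal tension: mean of normalized edge vectors, Eqs. (8) and (9).
    f_int = np.zeros_like(verts)
    for i, nbrs in enumerate(neighbors):
        e = verts[i] - verts[nbrs]                                   # edges e_ij
        f_int[i] = -(e / np.linalg.norm(e, axis=1, keepdims=True)).mean(axis=0)
    f_int = (f_int * normals).sum(axis=1, keepdims=True) * normals   # Eq. (9)

    f = w_img * f_ext + w_int * f_int + w_damp * vel                 # Eqs. (3), (10)
    new_verts = verts + vel * dt                                     # Eq. (2a)
    new_vel = vel + f * dt                                           # Eqs. (2b), (2c), m_i = 1
    return new_verts, new_vel
```

Iterating deform_step until ||v_i|| and ||a_i|| fall below 0.01 at every vertex, or 40 iterations have elapsed, reproduces the stopping rule described above.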

3. Editing

In some cases, the initialization procedure may place some mesh vertices far from the actual prostate boundary. These vertices may not be driven toward the actual boundary by the deformation process, because image-based forces act only over short distances from edges. Thus, we have included an editing procedure that allows the user to edit the mesh by dragging a point on the mesh to its desired location; the mesh can then be re-deformed. In order to keep the mesh smooth when a point is dragged, points within a sphere of user-defined radius centered at the displaced vertex are automatically deformed using the thin-plate spline transformation. The extent of smoothing is determined by the radius of the sphere, with a larger radius affecting more neighboring vertices. If necessary, the displaced points on the edited mesh can be clamped, i.e., prevented from moving during deformation, and the entire mesh can be re-deformed to obtain a better fit to the actual prostate boundary.
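One possible realization of this editing step is sketched below, again under our own assumptions rather than as the authors' code. SciPy's RBFInterpolator supplies the spline warp; its "linear" kernel, -r, spans the same space as the 3D thin-plate kernel U(r) = ||r|| used in the Appendix. The ring of fixed anchor points just outside the sphere is our own device for localizing the warp.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def edit_mesh(verts, dragged, target, radius):
    """Drag one vertex to `target` and smoothly carry along all vertices
    inside a sphere of the given radius (a hypothetical sketch).

    verts: (N, 3) mesh vertices; dragged: index of the edited vertex.
    """
    dist = np.linalg.norm(verts - verts[dragged], axis=1)
    inside = dist < radius                           # vertices to warp
    ring = (dist >= radius) & (dist < 1.5 * radius)  # fixed anchors just outside

    # Landmarks: the dragged vertex moves, the surrounding ring stays put.
    src = np.vstack([verts[dragged], verts[ring]])
    dst = np.vstack([np.asarray(target, float), verts[ring]])

    # Thin-plate-spline-like displacement field (needs a few ring points).
    warp = RBFInterpolator(src, dst - src, kernel="linear", degree=1)
    out = verts.copy()
    out[inside] += warp(verts[inside])
    return out
```

During re-deformation, the edited vertices could be clamped by zeroing their velocity and net force at each iteration of a step like deform_step above.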

TABLE I. The number of 2D slices and the number of points in the manually outlined prostate boundaries.

Prostate    Number of slices    Number of points
1                 33                  813
2                 18                  399
3                 22                  507
4                 32                  714
5                 20                  388
6                 18                  377

FIG. 3. A cross section of the 3D prostate image and segmented contour to demonstrate the editing process. (a) Initial outline. (b) After the first deformation. (c) Two vertices (indicated by arrows) are edited to the actual boundary of the prostate. (d) After the second deformation.

Figure 3 shows an example where the segmented surface had to be edited. For clarity, only 2D cross sections through the image and the mesh are shown, although both are three dimensional. Figure 3(a) shows that most of the initial model closely follows the actual prostate boundary except in the areas indicated by the arrows. Figure 3(b) shows that after deformation, the model follows the actual prostate boundary well except in the two indicated areas, where it was attracted to other edges. Figure 3(c) shows the mesh after the user edited the two points indicated by the arrows. After re-deformation, the contour in Fig. 3(d) fits the prostate boundary well.

B. Prostate images

Six prostate images of patients who were candidates for brachytherapy were acquired using a 3D transrectal ultrasound (TRUS)22 system developed in our laboratory. The system consists of three elements: a conventional ultrasound machine with a biplane transrectal ultrasound transducer (Aloka 2000); a microcomputer with a frame grabber; and a motor-driven assembly to hold and rotate the transducer. The transducer probe was mounted in a probe holder assembly and rotated around its long axis by a computer-controlled motor. One hundred 2D B-mode images were collected as the transducer was rotated at constant speed through 180° in 8 s. A 3D image was then reconstructed from this series of 2D images and was ready for viewing about 1 s after the acquisition was completed.23–25

C. Evaluation of segmented boundaries

1. Manual segmentation

In order to evaluate the performance of the algorithm, the surfaces segmented using the algorithm were compared to surfaces manually outlined by a technician. The technician was trained by a radiologist and has several years of experience in analyzing ultrasound images of the prostate; the technician is representative of users in a radiological oncology department. Manual segmentation was performed using a multiplane reformatting image display tool.23 The 3D prostate images were resliced into a set of parallel transverse 2D slices along the length of the prostate, at 2 mm intervals at mid-gland and at 1 mm intervals near the ends of the prostate, where the prostate shape changes more rapidly from one slice to the next. The number of 2D slices for the prostates used in our segmentation studies ranged from 18 to 33, as tabulated in Table I. The table also shows the total number of points in each manual prostate segmentation; the total number of points in each algorithm segmented mesh was 994. The prostate boundary was outlined in each slice, resulting in a stack of 2D contours; the contours were then tessellated into a 3D meshed surface using the NUAGES software package.26 Manual outlining of the prostates from the 2D slices required about 1–1.5 h for each prostate. Figure 4(a) shows an example of the stack of slices, and Fig. 4(b) shows the mesh produced by manual segmentation for the same prostate as shown in Fig. 1.

FIG. 4. An example of a manually segmented prostate superimposed on a sagittal section of the prostate. (a) The stack of 2D contours resulting from manually outlining the prostate in sequential 2D slices. (b) The 3D mesh generated from the stack of 2D contours.

2. Accuracy

a. Local measures. The signed distance d_j between each point j (j = 1, 2, ..., N) on the algorithm segmented surface and the corresponding point on the manually outlined surface was used as a local measure of the algorithm's performance. As shown in Fig. 5, to compute d_j, the centroid of the algorithm segmented surface was first calculated. Radial lines were then drawn from the centroid, projecting outward to each point on the algorithm segmented surface, and d_j was computed as the difference from a point on the algorithm segmented mesh to the corresponding point on the manual outline. The corresponding point was defined as the intersection of the radial line with the manual outline. d_j is a signed value: it is positive when the point on the algorithm mesh is outside the manual outline and negative otherwise. The absolute difference |d_j| of each point was color mapped on the surface of the algorithm segmented mesh to provide a visual representation of where the larger differences between algorithm and manual segmentations occur.

b. Global measures. The global performance of the 3D segmentation algorithm was evaluated by computing the mean difference (MD), giving a measure of the segmentation bias; the mean absolute difference (MAD), giving a measure of the segmentation error; and the maximum difference (MAXD),14 giving a measure of the maximum error between algorithm and manual segmentations for each prostate k (k = 1, 2, ..., 6):

$$\mathrm{MD}_k=\frac{1}{N}\sum_{j=1}^{N} d_j, \tag{11a}$$

$$\mathrm{MAD}_k=\frac{1}{N}\sum_{j=1}^{N} |d_j|, \tag{11b}$$

$$\mathrm{MAXD}_k=\max_j |d_j|. \tag{11c}$$

The global performance of the algorithm was also assessed by calculating the percent difference, ΔV_k%, in prostate volume computed from the manually segmented surface (V_m,k) and that computed from the algorithm segmented surface (V_a,k) for prostate image k (k = 1, 2, ..., 6):

$$\Delta V_k\%=\frac{V_{m,k}-V_{a,k}}{V_{m,k}}\times 100. \tag{12}$$

Based on the measures for each prostate, the average MD, MAD, MAXD, and ΔV_k% of the six prostates were calculated. These values are useful for algorithm development and optimization.27
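Given the signed distances d_j and the two enclosed volumes, the global measures are one-liners. The sketch below is a direct transcription of Eqs. (11) and (12); the function name and array layout are our own assumptions.

```python
import numpy as np

def global_measures(d, v_manual, v_algo):
    """Eqs. (11a)-(11c) and (12). d: (N,) signed distances d_j in mm,
    computed by the centroid ray-intersection matching described above."""
    md = d.mean()                                   # MD_k, Eq. (11a)
    mad = np.abs(d).mean()                          # MAD_k, Eq. (11b)
    maxd = np.abs(d).max()                          # MAXD_k, Eq. (11c)
    dv_pct = (v_manual - v_algo) / v_manual * 100   # delta V_k %, Eq. (12)
    return md, mad, maxd, dv_pct
```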

3. Variability

FIG. 5. Comparing two boundaries by determining the difference d_j between corresponding points. First the centroid of the algorithm boundary is determined, and then a ray is projected from the centroid through point j on the algorithm boundary. The corresponding point on the manual boundary is defined as the point where the ray intersects the manual boundary. The difference d_j is the signed distance from the point on the algorithm boundary to the corresponding point on the manual boundary.

Since the algorithm required the user to initialize the process by manually selecting six points, variable final boundaries may occur. Thus, we also studied the variability in the algorithm segmentation and compared it to manual segmentation variability. One of the six prostate images was segmented ten times by the same technician using both the manual and the semiautomatic segmentation techniques. The variability in the manual and the semiautomatic segmentations was calculated using the following three steps: (1) determining an average segmentation mesh for each set of ten meshes; (2) calculating the difference of each individual mesh to the average mesh; and (3) computing the standard deviation of the mesh distribution locally about the surface of the average boundary. After obtaining the average boundaries and local variability of both manual and algorithm boundaries, a t-test was used to determine whether the difference of corresponding points between manual and algorithm segmentations was significant.

FIG. 6. Cross sections through the image in Fig. 1 showing the algorithm segmentation (solid line) and manual segmentation (dotted line). (a) 3D ultrasound image of the prostate with transverse, coronal, and sagittal cutting planes, indicated by b, c, and d, respectively, to show 2D cross-sectional images. (b) Transverse cross section of the image and the boundaries, corresponding to the plane shown in (a). (c) Coronal cross section of the image and the boundaries. (d) Sagittal cross section of the image and the boundaries.

a. Generating the average mesh. In order to generate an average mesh from a set of M meshes χ_1, χ_2, ..., χ_M (M = 10 in this study), the first mesh χ_1 was taken as an initial mesh. For each vertex p_1i (i = 1, ..., N, where N is the total number of vertices in the mesh) of χ_1, the corresponding points p_2i, p_3i, ..., p_Mi in the meshes χ_2, χ_3, ..., χ_M were found using the method explained in Sec. II C 2. Note that the double-subscript notation (e.g., p_ji) in this section and the next two refers to vertex i of mesh j. A vertex p̄_i on the average mesh χ̄ was determined by computing the average of all corresponding points p_1i, p_2i, p_3i, ..., p_Mi:

$$\bar{\mathbf{p}}_i=\frac{1}{M}\sum_{j=1}^{M}\mathbf{p}_{ji}. \tag{13}$$

b. Difference to the average mesh. The difference d_ji of the corresponding point in each of the meshes χ_1, χ_2, ..., χ_M to p̄_i in the average mesh χ̄ was calculated, and the average difference at p̄_i was then found using

$$\bar{d}_i=\frac{1}{M}\sum_{j=1}^{M}d_{ji}. \tag{14}$$

c. Standard deviation. For vertex p̄_i on the average mesh χ̄, the standard deviation of the distribution of differences was calculated using

$$\sigma_i=\sqrt{\frac{1}{M-1}\sum_{j=1}^{M}(d_{ji}-\bar{d}_i)^2}. \tag{15}$$

The standard deviation σ_i at each vertex of the average mesh was mapped onto the average mesh surface to produce a visual display of local variability.
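The averaging and per-vertex statistics of Eqs. (13)–(15) reduce to a few array operations once the correspondences are stored as an (M, N, 3) array. The sketch below is illustrative; projecting the differences onto the outward radial direction is our own assumption about how the signed d_ji are obtained.

```python
import numpy as np

def local_variability(corr):
    """Eqs. (13)-(15). corr: (M, N, 3) corresponding points, row j holding
    the N points of mesh j matched to the vertices of the first mesh."""
    avg_mesh = corr.mean(axis=0)                          # Eq. (13)

    # Signed difference of each mesh's point to the average vertex,
    # measured along the outward radial direction (our assumption).
    centroid = avg_mesh.mean(axis=0)
    radial = avg_mesh - centroid
    radial /= np.linalg.norm(radial, axis=1, keepdims=True)
    d = ((corr - avg_mesh) * radial).sum(axis=2)          # d_ji, shape (M, N)

    d_bar = d.mean(axis=0)                                # Eq. (14)
    sigma = d.std(axis=0, ddof=1)                         # Eq. (15), 1/(M-1)
    return avg_mesh, d_bar, sigma
```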

III. RESULTS

A. Accuracy

The quality of the fit of the algorithm and manually segmented meshes to the actual boundary of the prostate can be seen in Fig. 6. Figure 6(a) shows the algorithm mesh in the 3D ultrasound image, with (b) transverse, (c) coronal, and (d) sagittal cutting planes to show 2D cross-sectional images. Figure 6(b) shows a 2D transverse mid-gland slice of the 3D image with the corresponding cross sections through the algorithm and manually segmented meshes superimposed. Figure 6(c) shows a coronal section and Fig. 6(d) a sagittal section with the corresponding cross sections of the meshes. The manual and algorithm contours in Fig. 6 are similar to each other and follow the prostate boundary well in regions where the contrast is high and the prostate is clearly separated from other tissues. In regions of low signal, the actual boundary is difficult to discern, and the manual and algorithm segmented outlines differ from each other, such as in regions near the bladder (indicated by the white arrow) and the seminal vesicle (indicated by the black arrow) in Fig. 6(d). The complete semiautomatic segmentation procedure took less than 1 min on a Pentium III 400 MHz personal computer, with about 20 s taken by the deformation algorithm.

FIG. 7. (a) and (b) The prostate boundary with the absolute difference between manual and algorithm segmented surfaces, |d_j|, mapped on the surface. Black indicates zero error and white the maximum error of 6.59 mm, as shown on the scale. (a) A view of the prostate perpendicular to the transverse plane in the direction from the base to the apex, and (b) a view perpendicular to the sagittal plane in the direction from the patient right to left. (c) and (d) The local standard deviation mapped on the average manually segmented prostate boundary. (e) and (f) The local standard deviation mapped on the average algorithm segmented prostate boundary. Black regions indicate zero standard deviation and white regions the maximum standard deviation, which was 3.0 mm for manual and 3.4 mm for algorithm segmentation. (c) and (e) The same viewing direction of the prostate as in (a); (d) and (f) the same viewing direction as in (b).

Figures 7(a) and 7(b) show two views of the algorithm segmented surface for the prostate shown in Fig. 1, with the absolute difference values |d_j| between the algorithm and manually segmented meshes mapped on the surface. The surface is shaded so that black regions correspond to |d_j| = 0 and white represents the maximum difference, which in this case was 6.59 mm. Figure 7(a) shows a view of the prostate perpendicular to the transverse plane in the direction from the base of the prostate to the apex, and Fig. 7(b) shows a view perpendicular to the sagittal plane in the direction from the patient right to left.

TABLE II. Comparison between manual and algorithm segmentation for the six prostates studied. ΔV_k% is the volume difference between manual and algorithm segmentations, MD_k is the mean difference, MAD_k is the mean absolute difference, and MAXD_k is the maximum difference of manual and algorithm boundaries. The average and standard deviation of the global measures are also shown. SEM is the standard error of the mean.

                         Volume (cm³)
Prostate k          Manual V_m,k   Algorithm V_a,k   ΔV_k%   MD_k (mm)   MAD_k (mm)   MAXD_k (mm)
1                      36.93           33.63          8.95     -0.45        1.16          8.45
2                      27.36           26.33          3.75     -0.10        1.03          5.69
3                      22.87           20.80          9.05     -0.32        1.39          7.04
4                      24.22           22.42          7.41      0.13        1.03          6.59
5                      23.54           20.85         11.42     -0.48        1.33          7.58
6                      21.58           21.06          2.39     -0.12        1.17          7.32
Average                26.08           24.18          7.16     -0.20        1.19          7.01
Standard deviation      5.65            5.08          3.45      0.28        0.14          1.04
SEM                     2.31            2.07          1.41      0.11        0.06          0.42

Table II lists the volumes calculated from the manually and algorithm segmented boundaries, and the ΔV_k% determined by comparing the volumes of each prostate. ΔV_k% is positive for all prostates, indicating that the manually segmented boundaries are larger than the algorithm segmentations. Table II also shows the global accuracy measures MD, MAD, and MAXD of the prostates. The average and standard deviation values of the global accuracy measures MD, MAD, MAXD, and ΔV% were computed by averaging the individual values for all six prostates. The average MD was close to zero (-0.20 ± 0.28 mm), but does vary over the surface of the prostate, becoming negative in some regions and positive in others. The average MAD was 1.19 ± 0.14 mm, the average MAXD was 7.01 ± 1.04 mm, and the average value of ΔV% was 7.16% ± 3.45%. Note that the standard error of the mean (SEM) for the volume measurements is less than that for currently used approaches to prostate volume determination, suggesting that the analysis of six prostates is sufficient. For instance, Tong et al.28 report an intra-operator SEM of 3.6 cm³ using manual planimetry of 3D TRUS images, and 12.2 cm³ for the height-width-length method.

B. Variability

Figures 7(c) and 7(d) show the local variability in the manually segmented boundaries, while Figs. 7(e) and 7(f) show the local variability in the algorithm segmented boundaries. Figures 7(c) and 7(e) show the same viewing direction of the prostate as Fig. 7(a), and Figs. 7(d) and 7(f) the same viewing direction as Fig. 7(b). In these figures, the standard deviation of the local boundary distribution is mapped on the average boundary, with black regions indicating zero standard deviation and white regions representing the maximum standard deviation, which was 3.0 mm for manual segmentation and 3.39 mm for algorithm segmentation. The average standard deviation over the whole prostate surface was 0.98 ± 0.01 mm for manual segmentation, whereas it was 0.63 ± 0.02 mm for the algorithm. Figure 8 shows the result of the t-test comparing the local standard deviation in manual segmentations to that in algorithm segmentations. The gray levels on the average algorithm segmented boundary indicate the significance of the differences between manual and semiautomatic segmentations: black regions correspond to differences that were found to be significant (P ≤ 0.01), while white regions correspond to regions where the differences were not significant. These results showed that the differences of 51.5% of the points comprising the algorithm average boundary to the manual average boundary were not significant.
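The per-vertex test can be expressed compactly. The following sketch assumes a two-sample t-test at each vertex on the signed distances of the ten manual and ten algorithm meshes, which is one plausible reading of the comparison described in Sec. II C 3 rather than the authors' exact formulation:

```python
import numpy as np
from scipy.stats import ttest_ind

def significant_vertices(manual_d, algo_d, alpha=0.01):
    """manual_d, algo_d: (M, N) signed distances of the M repeated manual
    and algorithm meshes at N corresponding vertices. Returns a boolean
    mask of vertices whose manual/algorithm difference is significant."""
    _, p = ttest_ind(manual_d, algo_d, axis=0)
    return p <= alpha

# Fraction of vertices with no significant difference (cf. the reported 51.5%):
# np.mean(~significant_vertices(manual_d, algo_d))
```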

FIG. 8. Results of the t-test between manual and algorithm segmented boundaries, mapped on the algorithm average mesh. Regions where the difference is insignificant are shown in white, and significant (P ≤ 0.01) regions are shown in black. (a) The same viewing direction of the prostate as in Fig. 7(a). (b) The same viewing direction as in Fig. 7(b).

C. Editing

The number of editing operations was recorded for each segmented image. Prostate images 1 and 6 each required two editing operations; the remaining prostate images did not require editing.

D. Comparison of 2D and 3D segmentation algorithms

The performance of the 3D segmentation algorithm was compared to that of our previously published 2D segmentation algorithm.14 Two-dimensional images were extracted from prostate image 6 and segmented using the 2D algorithm, and the 2D versions of the metrics MD, MAD, and MAXD, as described previously,14 were computed with respect to the corresponding manual outlines. Two-dimensional contours were also extracted from the 3D meshes at the corresponding locations and compared to the corresponding manual outlines. Table III lists these values for the average of two slices obtained at mid-gland and for the average of two slices obtained at the lateral margins of the prostate, where partial volume effects are significant. The results obtained at mid-gland using either algorithm are comparable to each other; however, there is a significant difference at the lateral margins. In this area, the boundaries segmented using the 3D algorithm are closer to those segmented manually.

TABLE III. Comparison of boundaries segmented using the previous 2D segmentation algorithm (Ref. 14) and the current 3D approach for prostate image 6, using the 2D versions (Ref. 14) of the metrics MD, MAD, and MAXD. The results represent the average values for two 2D image slices at mid-gland and for the average of two slices at the lateral margins.

                      2D algorithm                       3D algorithm
Slice         MD (mm)   MAD (mm)   MAXD (mm)    MD (mm)   MAD (mm)   MAXD (mm)
Mid-gland       0.40      2.09       5.40         0.23      1.94       5.20
Margins        -2.33      2.35       5.00        -1.20      1.22       3.30

IV. DISCUSSION

A. 3D semiautomatic segmentation algorithm

The segmentation algorithm described in this paper is more time-efficient than manual segmentation because it uses global prostate shape information, and therefore requires little user input (six points) to rapidly initialize the model, followed by local refinement using an efficient deformable model. On average, the technician involved in this study spent 5–6 s to view the image and select the initial points on the prostate boundary. The deformation procedure took about 20 s to localize the boundary. For results that needed to be modified by the user, editing and re-deformation could on average be completed in about 30 s. The software has not been optimized for efficiency, and the whole algorithm currently requires less than 1 min on a 400 MHz PC to segment a prostate boundary from a 3D ultrasound image, whereas manual segmentation usually required 1–1.5 h.

The segmentation algorithm makes use of a 3D approach, extracting the image information from the 3D image directly. Although manual segmentation and 2D automatic or semiautomatic segmentation algorithms can provide a 3D representation of the prostate, the selection of slices from the 3D image is arbitrary and controlled by the operator, which leads to increased variability. Furthermore, the 2D segmentation approaches neglect correlations in anatomy between neighboring slices. Determining the prostate boundary from only one 2D image is difficult when shadows are present and at the apex and base of the prostate. The operator has to move back and forth frequently between slices to form an impression of the 3D anatomy of the prostate, a time-consuming procedure that may lead to error.

The initialization by the six points selected from the extremes of the prostate provides a good estimate of the prostate shape under most circumstances, which guides the deformation to extract the actual prostate boundary. In principle, as few as four points should be required to estimate all of the parameters of the ellipsoid described by Eq. (1): one at the center of the ellipsoid and one at the end of each axis. These points would directly give the parameters (x_0, y_0, z_0), a, b, and c required in Eq. (1). However, we found that selecting the center proved to be more difficult than selecting the extremities; the six extreme points were the easiest to locate. We found additional points (more than six) to be redundant. In cases where the prostate shape is very complex and difficult to represent with approximately six extremes, a better mathematical representation of the prostate shape is needed.

B. Accuracy

The results of the algorithm were compared to manual segmentation using surface and volume difference measures. It is difficult to conclude whether the difference between the two methods is caused by inaccuracies in the semiautomatic segmentation or in the manual segmentation. Ideally, the performance of the algorithm should be compared to a gold standard, which cannot be obtained when imaging prostates in patients. Unlike some other applications, which can be tested using phantoms, existing phantoms of the prostate do not mimic the characteristics of the prostate and produce unrealistically good ultrasound images compared to actual prostates.

Both the MD and ΔV% measures showed that the algorithm segmented boundary is smaller than the manually segmented boundary. The algorithm assumes that edges in the image correspond to inflection points in the gray-level profile through the image. We observed that when the boundary is visible, most operators do not select the position of the boundary as corresponding to the inflection point, but tend to select points on the brighter side of the inflection point, resulting in a larger manually segmented volume. This is apparent in Figs. 6(b) and 6(c), where the manually segmented boundary (dotted) is generally larger than the algorithm segmented boundary. However, this was not observed in regions where the boundary is either difficult to discern or absent in the images [e.g., the areas indicated by the arrows in Fig. 6(d)].

Comparing the average values of the measures MD, MAD, and MAXD to those previously published for our 2D algorithm, it would appear that the performance of the current 3D algorithm is worse than that of the 2D algorithm. However, the values previously reported for our 2D algorithm were for 2D image slices acquired at mid-gland, where the prostate shape is approximately elliptical and our 2D algorithm is easy to initialize, resulting in very accurate segmentation. Results for the 3D algorithm include the ends of the prostate, where we expect poorer performance due to partial volume effects; this should be kept in mind if a direct comparison between these values is made.

C. Variability

As stated earlier, the segmentation algorithm has a lower variability than manual segmentation. Although initializing and editing the model may introduce variability because of user intervention, the deformation automatically tracks the edges without user intervention. It can be seen from Figs. 7(e) and 7(f) that most regions of the algorithm segmented boundary are nearly pure red, indicating low variability, close to zero. In those regions, the variability is not sensitive to the initialization, because the forces of the deformable model can drive the vertices to the desired edges, which are likely the actual prostate boundaries. In the regions with high variability, i.e., those mapped with bluish color, different initializations will lead to different segmentations, because the low contrast at the edges cannot always attract vertices to the same location. The high variability reflects high noise or missing edges, and segmentation in those regions is sensitive to the initialization. The t-test showed that the differences of about 51.5% of the mesh vertices in the average algorithm boundary to the average manual segmentation were insignificant.

Editing is necessary even though it may introduce higher variability, because in regions where the actual prostate boundary contrast is either very weak or missing, user intervention is needed to guide the deformable model. In addition, when the prostate shape is very irregular and the initialization cannot represent the prostate well, users can use the editing tool to change the model locally before the deformation.

Variations in the parameters σ, w_i^img, w_i^int, and w_i^d affect the segmented boundaries, and a different choice can potentially be an important source of variability in boundary shape. A quantitative study of how the choice of parameters affects segmentation results would involve varying all parameters over the entire parameter space as well as varying the initialization points, and is beyond the scope of the current work. We are currently developing automatic procedures for testing the sensitivity of semiautomatic algorithms to variations in parameters.

V. CONCLUSION

A semiautomatic algorithm was developed in this study. The algorithm is based on model-based initialization and local mesh refinement using a deformable model. The initialization procedure requires the user to select only six points on the prostate boundary from the 3D image. An editing tool is provided to allow the user to modify the mesh locally after deformation. The algorithm segmentation requires less than 1 min on a Pentium III 400 MHz personal computer. The accuracy and variability of the algorithm were assessed by comparison to manual results. Generally, the algorithm segmented mesh and the manually segmented mesh are similar, except in regions where the actual prostate boundary is missing or high noise is present, and the algorithm produces less variable results than the manual segmentation.

ACKNOWLEDGMENTS

The authors would like to thank Yunqiu Wang and Congjin Chen for technical assistance. Funding for this work was provided by the Canadian Institutes of Health Research (to A.F.), the Ontario Research and Development Challenge Fund (to A.F.), and the Natural Sciences and Engineering Research Council of Canada (to H.M.L.). The third author (A.F.) holds a Tier 1 Canada Research Chair and acknowledges the support of the Canada Research Chair Program.

APPENDIX

The thin-plate spline is a mathematical transformation based on the physical bending behavior of a thin metal sheet. The transform29 provides a smooth mapping function between two sets of homologous (source and target) points in 3D:

$$(x,y,z)\rightarrow(f_x(x,y,z),\,f_y(x,y,z),\,f_z(x,y,z)), \tag{A1}$$

where f_x(x,y,z), f_y(x,y,z), and f_z(x,y,z) are the components of the vector-valued thin-plate spline transformation function f(x,y,z) in the x, y, and z directions, respectively:

$$f(x,y,z)=[a_0+a_1x+a_2y+a_3z]+\sum_{i=1}^{n}w_i\,U(\|(x,y,z)-\mathbf{P}_i\|). \tag{A2}$$

In Eq. (A2), the P_i represent the source points. The first part of f(x,y,z), in square brackets, is an affine transformation representing the behavior of f(x,y,z) at infinity, and the second part is the weighted sum of the 3D root function U(r) = ||r||. The root function is a fundamental solution of the biharmonic equation Δ²U = 0, where

$$\Delta U=\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}\right)U.$$

U satisfies the condition of minimum bending energy. To find the parameters in Eq. (A2), the following matrix equation must be solved:

$$(\mathbf{W}\,|\,\mathbf{a})^T=\mathbf{L}^{-1}\mathbf{Y}, \tag{A3}$$

where W = (w_1, w_2, ..., w_n) are the parameters of the second part of f(x,y,z), a = (a_0, a_1, a_2, a_3) is the vector of affine transformation parameters, and

$$\mathbf{L}=\begin{bmatrix}\mathbf{K} & \mathbf{P}\\ \mathbf{P}^T & \mathbf{0}\end{bmatrix}$$

with

$$\mathbf{K}=\begin{bmatrix}0 & U(r_{12}) & \cdots & U(r_{1n})\\ U(r_{21}) & 0 & \cdots & U(r_{2n})\\ \vdots & \vdots & & \vdots\\ U(r_{n1}) & U(r_{n2}) & \cdots & 0\end{bmatrix},\qquad \mathbf{P}=\begin{bmatrix}1 & \mathbf{P}_1\\ 1 & \mathbf{P}_2\\ \vdots & \vdots\\ 1 & \mathbf{P}_n\end{bmatrix},\qquad r_{ij}=\|\mathbf{P}_i-\mathbf{P}_j\|,$$

and Y = (V | 0 0 0 0)^T, where V = (v_1, v_2, ..., v_n) is the vector of the coordinates of the target points.
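As a worked illustration of Eqs. (A2) and (A3), the linear system can be assembled and solved directly. This sketch uses U(r) = r as in the Appendix and pads Y with four zero rows for the affine part; the names and layout are our own, not the authors' code.

```python
import numpy as np

def solve_tps(src, dst):
    """Solve Eq. (A3) for the 3D thin-plate spline mapping src -> dst.
    src, dst: (n, 3) homologous source and target points."""
    n = len(src)
    K = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)  # U(r_ij) = r_ij
    P = np.hstack([np.ones((n, 1)), src])                          # rows [1, P_i]
    L = np.zeros((n + 4, n + 4))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    Y = np.zeros((n + 4, 3))
    Y[:n] = dst                                                    # Y = (V | 0 0 0 0)^T
    Wa = np.linalg.solve(L, Y)
    return Wa[:n], Wa[n:]                                          # weights W, affine a

def apply_tps(W, a, src, pts):
    """Evaluate f(x, y, z) of Eq. (A2) at the query points pts."""
    U = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
    return np.hstack([np.ones((len(pts), 1)), pts]) @ a + U @ W
```

In the initialization step of Sec. II A 1, the six semimajor-axis endpoints of the estimated ellipsoid would serve as the source points and the six user-selected control points as the targets, with apply_tps warping every mesh vertex.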

a) Corresponding address: Department of Medical Biophysics, Medical Sciences Building, University of Western Ontario, London, Ontario N6H 5C1, Canada. Tel: (519) 661-2111 Ext. 86551; Fax: (519) 661-2123; electronic mail: [email protected]

1. L. Garfinkel and M. Mushinski, "Cancer incidence, mortality, and survival trends in four leading sites," Stat. Bull. Metrop. Insur. Co. 75, 19–27 (1994).
2. E. Silverberg, C. C. Boring, and T. S. Squires, "Cancer statistics," CA Cancer J. Clin. 40, 9–26 (1990).
3. F. Lee, S. T. Torp-Pederson, and R. D. Mcleary, "Diagnosis of prostate cancer by transrectal ultrasound imaging," Urol. Clin. North Am. 16, 663–673 (1989).
4. J. S. Prater and W. D. Richard, "Segmenting ultrasound images of the prostate using neural networks," Ultrason. Imaging 14, 159–185 (1992).
5. W. D. Richard, C. K. Grimmell, K. Bedigian, and K. J. Frank, "A method for 3D prostate imaging using transrectal ultrasound," Can. J. Otolaryngol. 17, 73–79 (1993).
6. R. G. Aarnink, R. J. B. Giesen, A. L. Huynen, J. J. M. C. H. de la Rosette, F. M. J. Debruyne, and H. Wijkstra, "A practical clinical method for contour determination in ultrasonographic prostate images," Ultrasound Med. Biol. 20, 705–717 (1994).
7. R. G. Aarnink, A. L. Huynen, R. J. B. Giesen, J. J. M. C. H. de la Rosette, F. M. J. Debruyne, and H. Wijkstra, "Automated prostate volume determination with ultrasonographic imaging," J. Urol. (Baltimore) 153, 1549–1554 (1995).
8. S. D. Pathak, V. Chalana, D. R. Haynor, and Y. Kim, "Edge guided delineation of the prostate in transrectal ultrasound images," Proceedings of the First Joint Meeting of the Biomedical Engineering Society and the IEEE Engineering in Medicine and Biology Society, Atlanta, GA, October 1999, p. 1056.
9. W. D. Richard and C. G. Keen, "Automated texture based segmentation of ultrasound images of the prostate," Comput. Med. Imaging Graph. 20, 131–140 (1996).
10. Y. J. Liu, W. S. Ng, M. Y. Teo, and H. C. Lim, "Computerized prostate boundary estimation of ultrasound images using radial bas-relief method," Med. Biol. Eng. Comput. 35, 445–454 (1997).
11. C. K. Kwoh, M. Y. Teo, W. S. Ng, S. N. Tan, and L. M. Jones, "Outlining the prostate boundary using the harmonics method," Med. Biol. Eng. Comput. 36, 768–771 (1998).
12. C. Knoll, M. Alcaniz, V. Grau, C. Monserrat, and M. C. Juan, "Outlining of the prostate using snakes with shape restrictions based on the wavelet transform," Pattern Recogn. 32, 1767–1781 (1999).
13. C. F. Arambula and B. L. Davies, "Automated prostate recognition: A key process for clinically effective robotic prostatectomy," Med. Biol. Eng. Comput. 37, 236–243 (1999).
14. H. M. Ladak, F. Mao, Y. Wang, D. B. Downey, D. A. Steinman, and A. Fenster, "Prostate boundary segmentation from 2D ultrasound images," Med. Phys. 27, 1777–1788 (2000).
15. A. Ghanei, H. S. Zadeh, A. Ratkewicz, and F. F. Yin, "A three-dimensional deformable model for segmentation of human prostate from ultrasound images," Med. Phys. 28, 2147–2153 (2001).
16. J. D. Gill, H. M. Ladak, D. A. Steinman, and A. Fenster, "Accuracy and variability assessment of a semiautomatic technique for segmentation of the carotid arteries from three-dimensional ultrasound images," Med. Phys. 27, 1333–1342 (2000).
17. Y. Chen and G. Medioni, "Description of complex objects from multiple range images using an inflating balloon model," Comput. Vis. Image Underst. 61, 325–334 (1995).
18. F. Solina and R. Bajcsy, "Recovery of parametric models from range images: The case for superquadrics with global deformation," IEEE Trans. Pattern Anal. Mach. Intell. 12, 131–147 (1990).
19. W. J. Schroeder, K. M. Martin, L. S. Avila, and C. C. Law, The VTK User's Guide (Kitware, New York, 1998).
20. S. Lobregt and M. A. Viergever, "A discrete dynamic contour model," IEEE Trans. Med. Imaging 14, 12–24 (1995).
21. T. McInerney and D. Terzopoulos, "A dynamic finite element surface model for segmentation and tracking in multidimensional medical images with application to cardiac 4D image analysis," Comput. Med. Imaging Graph. 19, 69–83 (1995).
22. S. Tong, D. B. Downey, H. N. Cardinal, and A. Fenster, "A three-dimensional ultrasound prostate imaging system," Ultrasound Med. Biol. 22, 735–746 (1996).
23. A. Fenster and D. B. Downey, "3D ultrasound imaging: A review," IEEE Eng. Med. Biol. Mag. 15, 41–51 (1996).
24. A. Fenster, D. B. Downey, and H. N. Cardinal, "Three-dimensional ultrasound imaging," Phys. Med. Biol. 46, R67–R99 (2000).
25. T. R. Nelson, D. B. Downey, D. H. Pretorius, and A. Fenster, Three-Dimensional Ultrasound (Lippincott-Raven, Philadelphia, 1999).
26. B. Geiger, "Three-dimensional modeling of human organs and its application to diagnosis and surgical planning," Institut National de Recherche en Informatique et en Automatique Report No. 2105, 1993.
27. F. Mao, J. Gill, and A. Fenster, "Segmentation of carotid artery in ultrasound images: Method development and evaluation technique," Med. Phys. 27, 1961–1970 (2000).
28. S. Tong, H. N. Cardinal, R. F. McLoughlin, D. B. Downey, and A. Fenster, "Intra- and interobserver variability and reliability of prostate volume measurement via two-dimensional and three-dimensional ultrasound imaging," Ultrasound Med. Biol. 24, 673–681 (1998).
29. F. L. Bookstein, "Principal warps: Thin-plate splines and the decomposition of deformations," IEEE Trans. Pattern Anal. Mach. Intell. 11, 567–585 (1989).