IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE, VOL. 11, NO. 2, MARCH 2007


Real-Time Volume Rendering Visualization of Dual-Modality PET/CT Images With Interactive Fuzzy Thresholding Segmentation

Jinman Kim, Member, IEEE, Weidong Cai, Member, IEEE, Stefan Eberl, Member, IEEE, and Dagan Feng, Fellow, IEEE

Abstract—Three-dimensional (3-D) visualization has become an essential part of imaging applications, including image-guided surgery, radiotherapy planning, and computer-aided diagnosis. In the visualization of dual-modality positron emission tomography and computed tomography (PET/CT), 3-D volume rendering is often limited to a single image volume and constrained by high computational demand. Furthermore, incorporation of segmentation in volume rendering is usually restricted to visualizing presegmented volumes of interest. In this paper, we investigated the integration of interactive segmentation into real-time volume rendering of dual-modality PET/CT images. We present and validate a fuzzy thresholding segmentation technique based on fuzzy cluster analysis, which allows interactive, real-time optimization of the segmentation results. This technique is then incorporated into real-time multi-volume rendering of PET/CT images. Our method allows real-time fusion and interchangeability of the segmentation volume with the PET or CT volumes, as well as the usual fusion of PET/CT volumes. Volume manipulations such as window level adjustments and lookup table selection can be applied to individual volumes, which are then fused together in real time as the adjustments are made. We demonstrate the benefit of our method of integrating segmentation with volume rendering in its application to PET/CT images. Responsive frame rates are achieved by utilizing a texture-based volume rendering algorithm and the high memory bandwidth of low-cost graphics hardware.

Index Terms—Dual-modality positron emission tomography and computed tomography (PET/CT), fuzzy C-means cluster analysis, interactive three-dimensional (3-D) segmentation, multi-volume rendering, real-time volume rendering.

Manuscript received August 22, 2005; revised December 24, 2005. This work was supported in part by the ARC and RGC grants. J. Kim and W. Cai are with the Biomedical and Multimedia Information Technology Group, School of Information Technologies, University of Sydney, Sydney, NSW 2006, Australia (e-mail: [email protected]). S. Eberl is with the Biomedical and Multimedia Information Technology Group, School of Information Technologies, University of Sydney, Sydney, NSW 2006, Australia, and also with the Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital, Sydney, NSW 2050, Australia. D. Feng is with the Biomedical and Multimedia Information Technology Group, School of Information Technologies, University of Sydney, Sydney, NSW 2006, Australia, and also with the Center for Multimedia Signal Processing, Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Kowloon, Hong Kong. Digital Object Identifier 10.1109/TITB.2006.875669

I. INTRODUCTION

ADVANCES in digital medical images are resulting in increased image volumes from the acquisition of four-dimensional (4-D) imaging modalities, such as dynamic positron emission tomography (PET) and dual-modality PET
and computed tomography (PET/CT). These images have introduced significant challenges for efficient visualization [1]–[3]. In line with the advances in image acquisition, three-dimensional (3-D) visualization algorithms have been developed that enable real-time visualization of multidimensional volumes using low-cost hardware instead of restricting it to high-end, expensive workstations [4]–[6]. 3-D visualization has become an attractive method for imaging applications, including image-guided surgery and radiotherapy, and computer-aided diagnosis [3], [4], [7]–[12]. In these applications, segmentation is often employed, as it enables visual separation and selection of specific volumes of interest (VOIs) [6], [12]–[18]. Segmentation of the image volume can be performed manually by a physician. However, such delineation is subjective, may not be reproducible, and is time consuming. Fully automated methods can only be applied successfully within precisely defined bounds and cannot guarantee accurate delineation under all circumstances, thus requiring some form of operator intervention, as in interactive segmentation.

Studies involving interactive segmentation in 3-D visualization have often been limited to rendering the preprocessed segmentation results [16]–[18]. However, these methods render only the segmented VOIs, without placing them in the context of surrounding structures. In [18], a method was presented for correcting segmentation errors in volume-rendered VOIs by adjusting the radius of the viewable volume to reveal the surrounding image. Although this method allows a physician to correct segmentation errors in volume rendering, it was limited to rendering only the surrounding voxels within the radius of the VOIs and did not take into consideration that these voxels may have no relation to the VOI. These interactive segmentation methods were all based on visualization of a single volume of images. In dual-modality PET/CT images, which consist of co-registered functional and anatomical image volumes, the ability to visualize the segmentation result with both image volumes can be of considerable benefit. For instance, segmentation of tumor structures from low-resolution, functional PET image data can benefit from overlaying the result on the CT to provide an anatomical frame of reference and precise localization.

In this paper, we investigated and validated the incorporation of interactive segmentation into real-time 3-D visualization of PET/CT images. We present a fuzzy thresholding segmentation method for PET images in real-time volume rendering. In the segmentation of functional PET images, cluster analysis



based on kinetic behavior has previously been found effective in classifying kinetic patterns [19]–[21], including segmentation of regions of interest [19] and the generation of parametric images from large data sets [20]. In these approaches, PET images were partitioned into a predefined number of cluster groups based on "crisp" clustering, where each voxel was assigned to a single cluster group. The fuzzy extension of crisp clustering, such as fuzzy C-means (FCM) cluster analysis [22], offers the advantage of assigning to each voxel the probability of belonging to each cluster. This attribute is utilized in this paper to control the segmentation by simple and computationally efficient thresholding of the cluster probabilities.

We describe and evaluate a fuzzy thresholding technique and demonstrate its integration into an interactive multi-volume viewer (IMV2). The IMV2 allows fusion of the segmentation with the PET or CT images, as well as the usual fusion of PET and CT images. Volume manipulation tools designed for PET/CT visualization are incorporated, which allow manipulation of individual volumes, e.g., thresholding the CT and adjusting the window levels of the PET images. The resultant manipulated volumes are then fused together in real time as the adjustments are made.
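As a rough illustration of the fuzzy thresholding idea described above (a minimal sketch, not the paper's implementation; the array layout, cluster index, and threshold value are illustrative assumptions), obtaining a binary VOI mask from fuzzy membership maps reduces to a single comparison per voxel:

```python
import numpy as np

def threshold_membership(u, cluster_idx, tau):
    """Binary VOI mask from fuzzy membership maps.

    u           : float array of shape (C, Z, Y, X); u[j] holds the
                  membership degree of every voxel in cluster j.
    cluster_idx : index j of the cluster selected as the VOI.
    tau         : membership threshold in [0, 1], adjusted interactively.
    """
    return u[cluster_idx] >= tau

# Illustrative example: memberships for 4 clusters over a 64^3 volume
# (values sum to 1 across clusters for each voxel).
u = np.random.dirichlet(np.ones(4), size=(64, 64, 64)).transpose(3, 0, 1, 2)
voi_mask = threshold_membership(u, cluster_idx=2, tau=0.6)
```

Because only one comparison per voxel is required, such a mask can be recomputed at interactive rates as the threshold is adjusted, which is what makes the thresholding step suitable for real-time use.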

II. METHOD

The IMV2 consists of four major steps, as shown in the flowchart in Fig. 1: 1) segmentation of the PET images into cluster groups based on functional similarity using FCM cluster analysis; 2) volume rendering of the PET, CT, and segmentation data using a texture-based rendering technique; 3) interactive fuzzy thresholding of the PET data with real-time volume rendering of dual-modality PET/CT; and 4) volume manipulation tools, such as window level adjustments and lookup table (LUT) selection, applied to the PET/CT volume rendering.

A. Automated 4-D FCM Cluster Analysis of Dynamic/Static PET Images

Prior to segmentation, the image data are preprocessed as follows: low-count background areas in the PET images are removed (set to zero) by thresholding. Isolated voxels and gaps are then removed and/or filled by a 3 × 3 × 3 morphological opening filter followed by a closing filter. For dynamic PET data, tissue time activity curves (TTACs) are extracted for each nonzero voxel to form the kinetic feature vector f_i(t), t = 1, 2, ..., T, where T is the total number of time points. For static images, a single frame is acquired at t = T.
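A minimal sketch of this preprocessing, assuming a dynamic PET study stored as a (T, Z, Y, X) NumPy array and applying the 3 × 3 × 3 opening and closing to the binary foreground mask (one possible reading of the description above); the threshold and helper names are illustrative:

```python
import numpy as np
from scipy import ndimage

def preprocess_dynamic_pet(pet, background_threshold):
    """Background removal, 3x3x3 opening/closing, and TTAC extraction.

    pet : float array of shape (T, Z, Y, X), a dynamic PET study.
    Returns the cleaned volume, the foreground mask, and one TTAC
    feature vector per nonzero voxel (shape (N, T)).
    """
    # Low-count background: threshold the mean activity over time.
    mean_activity = pet.mean(axis=0)
    mask = mean_activity > background_threshold

    # Remove isolated voxels, then fill gaps (3x3x3 structuring element).
    struct = np.ones((3, 3, 3), dtype=bool)
    mask = ndimage.binary_opening(mask, structure=struct)
    mask = ndimage.binary_closing(mask, structure=struct)

    # Zero out background in every frame.
    cleaned = pet * mask

    # Tissue time-activity curves: one T-length feature vector per voxel.
    ttacs = cleaned[:, mask].T          # shape (N, T)
    return cleaned, mask, ttacs
```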

The FCM cluster analysis based on [22] is applied to assign each of the N feature vectors to one of a set number C of distinct cluster groups. For each cluster, centroids are assigned as the feature vectors of distinct, randomly selected voxels. The value of each centroid voxel is replaced with the average of the 3 × 3 × 3 surrounding voxels to avoid false selection of a noisy outlier that may result in a cluster with a single member. FCM cluster analysis minimizes the objective function J according to

J = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{P} \, D^{2}\bigl(f_i(t), \bar{f}_{c_j}(t)\bigr)    (1)

Fig. 1. Flowchart of the proposed interactive multi-volume visualization. After segmenting the PET image using FCM cluster analysis (step 1), the segmentation map and fuzzy logic layer are constructed. The segmentation map, PET, and CT image volumes are rendered using texture-based volume rendering (step 2). The fuzzy logic layer is then used to interactively adjust the rendered segmentation volume by fuzzy thresholding (step 3). These volumes can be fused and interchanged in real time with volume manipulation tools (step 4) included in the IMV2.

where P (1 ≤ P ≤ ∞) is a weighting exponent on each fuzzy membership, which determines the amount of fuzziness of the resulting classification, and u_{ij} is the membership degree of the ith feature vector in cluster j. The similarity measure between the ith feature vector f_i(t) and the cluster centroid \bar{f}_{c_j}(t) of the jth cluster group c_j was calculated using the Euclidean distance D_{ij} given by

D\bigl(f_i, \bar{f}_{c_j}\bigr) = \left[ \sum_{t=1}^{T} s(t) \bigl(f_i(t) - \bar{f}_{c_j}(t)\bigr)^{2} \right]^{1/2}    (2)

where s(t) is a scale factor of time point t (t = 1, 2, ..., T) equal to the duration of the tth frame divided by the total dynamic acquisition time. The scale factor s(t) gives more weight to the longer frames, which contain more reliable data. The minimization of J is achieved by iteratively updating the u_{ij} and the cluster centroids \bar{f}_{c_j}(t) with

u_{ij} = \frac{1}{\sum_{k=1}^{C} \left[ D\bigl(f_i(t), \bar{f}_{c_j}(t)\bigr) \big/ D\bigl(f_i(t), \bar{f}_{c_k}(t)\bigr) \right]^{2/(P-1)}}    (3)


\bar{f}_{c_j}(t) = \frac{\sum_{i=1}^{N} u_{ij}^{P} f_i(t)}{\sum_{i=1}^{N} u_{ij}^{P}}    (4)

Thus, a probabilistic fuzzy membership degree is assigned to every voxel i, such that \sum_{j=1}^{C} u_{ij} = 1.0. The procedure is terminated when the convergence criterion ε, in the range [0, 1], is satisfied.
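The update equations (2)–(4) translate almost directly into array code. The following is a minimal sketch, assuming the feature vectors are held in an (N, T) array and the frame-duration weights s(t) in a length-T array; the stopping rule used here (largest change in membership degree below ε) is a common choice and an assumption, not necessarily the paper's exact criterion:

```python
import numpy as np

def fcm(features, s, C, P=2.0, eps=1e-3, max_iter=100, rng=None):
    """Fuzzy C-means using the duration-weighted Euclidean distance of (2).

    features : (N, T) array, one TTAC feature vector per voxel.
    s        : (T,) array of frame-duration weights s(t).
    C        : number of clusters.
    P        : fuzziness exponent (1 < P < inf).
    """
    rng = np.random.default_rng(rng)
    N, T = features.shape
    # Initialize centroids from randomly selected feature vectors.
    # (The paper additionally averages a 3x3x3 neighborhood around each
    # seed voxel; omitted here for brevity.)
    centroids = features[rng.choice(N, size=C, replace=False)].copy()

    u = np.zeros((N, C))
    for _ in range(max_iter):
        # Eq. (2): weighted distance of every voxel to every centroid.
        diff = features[:, None, :] - centroids[None, :, :]      # (N, C, T)
        d = np.sqrt((s * diff**2).sum(axis=2))                   # (N, C)
        d = np.maximum(d, 1e-12)                                 # avoid /0

        # Eq. (3): membership update.
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (P - 1.0))
        u_new = 1.0 / ratio.sum(axis=2)                          # (N, C)

        # Eq. (4): centroid update.
        w = u_new ** P                                           # (N, C)
        centroids = (w.T @ features) / w.sum(axis=0)[:, None]    # (C, T)

        # Assumed stopping rule: maximum membership change below eps.
        if np.max(np.abs(u_new - u)) < eps:
            u = u_new
            break
        u = u_new
    return u, centroids
```

The membership degrees u returned by such a loop are the quantities that the interactive fuzzy thresholding step operates on.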


