Journal of X-Ray Science and Technology 22 (2014) 1–18 DOI 10.3233/XST-130405 IOS Press
An improved distance-driven method for projection and backprojection

Chuang Miao^{a,b}, Baodong Liu^{a,b}, Qiong Xu^{c} and Hengyong Yu^{a,b,*}

^{a} Biomedical Imaging Division, VT-WFU School of Biomedical Engineering and Sciences, Wake Forest University, Winston-Salem, NC, USA
^{b} Department of Biomedical Engineering, Division of Radiologic Sciences, Wake Forest University, Winston-Salem, NC, USA
^{c} Institute of Image Processing and Pattern Recognition, Xi'an Jiaotong University, Xi'an, Shaanxi, China

Received 13 November 2012; Revised 17 September 2013; Accepted 23 October 2013

Abstract. Fast and accurate image reconstruction is the ultimate goal of iterative methods for limited-angle, few-view, interior and related problems. Recently, a finite-detector-based projection model, called the area integral model (AIM), was proposed for iterative CT reconstruction; it achieves a high spatial resolution but at a high computational complexity. On the other hand, the distance-driven model (DDM) is the state-of-the-art technique for modeling forward projection and backprojection, with a low computational complexity but a lower spatial resolution than the AIM-based method. Inspired by the DDM, here we propose an improved distance-driven model (IDDM), which has a computational complexity similar to that of the DDM-based method and a spatial resolution comparable to that of the AIM-based method. In an ordered-subset simultaneous algebraic reconstruction technique (OS-SART) framework, the AIM, IDDM and DDM are implemented and evaluated using a sinogram from a phantom experiment on a Discovery CT750 HD scanner. The results show that the computational costs of the DDM- and IDDM-based methods are similar, and 6 to 13 times lower than that of the AIM-based method for the same number of iterations. The spatial resolution of the AIM- and IDDM-based methods is comparable, and better than that of the DDM-based method in terms of full-width-at-half-maximum (FWHM).

Keywords: Computed tomography (CT), image reconstruction, area integral model, distance-driven model, OS-SART
1. Introduction

Computed tomography (CT) reconstruction is the process of reconstructing n-dimensional (nD) image data from a set of integrals of that data over lower-dimensional subspaces. In fan-beam or cone-beam CT geometry, the projection operations used for traditional iterative reconstruction usually consider the case in which the projection data are integrals over lines (one-dimensional); this is referred to as the x-ray transform. CT is one of the best-established examples of the x-ray transform in medical imaging [1]. In CT applications, the projection and/or backprojection model is required for image reconstruction,

∗ Corresponding author: Hengyong Yu, Biomedical Imaging Division, VT-WFU School of Biomedical Engineering and Sciences, Wake Forest University Health Sciences, Winston-Salem, NC, 27157, USA. E-mail:
[email protected].
0895-3996/14/$27.50 © 2014 – IOS Press and the authors. All rights reserved
artifact correction, or simulation purposes. In particular, let f(x) be a 2D compactly supported function. The projection can be modeled by a line integral model (LIM)

$$P(\alpha, \beta) = \int_0^{\infty} f(\alpha + t\beta)\, dt, \qquad (1)$$
where α ∈ R² is the source position, β ∈ S is a 2D unit vector, R² denotes the 2D real space, and S is the 2D unit circle. The backprojection model is generally defined as the transpose (or adjoint) of the projection model. The most prevalent application of the backprojection operation is in the filtered backprojection (FBP) reconstruction algorithms, which are based on analytic inversion formulae for the Radon transform [1]. In the filtering stage, a ramp weighting is usually applied, either in the spatial or in the frequency domain, to account for the variable density of sampling. A low-pass filter is often combined with the ramp filter to suppress noise and aliasing artifacts. The backprojection then simply 'smears' each (weighted) projection back across the image. Parallel to the development of analytic reconstruction algorithms, iterative reconstruction methods have been proposed, in which repeated applications of the projection and backprojection are used to approximate the image that best fits the measurements according to an appropriate objective function. In addition, iterative reconstruction methods can be used to solve few-view [2,3], limited-angle [4,5] and other problems for which analytic reconstruction formulas are not available or are sub-optimal. When either analytic or iterative methods are numerically implemented, the projection and/or backprojection operation plays an important role in the overall computation. There are many methods to model the projection and backprojection procedures for a discrete imaging object, and all of them trade off computational complexity against accuracy. To the best of our knowledge, the current projection/backprojection models can be divided into three categories [6]. The first is the pixel-driven model, which is usually used in the implementations of backprojection for FBP reconstruction [7–9].
By connecting a line from the focal spot through the pixel center, a location of interaction on the detector is determined. A value is obtained from the detector samples via interpolation and is used to update the pixel value [7–9]. Pixel-driven projection is a similar process, but the detector cells are updated with pixel values using similar weights. Simple pixel-driven projection is rarely used because it causes high-frequency artifacts [10,11]. The second is the ray-driven model, which is used for forward projection. It connects a line from the focal spot through the image to the detector cell center, and the interaction locations with the image pixels are determined. A value is then obtained by linear interpolation from the image pixel values, and the result is accumulated on the detector cell. The ray-driven method is rarely used for backprojection because it tends to introduce artifacts [6,11]. The state of the art is the distance-driven model (DDM), which combines the advantages of the pixel-driven and ray-driven models [11,12]. It can be used in the projection and/or backprojection processes. To calculate the normalized weighting coefficients used in the projection and backprojection, the key is to compute the length of overlap between each image pixel and each detector cell; the normalized overlap length then determines the weights in the projection and backprojection. Recently, a finite-detector-based projection model, also called the area integral model (AIM), was proposed for iterative CT reconstruction by Yu and Wang [13]. This model requires no interpolation and is thus different from all the aforementioned projection/backprojection models. Here we propose an improved distance-driven model (IDDM), which has a computational cost similar to that of the DDM-based method and an accuracy comparable to that of the AIM-based method.
Compared with the DDM- and IDDM-based methods, the AIM-based method is more accurate but also more time-consuming because of the high computational cost of its system matrix.
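To make the ray-driven model described above concrete, the following is a minimal sketch of a ray-driven forward projector with bilinear interpolation, assuming a parallel-beam geometry, unit pixel/detector spacing and rays centered on the image; the function name and sampling scheme are illustrative only, not the implementation evaluated in this paper.

```python
import numpy as np

def ray_driven_projection(image, angle_deg, n_detectors):
    """Minimal ray-driven forward projector (parallel beam).

    For each detector cell, sample points along the ray through the image
    and accumulate bilinearly interpolated pixel values. Illustrative only.
    """
    n = image.shape[0]
    center = (n - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    d = np.array([np.cos(theta), np.sin(theta)])      # ray direction
    perp = np.array([-np.sin(theta), np.cos(theta)])  # detector axis
    t = np.arange(-n, n, dtype=float)                 # samples along the ray
    sino = np.zeros(n_detectors)
    for m in range(n_detectors):
        s = m - (n_detectors - 1) / 2.0               # detector offset
        # Points sampled along the m-th ray, in image index coordinates.
        xs = center + s * perp[0] + t * d[0]
        ys = center + s * perp[1] + t * d[1]
        # Bilinear interpolation, with zero outside the image support.
        i0, j0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
        fx, fy = xs - i0, ys - j0
        acc = 0.0
        for di, dj, w in ((0, 0, (1 - fx) * (1 - fy)),
                          (1, 0, fx * (1 - fy)),
                          (0, 1, (1 - fx) * fy),
                          (1, 1, fx * fy)):
            ii, jj = i0 + di, j0 + dj
            ok = (ii >= 0) & (ii < n) & (jj >= 0) & (jj < n)
            acc += np.sum(w[ok] * image[ii[ok], jj[ok]])
        sino[m] = acc
    return sino
```

With unit sampling steps, each ray through a uniform unit image accumulates a value close to the image width, as expected for a line integral across the support.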
In this paper, we perform a comprehensive theoretical analysis and extensive numerical experiments to quantitatively evaluate the IDDM-, DDM- and AIM-based methods assuming the fan-beam geometry of a typical GE CT scanner. Our ultimate goal is to demonstrate the soundness of the IDDM-based method. This work will have a direct impact on several applications, including the development of fast and accurate iterative CT reconstruction at high resolution. The rest of this paper is organized as follows. In Section 2, we briefly summarize and comparatively analyze the AIM, DDM and IDDM. In Section 3, numerical experiments are performed and the results are presented. In Section 4, we conclude the paper.

2. Method

2.1. Discretized description of a CT imaging system

Many imaging systems, such as CT scanners, can be modeled by the following linear equations [14]:

$$Wf = p, \qquad (2)$$
where p ∈ P represents the projection data, f ∈ F represents an unknown image, and the non-zero matrix W : F → P is a projection operator. For practical applications, the discrete-discrete model is assumed; in other words, f and p are vectors, and F and P are the corresponding vector spaces. The projection data p are usually measured by detector cells, which implies that p is already discrete. For the two-dimensional (2D) case, an image can be discretized by superimposing a square grid on it. Usually f is assumed to be constant in each grid cell, which is referred to as a pixel. As a result, we have a 2D digital image $f = (f_{i,j}) \in \mathbb{R}^I \times \mathbb{R}^J$, where the indices $1 \leq i \leq I$, $1 \leq j \leq J$ are integers. Defining

$$f_n = f_{i,j}, \qquad n = (i-1) \times J + j, \qquad (3)$$

with $1 \leq n \leq N$ and $N = I \times J$, we can re-arrange the image into a vector $f = [f_1, f_2, \ldots, f_N]^T \in \mathbb{R}^N$. We may use both $f_n$ and $f_{i,j}$ to denote the image. Let $p_m$ be the mth measured datum, along the mth ray. Equation (2) can be rewritten as

$$p_m = \sum_{n=1}^{N} \omega_{mn} f_n, \qquad m = 1, 2, \ldots, M, \qquad (4)$$
where M is the total number of rays and $\omega_{mn}$ is the weighting coefficient that represents the relative contribution of the nth pixel to the mth measured datum. Therefore, we have a system matrix $W = (\omega_{mn})_{M \times N}$ and two vectors $f = [f_1, f_2, \ldots, f_N]^T \in \mathbb{R}^N$ and $p = [p_1, p_2, \ldots, p_M]^T \in \mathbb{R}^M$ for the discrete-discrete model. The major difference between the models is how they quantify the contribution of each pixel to each projection ray.

2.2. System models

2.2.1. AIM

As its name indicates, the AIM quantifies the contribution of each pixel by the overlapped area between the pixel and the ray path. As shown in Fig. 1, the AIM considers an x-ray as a 'fat' line or a narrow fan-beam [13], which covers the region connecting the x-ray source and the two endpoints of a detector cell. The weighting coefficient is a normalized area, defined as the ratio between the overlapped area ($S_{mn}$) and the corresponding fan-arc length, which is the product of the narrow fan-beam angle (γ) and the distance from the center of the pixel to the x-ray source. For the details of the derivation, please refer to [13].
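As a toy illustration of the discrete model in Eqs. (2)-(4), the sketch below builds a hypothetical system matrix for a 3×3 image probed by three horizontal rays; the weights are simple intersection lengths chosen for illustration and are not AIM, DDM or IDDM coefficients.

```python
import numpy as np

# Toy illustration of p = Wf (Eqs. (2)-(4)) with a hypothetical geometry:
# a 3x3 image probed by three horizontal rays, one per image row.
I = J = 3
f = np.arange(1.0, 10.0)           # f_n with n = (i-1)*J + j, as in Eq. (3)

W = np.zeros((3, I * J))           # system matrix: 3 rays x 9 pixels
for m in range(3):
    W[m, m * J:(m + 1) * J] = 1.0  # each ray fully covers its row: weight 1

p = W @ f                          # forward projection, Eq. (4): row sums
bp = W.T @ p                       # backprojection as the transpose of W
print(p)                           # [ 6. 15. 24.]
```

The backprojection `W.T @ p` simply smears each measured value back over the pixels that contributed to it, which is the adjoint relationship discussed in the Introduction.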
Fig. 1. Area integral model assuming a fan beam geometry.
Fig. 2. Distance-driven model assuming a fan beam geometry.
2.2.2. DDM

The state of the art among projection models is the DDM, which combines the advantages of the pixel-driven and ray-driven methods. The DDM considers the widths of the detector cell and the image pixel. To calculate the normalized weighting coefficients, the key is to compute the length of overlap between each image pixel and each detector cell [6]. To calculate the overlapped length, we need to map all the detector cell boundaries onto the centerline of the image row of interest. One can also map all pixel boundaries in the image row of interest onto the detector, or map both sets of boundaries onto a common line such as the x-axis. In Fig. 2, we show this idea assuming a typical 2D fan-beam geometry. We map the two boundaries of each detector cell onto the centerline of the image row of interest. The two boundary locations (sample locations) of the nth pixel are represented by $x_n$ and $x_{n+1}$. The two intersection locations (destination locations) between the centerline of the image row of interest and the two boundaries of the mth ray are represented by $y_m$ and $y_{m+1}$. Based on these boundaries, we can calculate the overlapped length between each image pixel and each detector cell to define the weighting coefficient for the projection and backprojection by applying the normalized distance-driven kernel operation (see Appendix), and the corresponding weighting coefficient is defined by Eq. (A3). It is worth mentioning that, because the DDM suffers from large interpolation errors near the 45-degree direction, we need to map the detector cell boundaries to the image column of interest after crossing 45 degrees. For the details of the DDM, please refer to [6]. The 2D fan-beam DDM can be extended to 3D cone-beam geometries by applying the distance-driven approach both in the in-plane direction and in the patient-bed translation direction, resulting in two nested loops.
Instead of the overlapped length between each pixel and each detector cell, the overlapped area between each voxel and each detector cell is used to calculate the weighting coefficients. For the 3D cone-beam geometry, the computational cost is much higher than in the 2D case.

2.2.3. IDDM

The IDDM also considers the width of the x-ray. In order to compute the weighting coefficient, instead of
Fig. 3. Improved distance-driven model assuming a fan beam geometry.
mapping the x-ray to the centerline of the image row of interest, we map the x-ray to the upper and lower boundaries of the row of interest. On each boundary, the weighting coefficient is computed similarly to that of the DDM, and the final weighting coefficient is the average of the weighting coefficients on the upper and lower boundaries. Similar to the DDM, the mapping direction needs to be changed at 45 degrees to reduce the interpolation errors. As illustrated in Fig. 3, assuming a typical 2D fan-beam geometry, the two boundaries of each detector cell are uniquely mapped onto the upper and lower boundaries of the image row of interest. Then the overlapped length between each image pixel and each detector cell can be determined. The normalized distance-driven kernel operation (Eq. (A2)) can be applied to both the upper and lower boundaries, and the corresponding weighting coefficients can be calculated by Eq. (A3). The final weighting coefficient is the average of the weighting coefficients on the upper and lower row boundaries. The 2D IDDM can also be extended to the 3D cone-beam geometry by employing the same strategy as the DDM. Compared to the DDM, the IDDM considers all the related pixels in each row, which results in a higher accuracy (see details in Section 2.3). Because the lower boundary of one row is the upper boundary of the next row, the computational cost of the IDDM-based method is similar to that of the DDM-based method.

2.3. Theoretical analysis of different models

Before we evaluate the different models with physical experiments, we qualitatively analyze the performance of the projection/backprojection models under different conditions, in terms of accuracy, the ability of high-resolution reconstruction and the likelihood of introducing artifacts, assuming a 2D parallel-beam geometry.
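The one-dimensional overlap computation shared by the DDM and IDDM can be sketched as follows, assuming the detector-cell boundaries have already been mapped onto the relevant line. Normalizing by the mapped detector-cell width is one plausible choice here; the actual kernel and normalization are given by Eqs. (A2)-(A3), which are not reproduced in this excerpt.

```python
import numpy as np

def overlap_weights(px, dx):
    """Normalized overlap lengths between pixel boundaries px and mapped
    detector-cell boundaries dx (both ascending, on a common line).

    Returns w[m, n] = |[dx_m, dx_(m+1)] intersect [px_n, px_(n+1)]|
    divided by the mapped width (dx_(m+1) - dx_m). A sketch of the
    distance-driven kernel; not the exact Eqs. (A2)-(A3)."""
    px, dx = np.asarray(px, float), np.asarray(dx, float)
    lo = np.maximum(dx[:-1, None], px[None, :-1])   # interval lower ends
    hi = np.minimum(dx[1:, None], px[None, 1:])     # interval upper ends
    overlap = np.clip(hi - lo, 0.0, None)           # clamp empty overlaps
    return overlap / (dx[1:] - dx[:-1])[:, None]

# DDM: detector boundaries mapped onto the row centerline.
w_ddm = overlap_weights([0, 1, 2, 3], [0.4, 2.1])

# IDDM: map onto the upper and lower row boundaries, then average.
w_up = overlap_weights([0, 1, 2, 3], [0.3, 2.0])
w_lo = overlap_weights([0, 1, 2, 3], [0.5, 2.2])
w_iddm = 0.5 * (w_up + w_lo)
```

Because each row of the weight matrix sums to one whenever the mapped detector cell lies inside the pixel row, the kernel distributes every measured value over exactly the pixels it covers.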
Similar results can be obtained in the fan-beam geometry, and the 2D results can always be extended to 3D. We will analyze the performance of each model when the detector element size is smaller than, comparable to, or greater than the pixel size. The case where the detector element size is greater than the pixel size also tests the ability of each model to support high-resolution reconstruction, because a reconstructed image with a pixel size smaller than the detector element size can be viewed as a high-resolution reconstruction relative to the detector resolution. We analyze this by taking a case where the image consists of only one row and the projection data consist of only one direction. For each case, we assume that the image pixel size is fixed while the detector element size keeps changing. The results
Fig. 4. Area integral model with a one-row image and one detector element assuming a 2D parallel geometry. (a), (b) and (c) are the cases in which the detector element size is comparable to, greater than and smaller than the pixel size, respectively. (Colours are visible in the online version of the article; http://dx.doi.org/10.3233/XST-130405)
can be extended to more general cases with multiple rows in the image domain and multiple views in the projection domain.

2.3.1. AIM

As its name indicates, the AIM determines the contribution of each pixel by the overlapped area between the pixel and the ray path. Figure 4 shows the AIM in three cases with a one-row image and one detector cell, assuming a 2D parallel geometry. Since the AIM considers the width of the detector cell, the dotted line represents the virtual line from the focal spot to the detector center, and the solid lines along the two boundaries of the detector element indicate the pixels that are actually covered by the ray path. (a), (b) and (c) are the cases in which the pixel size is comparable to, smaller than and greater than the detector cell size, respectively. The AIM quantifies the contribution of each pixel to the ray path as the corresponding overlapped area divided by a normalization factor. Given a ray path in a fan-beam geometry, if two pixels with the same pixel values lie in two different rows and have the same overlapped areas with the ray path, the pixel closer to the x-ray source contributes more to the ray path because it attenuates more photons. The AIM accounts for this physical process by introducing a normalization factor equal to the beam width at the pixel location, which is the product of the distance from the source to the pixel center and the fan angle of the ray path. In the 2D parallel geometry, the normalization factor becomes the distance from the source to the pixel center. The computational cost of the AIM-based method is high because the computation of the overlapped area and the normalization factor requires many multiplication operations. In summary, the AIM can be used in the projection and backprojection processes regardless of the detector element and pixel sizes. Although its computational cost is high, it provides high accuracy.

2.3.2. DDM

The DDM considers the width of the x-ray. In order to calculate the normalized weighting coefficients, the DDM needs to calculate the overlapped length between each image pixel and each detector cell. Figure 5 shows the DDM with a one-row image and one detector cell, assuming a 2D parallel geometry. (a), (b) and (c) are the cases in which the detector element size is comparable to, greater than and smaller than the pixel size, respectively. In each case, the DDM may omit some pixels' contributions. For example, in case (a), three pixels (p2, p3, p4) are covered by the ray path. However, the DDM only considers
Fig. 5. Same as Fig. 4 but with the DDM. (Colours are visible in the online version of the article; http://dx.doi.org/10.3233/XST-130405)
Fig. 6. Same as Fig. 4 but with the IDDM. (Colours are visible in the online version of the article; http://dx.doi.org/10.3233/XST-130405)
the contributions of p2 and p3. It omits the contribution of p4 and regards p3 as a full contribution, which is not exact. Similar phenomena exist in (b) and (c). In the following, we will see that the IDDM overcomes this drawback. Theoretically, the DDM is less accurate than the AIM but has a lower computational cost.

2.3.3. IDDM

The IDDM also considers the width of the x-ray. Figure 6 shows the IDDM with a one-row image and one detector element, assuming a 2D parallel geometry. (a), (b) and (c) are the cases in which the detector element size is comparable to, greater than and smaller than the pixel size, respectively. In each case, like the AIM, the IDDM considers every related pixel's contribution to the ray path. For example, in case (a), three pixels (p2, p3, p4) are related to the ray path. Unlike the DDM, the IDDM considers the contribution of p4 and does not regard p2 as a full contribution. Similarly, in (b) and (c) the IDDM accounts for the boundary pixels more accurately. In summary, the IDDM approach can be used in the projection and/or backprojection processes regardless of the detector element and pixel sizes. Theoretically, because the lower boundary of one row is the upper boundary of the next row, the computational cost of the IDDM approach is as low as that of the DDM approach, while its accuracy is as high as that of the AIM when the
detector size is comparable to or greater than the pixel size. Moreover, the accuracy of the IDDM is higher than that of the DDM in every case.

2.4. OS-SART reconstruction

The ordered-subset simultaneous algebraic reconstruction technique (OS-SART) can be expressed as [15]

$$f_n^{(k+1)} = f_n^{(k)} + \lambda_k \sum_{m \in \phi_l} \frac{\omega_{mn}}{\sum_{m' \in \phi_l} \omega_{m'n}} \cdot \frac{p_m - \tilde{p}_m}{W_{m+}}, \qquad k = 0, 1, 2, \ldots, \qquad (5)$$

where k indicates the iteration number, $\lambda_k$ is a relaxation parameter that is critical to the quality and speed of the reconstruction (in our implementation, we set $\lambda_k$ to 1 for simplicity), $\phi_l$ represents the set of ray indices in the lth subset ($l = k \bmod N_{\phi} + 1 \in \{1, 2, \ldots, N_{\phi}\}$, where $N_{\phi}$ is the total number of subsets), $\tilde{p}_m = \sum_{n=1}^{N} \omega_{mn} f_n^{(k)}$ and $W_{m+} = \sum_{n=1}^{N} \omega_{mn}$. The weighting method of the fast iterative shrinkage-thresholding algorithm [16] was employed to accelerate the convergence of the OS-SART. The steps of the fast OS-SART can be summarized as follows:

Step 0. Take $y^1 = f^0 \in \mathbb{R}^N$, $t_1 = 1$.

Step k. ($k \geq 1$) Compute

OS-SART: $y^k = \text{OS-SART}(f^{k-1})$ (6)

Fast weighting: $t_{k+1} = \frac{1 + \sqrt{1 + 4t_k^2}}{2}$ (7)

$f^k = y^k + \frac{t_k - 1}{t_{k+1}}\left(y^k - y^{k-1}\right)$ (8)

In each main loop, the OS-SART step enforces data consistency and the fast weighting step speeds up the convergence. The two steps are applied iteratively until the stopping criteria are satisfied, which are a maximum number of iterations and an error threshold in the projection domain.

3. Results

To verify and compare the performance of the AIM-, DDM- and IDDM-based methods, a physical phantom experiment was performed on a GE Discovery CT750 HD scanner at Wake Forest University Health Sciences with a circular scanning trajectory. After appropriate pre-processing, we obtained a sinogram of the central slice in a typical equiangular fan-beam geometry. The radius of the scanning trajectory was 538.5 mm. Over a 360° range, 984 projections were uniformly acquired. For each projection, 888 detector cells were equiangularly distributed, which defines a field of view of 249.2 mm in radius and an iso-center spatial resolution of 584 μm. Starting from the original projection data with 888 detector cells, we combined two, four and eight detector cells into one to obtain three additional sets of projection data with 444, 222 and 111 detector cells, respectively, simulating low-resolution projections. Then we
Fig. 7. The full view of the phantom. (Colours are visible in the online version of the article; http://dx.doi.org/10.3233/XST-130405)
reconstructed images with the four sets of projection data using the AIM-, DDM- and IDDM-based methods. For the different projection models, the same parameters were used in the OS-SART algorithm. The initial image was set to zero and the size of all the subsets of OS-SART was set to 41. All the reconstructed images are 512 × 512 to cover the whole field of view (FOV), and each pixel covers an area of 973.3 × 973.3 μm². For the original projection data with 888 detector cells, the pixel size and the detector cell size are comparable. For the other three sets of projection data, the fewer the detector cells, the bigger the detector cell size. Therefore, we can use the other three sets of projections to test the capability of high-resolution reconstruction. Because the required spatial resolution is always comparable to or higher than the detector resolution in practical applications, we do not evaluate the cases in which the reconstructed image spatial resolution is lower than the detector resolution. The full view of the phantom is shown in Fig. 7. Using the aforementioned sinograms, the three models were evaluated and compared quantitatively according to the following criteria.

3.1. Computational cost

The algorithms were implemented in Visual C++ and tested on a PC platform (4.0 GB memory, 3.2 GHz CPU). With the four sets of projections, the computational cost of the AIM-, DDM- and IDDM-based methods for one iteration was recorded. The results are shown in Fig. 8. As the number of detector cells increases, the computational cost of the three methods for one iteration increases linearly. The slopes of the DDM- (blue line) and IDDM-based (green line) methods are almost the same, which indicates that their computational costs are very similar, while the slope of the AIM-based method (red line) is larger.
Specifically, in the 111-detector-cell case, the DDM-, IDDM- and AIM-based methods need ∼8.74, ∼9.86 and ∼112.74 seconds per iteration, respectively; the computational cost of the AIM-based method is thus about 13 and 12 times that of the DDM- and IDDM-based methods, respectively. For 888 detector cells, the DDM-, IDDM- and AIM-based methods need ∼38.34, ∼40.38 and ∼242.34 seconds per iteration, respectively.
Fig. 8. The computational cost of the AIM-, DDM- and IDDM-based methods for one iteration with a 512 × 512 image and different numbers of detector cells in the projection data. (Colours are visible in the online version of the article; http://dx.doi.org/10.3233/XST-130405)
Fig. 9. The image noise (HU) versus the iteration number using four sets of projections with different detector cell numbers. Upper left: 888 detector cells; Upper right: 444 detector cells; Lower left: 222 detector cells; Lower right: 111 detector cells. (Colours are visible in the online version of the article; http://dx.doi.org/10.3233/XST-130405)
3.2. Image noise

The standard deviation of the pixel values within a homogeneous region was computed to measure the image noise:

$$\sigma = \sqrt{\frac{1}{B}\sum_{j=1}^{B}\left(f_j - f_m\right)^2}, \qquad (9)$$
where B is the total number of pixels in the selected flat region and $f_m$ is the mean value over the region. The experimental results are shown in Fig. 9. For the case of 888 detector cells (upper left), the image noise of the three methods is very similar before 30 iterations, but the IDDM-based method has a slightly smaller image noise than the
Fig. 10. Reconstructed images from the projections with 888 detector cells. The first and second rows are the results after 4 (minimum noise) and 40 iterations, respectively. The horizontal profiles along the white line in Fig. 7 are plotted, where the green square regions are magnified. (Colours are visible in the online version of the article; http://dx.doi.org/10.3233/XST-130405)
DDM- and AIM-based methods. After that, the image noise begins to level off, and the DDM- and IDDM-based methods have the lowest and highest image noise, respectively. Actually, a satisfactory reconstructed image can be obtained within 20 iterations. We can also see that, as the iteration number increases, the image noise of the three methods first decreases, then increases, and finally levels off. At the beginning, the low-frequency components are reconstructed, resulting in smoother reconstructions. Then, the high-frequency components are reconstructed, and noise is reconstructed from the noise in the projection data. Therefore, after a certain number of iterations (4 iterations), the noise in the reconstructed images begins to increase and finally levels off. For the cases of 444 (upper right) and 222 (lower left) detector cells, the image noise behavior is very similar to the first case, but the image noise begins to level off later (not shown in this figure). It becomes more apparent that the IDDM-based method has a smaller image noise before leveling off. For the case of 111 detector cells, the image noise decreases to a minimum (3 iterations), increases to a local maximum, decreases again to a second minimum, and then increases. The noise pattern for 111 detector cells is thus different from the others. This may be due to three reasons. First, this case represents ultra-high spatial resolution compared to the detector element size. Second, the model errors of the DDM and IDDM are more obvious than in the other cases. Third, the reconstructed image noise arises not only from the projection noise but also from the model error. In the near future, we will perform a more systematic evaluation to analyze this phenomenon. For a qualitative comparison, the reconstructed images and the profiles along the white line in Fig. 7 are shown in Figs 10–13. For the case of 888 detector cells, as indicated in Fig. 10, the reconstructed images after 4 (minimum image noise) and 40 iterations are shown in the first and second rows, respectively. There is no visually detectable difference between the reconstructed images in the same row. However, the three images in the first row are smoother than those in the second row. The first-row images are mostly composed of low-frequency components and have smaller image noise. On the other hand, there are more high-frequency components in the second-row images, and more details of the phantom are visible. From the magnified region shown in the first row, we can barely recognize any difference, because the images reconstructed by the three methods with 4 iterations have similar image noise. From the magnified region in the second row, we can see that the magnitudes of the IDDM-based method are slightly larger than those of the AIM- and DDM-based methods, with the DDM-based method having the smallest magnitudes. However,
Fig. 11. Same as Fig. 10 but reconstructed from the projections with 444 detector cells. (Colours are visible in the online version of the article; http://dx.doi.org/10.3233/XST-130405)
Fig. 12. Same as Fig. 10 but reconstructed from the projections with 222 detector cells. (Colours are visible in the online version of the article; http://dx.doi.org/10.3233/XST-130405)
the difference is not so obvious. This is because the image noise of the three methods begins to level off at 40 iterations, and the IDDM-based method has a slightly larger image noise (after leveling off). For the high-resolution reconstructions, Figs 11 and 12 show the experimental results for 444 and 222 detector cells; the magnitudes of the three methods are similar at 4 and 40 iterations. For the case of 111 detector cells, as indicated in Fig. 13, the first row shows the reconstructed images after 3 iterations (minimum image noise). From the magnified region of the profiles in Fig. 13, the reconstructed images of the DDM- and IDDM-based methods have similar magnitudes, but the magnitude of the AIM-based reconstruction is slightly smaller in this case. This difference is more apparent than in the case of 888 detector cells. However, the second row of Fig. 13 shows that the magnitudes of the three methods are similar. From Figs 10–13, we can conclude that the image noise of the three methods is similar in terms of obtaining satisfactory reconstructed images.

3.3. Spatial resolution

Full-widths-at-half-maximum (FWHMs) were calculated along the edge in the red square indicated in
Fig. 13. Same as Fig. 10 but reconstructed from the projections with 111 detector cells and the iteration number for the first row is 3 (minimum noise). (Colours are visible in the online version of the article; http://dx.doi.org/10.3233/XST-130405)
Fig. 14. The measured spatial resolution (mm) versus the iteration number for the four sets of projections with different detector cell numbers. Upper left: 888 detector cells; upper right: 444 detector cells; lower left: 222 detector cells; lower right: 111 detector cells. (Colours are visible in the online version of the article; http://dx.doi.org/10.3233/XST-130405)
Fig. 7 to compare the spatial resolution [17]. As shown in Fig. 14, for the case of 888 detector cells (upper left), the spatial resolutions of the three methods improve as the iteration number increases and reach their minimum at about 19 iterations. After that, the spatial resolution degrades slightly, which might be caused by the noise in the projection data, and finally levels off. Overall, the spatial resolutions of the three methods are very similar. In this case, the best spatial resolutions of the DDM-, IDDM- and AIM-based methods are all achieved at 19 iterations (1.5530 mm, 1.4632 mm and 1.4985 mm, respectively).
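As a concrete illustration of how an edge-based FWHM measurement can be carried out, the sketch below differentiates a sampled edge profile to obtain a line-spread function and locates the half-maximum crossings by linear interpolation. This is a minimal sketch of the general technique, not the authors' implementation; the function name and interpolation details are our own choices.

```python
import numpy as np

def fwhm_from_edge(profile, pixel_mm=1.0):
    """Estimate the FWHM (in mm) of the line-spread function derived
    from a 1D profile sampled across a sharp edge."""
    esf = np.asarray(profile, dtype=float)
    lsf = np.abs(np.diff(esf))          # line-spread function
    peak = int(lsf.argmax())
    half = lsf[peak] / 2.0

    # left half-maximum crossing (linear interpolation between samples)
    i = peak
    while i > 0 and lsf[i - 1] >= half:
        i -= 1
    left = 0.0 if i == 0 else \
        (i - 1) + (half - lsf[i - 1]) / (lsf[i] - lsf[i - 1])

    # right half-maximum crossing
    j = peak
    while j < len(lsf) - 1 and lsf[j + 1] >= half:
        j += 1
    right = float(len(lsf) - 1) if j == len(lsf) - 1 else \
        j + (lsf[j] - half) / (lsf[j] - lsf[j + 1])

    return (right - left) * pixel_mm
```

For a Gaussian-blurred edge with standard deviation sigma (in pixels), the result should approach 2.355 sigma times the pixel size.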
Fig. 15. Magnifications of the square region indicated in Fig. 7. From left to right, the columns correspond to 888, 444, 222 and 111 detector cells, respectively. The first three rows are the magnifications for the DDM-, IDDM- and AIM-based methods after 7 iterations; the last three rows are the same as the first three but after 17 iterations.
In the high resolution reconstructions, for the case of 444 detector cells (upper right), the behavior of the spatial resolution is similar to the 888 detector cells case. The spatial resolutions of the three methods reach their minimum at about 23 iterations. Overall, before reaching the minimum, the AIM-based method performs better than the DDM- and IDDM-based methods, and the IDDM-based method performs better than the DDM-based method. In this case, the best spatial resolutions of the DDM-, IDDM- and AIM-based
Table 1
Measured spatial resolution of the DDM-, IDDM- and AIM-based methods

Cell No.   7 iterations                          17 iterations
           DDM        IDDM       AIM             DDM        IDDM       AIM
888        3.2231 mm  3.1907 mm  3.1580 mm       1.5618 mm  1.4884 mm  1.5155 mm
444        3.2017 mm  3.1256 mm  3.0770 mm       1.6385 mm  1.6314 mm  1.6169 mm
222        3.3114 mm  3.2706 mm  3.1106 mm       2.2067 mm  2.1955 mm  2.1903 mm
111        4.8435 mm  4.7889 mm  4.6442 mm       3.8904 mm  3.8229 mm  3.7630 mm
methods are 1.5574 mm (21 iterations), 1.4970 mm (25 iterations) and 1.5375 mm (23 iterations), respectively. For the cases of 222 (lower left) and 111 (lower right) detector cells, the spatial resolutions of the three methods keep improving as the iteration number increases. Overall, the AIM-based method performs better than the DDM- and IDDM-based methods, and the IDDM-based method performs better than the DDM-based method. This phenomenon is more apparent as the detector cell number decreases (e.g. 111 detector cells). Similar to the cases of 888 and 444 detector cells, the IDDM-based method obtains the best spatial resolution among the three methods at some iteration. The magnifications of the square region and the corresponding measured spatial resolutions are shown in Fig. 15 and Table 1. The spatial resolutions of the three methods improve with the increase of the iteration number (e.g. within the same column of Fig. 15 or Table 1). With the decrease of the detector cell number, the spatial resolution becomes worse for the same number of iterations (e.g. within the same row of Fig. 15 or Table 1), because the detector cell size becomes larger and results in a worse nominal spatial resolution. Specifically, for the cases of 888 and 111 detector cells, the best measured spatial resolutions among the three methods are 1.4632 mm (IDDM, 19 iterations) and 2.7511 mm (IDDM, 71 iterations), respectively. Compared with the corresponding iso-center nominal spatial resolutions of 0.6 mm and 4.7 mm, a resolution better than nominal is achieved for the larger detector cell size; in other words, higher spatial resolutions are achieved using finer image grids. From the aforementioned experimental results, we conclude that the AIM- and IDDM-based methods can reconstruct images with better spatial resolution than the DDM-based method, especially for the high resolution reconstructions (e.g. 111 detector cells).
The AIM-based method performs better than the IDDM-based method only for the high resolution reconstruction cases (e.g. 444 and/or 222 detector cells). Among all the cases, the IDDM-based method obtains the best spatial resolution at some certain iteration number. For a given projection set, the finer the image grid is, the better the reconstructed spatial resolution, and the more apparent the advantages of the AIM- and IDDM-based methods are compared to the DDM-based method. This conclusion is consistent with the result of the image noise analysis, because high spatial resolution always implies high image noise.

3.4. Structural SIMilarity (SSIM)

The SSIM index is used to quantitatively evaluate the image quality. The SSIM was designed to improve upon traditional metrics such as the peak signal-to-noise ratio (PSNR) and mean squared error (MSE), and has been shown to be consistent with human visual perception [14]. The SSIM index is a well-established metric for measuring image quality relative to a reference image. It can be viewed as a quality measure for the images being compared, provided the other image is regarded as of
Table 2
SSIM index of the DDM-, IDDM- and AIM-based methods

Cell No.   7 iterations                 17 iterations
           DDM      IDDM     AIM        DDM      IDDM     AIM
888        0.9452   0.9406   0.9451     0.9924   0.9977   0.9967
444        0.9219   0.9195   0.9226     0.9551   0.9575   0.9569
222        0.8854   0.8852   0.8868     0.8964   0.8998   0.8984
111        0.8313   0.8319   0.8325     0.8435   0.8432   0.8440
perfect quality. If the measured image is exactly the same as the reference image, the SSIM index equals 1. The best spatial resolutions of the DDM-, IDDM- and AIM-based methods are all achieved after 19 iterations from the projection data with 888 detector cells. The corresponding images of each method with the best spatial resolution and image quality are averaged to obtain a reference image. The SSIM indexes are computed for the same images listed in Table 1, and the results are given in Table 2. From Table 2 we conclude that the AIM-based method is better than the DDM- and IDDM-based methods at 7 iterations. At 17 iterations, the AIM- and IDDM-based methods outperform the DDM-based method. With the decrease of the detector cell number, the image quality becomes worse after the same number of iterations, which is consistent with Fig. 15.

4. Conclusion

In conclusion, we have numerically evaluated the AIM-, DDM- and IDDM-based methods in an OS-SART framework. The proposed improved distance-driven model (IDDM) has a computational cost as low as the DDM and a spatial resolution as high as the AIM. Because the AIM-based method requires a higher computational cost to compute the system matrix W analytically, the DDM- and IDDM-based methods are faster than the AIM-based method. While the three methods have similar image noise performance, the AIM-based method performs slightly better in terms of spatial resolution and SSIM, and the IDDM-based method can obtain the best spatial resolution at some certain iteration among the three methods. The IDDM-based method is the most promising one because it meets the requirement of higher spatial resolution with low computational cost.

Acknowledgments

This work was partially supported by the NSF CAREER Award CBET-1149679 and the NSF collaborative project DMS-1210967. This work was also partially supported by the China Scholarship Council. The authors are grateful to Dr. Bruno De Man for his constructive comments and discussions.

References

[1] P. Suetens, Fundamentals of Medical Imaging, Cambridge: Cambridge University Press, 2002.
[2] X. Tang et al., Enhancement of in-plane spatial resolution in volumetric computed tomography with focal spot wobbling - Overcoming the constraint on number of projection views per gantry rotation, Journal of X-ray Science and Technology 18(3) (2010), 251-265.
[3] S. Tang et al., CT gradient image reconstruction directly from projections, Journal of X-ray Science and Technology 19(2) (2011), 173-198.
[4] S.K. Alia and J.B. Thomas, Successive binary algebraic reconstruction technique: An algorithm for reconstruction from limited angle and limited number of projections decomposed into individual components, Journal of X-ray Science and Technology 21(1) (2013), 9-24.
[5] S. Zhao, K. Yang and X. Yang, Reconstruction from truncated projections using mixed extrapolations of exponential and quadratic functions, Journal of X-ray Science and Technology 19(2) (2011), 155-172.
[6] B. De Man and S. Basu, Distance-driven projection and backprojection in three dimensions, Phys Med Biol 49(11) (2004), 2463-2475.
[7] G.T. Herman, Image Reconstruction from Projections, Orlando: Academic, 1980.
[8] T.M. Peters, Algorithms for fast back- and re-projection in computed tomography, IEEE Transactions on Nuclear Science 28(4) (1981), 3641-3647.
[9] W. Zhuang, S. Gopal and T.J. Hebert, Numerical evaluation of methods for computing tomographic projections, IEEE Transactions on Nuclear Science 41(4) (1994), 1660-1665.
[10] G. Zeng and G. Gullberg, A ray-driven backprojector for backprojection filtering and filtered backprojection algorithms, in: IEEE Nuclear Science Symp. Medical Imaging Conf., San Francisco, 1993, pp. 1199-1201.
[11] B. De Man and S. Basu, Distance-driven projection and backprojection, in: IEEE Nuclear Science Symp. Medical Imaging Conf., Norfolk, 2002.
[12] B. De Man and S. Basu, 3D distance-driven projection and backprojection, in: Proc. 7th Int. Conf. on Fully 3D Reconstruction in Radiology and Nuclear Medicine, Saint Malo, 2003.
[13] H. Yu and G. Wang, Finite detector based projection model for high spatial resolution, Journal of X-ray Science and Technology 20(2) (2012), 229-238.
[14] A.C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging, New York: IEEE Press, 1999.
[15] G. Wang and M. Jiang, Ordered-subset simultaneous algebraic reconstruction techniques (OS-SART), Journal of X-ray Science and Technology 12(3) (2004), 169-177.
[16] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM Journal on Imaging Sciences 2(1) (2009), 183-202.
[17] F.J. Schlueter et al., Longitudinal image deblurring in spiral CT, Radiology 193(2) (1994), 413-418.
Appendix: Distance-driven kernel

The distance-driven kernel uses the length of overlap between each source and each destination interval to perform a weighted sum of the source values [6]. As illustrated in Fig. A.1, suppose that the source signal is defined by a set of sample values s_1, s_2, ..., s_M with sample locations y_1, y_2, ..., y_M, y_{M+1}, and that the destination signal is defined by a set of sample values d_1, d_2, ..., d_N with sample locations x_1, x_2, ..., x_N, x_{N+1}. To determine the destination signal values, the key step is to assign the weighting coefficients to the related sample values. According to the distance-driven kernel operation, the destination value is calculated as

d_m = (x_{n+1} - y_m) * s_n + (y_{m+1} - x_{n+1}) * s_{n+1}    (A.1)

Fig. A.1. Distance-driven kernel.
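The kernel operation generalizes to arbitrarily interleaved boundary grids: each destination value accumulates every source value weighted by the length of overlap between the two intervals. The following sketch is our own illustration of this overlap-weighted sum (the function name is hypothetical, and monotonically increasing boundary arrays are assumed), not code from the paper:

```python
import numpy as np

def dd_kernel(y, s, x):
    """Distance-driven kernel: push source values s, defined on the
    boundary grid y (len(s) + 1 points), onto the destination boundary
    grid x, weighting each contribution by the interval overlap."""
    N = len(x) - 1
    d = np.zeros(N)
    for n in range(N):
        for m in range(len(s)):
            # overlap length of [x_n, x_{n+1}] and [y_m, y_{m+1}]
            overlap = min(x[n + 1], y[m + 1]) - max(x[n], y[m])
            if overlap > 0:
                d[n] += overlap * s[m]
    return d
```

A destination interval straddling a single source boundary receives exactly two contributions, matching the two-term structure of Eq. (A.1).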
The kernel weights can also be normalized by the width of the source or destination intervals, which results in the normalized distance-driven kernel operation,

d_m = [(x_{n+1} - y_m) / (y_{m+1} - y_m)] * s_n + [(y_{m+1} - x_{n+1}) / (y_{m+1} - y_m)] * s_{n+1}    (A.2)
Denoting this operation by the system matrix W = {w_{m,n}}, we have

w_{m,n} = (x_{n+1} - y_m) / (y_{m+1} - y_m)    (A.3)
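Collecting normalized weights into a matrix makes the linearity of the operation explicit. The sketch below is our own illustration with a hypothetical function name; note that Eqs (A.2) and (A.3) normalize by the source interval width in their two-sample illustration, whereas here each row is normalized by the destination interval width, a common choice when the destination grid is the one being filled, so that rows fully covered by the source support sum to one:

```python
import numpy as np

def dd_weight_matrix(y, x):
    """Normalized distance-driven weights: W[n, m] is the overlap of
    destination interval [x_n, x_{n+1}] with source interval
    [y_m, y_{m+1}], divided by the destination interval width."""
    N, M = len(x) - 1, len(y) - 1
    W = np.zeros((N, M))
    for n in range(N):
        width = x[n + 1] - x[n]
        for m in range(M):
            overlap = min(x[n + 1], y[m + 1]) - max(x[n], y[m])
            if overlap > 0:
                W[n, m] = overlap / width
    return W
```

Applying the matrix to the source samples (W @ s) then reproduces the normalized kernel output in a single linear step.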