32nd Annual International Conference of the IEEE EMBS Buenos Aires, Argentina, August 31 - September 4, 2010

A Comparison between Compressed Sensing Algorithms in Electrical Impedance Tomography

Joubin Nasehi Tehrani, Student Member, IEEE, Craig Jin, Alistair McEwan, and André van Schaik, Senior Members, IEEE

Abstract— Electrical Impedance Tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. Conventional EIT reconstruction methods solve a linear model by minimizing the least squares error, i.e., the Euclidean or L2-norm, with regularization. Compressed sensing provides unique advantages in Magnetic Resonance Imaging (MRI) [1] when the images are transformed to a sparse basis. EIT images are generally sparser than MRI images due to their lower spatial resolution, which leads us to investigate the ability of compressed sensing algorithms currently applied to MRI in EIT without transformation to a new basis. In particular, we examine four recent iterative algorithms for L1 and L0 minimization with applications to compressed sensing and compare these with current EIT inverse L1-norm regularization methods. The four compressed sensing methods are: (1) an interior-point method for solving L1-regularized least squares problems (L1-LS); (2) total variation using an augmented Lagrangian method (TVAL3); (3) a two-step iterative shrinkage/thresholding method (TWIST) for solving the L0-regularized least squares problem; and (4) the Least Absolute Shrinkage and Selection Operator (LASSO) solved by tracing the Pareto curve, which estimates the least squares parameters subject to an L1-norm constraint. In our investigation, using 1600 elements, we found that all four CS algorithms provided an improvement over the best conventional EIT reconstruction method, Total Variation, in three important areas: robustness to noise, computational speed (at least 40× faster), and a visually apparent improvement in spatial resolution. Of the four CS algorithms, TWIST was the fastest, with at least a 100× speed increase.

I. INTRODUCTION

Electrical Impedance Tomography (EIT) is an imaging method that uses applied currents and voltage measurements around the medium of interest. The internal impedance changes are found by an image reconstruction algorithm that takes into account prior information such as electrode locations, external and internal boundaries, and contact impedance. However, even with this information the images are limited in spatial resolution due to the ill-posed nature of the problem, i.e., the large number of unknowns compared to the number of independent measurements. Most EIT reconstruction methods consider a linear model of the form:

b = Jx + n,    (1)

Manuscript received April 1, 2010. This work was supported by an Australian government Postgraduate Award (APA). J. Nasehi Tehrani, C. Jin, A. McEwan, and A. van Schaik are with the School of Electrical and Information Engineering, The University of Sydney, Australia, NSW 2006, e-mail: ([email protected]).


For EIT, x ∈ ℜᴺ is a vector of values representing the change in the conductivity within the medium, b ∈ ℜᴹ is the vector of observed voltage measurements, J ∈ ℜ^(M×N) is the Jacobian matrix relating voltage changes to conductivity changes via the reciprocity principle [2], and n is the noise in the measurements. The noise arises from multiple sources, such as RF coupling onto signal wires, electrode malfunction, and subject movement artifacts [3]. In this paper the noise is modeled as Gaussian, even though it may not be exactly Gaussian distributed; in practice, noise sources can introduce more outliers than a Gaussian model would predict. The medium is commonly modeled as a finite element mesh (FEM). There are N = 1600 impedance elements in our FEM model and M is the number of voltage measurements. In our 16-electrode system two adjacent electrodes are used for current injection, and with the remaining 14 electrodes we measure 13 voltage differences between adjacent electrodes. The current injection site sequentially changes from one electrode position to the next, so M = 16 × 13 = 208. As M < N, the problem is under-determined and ill-posed, and conventional EIT reconstruction solves a regularized least squares problem of the form

min ||Jx − b||₂² + λ||x||₂²,    (2)

where λ > 0 is the regularization parameter. Conventional EIT uses gradient descent, steepest descent, Newton, or quasi-Newton methods to minimize this L2-norm objective. These methods result in smooth EIT images. The application of non-smooth reconstruction techniques is important for medical imaging because one aims to extract discontinuous profiles [5-6]. One technique that permits image regularization without imposing smoothing is the Total Variation (TV) formulation of the least squares problem, which has been solved with a regularized L1-norm but not expressed as compressed sensing [7]. Here, we present work that investigates the ability of recent compressed sensing algorithms in EIT inverse solving. These algorithms are as follows: (1) a specialized interior-point method for solving a large-scale L1-regularized least squares problem (LSP) that uses the preconditioned conjugate gradients (PCG) algorithm to compute the search direction [8]; this method, termed L1-LS, has been shown to be useful for magnetic resonance imaging [9-10]; (2) the TV method with a recent algorithm that uses minimization by an augmented Lagrangian and an alternating search direction algorithm [11-12]; (3) a recent iterative shrinkage/thresholding [13] algorithm for the L0-norm regularized LSP (TWIST) [14]; and (4) the least absolute shrinkage and selection operator (LASSO) [15] and the related method of basis pursuit denoising (BPDN) [16]. For optimizing the LASSO solution we use the Pareto curve, which defines the optimal trade-off between the two-norm of the residual and the one-norm of the solution.
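To make the notation concrete, the following Matlab sketch sets up the dimensions of the linear model (1) as used in this paper; the random matrix J and the sparsity pattern of x_true are illustrative assumptions, not the FEM-derived Jacobian or a physiological conductivity change.

M = 208; N = 1600;              % number of measurements and of mesh elements
J = randn(M, N);                % stand-in sensitivity (Jacobian) matrix, not the FEM one
x_true = zeros(N, 1);           % conductivity-change vector, assumed sparse
x_true(randperm(N, 20)) = 1;    % e.g. 20 perturbed elements
b = J * x_true;                 % noise-free measurements; noise is added in Section III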

II. COMPRESSED SENSING ALGORITHMS

Compressed sensing is usually implemented with random sampling to ensure a second condition of incoherence. However, this would require a significant hardware change in EIT. Recent work has shown that compressed sensing still offers advantages when these conditions are not strictly met [17]. Compressed sensing (CS) focuses on the role of sparsity in reducing the number of measurements needed to represent a finite-dimensional vector x ∈ ℜᴺ:

min ||x||₀   subject to   b = Jx,    (3)

where || · ||₀ is the L0 semi-norm, which counts the number of nonzero elements of the vector. This idea can be generalized to the case in which x is sparse under a given basis P, so that there is a sparse vector s such that the reconstructed signal is x = Ps [18]. When the maximum number of nonzero elements in s is known to equal k, equation (3) can be written as:

min ||b − JPs||₂²   subject to   ||s||₀ ≤ k.    (4)

The goal of CS algorithms is to minimize the L0-norm, or equivalently to maximize the number of zero coefficients. However, this is an NP-hard, computationally intractable problem for large datasets, so the L1-norm, whose minimization is tractable, is often used instead and produces comparable results.
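As a small illustration of the two sparsity measures involved (the vector s below is an arbitrary example, not EIT data):

s  = [0; 0; 3; 0; -1; 0];   % arbitrary sparse example vector
L0 = nnz(s);                % = 2, the number of non-zero coefficients (L0 semi-norm)
L1 = norm(s, 1);            % = 4, the sum of absolute values (convex surrogate of L0)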

In this section we briefly discuss four recent iterative regularization algorithms used in compressed sensing. The Matlab function calls used to simulate these algorithms are listed in Appendix A.

A. Newton Interior-Point Method (L1-LS)

The L1-regularized least squares objective, ||Jx − b||₂² + λ||x||₁, which replaces the L0 term in (3) with its convex surrogate, is convex but not differentiable, making it difficult to solve directly. However, it can be transformed into a convex quadratic problem with linear inequality constraints:

min ||Jx − b||₂² + λ Σᵢ uᵢ   subject to   −uᵢ ≤ xᵢ ≤ uᵢ, i = 1, …, n,    (5)

where the new variable u ∈ ℜⁿ provides constraints on x. This equivalent quadratic program (QP) can be solved by standard convex optimization methods such as interior-point methods. The search direction for the minimization is computed as the exact solution of the Newton system using PCG [8-9].
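A quick numerical check of this equivalence, on an arbitrary small random problem rather than our EIT system: for any x, choosing uᵢ = |xᵢ| is feasible for (5) and makes the QP objective equal to the L1-regularized objective.

J = randn(20, 50);  b = randn(20, 1);           % arbitrary toy problem, not our EIT system
x = randn(50, 1);   lambda = 1e-3;
u = abs(x);                                     % feasible choice: -u <= x <= u
f_l1 = norm(J*x - b)^2 + lambda*norm(x, 1);     % L1-regularized least squares objective
f_qp = norm(J*x - b)^2 + lambda*sum(u);         % QP objective of (5) at u = |x|
disp(f_l1 - f_qp)                               % prints (numerically) zero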

B. Augmented Lagrangian and Alternating Direction Algorithm (TVAL3)

This algorithm is based on the isotropic TV model:

min (μ/2)||Jx − b||₂² + Σᵢ ||Dᵢx||₁,    (6)

where Dᵢx is the discrete gradient vector of x at position i and μ is the penalty parameter for the TV model. This problem is equivalent to:

min Σᵢ ||wᵢ||₁   subject to   Jx = b and Dᵢx = wᵢ for all i.    (7)

The problem as stated above can be reformulated as an augmented Lagrangian problem:

min Σᵢ ( ||wᵢ||₁ − vᵢᵀ(Dᵢx − wᵢ) + (β/2)||Dᵢx − wᵢ||₂² ) − λᵀ(Jx − b) + (μ/2)||Jx − b||₂².    (8)

In equation (8), β is the secondary penalty parameter, and λ and vᵢ are the multipliers of the augmented Lagrangian problem, which are updated in each iteration [11-12].
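For intuition, the sketch below performs a few alternating-direction iterations on (8) in a simplified, anisotropic form: D is a stacked 1-D finite-difference matrix standing in for the Dᵢ, the x-subproblem is approximated by a single gradient step, and the problem sizes and parameter values are assumed. The actual TVAL3 code uses a more elaborate nonmonotone line search, so this is only a schematic of the update structure.

M = 30; N = 100;                                    % toy sizes, not our FEM model
J = randn(M, N);  b = randn(M, 1);
D = diff(eye(N));                                   % 1-D finite differences, a stand-in for the D_i
beta = 2^3; mu = 2^10;                              % assumed penalty parameters
tau = 1/(mu*norm(J)^2 + beta*norm(D)^2);            % conservative gradient step size
x = zeros(N, 1);  w = zeros(N-1, 1);
v = zeros(N-1, 1);  lam = zeros(M, 1);              % multipliers v_i and lambda of (8)
shrink = @(z, t) sign(z) .* max(abs(z) - t, 0);     % soft-thresholding operator

for it = 1:100
    w   = shrink(D*x - v/beta, 1/beta);                              % w-subproblem of (8)
    g   = beta*D'*(D*x - w - v/beta) + mu*J'*(J*x - b - lam/mu);     % gradient of the smooth terms
    x   = x - tau*g;                                                 % one descent step in x
    v   = v   - beta*(D*x - w);                                      % multiplier updates
    lam = lam - mu  *(J*x - b);
end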

C. Two-Step Iterative Shrinkage/Thresholding (TWIST)

The two-step iterative shrinkage/thresholding (TWIST) algorithm has been proposed to handle a class of convex unconstrained optimization problems for applications where the observation operator is ill-posed or ill-conditioned. It has recently been used in EIT with the L2-norm [19]. Unlike the previous two methods, however, TWIST can directly solve the L0-regularized LSP, which can be expressed as:

min (1/2)||Jx − b||₂² + λ||x||₀,    (9)

where λ is the regularization parameter and ||x||₀ is the L0-norm of the conductivity changes [14].
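The building block of this family of methods is a gradient (Landweber) step followed by a thresholding step; for the L0 penalty in (9) the thresholding is a hard threshold, which is the denoising function passed to the TWIST toolbox in Appendix A. The one-step update below is only a schematic of that building block on an assumed toy problem (it requires J to be scaled so that ||J||₂ ≤ 1; TWIST accelerates this basic step with a two-step recursion over the previous two iterates).

J = randn(20, 50);  J = J / norm(J);     % toy operator scaled so that ||J||_2 <= 1
b = randn(20, 1);   x = zeros(50, 1);    % current iterate
lambda = 0.05;                           % assumed regularization parameter
z     = x + J' * (b - J*x);              % gradient (Landweber) step
th    = sqrt(2*lambda);                  % hard-threshold level induced by the L0 penalty
x_new = z .* (abs(z) > th);              % hard thresholding: keep only the large coefficients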

D. LASSO and BPDN Algorithms

LASSO seeks a solution of an under-determined least squares problem whose L1-norm is bounded, which can be formulated as:

min ||Jx − b||₂²   subject to   ||x||₁ ≤ τ.    (10)

The related algorithm of basis pursuit denoising (BPDN) swaps the objective function and the constraint to give:

min ||x||₁   subject to   ||Jx − b||₂² ≤ σ.    (11)

The positive parameters σ and τ are estimates of the noise level and of ||x||₁, respectively. We used a root-finding algorithm to find arbitrary points on the Pareto curve [20]. This curve traces the optimal trade-off between ||Jx − b||₂ and ||x||₁ for a specific pair J and b in equation (1).
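In addition to the Pareto-curve sampling listed in Appendix A, the SPGL1 package provides a BPDN driver, spg_bpdn, that performs this root-finding for a given noise-level estimate; a minimal call is sketched below with an assumed value of σ (note that SPGL1 states the constraint of (11) on ||Jx − b||₂ rather than on its square).

sigma = 0.01 * norm(b);                          % assumed estimate of the residual (noise) level
opts  = spgSetParms('verbosity', 0);             % J and b as in equation (1)
[x, r, g, info] = spg_bpdn(J, b, sigma, opts);   % Newton root-finding on the Pareto curve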


E. Total Variation in EIDORS

The Electrical Impedance Tomography and Diffuse Optical Tomography Reconstruction Software (EIDORS) is a linear finite element solver written in Matlab [21]. In EIDORS the conventional TV algorithm is described as:

min ||Jx − b||₂² + α² Σᵢ ||Dᵢx||₁,    (12)

where α is the hyper-parameter used to control the regularization. EIDORS provides two solvers for TV regularization: the "aa_inv_total_var" function, which uses the inverse covariance of the measurements to set the hyper-parameter, and the "ab_tv_lagged_diff" function, which uses the lagged diffusivity method and the primal-dual interior-point method [7], [21].
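For reference, the calling pattern for these built-in solvers follows the usual EIDORS structure sketched below; the model string, the data variables vh/vi, and the field names are assumptions based on the EIDORS release we used and may differ in other versions, so this should be read as an illustrative sketch rather than canonical EIDORS usage.

imdl = mk_common_model('e2t2', 16);        % assumed 2-D thorax inverse model, 16 electrodes
imdl.solve = 'ab_tv_lagged_diff';          % TV solver (lagged diffusivity + primal-dual)
imdl.R_prior = 'ab_calc_tv_prior';         % TV prior
imdl.hyperparameter.value = 0.001;         % e.g. hp = 0.001 for the -100 dB case
img = inv_solve(imdl, vh, vi);             % vh/vi: homogeneous / perturbed measurement sets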

III. MATERIALS AND METHODS

The electric field was calculated at each node in the mesh using EIDORS. We modified EIDORS to use the CS solvers of the algorithms introduced in Section II, in addition to the EIDORS TV solvers. A two-dimensional FEM of the thorax was created by the EIDORS function "mk_common_model" (Figure 1).

Fig. 1. Left: forward solution mesh (f2t2), 2304 elements. Right: inverse solution mesh (e2t2), 1600 elements. The small circles on the perimeters show ideal electrode positions. These are not identical in the two meshes due to the different resolutions.

For evaluation, we simulated the lungs and the heart (Figure 2) and used the CS algorithms described in Section II, which are freely available as Matlab toolboxes (L1-LS [10], TVAL3 [12], TWIST [14], SPGL1 [20]). We used a simulated applied current of 1 mA at 50 kHz, an often-used and safe current level for EIT at this frequency. Conductivity values for the lungs, heart, and other thorax tissue were modeled at 50 kHz [22], a desirable imaging frequency for fast frame rates, low contact impedance, and high signal-to-noise ratio of the acquired signals [4]. Three levels of Gaussian noise (-100 dB, -60 dB, and -40 dB) were simulated, and computation times were recorded.
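The noise levels above are specified in dB relative to the signal; a minimal sketch of how such noise can be added to the simulated boundary voltages is given below (variable names are illustrative, not the exact code we used, and b_clean denotes the noise-free measurement vector from the forward model).

level_dB  = -60;                                                     % -100, -60 or -40 dB
noise_std = norm(b_clean)/sqrt(numel(b_clean)) * 10^(level_dB/20);   % per-sample noise std
b_noisy   = b_clean + noise_std * randn(size(b_clean));              % noisy measurement vector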

IV. RESULTS

In Figure 2, simulation results are shown for two noise levels for each algorithm. Rows 2 and 3 show results generated with the built-in EIDORS algorithms: the solver "aa_inv_total_var" with the prior "laplace_image_prior", and the solver "ab_tv_lagged_diff" with the prior "ab_calc_tv_prior". Rows four to seven show the simulation results for the CS algorithms. The EIDORS functions "aa_inv_total_var" and "ab_tv_lagged_diff" (Figure 2) produce a clear image at the -100 dB noise level, but show some loss in spatial resolution when the noise is increased to -60 dB. The hyper-parameter value for these algorithms needed to be adapted to the noise level (hp = 0.001 for -100 dB and hp = 0.1 for -40 dB) to obtain reasonable results. The new iterative algorithms are more robust to noise, with hardly any change noticeable as the noise increases (Figure 2). Additionally, the initial settings of the CS algorithms did not need to be adjusted when the noise level increased. Figure 2 shows that the region of the heart is slightly better defined with TVAL3 than with L1-LS and SPGL1. The TWIST algorithm, in row 6 of Figure 2, with L0-norm regularization shows similar spatial resolution but with more pronounced shrinking of the regions of the lungs and the heart.

Fig. 2. Comparison between the different algorithms. The noise level is -60 dB in the left column and -100 dB in the right column.

Figure 3 shows the results for the four CS algorithms when the noise level is increased to -40 dB. All the CS algorithms showed separated lungs, whereas the built-in EIDORS algorithms fused the lungs. The edges of the objects are more clearly defined with the CS algorithms. TWIST was found to converge much faster than the other CS methods: for this algorithm the calculation time on a standard PC for 1600 components and 234 iterations was just 0.3594 s, while the other CS algorithms computed in about 1 s. The EIDORS algorithms needed about 40 s to solve for the same number of components.

Fig. 3. Results with -40 dB noise added. Top left: TVAL3; top right: L1-LS; bottom left: TWIST; bottom right: SPGL1.

V. CONCLUSION

The CS algorithms improve image reconstruction compared with the previous TV algorithms, with improved spatial resolution, edge detection, and calculation time. With the conventional methods the hyper-parameter needs to be optimized with respect to the noise level, while the CS methods did not need such optimization, and they also showed better spatial resolution. One of the main improvements is a decrease in calculation time by at least a factor of 40. This speed-up is useful for online medical imaging and monitoring.

APPENDIX A. MATLAB FUNCTION CALLS

The function call of the Matlab package L1-LS [10] and the parameter values that we used for our simulations in this paper were as follows:

lambda = 1e-3;  rel_tol = 1e-3;          % regularization parameter and relative tolerance
[x, status] = l1_ls(J, b, lambda, rel_tol);

The function call of the Matlab package TVAL3 [12] and the parameter values that we used were as follows:

mu = 2^20;  beta = 2^3;                  % primary and secondary penalty parameters
[x, out] = TVAL3(J, b, num_elements, dimension, opts);   % mu and beta are supplied via the options argument opts

The function call of the Matlab package TWIST [14] and the parameter values that we used were as follows:

lambda = 0.5e-10;                        % regularization parameter
hR  = @(x) J*x;   hRt = @(x) J'*x;       % linear operator handles
Psi = @(x, th) hard(x, th);              % denoising function (hard threshold)
Phi = @(x) l0norm(x);                    % regularizer
tolA = 1e-3;                             % stopping threshold
[x_twist, x_debias_twist, obj_twist, times_twist, debias_start_twist, mse] = ...
    TwIST(b, hR, lambda, 'Psi', Psi, 'Phi', Phi, 'AT', hRt, ...
          'Initialization', 0, 'Monotone', 1, ...
          'StopCriterion', 1, 'ToleranceA', tolA, 'Verbose', 1);

The function call of the Matlab package SPGL1 [20] and the parameter values that we used were as follows:

x = zeros(1600, 1);  x0 = J\b;
tau = linspace(0, norm(x0, 1), 100);     % sample the Pareto frontier at 100 points
phi = zeros(size(tau));
opts = spgSetParms('iterations', 1000, 'verbosity', 0);
for i = 1:length(tau)
    [x, r] = spgl1(J, b, tau(i), [], x, opts);   % r is the residual r = b - J*x
    phi(i) = norm(r);
    if ~mod(i, 10), fprintf('...%i', i); end
end

REFERENCES

[1] M. Lustig, D. Donoho, and J. M. Pauly, "Sparse MRI: The application of compressed sensing for rapid MR imaging," Magnetic Resonance in Medicine, vol. 58, no. 6, p. 1182, 2007.
[2] B. Brandstatter et al., "Jacobian calculation for electrical impedance tomography based on the reciprocity principle," IEEE Transactions on Magnetics, vol. 39, pp. 1309-1312, 2003.
[3] A. McEwan, G. Cusick, and D. S. Holder, "A review of errors in multi-frequency EIT instrumentation," Physiol. Meas., vol. 28, pp. 197-215, 2007.
[4] M. Vauhkonen and D. Vadasz, "Tikhonov regularization and prior information in electrical impedance tomography," IEEE Trans. Med. Imaging, vol. 17, no. 2, pp. 285-293, 1998.
[5] L. Horesh, M. Schweiger, S. R. Arridge, and D. S. Holder, "Large-scale non-linear 3D reconstruction algorithms for electrical impedance tomography of the human head," IFMBE, pp. 3862-3865, 2006.
[6] T. Kriz and J. Dedkova, "A new algorithm for the electrical impedance tomography inverse problem," Progress in Electromagnetics Research Symposium, Beijing, China, pp. 127-131, Mar. 2009.
[7] A. Borsic, B. M. Graham, A. Adler, and W. R. B. Lionheart, "In vivo impedance imaging with total variation regularization," IEEE Trans. Med. Imaging, vol. 29, no. 1, pp. 44-54, Jan. 2010.
[8] C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, Frontiers in Applied Mathematics, vol. 16, SIAM, 1995.
[9] S. Kim, K. Koh, M. Lustig, and S. Boyd, "An interior-point method for large-scale L1-regularized least squares," IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 606-617, 2007.
[10] K. Koh, l1_ls [online software], 2008. Available: http://www.stanford.edu/~boyd/l1_ls/
[11] C. Li and W. Yin, "TV minimization by augmented Lagrangian and alternating direction algorithms," user guide, 2009.
[12] C. Li and W. Yin, TVAL3 [online software], 2009. Available: http://www.caam.rice.edu/~optimization/L1/TVAL3/
[13] R. Nowak and M. Figueiredo, "Fast wavelet-based image deconvolution using the EM algorithm," in Proc. 35th Asilomar Conf. on Signals, Systems, and Computers, vol. 1, pp. 371-375, 2001.
[14] J. Bioucas-Dias and M. Figueiredo, "A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Transactions on Image Processing, vol. 16, pp. 2992-3004, 2007.
[15] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, "Least angle regression," Annals of Statistics, vol. 32, no. 2, pp. 407-499, 2004.
[16] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33-61, 1999.
[17] E. J. Candès, Y. Eldar, and D. Needell, "Compressed sensing with coherent and redundant dictionaries," CoRR, abs/1005.2613, 2010.
[18] E. J. Candès and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, pp. 21-30, 2008.
[19] F. Yang and R. P. Patterson, "A novel impedance-based tomography approach for stenotic plaque detection: A simulation study," Elsevier, vol. 25, pp. 147-150, 2009.
[20] E. van den Berg and M. P. Friedlander, "Probing the Pareto frontier for basis pursuit solutions," SIAM J. Sci. Comput., vol. 31, no. 2, pp. 890-912, 2008.
[21] A. Adler and W. R. B. Lionheart, "Uses and abuses of EIDORS: an extensible software base for EIT," Physiol. Meas., vol. 27, pp. 25-42, 2006.
[22] Institute for Applied Physics, Italian National Research Council, "Dielectric properties of body tissues," Jun. 2007 [Online]. Available: http://niremf.ifac.cnr.it/tissprop
