Feature Enhancement in Low Quality Images with Application to Echocardiography

Djamal Boukerroui, J. Alison Noble and J. Michael Brady

Medical Vision Laboratory, Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, UK. {djamal,noble,jmb}@robots.ox.ac.uk

Abstract. In this paper we propose a novel feature enhancement approach for improving the quality of noisy images. Our approach is based on a phase-based feature detection algorithm, followed by sparse surface interpolation and subsequent non-linear post-processing. We first exploit the intensity-invariant property of phase-based acoustic feature detection to select a set of relevant image features in the data. Then, an approximation to the low frequency components of the sparse set of selected features is obtained using a fast surface interpolation algorithm. Finally, a non-linear post-processing step is applied. Results of applying the method to echocardiographic sequences (2D+T) are presented. We show that our correction is consistent over time and does not introduce any artefacts. An evaluation protocol is proposed for echocardiographic data and quantitative results are presented.

1 Introduction

The first step toward automatic analysis or evaluation of images frequently consists of feature detection and segmentation. In the field of medical image analysis, robustness, accuracy and reproducibility are critical. Traditionally, in early (low-level) processing and analysis methods, the data is used directly, without feature enhancement or bias field correction. The most important pre-processing step investigated and applied is denoising, which improves the signal-to-noise ratio (SNR). However, denoising is only a partial solution, as it cannot remove from the image the artefacts introduced by the imaging system. Indeed, such degradation often depends on the imaged object, which ultimately leads to non-homogeneous regions in the image. Intensity inhomogeneities are a well-studied problem in the image analysis of magnetic resonance imagery (MRI) and ultrasound B-scan images. Recently, following the publication of the work by Wells et al. [1] on bias field correction of MR data, several authors have investigated this problem and several approaches have been proposed for MR images [2]. While bias field correction is often necessary for good segmentation, many approaches have exploited the idea that a good segmentation also helps estimation of the bias field [3-6]. By contrast, intensity inhomogeneity correction for ultrasound images has received little attention, possibly because of the high noise level of B-mode images. Some recent intensity-based adaptive segmentation approaches, which intrinsically take into account the non-uniformity of the tissue classes, have yielded promising results [7-9]. More recently, a novel technique for finding acoustic boundaries in 2D and 2D+T echogram sequences has been proposed [10, 11]. The most important advantage of this technique is its intensity independence. However, as the noise rejection in this method involves an intensity-based noise threshold, the method is not truly intensity invariant and remains susceptible to noise. The 2D+T version of the published technique takes advantage of temporal continuity to improve its robustness to noise and to detect only relevant features that are continuous over time. The authors found that spatio-temporal estimation is insufficient for low frame-rate sequences and that there are a number of localisation problems caused by the non-uniformity of wall velocity during the cardiac cycle [11]. This underlines the need for a feature enhancement approach that corrects the image. To our knowledge, the first attempt to adapt bias field correction to B-scan ultrasound data is the adaptation of the Wells et al. method proposed in [6]. Results shown for breast and cardiac ultrasound images demonstrate that it can successfully remove intensity inhomogeneities, with significant improvement in tissue contrast and in the resulting image segmentation. The approach is promising; however, it still requires user interaction to set the image model parameters.

In this paper we propose a novel approach to feature enhancement. Our approach is based on a phase-based feature detection algorithm, followed by sparse surface interpolation and subsequent non-linear post-processing. We first exploit the intensity-invariant property of phase-based acoustic feature detection to select the relevant features in the data. Then, an approximation to the low frequency components of the sparse set of selected features is obtained using a surface interpolation algorithm. Finally, a non-linear post-processing step with one control parameter is applied. The paper is organised as follows. Section 2 gives an overview of the algorithm. Section 3 briefly describes the mathematical framework underlying the sparse surface interpolation algorithm [12]. The Feature Asymmetry (FA) measure for 2D acoustic boundary detection [11] is reviewed in Section 4. The non-linear post-processing stage is presented in Section 5. The proposed evaluation protocol in the special case of echocardiographic data, as well as quantitative results using the approach, are presented and discussed in Section 6. The paper's conclusions are summarised in Section 7.

2 Overview of Algorithm

A block diagram of our new feature enhancement method is shown in Fig. 1. First, features are detected in the image I(x,y) (described in Section 4). This provides a normalised likelihood image FA2D(x,y), where the intensity value at a position (x,y) is proportional to the significance of the detected features. The FA2D(x,y) measure varies from a maximum of 1 (indicating a very significant feature) down to 0 (indicating no significance). The feature detector that we use is based on phase congruency (PC) [13, 14], since it provides a single unified theory that is capable of detecting a wide range of features, rather than being specialised for a single feature type such as intensity steps.

[Figure: Original Data → Feature Detection → Fast Surface Interpolation → Non-linear Post-Processing → Corrected Data]

Fig. 1. Block diagram of the proposed feature enhancement method.

Inspection of cardiac B-mode ultrasound images reveals features of a number of quite distinctive kinds, for example the intensity ridge corresponding to the myocardial wall and the mitral valve. A further advantage of PC is that it is invariant to brightness and contrast; hence it is, in principle, robust against typical variations in image formation. The disadvantage of PC is a direct result of its contrast invariance, namely its high sensitivity to noise. The poor SNR of B-mode ultrasound, including heavy speckle, means that this problem particularly has to be addressed when applying PC to cardiac ultrasound. Following feature detection, the sparse data at the feature locations are interpolated by fast sparse surface interpolation (Section 3), using the likelihoods as confidence weights, to estimate the degradation field; a novel non-linear processing method using the degradation field (described in Section 5) is then applied to the original data to enhance or de-emphasise feature values. The enhanced image is evaluated on simulated data and then on a range of cardiac ultrasound sequences.
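For orientation, the pipeline of Fig. 1 can be written as the following skeleton. This is a minimal sketch, not the authors' implementation: the stage functions are the illustrative ones sketched in Sections 3-5 below, and the default parameter values are assumptions.

```python
def enhance(image, k=2.0, gamma=0.1, window=5):
    """Sketch of the feature enhancement pipeline of Fig. 1.

    k      -- noise threshold factor of the FA detector, eq. (11)
    window -- side of the window B used to sample intensities, eq. (12)
    gamma  -- control parameter of the non-linear correction, eq. (13)
    """
    fa = feature_asymmetry(image, k)           # Section 4: FA_2D in [0, 1]
    v, w = sparse_samples(image, fa, window)   # Section 5: inputs of eq. (12)
    v_star = interpolate_surface(v, w)         # Section 3: degradation field
    return correct(image, v_star, gamma)       # Section 5: correction, eq. (13)
```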

3 2D Sparse Surface Interpolation

In this section we review the method we employ for fast sparse surface interpolation. It is based on [12], to which the reader is referred for further details. Surface interpolation from a sparse set of noisy measured points is an ill-posed problem, since an infinite set of surfaces can satisfy any given set of constraints. Hence, a regularisation procedure, taking into account visual relevance and computational efficiency, is usually applied, so that the interpolation problem becomes the minimisation of an energy functional of the form:

$U(f) = U_d(f, d) + \lambda U_r(f).$   (1)

The first term is a measure of faithfulness to the measured data and is called the "cost or constraint functional". The second term is the regularisation functional; λ is a non-negative (Lagrange multiplier) parameter controlling the degree to which the data is to be considered (piecewise) smooth. A commonly used cost functional is the sum of squares:

$U_d(f, d) = \sum_i w_i \big( f(x_i, y_i) - d_i \big)^2,$   (2)

which measures the difference between the measured field $d = \{(x_i, y_i, d_i)\}$ and the approximating surface $f(x_i, y_i)$; here $w = \{0 \le w_i \le 1\}$ is the corresponding set of weights for the measured field, reflecting the confidence in the measured information at each position ($w_i = 0$ means the absence of information at $(x_i, y_i)$). Regarding the regularisation term, a common approach has been to use a variational functional to constrain the solution, often expressed as a thin-plate energy, otherwise known as the quadratic variation [15]:

$U_r(f) = \iint \left[ \left( \frac{\partial^2 f}{\partial x^2} \right)^2 + 2 \left( \frac{\partial^2 f}{\partial x \partial y} \right)^2 + \left( \frac{\partial^2 f}{\partial y^2} \right)^2 \right] dx\,dy.$   (3)

In general, obtaining an analytic solution of the Euler-Lagrange equations resulting from the above optimisation problem is difficult. Therefore, an approximation of the continuous problem using discrete operators is used, which leads to a numerical solution. Suppose that the data d is defined on a regular rectangular lattice $G = \{(x_i, y_j),\ 1 \le i, j \le N\}$, and that a discrete representation of the surface is defined using a set of nodal variables $v = \{v_{i,j} = f(x_i, y_j)\}$. The discrete representation of the cost functional (2) is:

$U_d(v, d) = \sum_{i,j} w_{i,j} \big( v_{i,j} - d_{i,j} \big)^2.$   (4)

By concatenating all the nodal variables $v_{i,j}$ and the data $d_{i,j}$ respectively into column vectors v and d, we obtain the usual matrix representation of equation (4):

$U_d(v, d) = (v - d)^T A_w (v - d), \qquad A_w = \mathrm{diag}(\{w_{i,j}\}).$   (5)

Regarding the regularisation term, the finite element method provides a continuous surface approximation, which is a good means of converting the continuous expression for the energy and leads to a tractable discrete problem with a numerical solution. The discrete form of the thin plate is given by:

$U_r(v) = \sum_{i,j} \Big[ (v_{i+1,j} - 2v_{i,j} + v_{i-1,j})^2 + 2\,(v_{i+1,j+1} - v_{i,j+1} - v_{i+1,j} + v_{i,j})^2 + (v_{i,j+1} - 2v_{i,j} + v_{i,j-1})^2 \Big].$   (6)

The energy (6) can be reorganised as follows:

$U_r(v) \propto \sum_{i,j} \sum_{m,n} v_{i,j}\, a_{i,j,m,n}\, v_{m,n},$   (7)

where the coefficients $a_{i,j,m,n}$ describe the relations between the nodal variables $v_{i,j}$ and $v_{m,n}$ and are called relation coefficients. Hence, the discrete version of the continuous regularisation term given by equation (3) is:

$U_r(v) \propto v^T (\lambda A_r)\, v.$   (8)

The matrix $A_r$, built from the coefficients $a_{i,j,m,n}$, is an $N^2 \times N^2$ sparse matrix containing at most 9 non-zero elements per row. Finally, by adding equations (5) and (8), we obtain the corresponding discrete version of the functional (1):

$U(v) = v^T A v - 2 v^T b + c, \qquad \text{where}\quad A = \lambda A_r + A_w \quad\text{and}\quad b = A_w d,$   (9)

and c is a constant. The resulting energy function has a minimum at $v = v^*$, the solution of the linear system $A v = b$, with a very sparse system matrix A.

3.1 Resolution in the Wavelet Space

The surface interpolation problem described in the last section leads to the solution of a large linear system with a sparse system matrix A. The equation system is nearly singular, and simple iterative methods therefore converge poorly. To obtain fast surface interpolation, an adequate scheme is needed which can improve the numerical conditioning. Multigrid [16] and hierarchical basis techniques [17] have been applied successfully to speed up the convergence. Both of these techniques use a multiresolution approach; however, neither effectively exploits the frequency domain properties of the interpolation problem. Recently, a more tractable approach, in terms of simplicity and efficiency, has been proposed [12]. The approach uses the concept of preconditioning in a wavelet transform space. Wavelet theory is a powerful tool for simultaneously filtering data in space and scale and is easily implemented using filter banks [18]. Yaou and Chang use a vector space decomposition-reconstruction of the wavelet transform to analyse the multiresolution representation of the surface interpolation problem. They exploit the fact that the high-frequency component of the interpolation problem usually converges much faster than the low-frequency component. The use of the Discrete Wavelet Transform (DWT) allows the low-frequency and high-frequency components of the interpolation problem to be solved separately. In other words, the minimisation is carried out in a wavelet space using an asynchronous iterative computation, with a biorthogonal spline wavelet basis for the preconditioning step [12]. The DWT preconditioning transforms the equation system into an equivalent one with new nodal variables ṽ and a new system matrix Ã, which is much denser than the original A. This implies that a more global connection between the interpolation nodes is made, which considerably improves the convergence rate of the iterative solution. Refer to [12] for more details about fast surface interpolation using the multiresolution wavelet transform, and to [18-20] regarding the biorthogonal wavelet basis and the associated filter banks.
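As a concrete illustration of the preconditioning idea (though not of the asynchronous iteration of [12] itself), the sketch below builds a rough diagonal preconditioner in a biorthogonal spline wavelet basis with PyWavelets and plugs it into conjugate gradients. The subband scalings are ad hoc assumptions estimating the frequency behaviour of the inverse of the operator in eq. (9), and the grid side is assumed to be a multiple of 2**levels so that the transform round-trips exactly.

```python
import numpy as np
import pywt
import scipy.sparse.linalg as spla

def wavelet_preconditioner(shape, wavelet="bior2.2", levels=3, lam=1.0):
    """Rough diagonal preconditioner in a biorthogonal wavelet basis.

    The thin-plate operator grows like |omega|^4 with frequency, so its
    inverse amplifies low frequencies relative to high ones: each subband
    of the residual is scaled by 1 / (lam * f^4 + 1), with f an estimate
    of the subband's centre frequency.
    """
    def apply(r):
        coeffs = pywt.wavedec2(np.asarray(r).reshape(shape), wavelet, level=levels)
        f0 = 2.0 ** (-levels)                    # coarsest band frequency
        out = [coeffs[0] / (lam * f0 ** 4 + 1.0)]
        for j, (ch, cv, cd) in enumerate(coeffs[1:]):
            f = 2.0 ** (j - levels)              # finer bands: f doubles per level
            s = 1.0 / (lam * f ** 4 + 1.0)
            out.append((ch * s, cv * s, cd * s))
        return pywt.waverec2(out, wavelet).ravel()
    n = shape[0] * shape[1]
    return spla.LinearOperator((n, n), matvec=apply)

# e.g. for the system of eq. (9), with A and b built as in Section 3
# (with a biorthogonal, non-orthogonal basis this M is only approximately
# symmetric, so this usage is illustrative):
#   v, info = spla.cg(A, b, M=wavelet_preconditioner(d.shape))
```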

4 Phase-based Feature Detection

Phase-based feature detection has been investigated extensively following the publication of the Local Energy Model of feature detection [13]. This model postulates that features can be defined and classified using their phase signatures (or their phase congruency (PC)). These observations have led to the development of a number of phase-based feature detection algorithms ([10, 11, 14] and references therein). In particular, measures based on phase information seem to be more appropriate for acoustic feature detection, as ultrasound images are characterised by a low SNR, due to the presence of speckle, and by a wide range of imaging artefacts that alter the intensity magnitude of equally significant features in the data. Strictly speaking, the concept of PC is only defined in one dimension and its definition involves the Hilbert transform. Typically, the computation of PC (and the related concept of local energy) uses a pair of quadrature filters [21], normally log-Gabor filters. A series of orientable 2D filters can be constructed by 'spreading' a log-Gabor function into 2D. In this way, an extension to two dimensions of the 1D phase measure is obtained [14]. In our work we have used the 2D Feature Asymmetry (FA) measure of [10, 11] for feature detection. This measure provides good detection of asymmetric image features such as step edges and has the advantage of being intensity invariant. The 2D FA measure is defined by:

$FA_{2D}(x, y) = \sum_m \frac{\left\lfloor\, |o_m(x, y)| - |e_m(x, y)| - T_m \,\right\rfloor}{\sqrt{o_m^2(x, y) + e_m^2(x, y) + \varepsilon}},$   (10)

which is a summation over m orientations of a normalised measure of the difference between the odd $o_m(x,y)$ and the even $e_m(x,y)$ filter responses. Here $\lfloor \cdot \rfloor$ denotes zeroing of negative values, ε is a small positive number to avoid division by zero, and $T_m$ is an orientation-dependent noise threshold, defined by [11]:

$T_m = k \cdot \mathrm{std}\big[\, |o_m(x, y)| - |e_m(x, y)| \,\big],$   (11)

where k is a positive factor controlling the noise threshold. For more details about the implementation and the parameter settings of the log-Gabor filters and the spreading functions, see [10, 11, 14]. The result of the FA measure is a normalised likelihood image in [0, 1], where the intensity values can be interpreted as a confidence measure for feature detection.

5 The New Feature Enhancement Algorithm

Having reviewed the two principal algorithms used in our new feature enhancement method, we now address how they are combined. Specifically, our method involves reconstructing an approximation of the intensity inhomogeneities (a "bias field"), which can then be removed from the original corrupted data. Starting from the principle that the intensity of equally relevant features in the data should be the same, an estimate of the low frequency components of an intensity data field can be built by taking the intensity values of the image only at the locations of the relevant features. This enables us to estimate the "bias field" which has corrupted the data. An estimate of the base frequency of this degradation can be found using the fast surface interpolation algorithm as follows. We define the set of nodal variables v and the corresponding weighting field w by:

$v = \{\, v_{i,j} = \max_{B_{i,j}} I(x, y) \ \ \text{if}\ FA_{2D}(x_i, y_j) > 0;\ \ 1 \le i, j \le N \,\}$
$w = \{\, w_{i,j} = FA_{2D}(x_i, y_j);\ \ 1 \le i, j \le N \,\}$   (12)

Here $B_{i,j}$ is a small window centred at pixel position $(x_i, y_j)$ and N is the size of the data. Taking the maximum intensity value in a window centred on the feature position guarantees that we always take the highest value of the step edge. Hence, equation (12) defines the two inputs for the surface interpolation stage, which provides us with an estimate of the low frequency image degradation. Now, the question is: how are we going to use this information to correct the image? To answer this question we first have to analyse the degradation model. A mathematical model for the intensity inhomogeneity in ultrasound images was developed in [6]. The authors showed that the degradation model is similar to that of the bias field in MRI, known as a multiplicative field. Therefore we define our correction equation as:

$I_c(x, y) = \frac{I(x, y) / \max(I(x, y))}{v^*(x, y) / \max(v^*(x, y)) + \gamma}.$   (13)

Here, $v^*(x, y)$ is the interpolated surface and γ is a positive control parameter: $I_c(x, y) \propto I(x, y)$ for γ >> 1, and the maximum correction is obtained as γ tends to zero.
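The two stages of eqs. (12) and (13) reduce to a few lines. A sketch follows, completing the pipeline skeleton of Section 2; it assumes the FA detector of Section 4 and a surface interpolation routine such as the one sketched in Section 3, and the window size and γ defaults are illustrative.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def sparse_samples(image, fa, window=5):
    """Inputs of eq. (12): the max intensity in the window B at each
    detected feature as nodal value, the FA confidences as weights."""
    v = np.where(fa > 0, maximum_filter(image, size=window), 0.0)
    return v, fa

def correct(image, v_star, gamma=0.1):
    """Correction of eq. (13): gamma >> 1 leaves the image essentially
    unchanged, while gamma -> 0 gives the maximum correction."""
    num = image / image.max()
    den = v_star / v_star.max() + gamma
    return num / den
```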
