IEEE SIGNAL PROCESSING LETTERS, VOL. 14, NO. 7, JULY 2007


Importance Sampling Kalman Filter for Image Estimation G. R. K. S. Subrahmanyam, A. N. Rajagopalan, and R. Aravind

Abstract—This paper presents discontinuity adaptive image estimation within the Kalman filter framework by non-Gaussian modeling of the image prior. A generalized methodology is proposed for specifying state dynamics using the conditional density of the state given its neighbors, without explicitly defining the state equation. The novelty of our approach lies in directly obtaining the predicted mean and variance of the non-Gaussian state conditional density by importance sampling and incorporating them in the update step of the Kalman filter. Experimental results are given to demonstrate the effectiveness of the proposed method in preserving edges.

Index Terms—Discontinuity adaptive prior, image estimation, importance sampling, Kalman filter, Markov random fields, non-Gaussian image modelling, state space models.

I. INTRODUCTION

The problem of image estimation involves recovering an original image from its noisy version. This can be cast as a problem of state estimation from noisy measurements. When the state transition and measurement equations are both linear, and the state and measurement noises are independent and additive Gaussian, the Kalman filter (KF) is optimal and gives the minimum mean square error (MMSE) estimate of the state. Extension of the 1-D recursive KF to 2-D was first proposed by Woods and Radewan [1] and is referred to as the reduced-update Kalman filter (RUKF). The reduced-order model Kalman filter (ROMKF) [2] achieves equivalent performance but with reduced computational complexity. Effects of distortion resulting from noise can be reduced by Kalman filtering, provided the image parameters such as the autoregressive (AR) coefficients are known. In general, however, these parameters are a priori unknown and can vary spatially as a function of the image coordinates. In [3] and [4], spatially varying 2-D AR model parameters are estimated at each pixel by windowing the observed image. Jeng and Woods [5] propose an inhomogeneous AR image model using local statistics. In [6] and [7], recursive estimation of images with non-Gaussian error residuals is proposed. Nonrecursive approaches to image estimation also exist. In [8], a compound Gauss-Markov random field (GMRF) model is proposed for the image and its maximum a posteriori probability (MAP) estimate is obtained by simulated annealing. Edge-preserving image recovery using discontinuity-adaptive finite Markov random fields has been considered in [9].

A combination of homogeneous and inhomogeneous conditional densities has been used in [10] for Bayesian estimation of images. There is increasing interest in handling non-Gaussian situations using Monte Carlo sampling methods [11], [12]. A primary issue with image estimation methods is how they trade off noise reduction against edge preservation, since the two requirements are contradictory in nature. We propose an extension to the traditional Kalman filter for handling edges in image estimation by modeling the original image with a non-Gaussian prior. A sampling-based approach is used to predict the statistical parameters required for the update step of the Kalman filter. Experimental results are given to validate the proposed method.

The problem of image estimation is described in Section II. Non-Gaussian image modeling and importance sampling (IS) are discussed in Section III. In Section IV, we propose a discontinuity adaptive Kalman filter. Experimental results are given in Section V, and Section VI concludes the paper.

II. IMAGE ESTIMATION

The observation model for an image degraded by additive noise is given by

    y(i, j) = x(i, j) + n(i, j)                                    (1)

where x(i, j) is the original image, y(i, j) is the observation, and n(i, j) is white Gaussian noise independent of x(i, j). The problem is to determine x given y. This is a difficult task because one must preserve the discontinuities in x while simultaneously filtering out the noise.

The image estimation problem can be posed as a dynamic state-estimation problem. Typically, the original image is modeled as a 2-D autoregressive (AR) process, since in the absence of any a priori constraints the solution can be very noisy. The corresponding AR equation [1], [13] can be written as a state transition equation of the form

    s_k = A s_{k-1} + w_k                                          (2)

with k indexing the pixels in raster-scan order. For example, if we consider a first-order causal support (commonly used) for the AR model, then the state vector s_k collects the current pixel x(i, j) and the pixels in its causal support, where (i, j) denotes the current pixel position. Matrix A contains the AR coefficients of the image, which can be computed using the Yule-Walker equations. The vector w_k contains the process noise, which is assumed to be independent and white Gaussian.
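As a concrete illustration of observation model (1), the following minimal Python sketch (not from the paper; the variance-based SNR convention and the synthetic test image are assumptions) degrades an image with white Gaussian noise at a prescribed SNR:

    import numpy as np

    def degrade(x, snr_db, rng=None):
        """Return y = x + n, with white Gaussian noise n scaled so that
        10*log10(var(x)/var(n)) equals snr_db (variance-based SNR convention)."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x, dtype=float)
        noise_var = np.var(x) / (10.0 ** (snr_db / 10.0))
        n = rng.normal(0.0, np.sqrt(noise_var), size=x.shape)
        return x + n

    # Example: a synthetic 200 x 200 ramp image degraded at SNR = 10 dB, as in Section V
    x = np.tile(np.linspace(0.0, 255.0, 200), (200, 1))
    y = degrade(x, snr_db=10.0)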


Based on relation (1), the measurement equation can be formulated as

    y_k = H s_k + n_k                                              (3)


Here, y_k is the given observation and H = [1 0 ⋯ 0] picks the current pixel out of the state vector. The measurement noise n_k is assumed to be independent of w_k. Image estimation boils down to estimating the state s_k given the observation y_k. Relations (2) and (3) can be used to formulate a recursive 2-D autoregressive Kalman filter (ARKF), and the well-known Kalman filter equations [1] can be used to estimate the state. The ARKF is optimal only under linear state and observation models and in the presence of additive white Gaussian noise.

III. DISCONTINUITY ADAPTIVE PRIOR

To reduce the effect of noise, it is important to incorporate as much knowledge as possible about the original image in the estimation process. The AR model is one way of imposing a smoothness constraint through the state equation to regularize the solution. However, linear dependence is a strong constraint, and indiscriminate application of smoothness results in loss of edges. Markov random fields (MRFs) [14] provide better flexibility in incorporating prior knowledge about images statistically. An MRF possesses the Markovian property, i.e., the value of a pixel depends only on the values of its neighboring pixels and on no other pixel. This is a statistical way of incorporating smoothness. But it is important to realize that at discontinuities or edges, the notion of neighborhood dependency is violated. How to incorporate the smoothness constraint in MRFs while simultaneously preserving the edges is a challenging problem.

Consider a 1-D signal f that is to be estimated. Let η denote the derivative of f. A potential function g(η) quantifies the penalty against irregularity in f and corresponds to prior clique (a set of connected pixels) potentials in MRF models [14]. The potential function is usually chosen to satisfy two properties: 1) g(η) = g(−η), and 2) the derivative of g must be expressible as g′(η) = 2η h(η), where h(η) is a function which determines the interaction between neighboring pixels. The magnitude |η h(η)| is the strength with which the regularizer performs smoothing. A necessary condition for any regularization model to be adaptive to discontinuities [14] is

    lim_{|η| → ∞} |η h(η)| = C                                     (4)

where C is a constant. The above condition with C = 0 completely prohibits smoothing at discontinuities as |η| → ∞, whereas with C > 0 it allows limited (bounded) smoothing.

For a standard quadratic regularizer, g(η) = η² and hence h(η) = 1, i.e., the smoothing strength increases linearly with |η|. An MRF with such a quadratic regularizer is referred to as a Gaussian MRF (GMRF). For a first-order nonsymmetric half-plane (NSHP) support, the conditional density of the current pixel takes the form P(x | x_1, …, x_p) ∝ exp{ −(1/(2λ²)) Σ_{q=1}^{p} (x − x_q)² }, where x corresponds to the pixel to be estimated, the pixels x_1, …, x_p in the support have been previously estimated, and λ controls the variation between neighboring pixel values [10]. The state PDF in this case can be shown to be equivalent to a Gaussian PDF with mean equal to the average of the neighboring pixel values and conditional variance λ²/p.
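As a quick check of this Gaussian equivalence (a sketch in the notation introduced above, not taken verbatim from the paper), completing the square in the exponent gives

\[
\exp\!\Big\{-\tfrac{1}{2\lambda^{2}}\sum_{q=1}^{p}(x-x_{q})^{2}\Big\}
=\exp\!\Big\{-\tfrac{p}{2\lambda^{2}}(x-\bar{x})^{2}\Big\}\cdot
\underbrace{\exp\!\Big\{-\tfrac{1}{2\lambda^{2}}\Big(\sum_{q}x_{q}^{2}-p\,\bar{x}^{2}\Big)\Big\}}_{\text{independent of }x},
\qquad \bar{x}=\frac{1}{p}\sum_{q=1}^{p}x_{q},
\]

so that, as a function of x, the conditional is proportional to a Gaussian with mean \(\bar{x}\) and variance \(\lambda^{2}/p\).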

Fig. 1. Plot showing how the smoothing strength of the DAMRF varies as a function of the derivative η.

Fig. 2. Non-Gaussian DAMRF distributions.

The quadratic regularizer imposes the smoothness constraint everywhere, which inevitably leads to over-smoothing of edges. A better way to handle the situation is to use a non-Gaussian conditional density function to model the original image. Geman and Geman [15] propose the use of line fields in which smoothing is switched off at points where the magnitude of the signal derivative exceeds a threshold. However, the resulting potential function becomes discontinuous. Following [14], we propose to use the interaction function h_γ(η) = exp(−η²/γ), where η is as defined before and γ > 0 is a parameter; the corresponding potential function is g_γ(η) = γ − γ exp(−η²/γ). For this choice of h_γ, the smoothing strength |η h_γ(η)| increases monotonically as |η| increases within a band B_γ = [−√(γ/2), √(γ/2)]. Outside the band, smoothing decreases and becomes zero as |η| → ∞. Since this enables the model to preserve image discontinuities, it is also called a discontinuity adaptive MRF (DAMRF). The resulting state conditional density is of the form P(x | ·) ∝ exp{ −(1/(2λ²)) Σ_q g_γ(x − x_q) }, where g_γ is the corresponding potential function. Fig. 1 shows how the smoothing strength of the DAMRF varies as a function of η, while Fig. 2 illustrates the corresponding non-Gaussian nature of DAMRF distributions.
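A small numerical sketch of this adaptive behaviour (a standalone illustration; only γ = 50, the value used in the experiments of Section V, is taken from the paper):

    import numpy as np

    gamma = 50.0

    def g(eta):
        # DAMRF potential: saturates at gamma, so large discontinuities are not over-penalized
        return gamma - gamma * np.exp(-eta**2 / gamma)

    def h(eta):
        # interaction function, satisfying g'(eta) = 2 * eta * h(eta)
        return np.exp(-eta**2 / gamma)

    eta = np.linspace(0.0, 40.0, 401)
    strength = np.abs(eta * h(eta))       # smoothing strength |eta * h(eta)|
    print(eta[np.argmax(strength)])       # peaks at sqrt(gamma/2) = 5.0, the band edge
    print(strength[-1])                   # essentially 0: no smoothing across strong edges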


A. Importance Sampling (IS)

It is analytically not possible to compute the mean and variance of the non-Gaussian PDF corresponding to the DAMRF model. IS is a Monte Carlo method to determine the estimates of a target PDF, provided its functional form is known up to a multiplicative constant [16]. Let us consider such a PDF P(x) whose moments are difficult to estimate. However, from its functional form, we can estimate its support. Consider a different distribution Q(x) which is known up to a multiplicative constant, is easy to sample from, and is such that the (nonzero) support of Q(x) includes the support of P(x). Such a density is called a sampler density. A typical plot showing the PDFs of P (dashed line) and Q (solid line) is given in Fig. 3.

Fig. 3. Importance sampling: P is the PDF whose moments are to be estimated while Q is the sampler density.

Our aim is to determine the first two central moments of the PDF P(x). Since it is difficult to draw samples from the non-Gaussian PDF P(x), we draw samples x^(i), i = 1, …, N, from the sampler PDF Q(x). If these were samples drawn under P(x), we could determine the moments of P(x) directly. In order to use these samples to determine estimates of the moments of P(x), we proceed as follows. When we use samples from Q(x) to determine estimates under P(x), the estimates are over-represented in the regions where Q(x) is greater than P(x), and under-represented in the regions where Q(x) is less than P(x). To account for this, we use correction weights w_i = P(x^(i))/Q(x^(i)) in determining the estimates under P(x). For example, to find the mean of the distribution P(x), we use μ̂_P = Σ_i w_i x^(i) / Σ_i w_i. As N → ∞, the estimate tends to the actual mean value of P(x).
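A compact, self-contained sketch of this weighting procedure (the target P below is an arbitrary unnormalized density chosen purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def p_unnorm(x):
        # target PDF known only up to a multiplicative constant
        return np.exp(-0.5 * (x - 1.0)**2) + 0.5 * np.exp(-0.5 * (x + 2.0)**2)

    def q_unnorm(x, mu=0.0, sigma=3.0):
        # Gaussian sampler density (also up to a constant) whose support covers that of P
        return np.exp(-0.5 * ((x - mu) / sigma)**2)

    N = 100_000
    x = rng.normal(0.0, 3.0, N)                        # samples drawn under Q
    w = p_unnorm(x) / q_unnorm(x)                      # correction weights P/Q
    mean_p = np.sum(w * x) / np.sum(w)                 # IS estimate of the mean of P
    var_p = np.sum(w * (x - mean_p)**2) / np.sum(w)    # IS estimate of the variance of P
    print(mean_p, var_p)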

IV. THE PROPOSED FILTER

We now present a recursive algorithm for estimating the original image. In the proposed strategy, knowledge of only the conditional PDF is required, and its parameters are a function of the already estimated pixels and of the values of γ and λ as discussed in Section III. This implicitly generalizes the state transition equation and does not restrict it to be linear. The steps in our method are as follows.

1) At each pixel, construct the state conditional PDF using the past pixels in the NSHP support. From the discussions in Section III, we have for the DAMRF model

    P(x | x_1, …, x_p) = c exp{ −(1/(2λ²)) Σ_{q=1}^{p} g_γ(x − x_q) }          (5)

where g_γ(η) = γ − γ exp(−η²/γ), c is a normalizing constant, x_1, …, x_p are the previously estimated pixels in the support, and p is the order of the NSHP support of the AR model. For example, if p = 4, then the pixels considered in the NSHP support of the current pixel x(i, j) are x(i, j−1), x(i−1, j−1), x(i−1, j), and x(i−1, j+1).

2) Obtain the mean and covariance of the above PDF using importance sampling. Draw samples x^(i), i = 1, …, N, from a Gaussian sampler Q. For a first-order support, the sampler mean is determined by the previously estimated neighbors, and its variance σ_Q² is chosen high enough so that the support of Q includes that of P. The samples are weighted by w_i = P(x^(i))/Q(x^(i)). The mean μ_P and variance σ_P² of P are computed as

    μ_P = Σ_i w_i x^(i) / Σ_i w_i ,    σ_P² = Σ_i w_i (x^(i) − μ_P)² / Σ_i w_i .

3) The predicted mean and error covariance are fed to the update stage of the Kalman filter as x̄_k = μ_P and P̄_k = σ_P², where x̄_k and P̄_k are the one-step forward predicted mean and error covariance, respectively. The Kalman gain is

    K_k = P̄_k / (P̄_k + σ_n²) .

The updated (a posteriori) mean and error covariance are

    x̂_k = x̄_k + K_k (y_k − x̄_k) ,    P_k = (1 − K_k) P̄_k .

In the above equations, σ_n² is the measurement noise variance and y_k is the scalar observation. This gives the estimated mean x̂_k; go to step 1 and repeat.

For the ARKF, the AR coefficients are usually assumed to be known accurately. In real situations, this can be difficult because the original image is not available. In contrast, for the proposed method, the statistical parameters μ_P and σ_P² are obtained by importance sampling.
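Putting steps 1)–3) together, the following minimal Python sketch (an illustration of the scheme described above, not the authors' implementation; the sampler settings, sample count N, and parameter values are assumptions) processes one pixel given its previously estimated NSHP neighbors and its noisy observation:

    import numpy as np

    def g_damrf(eta, gamma):
        # DAMRF potential g_gamma(eta) = gamma - gamma*exp(-eta^2/gamma)
        return gamma - gamma * np.exp(-eta**2 / gamma)

    def iskf_pixel(neighbors, y, sigma_n2, gamma=50.0, lam=1.5, N=200, rng=None):
        """One ISKF recursion: importance-sampling prediction from the DAMRF
        conditional (5), followed by the scalar Kalman update with observation y."""
        rng = np.random.default_rng() if rng is None else rng
        nb = np.asarray(neighbors, dtype=float)

        # Step 1: unnormalized state conditional PDF, eq. (5)
        def p_unnorm(x):
            d = x[:, None] - nb[None, :]
            return np.exp(-g_damrf(d, gamma).sum(axis=1) / (2.0 * lam**2))

        # Step 2: importance sampling with a broad Gaussian sampler Q
        mu_q, sigma_q = nb.mean(), 3.0 * lam            # illustrative sampler choice
        x = rng.normal(mu_q, sigma_q, N)
        q = np.exp(-0.5 * ((x - mu_q) / sigma_q)**2)    # Q known up to a constant
        w = p_unnorm(x) / q                             # correction weights
        mu_p = np.sum(w * x) / np.sum(w)                # predicted mean
        var_p = np.sum(w * (x - mu_p)**2) / np.sum(w)   # predicted error variance

        # Step 3: scalar Kalman update with the noisy observation y
        k = var_p / (var_p + sigma_n2)                  # Kalman gain
        x_hat = mu_p + k * (y - mu_p)                   # a posteriori intensity
        p_hat = (1.0 - k) * var_p                       # a posteriori variance
        return x_hat, p_hat

    # Example: one pixel whose neighbors are already estimated, observed as y = 130
    x_hat, p_hat = iskf_pixel([120.0, 122.0, 119.0, 121.0], y=130.0, sigma_n2=100.0)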

V. EXPERIMENTAL RESULTS

We give results on image estimation using the proposed importance sampling-based Kalman filter (ISKF). We also give comparisons with the autoregressive Kalman filter (ARKF) to demonstrate the improvement obtained by incorporating the non-Gaussian prior in the Kalman filter. As a quantitative measure of the accuracy of the estimates, we use the improvement in signal-to-noise ratio (ISNR), defined as

    ISNR = 10 log₁₀ ( Σ (y − x)² / Σ (x̂ − x)² ) dB

where x, y, and x̂ represent the original image, the degraded observation, and the estimated image, respectively, and the summation is taken over the entire image.

In Fig. 4(a), an original flower image of size 200 × 200 is shown. The image obtained after degradation by additive white Gaussian noise (SNR = 10 dB) is given in Fig. 4(b). The images estimated by the ARKF and the proposed approach are shown in Fig. 4(c) and (d), respectively. Note that the petals come out sharply with the proposed scheme. The image estimated by the ISKF not only has finer details but also has less noise compared to the output of the ARKF, as reflected in the ISNR values.
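The ISNR figures quoted here follow the definition above; a minimal sketch of its computation:

    import numpy as np

    def isnr_db(x, y, x_hat):
        """ISNR = 10*log10( sum (y - x)^2 / sum (x_hat - x)^2 ), summed over the image."""
        x, y, x_hat = (np.asarray(a, dtype=float) for a in (x, y, x_hat))
        return 10.0 * np.log10(np.sum((y - x)**2) / np.sum((x_hat - x)**2))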


Fig. 4. (a) Original image. (b) Degraded image (SNR = 10 dB). Image estimated by (c) ARKF (ISNR = 2.64 dB) and (d) the proposed method (ISNR = 3.36 dB, γ = 50, λ = 1.5).

The distribution corresponding to the DAMRF model has been plotted in Fig. 2 at three different pixel locations [(30, 50), (80, 30), and (160, 180)] in the image to illustrate its non-Gaussian nature.

Fig. 5(a) shows a brick image, while its degraded version is given in Fig. 5(b). The images recovered using the ARKF and the proposed method are shown in Fig. 5(c) and (d), respectively. We again observe that the performance of the proposed method is superior. In its attempt to capture edges, the ARKF yields a noisy output. On the other hand, the ISKF not only captures the edges well but also effectively filters out the noise, resulting in an output [Fig. 5(d)] which is much closer to the original image. As our prediction stage works by drawing samples, it is computationally intensive. While the ARKF took 5 s, the proposed algorithm took about 27 s to estimate a 200 × 200 image.

VI. CONCLUSIONS

We have proposed a novel discontinuity adaptive Kalman filter for image estimation. Instead of using the state-transition equation, we use importance sampling to predict the mean and error covariance of an edge-preserving non-Gaussian MRF image prior. The estimated statistics are used in the update equations to obtain the a posteriori intensity. Experimental results validate the effectiveness of the proposed method.

REFERENCES

[1] J. W. Woods and C. H. Radewan, "Kalman filter in two dimensions," IEEE Trans. Inf. Theory, vol. IT-23, no. 4, pp. 473–482, Jul. 1977.
[2] D. Angwin and H. Kaufman, "Image restoration using a reduced-order model Kalman filter," IEEE Trans. Signal Process., vol. 16, no. 1, pp. 21–28, Jan. 1989.
[3] H. Kaufman, J. W. Woods, D. Subrahmanyam, and A. M. Tekalp, "Estimation and identification of two-dimensional images," IEEE Trans. Autom. Control, vol. AC-28, no. 7, pp. 745–756, Jul. 1983.

Fig. 5. (a) Original image. (b) Degraded image (SNR = 10 dB). Image estimated using (c) ARKF (ISNR = 0.7 dB) and (d) the proposed method (ISNR = 1.96 dB, γ = 50, λ = 2).

[4] A. M. Tekalp, H. Kaufman, and J. W. Woods, "Fast recursive estimation of the parameters of a space-varying autoregressive image model," IEEE Trans. Acoust., Speech, Signal Process., vol. 33, no. 2, pp. 469–472, Apr. 1985.
[5] F. C. Jeng and J. W. Woods, "Inhomogeneous Gaussian image models for estimation and restoration," IEEE Trans. Acoust., Speech, Signal Process., vol. 36, no. 8, pp. 1305–1312, Aug. 1988.
[6] S. R. Kadaba, S. B. Gelfand, and R. L. Kashyap, "Recursive estimation of images using non-Gaussian autoregressive models," IEEE Trans. Image Process., vol. 7, no. 10, pp. 1439–1452, Oct. 1998.
[7] Y. C. Chang, S. R. Kadaba, P. C. Doerschuk, and S. B. Gelfand, "Image restoration using recursive Markov random field models driven by Cauchy distributed noise," IEEE Signal Process. Lett., vol. 8, no. 3, pp. 65–66, Mar. 2001.
[8] F. C. Jeng and J. W. Woods, "Compound Gauss-Markov random fields for image estimation," IEEE Trans. Signal Process., vol. 39, no. 3, pp. 683–697, Mar. 1991.
[9] M. Ceccarelli, "Fast edge-preserving picture recovery by finite Markov random fields," in Proc. ICIAP 2005, F. Roli and S. Vitulano, Eds. Berlin, Germany: Springer-Verlag, 2005, pp. 277–286.
[10] R. G. Aykroyd, "Bayesian estimation for homogeneous and inhomogeneous Gaussian random fields," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 5, pp. 533–539, May 1998.
[11] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Trans. Signal Process., vol. 50, no. 2, pp. 174–188, Feb. 2002.
[12] Y. Rui and Y. Chen, "Better proposal distributions: object tracking using unscented particle filter," in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition (CVPR'01), 2001, pp. 786–794.
[13] A. K. Jain, Fundamentals of Digital Image Processing. India: Prentice-Hall, 1989.
[14] S. Z. Li, Markov Random Field Modeling in Computer Vision. New York: Springer-Verlag, 1995.
[15] S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-6, pp. 721–741, 1984.
[16] D. J. C. MacKay, "Introduction to Monte Carlo methods," in Learning in Graphical Models, ser. NATO Science Series, M. I. Jordan, Ed. Norwell, MA: Kluwer, 1998, pp. 175–204.
