Digital Signal Processing 20 (2010) 1677–1686

Contents lists available at ScienceDirect

Digital Signal Processing www.elsevier.com/locate/dsp

Blurred image restoration: A fast method of finding the motion length and angle

Michal Dobeš (a), Libor Machala (b,*), Tomáš Fürst (c)

(a) Dept. of Computer Science, Faculty of Science, Palacký University in Olomouc, Svobody 26, 771 46 Olomouc, Czech Republic
(b) Dept. of Experimental Physics, Faculty of Science, Palacký University in Olomouc, Svobody 26, 771 46 Olomouc, Czech Republic
(c) Dept. of Math. Analysis and Applications of Mathematics, Faculty of Science, Palacký University in Olomouc, Svobody 26, 771 46 Olomouc, Czech Republic


Article history: Available online 27 March 2010

Motion blur in photographic images is a result of camera movement or shake. Methods such as blind deconvolution are used when information about the direction and size of the blur is not known. Restoration methods such as Lucy–Richardson or Wiener deconvolution use information about the direction and size of the blur in the deconvolution kernel (the Point Spread Function, PSF). Correct and fast determination of the direction and size of the blur improves the quality of restoration and can substantially reduce the computational time. In this article, a fast method for automatically finding the direction and size of the blur is presented. The method is based on the computation of the power spectrum of the image gradient in the frequency domain. The method has achieved good results on both artificially blurred and naturally blurred (camera-shaken) images. © 2010 Elsevier Inc. All rights reserved.

Keywords: Motion blur; Deconvolution; Image restoration; Fast Fourier Transform

1. Introduction

Motion blur that appears in images of any kind may be caused by camera movement or shake. Often, it is not easy or convenient to eliminate the blur technically. Mathematically, motion blur is usually modeled as a convolution of a "Point Spread Function" (PSF) with the image represented by its intensities. Deblurring methods are then based on deconvolution, such as the iterative Lucy–Richardson or the non-iterative Wiener algorithm [1], or on more complex methods such as the Bussgang algorithm [2–4]. As the original (sharp) image is not known, information about the PSF is needed in order to reconstruct the blurred image.

Many methods that estimate the PSF have been developed [5–15,19,20]. The manner of PSF estimation depends significantly on the particular kind of image, such as astronomy and astrophysics photographs [6,14], computed tomography images [9,10], fluorescence micrographs [16], or confocal microscope images [17]. A parametric method that uses a cross-validation criterion and an auto-regressive model is described in [18].

The problem is said to be blind if there is no information about the blur kernel. Most blind deconvolution methods assume that the PSF has a simple parametric form, such as a Gaussian or low-frequency Fourier components [20]. Several blind deconvolution algorithms have been suggested to date. Colonnese et al. [2] recovered an image observed through one or more channels corrupted by additive noise; they applied an iterative Bussgang algorithm and collected information about image edges via the Radon transform. The image was divided into blocks of 8 × 8 pixels and a local Radon transform was calculated for each relevant block, which makes the method especially suitable for small blurs. Caron et al. [21] applied a power law and a smoothing function to the degraded data in the frequency domain. Jiang [8] suggested a non-iterative single-channel blind deconvolution method that eliminates the need for multiple blur measurements, but still guarantees

* Corresponding author. E-mail address: [email protected] (L. Machala).

doi:10.1016/j.dsp.2010.03.012 © 2010 Elsevier Inc. All rights reserved.


Fig. 1. The reconstruction process — main ideas.

an accurate estimation of the blur kernel, which is assumed to be radially symmetric. Krishnamurthi et al. [16] accomplished real image reconstructions of fluorescence micrographs using a blind deconvolution algorithm based on maximum-likelihood estimation. Other deblurring algorithms with an isotropic PSF [9] or a three-dimensional (3-D) separable Gaussian PSF [10] were tested on computed tomography (CT) images. Sakano [12] suggested an algorithm based on the Hough transform which can accurately and robustly estimate the motion-blur PSF even in low signal-to-noise-ratio cases. Al Maki and Sugimoto [19] applied a method of inverse filtering and the discrete sine transform. As a non-blind method, Fergus et al. [20] estimated the blur kernel that was most likely with respect to the distribution of a possible latent (original) image; however, the user must specify an image region without saturation effects. Sroubek et al. [25] used multiple photographs to estimate the blur kernel and to reconstruct the image. Here, we present a fast method of determining the angle and size of the blur from a single photograph, which enables the construction of a correct PSF kernel.

2. Theoretical foundations and the degradation model

Deconvolution methods such as Lucy–Richardson were primarily aimed at deblurring astronomical images, and methods such as blind deconvolution try to guess the blur kernel. Natural images have a different distribution of intensity levels than astronomical images, and our premises are aimed at the reconstruction of natural photographs. Like Fergus et al. [20], we assume that the blur can be described as a single convolution with a properly designed blur kernel and that the blur is caused by uniform camera movement in one direction, so that parts of the scene do not move relative to each other. Unlike Fergus [20], our method requires no user interaction.
A digital representation of a photographic image consists of a matrix carrying information about the intensity levels of the individual pixels. A blurred image is theoretically modeled by a convolution in the space domain:

$$f = h \ast g + n,$$

where $\ast$ stands for the convolution operation, f is the blurred image, h is the convolution kernel, g is the ideal image (i.e. the image that would have been captured had the camera not moved), and n is noise (usually caused by the camera sensor). The ideal image g is unfortunately not available in practice. In many cases, it is faster to perform image processing in the frequency domain, where convolution corresponds to element-wise multiplication:

$$F = H \cdot G + N.$$

The capital letters denote the two-dimensional Discrete Fourier Transforms of the respective lower-case letters, and the operation "·" stands for element-wise multiplication.

The main idea of finding the blur kernel (which is determined by the length and angle of the camera motion) is based on the following considerations. If an image is blurred by camera motion, then the edges in the image are blurred too: they are shifted by a distance and angle that correspond to the length and angle of the blur. The changes in the edge positions can be accented by working with the image gradient, because the gradient enhances higher frequencies. When the gradient of such a blurred image is transformed into the frequency domain by the Discrete Fourier Transform (usually by its fast version, the Fast Fourier Transform), a regular structure is contained within the power spectrum. This regular structure is related to the length and direction of the blur. If the power spectrum is properly filtered so that noise and high frequencies are cut off, a regular pattern of parallel stripes appears. The distance between neighboring stripes is related to the length of the motion blur, and the direction of the blur in the captured image is perpendicular to the direction of the stripes. Therefore, if we are able to find this regular pattern, we should be able to infer the blur kernel. This kernel is then either modified or directly applied in a standard deconvolution process (such as Lucy–Richardson).

The main ideas of the reconstruction process are illustrated in Fig. 1. The information about the direction and size of the blur contained in the kernel is inferred from the power spectrum structure that is obtained by transforming the image gradient into the frequency domain using our fast algorithm. In order


Fig. 2. Logarithm of the power spectrum of the image gradient in the frequency domain and the filtered power spectrum.

to obtain correct information, the structure must be filtered properly. An example of such a structure before and after filtering is shown in Fig. 2.

The whole process of deblurring can be briefly summarized in the following steps (see Fig. 1):

(1) Computing the gradient of the original image and obtaining the power spectrum of the image gradient in the frequency domain using our fast algorithm.
(2) Filtering the power spectrum of the image gradient to obtain an identifiable regular pattern. (Band-pass Butterworth filtering was applied in order to smooth the power spectrum and to remove noise and unwanted frequencies.)
(3) Identifying parallel stripes (or lines) within the pattern and computing the direction of the stripes by means of the Radon transform.
(4) Projecting the power spectrum in the correct direction estimated by means of the Radon transform in order to obtain the target function, and computing the distance between neighboring stripes.
(5) Inferring the kernel on the basis of the direction of the identified stripes and the distance between two neighboring stripes.
(6) Deblurring the image using the computed kernel and the standard Lucy–Richardson algorithm.

2.1. Computation of the power spectrum of the image gradient in the frequency domain

To find the blur parameters reasonably fast, some steps can be combined. The computation of the gradient and the subsequent computation of its power spectrum can be done in one step in the frequency domain by a manipulation of the Fourier transform of the original image. The gradient of a continuous image intensity function f(x, y) of two variables is defined as the vector of partial derivatives:

$$\nabla f = \begin{pmatrix} (\nabla f)_x \\ (\nabla f)_y \end{pmatrix} = \begin{pmatrix} \partial f / \partial x \\ \partial f / \partial y \end{pmatrix}.$$

When working with discrete images, the computation of the gradient can be replaced by finite differences. Let F (u , v ) denote the Fourier transform of the continuous image intensity function f (x, y ) where u and v are the frequency domain variables:

$$F = \mathcal{F}(f), \quad \text{or} \quad F(u, v) = \mathcal{F}\{ f(x, y) \}.$$

In the continuous case, the Fourier transform of the derivative in the x-direction corresponds to the Fourier transform of the image multiplied by the imaginary unit i and the frequency variable u:

$$\mathcal{F}\left\{ \frac{\partial f}{\partial x}(x, y) \right\} = i u \, F(u, v).$$

Analogously, when differentiating in the y-direction, a similar relation applies:

$$\mathcal{F}\left\{ \frac{\partial f}{\partial y}(x, y) \right\} = i v \, F(u, v).$$

In case of a discrete representation, this approach has to be modified to account for both the discrete representation of the derivative and the boundary effects. Let f (x, y ) denote the intensity of the image, where x = 1, . . . , N and y = 1, . . . , M. Thus, the matrix representing the original image has M rows and N columns. The two-dimensional Discrete Fourier Transform (DFT) of the matrix f is then defined:


$$F(u, v) = \sum_{x=1}^{N} \sum_{y=1}^{M} f(x, y) \exp\big(\omega_N (x-1)(u-1)\big) \exp\big(\omega_M (y-1)(v-1)\big),$$

where

$$\omega_M = -\frac{2\pi i}{M} \quad \text{and} \quad \omega_N = -\frac{2\pi i}{N}.$$
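As a sanity check, this 1-based DFT definition coincides with NumPy's `fft2`, which uses the same exponential convention with 0-based indices. A deliberately brute-force sketch (the function name is ours, for illustration only):

```python
import numpy as np

def dft_1based(f):
    """Literal evaluation of the 1-based DFT definition above.
    f is stored as f[y-1, x-1] with M rows and N columns; the result
    is stored as F[v-1, u-1]."""
    M, N = f.shape
    wN = np.exp(-2j * np.pi / N)      # exp(omega_N)
    wM = np.exp(-2j * np.pi / M)      # exp(omega_M)
    F = np.zeros((M, N), dtype=complex)
    for u in range(1, N + 1):
        for v in range(1, M + 1):
            s = 0j
            for x in range(1, N + 1):
                for y in range(1, M + 1):
                    s += f[y - 1, x - 1] * wN ** ((x - 1) * (u - 1)) \
                         * wM ** ((y - 1) * (v - 1))
            F[v - 1, u - 1] = s
    return F

f = np.arange(20, dtype=float).reshape(4, 5)   # M = 4 rows, N = 5 columns
assert np.allclose(dft_1based(f), np.fft.fft2(f))
```

The quadruple loop is only for verification; in practice the FFT is used, as the paper does.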

The matrix F carries information about the relative power of the individual frequencies in the original image. Approximate the horizontal component of the gradient of f by central differences as follows:

$$(\nabla f)_x(x, y) \approx f_{dx}(x, y) = f(x+1, y) - f(x-1, y),$$

where the boundaries are treated by assuming that

$$f(\cdot, N+1) = f(\cdot, N) \quad \text{and} \quad f(\cdot, 0) = f(\cdot, 1).$$

Let F_dx denote the two-dimensional DFT of the matrix f_dx. We now show a natural fast way of computing the matrix F_dx from the matrix F without performing another two-dimensional DFT. (The DFT of the original image is usually needed anyway.) The one-dimensional DFTs of the first and the last column of the matrix f are denoted F_L and F_R, respectively. A plain computation reveals that



$$F_{dx}(u, v) = F(u, v)\left[\exp\big(\omega_M(1-v)\big) - \exp\big(\omega_M(v-1)\big)\right] + \left[\exp\big(\omega_M(1-v)\big) + 1\right]\big(F_R(u) - F_L(u)\big).$$
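The same identity can be checked numerically. The sketch below is written for NumPy's 0-based FFT convention, so the shift factors appear as exp(2πic/N) along the column-frequency axis; the circular-shift term plus a boundary-replication correction built from the first and last image columns reproduces the DFT of the central difference without a second 2-D FFT of the image. Function names are illustrative, not from the paper:

```python
import numpy as np

def grad_x_spectrum(f):
    """DFT of the horizontal central difference (replicated boundaries),
    computed directly from the DFT of f, without a second 2-D image FFT."""
    M, N = f.shape
    F = np.fft.fft2(f)
    w = np.exp(2j * np.pi * np.arange(N) / N)   # column-shift factor
    FL = np.fft.fft(f[:, 0])                    # 1-D DFT of the first column
    FR = np.fft.fft(f[:, -1])                   # 1-D DFT of the last column
    circ = F * (w - np.conj(w))                 # circular central difference
    corr = np.outer(FR - FL, 1.0 + w)           # boundary-replication term
    return circ + corr

# space-domain check against an explicit central difference
rng = np.random.default_rng(0)
f = rng.random((8, 10))
fp = np.pad(f, ((0, 0), (1, 1)), mode='edge')   # replicate boundary columns
fdx = fp[:, 2:] - fp[:, :-2]
assert np.allclose(grad_x_spectrum(f), np.fft.fft2(fdx))
```

Only two one-dimensional FFTs and element-wise operations are needed beyond the image DFT, which is the point of the shortcut.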

The DFT is computed by the Fast Fourier Transform algorithm (FFT). A similar relation can be used to compute the DFT of the vertical component of the gradient. However, it is easier to find the DFT of the vertical component by applying the same algorithm to the transpose of the matrix:

$$f_{dy}(x, y) = f(x, y+1) - f(x, y-1).$$

2.2. Filtering

The spectrum of the image gradient contains the desired "stripe structure" masked by noise and high-frequency structures, which should be removed in order to identify the parallel stripes caused by the motion blur; see Fig. 2. The spectrum needs to be smoothed and filtered first. The design of the filter depends on the length of the motion blur. Restoration of photographs spoiled by a motion blur is meaningful when the length of the blur does not exceed 1/10 of the size of the image (otherwise the image is usually "unwatchable"). On the other hand, if the blur is approximately one pixel or smaller, image sharpening techniques are usually sufficient, such as techniques based on the superposition of the Laplacian or gradient with the original image [22]. Based on these premises, we used a band-pass filter. The appropriate tuning and choice of the band-pass filter parameters was based on many experiments. We decided to use a Butterworth filter

$$H(u, v) = 1 - \frac{1}{1 + \left[ \dfrac{D(u, v)\, W}{D^2(u, v) - D_0^2} \right]^{2n}},$$

where

$$D(u, v) = \sqrt{(u - M/2)^2 + (v - N/2)^2}.$$
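A direct transcription of this band-pass filter, assuming a centered (fftshift-ed) spectrum and using the parameter defaults reported below (n = 2, D0 = 5, W = 30% of the image size); the singular ring D = D0 is handled as the limit H = 1:

```python
import numpy as np

def butterworth_bandpass(M, N, D0=5.0, W=None, n=2):
    """Band-pass Butterworth filter H(u,v) on an M x N centered spectrum."""
    if W is None:
        W = 0.30 * max(M, N)                 # band-width: 30% of image size
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
    D = np.sqrt((u - M / 2) ** 2 + (v - N / 2) ** 2)
    with np.errstate(divide='ignore'):
        ratio = D * W / (D ** 2 - D0 ** 2)   # -> inf on the ring D = D0
    return 1.0 - 1.0 / (1.0 + ratio ** (2 * n))

H = butterworth_bandpass(64, 64)
# H suppresses the DC component (D = 0) and passes the ring around D0
```

The filter is multiplied element-wise with the gradient power spectrum before the stripe search.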

The order n was set to 2, a reasonable value of the radial distance from the center, D_0, was found to be between 3 and 5 pixels, and the band-width W was set to 30% of the original image size (see Fig. 3). These parameters were found to be optimal for most of the real "camera-shaken" images that were tested. This setting preserves the information about the blur, while the unwanted noise and high-frequency structures are mostly removed. The spectrum of an image degraded by real camera motion and the result of filtering are shown in Fig. 2.

2.3. Blur parameters estimation

The blur angle estimation is based on the following premise. When filtered properly, the power spectrum of the gradient contains a regular pattern of parallel stripes. Rotating such a matrix by an angle α and summing the columns of the result produces a vector, for each value of α. In such a vector, the crests and valleys of the original stripe structure usually cancel out, unless the angle α is very close to the direction of the stripes. The projection at this very angle has the largest span of values, i.e. the greatest difference between its largest value (the sum of the crest values) and its smallest value (the sum of the valley values). The estimated angle of the blur direction θ is perpendicular to the angle α, i.e.


Fig. 3. Butterworth filter with parameters D 0 = 5 and n = 2.

Fig. 4. Illustration of the target function.
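The rotate, project, and measure-the-span search described above can be sketched as follows. This is only an illustration on synthetic stripe patterns, not the paper's Radon-based implementation; `scipy.ndimage.rotate` stands in for the rotation, and the function name is ours:

```python
import numpy as np
from scipy import ndimage

def estimate_stripe_angle(power, angles=range(180)):
    """Return the rotation angle (degrees) at which the column sums of the
    rotated spectrum have the largest span, i.e. the angle that aligns the
    stripes with the image columns."""
    best_angle, best_span = 0, -np.inf
    for a in angles:
        rot = ndimage.rotate(power, float(a), reshape=False, order=1)
        proj = rot.sum(axis=0)              # project onto the columns
        span = proj.max() - proj.min()
        if span > best_span:
            best_angle, best_span = a, span
    return best_angle

# synthetic stripe patterns with known orientation
x = np.arange(64)
vertical = np.tile(np.sin(2 * np.pi * x / 8), (64, 1))   # varies along x
horizontal = vertical.T                                  # varies along y
assert estimate_stripe_angle(vertical) == 0
assert estimate_stripe_angle(horizontal) == 90
```

The blur direction is then perpendicular to the recovered stripe angle, as stated in the text.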

θ = α + 90 degrees.

To compute the projection of the gradient power spectrum at various angles in order to obtain θ, the standard Radon transform was used [1,23,24]. The projection (sum) of the filtered gradient power spectrum along the estimated angle α is called the target function. The target function, illustrated in Fig. 4, carries information about the distance between the stripes in the gradient power spectrum, which is related to the blur length.

Originally, we intended to infer the blur length by identifying the dominant frequency contained within the target function by means of the one-dimensional Fourier transform. Unfortunately, due to the noise in real (camera-shaken) images, this approach proved unreliable in approximately half of the cases. Therefore, we decided to infer the blur length by computing the distance between adjacent minima of the target function; see Fig. 4. The most pronounced distance is the one between the two minima closest to the center of the image, so we detect this central distance. Note that the distance between the two minima closest to the center of the image is 2d, while the distance between any other two consecutive minima is d.

The principle of the computation of the blur length ℓ is illustrated in Fig. 4. For the sake of simplicity, let the size of the blurred image be M × M in the space domain (as well as in the frequency domain), and let d be the distance between two consecutive minima in the frequency domain. Then the length ℓ of the blur, expressed in pixels in the space domain, is

$$\ell = M / d.$$

A deblurring kernel is assembled using the estimated values of the angle θ and the length ℓ. The kernel is designed to perform a uniformly weighted sum of the values in the blurred image along the direction of the angle θ over the estimated length ℓ.

3. Results

In this section, we present the results of image restoration using the parameters estimated by our fast algorithm. The tests were performed on a private collection of images as well as on blurred images kindly provided by the authors of [25]. The results are also briefly compared to the approach published in [19], where inverse filtering and the discrete sine transform were applied.
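A minimal sketch of assembling a linear motion-blur kernel from an estimated length ℓ and angle θ. The rasterization details (dense line sampling and rounding onto an odd-sized canvas) are our own illustrative choices, not the paper's; the weights are only approximately uniform because of the rounding:

```python
import numpy as np

def motion_psf(length, theta_deg):
    """Linear motion-blur kernel: (approximately) uniform weights along a
    line of the given length (pixels) and angle (degrees, counter-clockwise)."""
    theta = np.deg2rad(theta_deg)
    size = 2 * int(np.ceil(length / 2)) + 1        # odd canvas, line fits
    psf = np.zeros((size, size))
    c = size // 2
    half = (length - 1) / 2.0
    for t in np.linspace(-half, half, 8 * size):   # dense sampling of the line
        col = int(round(c + t * np.cos(theta)))
        row = int(round(c - t * np.sin(theta)))    # image rows grow downward
        psf[row, col] += 1.0
    return psf / psf.sum()                         # normalize to unit sum

psf = motion_psf(7, 0)      # horizontal blur of length 7
```

Such a kernel is what a standard deconvolution routine (e.g. Lucy–Richardson) expects as its PSF argument.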


Fig. 5. Example: Test image blurred, L = 7 pixels, θ = 45 degrees (left) and the corresponding result of restoration using the parameters estimated by our algorithm (20 iterations) (right).

Table 1. MSE for the results of restoration with/without a gradient computation.

True L   True Θ   Found L   Found Θ   MSE
5        90       5/5       91/86     2.23E−06/8.35E−06
7        90       7/7       91/89     4.97E−06/4.97E−06
9        90       9/9       91/90     9.11E−06/7.77E−06
11       90       11/11     91/91     1.45E−05/1.45E−05
13       90       13/12     91/91     2.29E−05/4.30E−05
15       90       15/13     91/91     3.10E−05/2.21E−04

Fig. 6. The dependence of MSE for the results of restoration with/without a gradient computation.

We also experimented with the gradient-tensor approach described in Spyridonos et al. [26]. We found that the orientation of the blur can be estimated in this manner; unfortunately, it was difficult to estimate the length of the blur, because the number of stripes can be affected by the presence of undesirable lines in the pattern. The gradient-tensor approach can be considered in future work.

In order to evaluate the method, we used both artificial (test) images and natural ones. The artificial images were introduced in order to measure the error quantitatively and to show how the algorithm works under different conditions: for different blur lengths, with the application of the gradient, and in the presence of noise. The experiments reveal that determining the length and angle of the blur is more accurate when the gradient of the image is used than when the blur parameters are estimated directly from the Fourier spectra, because the minima are better identifiable; see the results in Table 1 and Fig. 6. An example of a blurred artificial test image with blur length L = 7 pixels and Θ = 45 degrees and the result of its restoration are shown in Fig. 5. Table 1 summarizes the results of restoration with and without the gradient computation for different blur lengths. The dependence of the MSE for the results of restoration with/without the gradient computation for different blur parameters (length L, Θ = 90 degrees) is shown in Fig. 6.


Fig. 7. Typical characteristics of the MSE dependence on the length of the blur using 10 and 20 iterations of the Lucy–Richardson algorithm, respectively.

Fig. 8. Blurred image of Lena and result of restoration, blur parameters: L = 9, Θ = 45 degrees.

Fig. 9. MSE dependence in case of noise for SNR = 40 dB (Signal-to-Noise Ratio) and without noise using 10 iterations of the Lucy–Richardson algorithm.

The typical dependence of the MSE (between the original and the restored image) on the size of the blur for different numbers of iterations of the Lucy–Richardson algorithm is illustrated in Fig. 7. An example of the blurred and restored image of Lena (L = 9, Θ = 45 degrees) is given in Fig. 8. The dependence of the MSE on the blur length in the presence of noise is illustrated in Fig. 9; zero-mean Gaussian noise was applied to the image of Lena with a Signal-to-Noise Ratio (SNR) of 40 dB. The comparison of our approach with the approach of Al Maki [19] is presented in Fig. 10. The advantage of applying the Lucy–Richardson (LR) deconvolution algorithm in our approach instead of inverse filtering [19] is visible in the presence of noise; see the MSE in Fig. 10.
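For reference, the Lucy–Richardson iteration used for the restorations can be sketched as follows. This is the generic textbook form with circular boundary handling via the FFT, not the authors' exact implementation; the small guard constants are our own choices:

```python
import numpy as np

def richardson_lucy(f, psf, n_iter=30):
    """Richardson-Lucy deconvolution with circular (FFT) boundary handling.
    The PSF is given on the full image grid and must sum to 1."""
    H = np.fft.fft2(psf)
    Hc = np.conj(H)                        # adjoint: correlation with the PSF
    g = np.clip(f, 1e-6, None)             # positive initial estimate
    for _ in range(n_iter):
        blurred = np.real(np.fft.ifft2(H * np.fft.fft2(g)))
        ratio = f / np.maximum(blurred, 1e-12)
        g = g * np.real(np.fft.ifft2(Hc * np.fft.fft2(ratio)))
    return g

# noiseless check: point sources blurred by a horizontal 5-pixel line
g_true = np.full((32, 32), 0.05)
g_true[8, 8] = g_true[20, 15] = g_true[5, 25] = 1.0
psf = np.zeros((32, 32))
psf[0, :5] = 1.0 / 5.0                     # uniform line PSF on the full grid
f = np.real(np.fft.ifft2(np.fft.fft2(psf) * np.fft.fft2(g_true)))
g_est = richardson_lucy(f, psf, n_iter=50)
assert np.mean((g_est - g_true) ** 2) < np.mean((f - g_true) ** 2)
```

The multiplicative update keeps the estimate non-negative, which is why the method suits photographic intensities.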


Fig. 10. The advantage of our reconstruction (LR method) in comparison to the inverse filtering method suggested in [19] in the presence of noise, SNR = 40 dB.

Fig. 11. Captured images blurred by camera shake (left) and the corresponding results of restoration using the parameters estimated by our algorithm (right).


Fig. 12. PSF kernels corresponding to the above three images.

We also tested natural images with different blur lengths to show the efficiency of the algorithm. Notice that the blur was significant in some images. Generally, a very long blur causes visible degradation; if substantial information is missing from the original, the image cannot be reconstructed completely no matter what reconstruction algorithm is applied. Image restoration was performed under the same conditions as Fergus et al. [20], using 10 iterations of the Lucy–Richardson algorithm [1]. The camera-motion-blurred images and the results of their restoration are shown in Fig. 11, and the PSF kernels corresponding to these three restored images are shown in Fig. 12.

4. Conclusion

The algorithm that finds the size and direction of a linear motion blur proved to be successful. It is important to stress that the estimate of the blur parameters is made from a single photograph. The algorithm was developed under the same assumptions as Fergus et al. [20], i.e. the images to be restored are blurred by a uniform motion of the camera. For fairness of comparison, the same LR restoration method was used.

We found that if the direction of the blur is close to the direction of the x-axis (or y-axis), the estimation of the blur length is more reliable when computed from the power spectrum p_x of the x-component of the gradient (respectively p_y of the y-component). Otherwise, the estimate is computed from the sum p = p_x + p_y. Therefore, the algorithm uses three target functions corresponding to p_x, p_y, and p, from which three estimates of the blur length are computed. An estimate smaller than 1/10 of the image size is considered acceptable. In most cases, exactly one acceptable length was returned. If more than one acceptable value exists, the result of restoration reveals the correct choice.
We also performed tests with artificially blurred images, where the MSE was measured objectively in different situations: reconstruction for different numbers of LR iterations, application of the gradient, and robustness in the presence of noise. The advantage of our approach using the LR reconstruction algorithm over Al Maki's approach [19], where inverse filtering was used, is obvious from Fig. 10: lower MSE was achieved by our approach. Our method is especially suitable for larger blur lengths; for smaller blur lengths, the method of Colonnese et al. [2], for example, seems to work well.

Sometimes the motion blur is more complex and composed of several types of non-linear movement. A nice example of restoration in such special cases was given by Sroubek et al. [25], but multiple photographs were required. This can be considered in future work. In some cases, the stripes are hardly visible within the frequency spectra from which the target functions are constructed, or the stripes may be corrupted by other structures contained in the blurred image. In such cases, the value of the filter parameter D_0 can be slightly adjusted in order to obtain deeper minima of the target function. This could be achieved by adaptive filtering and is to be considered in future work.

The aim, that is, finding the length and direction of the blur and therefore the reconstruction kernel, was fulfilled. The results are comparable with other approaches [19,20] using the same LR reconstruction method.

References

[1] C. Gonzales, E. Woods, S. Eddins, Digital Image Processing Using Matlab, Prentice Hall, 2004.
[2] S. Colonnese, P. Campisi, G. Panci, G. Scarano, Blind image deblurring driven by nonlinear processing in the edge domain, EURASIP J. Appl. Signal Process. 16 (2004) 2462–2475.
[3] P. Campisi, S. Colonnese, G. Panci, G. Scarano, Multichannel Bussgang algorithm for blind restoration of natural images, in: Proc. IEEE International Conference on Image Processing, vol. 2, 2003, pp. 985–988.
[4] G. Panci, P. Campisi, S. Colonnese, G. Scarano, Multichannel blind image deconvolution using the Bussgang algorithm: Spatial and multiresolution approaches, IEEE Trans. Image Process. 12 (11) (2003) 1324–1337.
[5] S.Y. Fu, Y.C. Zhang, X.G. Zhao, Z.Z. Liang, Z.G. Hou, A.M. Zou, M. Tan, W.B. Ye, L. Bo, Computational Intelligence, Part 2, Lecture Notes in Artificial Intelligence, vol. 4114, 2006, pp. 866–871.
[6] R. Vio, J. Nagy, A. Wamsteker, Multiple-image deblurring with spatially-variant point spread functions, Astron. Astrophys. 434 (2) (2005) 795.
[7] A. Rav-Acha, S. Peleg, Motion-blurred images are better than one, Pattern Recogn. Lett. 26 (3) (2005) 311–317.
[8] K.E. Jang, J.C. Ye, Single channel blind image deconvolution from radially symmetric blur kernels, Opt. Express 15 (7) (2007) 3791–3803.
[9] M. Jiang, G. Wang, M.W. Skinner, J.T. Rubinstein, M.W. Vannier, Blind deblurring of spiral CT images, IEEE Trans. Medical Imaging 22 (7) (2003) 837–845.
[10] G. Wang, M.W. Vannier, M.W. Skinner, M.G.P. Cavalcanti, G.W. Harding, Spiral CT image deblurring for cochlear implantation, IEEE Trans. Medical Imaging 17 (2) (1998) 251–262.
[11] M.A. Santiago, G. Cisneros, E. Bernues, Iterative desensitisation of image restoration filters under wrong PSF and noise estimates, EURASIP J. Adv. Signal Process. (2007), Art. No. 72658.
[12] M. Sakano, N. Suetake, E. Uchino, A PSF estimation based on Hough transform concerning gradient vector for noisy and motion blurred images, IEICE Trans. Inform. Syst. E90D (1) (2007) 182–190.


[13] H.W. Zheng, O. Hellwich, Introducing dynamic prior knowledge to partially-blurred image restoration, in: Pattern Recognition, Lecture Notes in Comput. Sci., vol. 4174, 2006, pp. 111–121.
[14] J. Teuber, R. Ostensen, R. Stabell, R. Florentinnielsen, Rotate-and-stare: A new method for PSF estimation, Astron. Astrophys. Suppl. Ser. 108 (3) (1994) 509–512.
[15] R. Vio, J.G. Nagy, L. Tenorio, P. Andreani, C. Baccigalupi, W. Wamsteker, Digital deblurring of CMB maps: Performance and efficient implementation, Astron. Astrophys. 401 (1) (2003) 389–404.
[16] V. Krishnamurthi, Y.H. Liu, S. Bhattacharyya, J.N. Turner, T.J. Holmes, Blind deconvolution of fluorescence micrographs by maximum-likelihood estimation, Appl. Opt. 34 (29) (1995) 6633–6647.
[17] F. Rooms, W. Philips, D.S. Lidke, Simultaneous degradation estimation and restoration of confocal images and performance evaluation by colocalization analysis, J. Microsc. 218 (2005) 22–36, Part 1.
[18] S. Chardon, B. Vozel, K. Chehdi, Parametric blur estimation using the generalized cross-validation criterion and a smoothness constraint on the image, Multidimens. Syst. Signal Process. 10 (1999) 395–414.
[19] W.F. Al Maki, S. Sugimoto, Blind deconvolution algorithm for spatially-invariant motion blurred images based on inverse filtering and DST, Int. J. Circuits Syst. Signal Process. 1 (1) (2007) 92–100.
[20] R. Fergus, B. Singh, A. Hertzmann, S.T. Roweis, W.T. Freeman, Removing camera shake from a single photograph, ACM Trans. Graph. 25 (3) (2006) 787–794.
[21] J.N. Caron, N.M. Namazi, C.J. Rollins, Noniterative blind data restoration by use of an extracted filter function, Appl. Opt. 41 (32) (2002) 6884–6889.
[22] C. Gonzales, E. Woods, Digital Image Processing, Prentice Hall, 2002.
[23] R.N. Bracewell, Two-Dimensional Imaging, Prentice Hall, Englewood Cliffs, NJ, 1995, pp. 505–537.
[24] J.S. Lim, Two-Dimensional Signal and Image Processing, Prentice Hall, Englewood Cliffs, NJ, 1990, pp. 42–45.
[25] F. Sroubek, J. Flusser, Multichannel blind iterative image restoration, IEEE Trans. Image Process. 12 (9) (2003) 1094–1106.
[26] P. Spyridonos, F. Vilarino, J. Vitria, F. Azpiroz, P. Radeva, Anisotropic feature extraction from endoluminal images for detection of intestinal contractions, in: Lecture Notes in Comput. Sci., vol. 4191, Springer, 2006, pp. 161–168.

Michal Dobeš received his Ph.D. in Computer Science from Brno University of Technology in 2000, and his M.Sc. in Electrical Engineering from the Czech Technical University in Prague, Czech Republic, in 1987. He is an assistant professor at the Computer Science Department, Faculty of Science, Palacky University Olomouc, Czech Republic. He has collaborated with institutions such as CSIC Madrid, Spain, and Johns Hopkins University, Center for Talented Youth, USA (2003–2005). His main interests are image processing, image restoration, and biometry.

Libor Machala, born 1974, received his Ph.D. in the field of digital image analysis of the eye's iris at Palacky University in Olomouc, Czech Republic, in 2002. He became an associate professor at the same university in 2010. His main interests are digital image processing, iron-oxide-based nanomaterials, and Moessbauer spectroscopy. He collaborates with the Florida Institute of Technology, Florida State University, and Eotvos Lorand University Budapest, Hungary.

Tomáš Fürst, born 1978, studied mathematical modeling at Charles University in Prague, Czech Republic. He obtained his Ph.D. in 2005 at Palacky University in Olomouc, Czech Republic, in the field of the qualitative theory of multivalued differential equations. Since then, he has worked at Palacky University in Olomouc as a research and teaching assistant. His main field of interest is the mathematical modeling of natural phenomena, especially heat and mass transfer processes and porous media flow.
