© IEE Electronics Letters, Vol.41, No.6, pp 308-309, March 2005

Blind Image Deconvolution using a Space-variant Neural Network Approach

T. A. Cheema, I. M. Qureshi and A. Hussain

A space-variant neural network based on an autoregressive moving average (ARMA) process is proposed for blind image deconvolution. An extended cost function motivated by human visual perception is developed to simultaneously identify the blur and restore an image degraded by space-variant non-causal blur and additive white Gaussian noise (AWGN). Since the blur affects various regions of the image differently, the image is divided into blocks according to an assigned level of activity. This is shown to result in more effective enhancement of the textured regions while suppressing the noise in smoother backgrounds.

Introduction: The blurred image is modeled as an autoregressive moving average (ARMA) process, where the autoregressive (AR) part determines the image model coefficients and the moving average (MA) part determines the blur function of the system. In this letter, we develop an algorithm that performs simultaneous image restoration and blur identification using an artificial neural network (ANN). The ANN weights are updated by minimizing an extended cost function. Moreover, since we are dealing with space-variant blur, we divide the image into blocks and categorize them according to their activity, Λ(A), defined as

Λ(A) = Σ_{(i,j)∈A} Σ_{s=−1}^{1} Σ_{t=−1}^{1} { y(i,j) − y(i+s, j+t) }²    (1)
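For illustration only (not part of the original letter), a minimal NumPy sketch of how the block activity of (1) might be computed and how blocks could then be assigned to the four activity categories described below; the block size, the per-pixel normalization and the three thresholds are hypothetical choices, not values given in the letter.

```python
import numpy as np

def block_activity(y_block):
    """Activity of one block, as in (1): sum of squared differences between each
    pixel and its 8-neighbourhood. Uses edge padding at the block border."""
    p = np.pad(y_block, 1, mode="edge")
    h, w = y_block.shape
    act = 0.0
    for s in (-1, 0, 1):
        for t in (-1, 0, 1):
            shifted = p[1 + s:1 + s + h, 1 + t:1 + t + w]
            act += np.sum((y_block - shifted) ** 2)
    return act

def categorize_blocks(y, block=32, thresholds=(0.5, 2.0, 8.0)):
    """Split the observed image into blocks and label each VL/L/H/VH
    by comparing its (per-pixel) activity to hypothetical thresholds."""
    t_vl, t_l, t_h = thresholds
    labels = {}
    for i in range(0, y.shape[0], block):
        for j in range(0, y.shape[1], block):
            blk = y[i:i + block, j:j + block]
            a = block_activity(blk) / blk.size   # per-pixel normalization (an assumption)
            labels[(i, j)] = "VL" if a < t_vl else "L" if a < t_l else "H" if a < t_h else "VH"
    return labels
```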

Blocks are categorized as very high (VH) activity, high (H) activity, low (L) activity and very low (VL) activity. The noisy blurred image may thus be described by the following space-variant state-space model:

x(i,j) = Σ_{(k,l)∈S_a} a_{k,l}^{Λ(A)} x(i−k, j−l) + v_1(i,j)    (2)

y(i,j) = Σ_{(m,n)∈S_h} h_{m,n}^{Λ(A)} x(i−m, j−n) + v_2(i,j)    (3)

where x(i,j) is the undegraded image and y(i,j) is the observed image distorted by a blurring function h_{m,n}^{Λ(A)}, which is positive, symmetric and whose elements sum to one in order to conserve energy. v_1(i,j) and v_2(i,j) are mutually independent, zero-mean AWGN processes with variances σ²_{v1} and σ²_{v2}, respectively. The AR coefficients {a_{k,l}^{Λ(A)}} take one of four sets of values according to the block activity, with S_a as their support.
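As an illustrative aside, the observation model (3) could be simulated as follows to generate space-variant test data. The 5×5 Gaussian kernels, the per-block blur widths passed in `sigmas_by_block` (a mapping from each block's top-left corner to its blur width) and the noise level are assumptions made for this sketch, not the letter's experimental settings.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian PSF (positive, symmetric, sums to one)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def degrade(x, sigmas_by_block, block=32, noise_std=0.01, seed=0):
    """Apply a different blur h^{Lambda(A)} to each block, as in (3), then add AWGN v2.
    sigmas_by_block is assumed to cover every block of the image."""
    rng = np.random.default_rng(seed)
    y = np.empty_like(x, dtype=float)
    for (i, j), sigma in sigmas_by_block.items():
        h = gaussian_kernel(5, sigma)
        # blur the whole image, then keep only this block: avoids block-edge artefacts
        y[i:i + block, j:j + block] = convolve(x, h, mode="nearest")[i:i + block, j:j + block]
    return y + rng.normal(0.0, noise_std, x.shape)
```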

Structure of the network: A 2-D, three-layer feed-forward ANN structure is given in Fig. 1. The operation between the first layer L_1 and the second layer L_2 is given as

x̂_A(i,j) = Σ_{(k,l)∈S_a} a_{k,l}^{Λ(A)} x_A(i−k, j−l) + v_1(i,j)    (4)

where x̂_A(i,j) represents the output of the (i,j)th neuron in the Ath block of layer L_2 and x_A(i,j) represents the (i,j)th neuron value in the Ath block of layer L_1. After one complete iteration, the values of layer L_1 are replaced by the values of L_2, i.e. x_A(i,j) = x̂_A(i,j). The operation between layers L_2 and L_3 defines the space-variant MA process and is given as

ŷ_A(i,j) = Σ_{(m,n)∈S_h} h_{m,n}^{Λ(A)} x̂_A(i−m, j−n)    (5)
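A rough sketch of these two operations for a single block (not the authors' implementation): the AR step (4) maps layer L_1 to L_2 with the block's coefficients a^{Λ(A)}, and the MA step (5) maps L_2 to L_3 with the current blur estimate. The helper names and the use of `scipy.ndimage.convolve` are choices made for the sketch.

```python
import numpy as np
from scipy.ndimage import convolve

def ar_step(x_l1, a_coeffs, v1=None):
    """Layer L1 -> L2, eq. (4): space-variant AR prediction for one block.
    a_coeffs is a small coefficient mask over the support S_a."""
    x_hat = convolve(x_l1, a_coeffs, mode="nearest")
    return x_hat if v1 is None else x_hat + v1

def ma_step(x_hat_l2, h_est):
    """Layer L2 -> L3, eq. (5): convolve the current restored block with the
    current blur estimate h^{Lambda(A)}."""
    return convolve(x_hat_l2, h_est, mode="nearest")

# One outer iteration for a single block (weights assumed given):
# x_hat = ar_step(x_block, a_block); y_hat = ma_step(x_hat, h_block)
# x_block = x_hat   # layer L1 is replaced by layer L2 after each complete pass
```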

The Cost Function: The hetero-associative error E_y(w) consists of a data fidelity measure E_I(w) in layer L_3, an image regularization error E_II(w), a blur-domain regularization error E_III(w), the local standard deviation mean square error (LSMSE) between the degraded image and its estimate, E_IV(w), and the LSMSE between the restored and degraded images, E_V(w), i.e.

E_y(w) = E_I(w) + E_II(w) + E_III(w) + E_IV(w) + E_V(w)    (6)

where the weight vector w consists of {h_{m,n}} and {a_{k,l}} for all m, n, k, l. The terms involved in the hetero-associative error are given below:

E_I(w) = (1/MN) Σ_{(i,j)∈R_3} { y_A(i,j) − ŷ_A(i,j) }²    (7)

E_II(w) = (λ_{Λ(A)}/MN) Σ_{(i,j)∈R_3} { Σ_{(m,n)} d_{m,n} x̂_A(i−m, j−n) }²    (8)

E_III(w) = (ϕ/MN) Σ_{(m,n)} { Σ_{(q,r)} d_{q,r} h_{m−q,n−r}^{Λ(A)} }²    (9)

E_IV(w) = (1/MN) Σ_{(i,j)∈R_3} { σ_A²(ŷ_A(i,j)) − σ_A²(y_A(i,j)) }²    (10)

E_V(w) = (1/MN) Σ_{(i,j)∈R_3} { σ_A²(x̂_A(i,j)) − σ_A²(y_A(i,j)) }^{−2}    (11)

where σ_A²(y_A(i,j)) is defined as in [1]. The first three terms of the hetero-associative error are widely reported in the literature [2] and are given in (7), (8) and (9). The last two terms are motivated by the human visual perception system. E_IV(w) is the mean square error between the local standard deviations of the degraded image and its estimate, which should be minimized in order to obtain statistically homogeneous regions. The inverse of E_V(w) is the mean square error between the local standard deviations of the restored and degraded images, which must be maximized. The second important error for the neural network is the auto-associative error E_x(w), which consists of a data fidelity measure and the LSMSE between layers L_1 and L_2, and is given as

E_x(w) = (1/MN) Σ_{(i,j)∈R_2} { x_A(i,j) − x̂_A(i,j) }² + (1/MN) Σ_{(i,j)∈R_3} { σ_A²(x̂_A(i,j)) − σ_A²(x_A(i,j)) }²    (12)
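To make the perception-motivated terms concrete, the following sketch evaluates the data-fidelity term (7), the LSMSE terms (10)-(11) and the auto-associative error (12) for a single block, with a 3×3 `uniform_filter` window standing in for the local statistics of [1]. The omission of the regularization terms (8)-(9), the window size and the small `eps` guard in the inverse term are assumptions of the sketch, not the letter's formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(img, win=3):
    """Local standard deviation over a win x win window (stand-in for sigma_A)."""
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def hetero_associative_terms(y, y_hat, x_hat, eps=1e-6):
    """E_I (7) plus the perception-motivated LSMSE terms E_IV (10) and E_V (11)."""
    e1 = np.mean((y - y_hat) ** 2)                                    # data fidelity
    e4 = np.mean((local_std(y_hat) ** 2 - local_std(y) ** 2) ** 2)    # match local stats of y
    # E_V uses an inverse-square penalty: minimising it drives the local variance of the
    # restored image away from that of the degraded image (maximising their LSMSE)
    e5 = np.mean((local_std(x_hat) ** 2 - local_std(y) ** 2 + eps) ** -2)
    return e1, e4, e5

def auto_associative_error(x_l1, x_hat_l2):
    """E_x (12): fidelity between layers L1 and L2 plus their LSMSE."""
    fid = np.mean((x_l1 - x_hat_l2) ** 2)
    lsmse = np.mean((local_std(x_hat_l2) ** 2 - local_std(x_l1) ** 2) ** 2)
    return fid + lsmse
```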

Learning algorithm: The weights of the ANN are updated using the steepest-gradient algorithm as follows:

a_{k,l}^{Λ(A)}(new) = a_{k,l}^{Λ(A)}(old) − β ∂{E_y(w) + E_x(w)}/∂a_{k,l}^{Λ(A)}    (13)

h_{m,n}^{Λ(A)}(new) = h_{m,n}^{Λ(A)}(old) − α ∂E_y(w)/∂h_{m,n}^{Λ(A)}    (14)

where β and α are step sizes. Once convergence is achieved, layer L_2 represents the restored image.
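The letter does not state the gradient expressions explicitly; as a transparent (if slow) stand-in, the updates (13)-(14) could be driven by finite-difference gradients, as sketched below. The routines `total_cost` and `hetero_cost`, assumed to be supplied by the caller, evaluate E_y + E_x and E_y for the current block, and the final projection of h onto positive, unit-sum kernels reflects the constraints on the blur stated earlier.

```python
import numpy as np

def finite_diff_grad(cost, w, delta=1e-4):
    """Numerical gradient of a scalar cost with respect to a weight array w."""
    g = np.zeros_like(w)
    for idx in np.ndindex(w.shape):
        w_p = w.copy(); w_p[idx] += delta
        w_m = w.copy(); w_m[idx] -= delta
        g[idx] = (cost(w_p) - cost(w_m)) / (2.0 * delta)
    return g

def update_weights(a, h, total_cost, hetero_cost, beta=1e-3, alpha=1e-3):
    """One steepest-descent step of (13) and (14) for one activity class."""
    a_new = a - beta * finite_diff_grad(lambda w: total_cost(w, h), a)      # eq. (13)
    h_new = h - alpha * finite_diff_grad(lambda w: hetero_cost(a, w), h)    # eq. (14)
    # keep the blur estimate positive and normalized, as required of h
    h_new = np.clip(h_new, 0.0, None)
    return a_new, h_new / h_new.sum()
```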

Simulation Studies: The improvement in signal-to-noise ratio (ISNR) of the image and the normalized mean square error (NMSE) of the identified blur are the figures of merit used; they are defined as

ISNR = 10 log_{10} [ Σ_{s,t} { y(s,t) − x(s,t) }² / Σ_{s,t} { x̂(s,t) − x(s,t) }² ]    (15)

NMSE = Σ_{m,n} { h_{m,n}^{Λ(A)} − ĥ_{m,n}^{Λ(A)} }² / Σ_{m,n} { h_{m,n}^{Λ(A)} }²    (16)
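These figures of merit translate directly into code; a brief NumPy rendering follows, with x, y and x̂ the original, degraded and restored images and h, ĥ the true and identified blurs, assumed to be equally sized arrays.

```python
import numpy as np

def isnr(x, y, x_hat):
    """Improvement in SNR (15), in dB: degradation energy over residual energy."""
    return 10.0 * np.log10(np.sum((y - x) ** 2) / np.sum((x_hat - x) ** 2))

def nmse(h, h_hat):
    """Normalized mean square error (16) of the identified blur."""
    return np.sum((h - h_hat) ** 2) / np.sum(h ** 2)
```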

The proposed algorithm was applied to the images 'peppers' and 'lena' given in Fig. 2. The original images were degraded by a 5×5 space-variant Gaussian blur, followed by 30 dB additive noise, to form the degraded images in Fig. 3. The restored images are shown in Fig. 4. It is clear from Fig. 4 that the proposed algorithm is effective in restoring both images, providing clarity in the finely textured regions while suppressing the noise and ringing effects in the smooth backgrounds. The obtained ISNR values of 3.43 dB and 3.63 dB, respectively, are higher than those produced by the image restoration algorithms reported by Yap [2] and Zhou [3]. The blurs identified by the proposed algorithm yielded NMSE values of 0.0399 and 0.0872 for the two images; they bear close resemblance to the true Gaussian blur and are comparable to the value reported by Yap, as shown in Table 1.

References:

1. Perry, S. W., and Guan, L.: 'Perception based adaptive image restoration', Proc. ICASSP'98, May 1998, pp. 2893-2896.
2. Yap, K. H., Guan, L., and Liu, W.: 'A recursive soft-decision approach to blind image deconvolution', IEEE Trans. Signal Processing, 2003, 51, (2), pp. 515-526.
3. Zhou, Y. T., Chellappa, R., Vaid, A., and Jenkins, B. K.: 'Image restoration using a neural network', IEEE Trans. Acoustics, Speech and Signal Processing, 1988, 36, (7), pp. 1141-1151.

Authors' affiliations:

T. A. Cheema and I. M. Qureshi, M. A. Jinnah University, Blue Area, Jinnah Avenue, Islamabad, Pakistan. e-mail: [email protected]

A. Hussain, University of Stirling, UK. e-mail: [email protected] (Corresponding Author)


Figure captions:

Fig. 1 The structure of the proposed artificial neural network based on the ARMA model.

Fig. 2 Original images of ‘lena’ and ‘peppers’.

Fig. 3 Degraded images of 'lena' and 'peppers' obtained using space-variant Gaussian blur with 30 dB AWGN.

Fig. 4 Restored images of 'lena' and 'peppers' using the proposed algorithm.

Table 1 ISNR of restored images degraded by Gaussian blur and AWGN, for different image restoration methods.

Figure 1

Figure 2: (a) 'peppers', (b) 'lena'

Figure 3

Figure 4

Table 1

Method                              Image       ISNR (dB)   MSE
Proposed algorithm                  'peppers'   3.43        0.0399
Proposed algorithm                  'lena'      3.627       0.087
Hierarchical model-based NN (Yap)   'lena'      2.95        0.032
Hopfield NN (Zhou), λ = 10^-3       'lena'      2.59        -
Hopfield NN (Zhou), λ = 10^-6       'lena'      0.44        -
