
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 21, NO. 12, DECEMBER 2012

Recursive Algorithms for Bias and Gain Nonuniformity Correction in Infrared Videos Daniel R. Pipa, Student Member, IEEE, Eduardo A. B. da Silva, Senior Member, IEEE, Carla L. Pagliari, Senior Member, IEEE, and Paulo S. R. Diniz, Fellow, IEEE

Abstract— Infrared focal-plane array (IRFPA) detectors suffer from a fixed-pattern noise (FPN), also known as spatial nonuniformity, that degrades image quality. FPN is still a serious problem, despite recent advances in IRFPA technology. This paper proposes new scene-based correction algorithms for the continuous compensation of bias and gain nonuniformity in FPA sensors. The proposed schemes use recursive least-squares and affine projection techniques that jointly compensate for both the bias and gain of each image pixel, presenting rapid convergence and robustness to noise. Experiments with synthetic and real IRFPA videos show that the proposed solutions are competitive with the state of the art in FPN reduction, producing recovered images with higher fidelity. Index Terms— Adaptive filtering, fixed-pattern noise, infrared video, nonuniformity correction.

I. INTRODUCTION

Nowadays, most infrared imaging sensors use infrared focal plane arrays (IRFPAs). Each IRFPA is formed by an array of infrared detectors aligned at the focal plane of the imaging system. Due to the fabrication process, each detector presents an unequal response under the same infrared (IR) stimulus [1]. This spatially nonuniform response produces corrupted images with a fixed-pattern noise (FPN) that has a slow and random drift requiring constant compensation [2]. Hence, the output signal of IR detectors needs to be corrected to produce an image with the quality required by the application. Figure 1 shows a real-life infrared image corrupted with real FPN. An accepted approach to FPN correction is to model the pixel responses as affine, that is, a multiplicative term added to a constant [3]; we thus define for each detector (pixel) an

Manuscript received March 10, 2010; revised June 19, 2012; accepted August 28, 2012. Date of publication September 13, 2012; date of current version November 14, 2012. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Farhan A. Baqai. D. R. Pipa is with the Universidade Federal do Rio de Janeiro, Rio de Janeiro 21945-970, Brazil, and also with Petrobras Research and Development Center, Rio de Janeiro 21945-970, Brazil (e-mail: [email protected]). E. A. B. da Silva and P. S. R. Diniz are with the Department of Electronics, Universidade Federal do Rio de Janeiro, Rio de Janeiro 21945-970, Brazil (e-mail: [email protected]; [email protected]). C. L. Pagliari is with the Department of Electrical Engineering, Instituto Militar de Engenharia, Rio de Janeiro 22290270, Brazil (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIP.2012.2218820

Fig. 1. Image corrupted with real FPN.

offset, or bias, and a gain. By correcting these offsets and gains, one aims to obtain a uniform response for the entire FPA. In addition, since these FPA parameters drift over time, such correction has to be performed periodically or even on a frame-by-frame basis. In most sensors, since the bias nonuniformity dominates the gain nonuniformity, many nonuniformity correction methods do not compensate for the latter [4]. However, better results are achieved when both parameters are corrected. This paper proposes new adaptive scene-based nonuniformity correction (NUC) algorithms that jointly compensate for the bias and gain parameters on a frame-by-frame basis while progressively improving registration. The key contribution of this work is to show how to formulate the bias and gain corrections for NUC within the adaptive filtering framework, particularly the RLS (Recursive Least Squares) and AP (Affine Projection) algorithms. The proposed solutions produce FPN reduction competitive with the available techniques, while generating perceptually better images.

This paper is organized as follows. Section II reviews the nonuniformity problem in IRFPAs, as well as the most widely used correction techniques. Section III discusses the NUC techniques and points out in which class of solutions the proposed NUC methods fall. In Section IV, we briefly provide the signal description. Section V proposes the RLS solution to the NUC problem, which is followed by the corresponding solution employing the AP algorithm in Section VI. In Section VII the experimental results with real and synthetic infrared videos are presented, along with a comparison to other techniques. Section VIII contains the final remarks and conclusions. In this paper, we use the terms fixed-pattern noise and spatial nonuniformity interchangeably.

II. IRFPA AND FIXED-PATTERN NOISE MODELS

Although the response of each pixel of an IRFPA is nonlinear, a widely used and accepted model for an FPA sensor is the bias-gain linear model [2]–[10], given by

y_k(i, j) = a(i, j) x_k(i, j) + b(i, j),   (1)

where y_k(i, j) is the response (measured signal) of the (i, j)-th pixel of the IR camera at frame k, a(i, j) is the gain associated with the (i, j)-th pixel, x_k(i, j) is the uncorrupted image, that is, the incident infrared radiation collected by the respective detector at pixel coordinates (i, j) at frame k, b(i, j) is the bias associated with the pixel at coordinates (i, j), and k = 1, 2, . . . represents the frame number associated with its time instant. Nonuniformity correction (NUC) algorithms aim to estimate the actual infrared radiation x_k(i, j) by estimating the gain and offset parameters from the readout values y_k(i, j). Once the bias b̂(i, j) and gain â(i, j) are estimated, an estimate of the real and uncorrupted infrared image is given by:

x̂_k(i, j) = (y_k(i, j) − b̂(i, j)) / â(i, j).   (2)

Note that, although the bias and gain drift over time, we have dropped their dependency on the frame index k. This can be done because the drift presented by FPN is rather slow, which favors modeling the parameters as time invariant while tracking their slow variation.
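To make the pixel model and its correction concrete, the short sketch below (our own illustration, not code from the paper, with made-up gain and bias statistics) corrupts a small frame according to (1) and recovers it exactly with (2) when the parameter estimates are perfect:

```python
import numpy as np

rng = np.random.default_rng(0)

# True scene radiance x_k and per-pixel FPN parameters (illustrative values).
x = rng.uniform(0.0, 255.0, size=(4, 4))
a = 1.0 + 0.05 * rng.standard_normal((4, 4))   # gain nonuniformity
b = 0.5 * rng.standard_normal((4, 4))          # bias nonuniformity

# Observation model (1): y_k(i,j) = a(i,j) x_k(i,j) + b(i,j).
y = a * x + b

# Correction (2) with perfect estimates recovers the scene exactly.
x_hat = (y - b) / a
assert np.allclose(x_hat, x)
```

With imperfect estimates, the residual error in x̂ is what the adaptive algorithms of Sections V and VI progressively drive down.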

III. NONUNIFORMITY CORRECTION TECHNIQUES

If we write equation (1) for every pixel (i, j) and two values of k, we can solve the system of equations and compute â(i, j) and b̂(i, j), as shown in equation (2). However, this solution requires the knowledge of x_k(i, j). The NUC methods can be categorized in two classes according to the way the values of a(i, j) and b(i, j) are estimated: calibration-based (or reference-based), and scene-based.

Reference-based calibration methods for NUC use uniform infrared sources (blackbody radiators) so that x_k(i, j) is precisely known for all (i, j). The most widespread technique is the two-point calibration method [3], which employs two blackbody radiation sources at different temperatures to calculate both the gain and bias parameters. Despite providing radiometrically accurate corrected imagery, this kind of method interrupts the normal operation of the system during the calibration stage, which is inconvenient in many applications.

Scene-based NUC techniques can overcome this drawback by exploiting motion-related features in IR videos in order to estimate the gain and bias. In general, these techniques are classified as statistical and registration-based. Statistical algorithms rely on the assumption that all detectors in the array are exposed to the same range of irradiance (i.e., the same statistics) within a sequence of frames. This assumption is valid only if the scene content does not vary significantly from frame to frame. Correction is achieved by adjusting the gain and bias parameters of each pixel in order to obtain the same mean and variance for every pixel in the array. Statistical algorithms have been reported by Harris [11], Hayat [12], Torres [5], [6], Scribner [13], and others.

Registration-based algorithms use the idea that each detector should have an identical response when observing the same scene point over time (i.e., the same radiance). These algorithms often need a motion-estimation stage to align consecutive frames and compare the responses of two different pixels to the same radiance. The bias and gain are estimated so that the responses become similar. In this case, it is also assumed that the scene does not change considerably between consecutive frames. Registration-based algorithms have been proposed by Sakoglu et al. [10], Hardie [4], [7], Ratliff [2], [14], Averbuch [8], and others.

Our methods differ from the previously proposed ones, e.g. Sakoglu et al. [10], because we consider both gain and bias jointly and use a more flexible and general motion model. An RLS NUC method was presented in [15] by Torres et al. As they point out, the validity of that method rests on the assumption that the scene is constantly moving with respect to the detector, which may not always be true. In contrast, our RLS method assumes only global motion and makes no assumption on how it varies. Rather, we estimate motion from the frames and use it explicitly when defining the error. Thus, our method can handle a wider class of IR videos.

IV. PROBLEM STATEMENT

As previously mentioned, the key idea is to estimate the bias and the gain associated with each pixel in the image, and then use equation (2) to estimate the real and uncorrupted image. First, we write equation (1) in vector notation as

y_k = A x_k + b,   (3)

where y_k is an N-dimensional vector representing the observed image at time k, A is an N × N diagonal matrix whose elements are the gain factors associated with the image pixels, x_k is an N-dimensional vector representing the real image at time k, and b is a vector representing the bias of the acquired data, with all vectors in lexicographical order. N is the number of pixels in the image. The gain and bias (offset) factors are considered time invariant due to their slow drift [6]. If Â and b̂ are estimated values of the gain and bias, respectively, an estimate of the real image is given by:

x̂_k = Â⁻¹ (y_k − b̂).   (4)

As this work proposes the continuous estimation of the bias and gain parameters, we model the variation of the frames


in time using a motion equation between two consecutive frames of an IR image sequence as follows:

x_k = M_k x_{k−1} + ν_k,   (5)

where M_k is the matrix that implements the displacement between consecutive frames k − 1 and k, and ν_k is the vector that models the frame updates that cannot be obtained by a simple displacement. We suppose that the motion between two successive frames is obtained by a motion estimation algorithm, and also that the vector ν_k containing the updates is negligible. In this work we perform motion estimation using the LIPSE algorithm described in Appendix VIII-A. For more detailed information on the LIPSE algorithm, the reader is referred to [2]. By combining equations (3)–(5), it is possible to write the estimation error vector of frame k based on frame k − 1, the shift matrix M_k, and the gain and bias estimates as:

ϵ_k = y_k − ŷ_k = y_k − ÂM_kÂ⁻¹(y_{k−1} − b̂) − b̂,   (6)

where ϵ_k is the estimation error vector. The mean square error is given by

ε_k = (1/N) Σ_{i=1}^{N} [ϵ_k(i)]² = ϵ_kᵀ ϵ_k / N,   (7)

where N is the total number of pixels in the image.

V. RLS ALGORITHM

RLS algorithms aim to minimize a weighted sum of squared errors [16], [17], that is,

ξ_k^RLS = Σ_{l=0}^{k} λ^{k−l} ε_l,   (8)

where 0 ≪ λ ≤ 1 is referred to as the forgetting factor. After some manipulation, it can be shown that the update equation for the RLS algorithm may be written as [16], [18]:

b̂_{k+1} = b̂_k − Ĥ_k⁻¹ ∇_b ε_k,   (9)

where Ĥ_k is an estimate of the Hessian matrix and ∇_b ε_k is the a priori error gradient. The following relations hold for the Hessian matrix [19]:

H_k ≜ ∇_b² ξ_k^RLS = ∂²ξ_k^RLS/∂b∂bᵀ = λĤ_{k−1} + ∂²ε_k/∂b∂bᵀ.   (10)

The above equations show how to update the Hessian matrix at each step.

A. Bias Correction by RLS Method

It can be shown that the term ∇_b ε_k in equation (9) is given by (the k index will be hidden for simplicity)

∇_b ε = (2/N) (∂ϵ/∂b) ϵ.   (11)

With this definition, we have that:

∂ϵ/∂b = (AMA⁻¹)ᵀ − I.   (12)

The last term of equation (10) can be computed as:

∂²ε/∂b∂bᵀ = (1/N) Σ_{i=1}^{N} ∂²[ϵ(i)]²/∂b∂bᵀ = (2/N) (∂ϵ/∂b)(∂ϵ/∂b)ᵀ.   (13)

Since the term 2/N is constant, we define Ĥ = (2/N) Ĥ′. The complete RLS algorithm for bias correction is given in Table I.

TABLE I
RLS ALGORITHM FOR BIAS CORRECTION
Do for k ≥ 0
  ϵ_k = y_k − AM_kA⁻¹(y_{k−1} − b̂_k) − b̂_k
  R_k = (AM_kA⁻¹ − I)ᵀ · (AM_kA⁻¹ − I)
  Ĥ′_k = λĤ′_{k−1} + R_k
  u_k = (AM_kA⁻¹ − I)ᵀ ϵ_k
  v_k = BCGSTABL(Ĥ′_k, u_k)
  b̂_{k+1} = b̂_k − v_k

The BCGSTABL symbol in the algorithm of Table I represents the solution of the equation Ĥ′_k v_k = u_k by the so-called BiCGstab(ℓ), the Biconjugate Gradient Stabilized (ℓ) method [20], in order to avoid the inversion of the matrix Ĥ′_k. This method is widely used for solving large sparse unsymmetric linear systems and has provided good results in our experiments.

B. Why BCGSTABL Instead of the Matrix Inversion Lemma?

The adaptive filtering literature [16], [17] suggests, for the computation of the Hessian inverse, the use of the matrix inversion lemma

[A + BCD]⁻¹ = A⁻¹ − A⁻¹B (DA⁻¹B + C⁻¹)⁻¹ DA⁻¹,   (14)

where A, B, C and D are matrices of appropriate dimensions, and A and C are nonsingular. Through this lemma it is possible to update (Ĥ′_k)⁻¹ with O(N²) multiplications instead of the O(N³) multiplications needed for the direct inversion of Ĥ′_k. However, this reduction in complexity is achieved when B and D are chosen to be vectors, i.e. B = Dᵀ = f, and C = α is chosen to be a scalar. In this case, the middle term (DA⁻¹B + C⁻¹)⁻¹ is easily inverted, as it becomes the scalar (fᵀA⁻¹f + α⁻¹)⁻¹. In our case, B and D have to be matrices, i.e. B = Dᵀ = (AM_kA⁻¹ − I)ᵀ and C = I. Thus, the middle term (DA⁻¹B + C⁻¹)⁻¹ in equation (14) is a matrix which has to be inverted, and no complexity reduction is obtained. For this reason, we have chosen the BCGSTABL algorithm: it avoids explicit matrix inversion, is suitable for solving large sparse unsymmetric linear systems, and tends to converge in few steps (fewer than 30 iterations).
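A minimal numerical sketch of the bias loop of Table I (our own illustration, not the authors' code): we take unit gains (A = I) so that only the bias update is exercised, use pure circulant shifts for M_k, and replace the BiCGstab(ℓ) solver with a dense pseudo-inverse, because for an exact permutation M_k the matrix (I − M_k) has the constant vector in its null space, so the bias is only observable up to a constant offset:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16  # pixels, lexicographically ordered

b_true = rng.standard_normal(N)          # fixed bias pattern to recover
x = rng.uniform(0.0, 255.0, N)           # initial scene radiance

b_hat = np.zeros(N)
H = np.zeros((N, N))
lam = 0.99                               # forgetting factor

for k in range(1, 6):
    s = k % 3 + 1                        # varying circulant shift
    M = np.roll(np.eye(N), s, axis=0)    # displacement matrix M_k
    x_prev, x = x, np.roll(x, s)         # frames obey x_k = M_k x_{k-1}
    y_prev = x_prev + b_true             # observations, model (3) with A = I
    y = x + b_true

    # Table I with A = I: error, R_k, Hessian update, gradient-like term.
    eps = y - M @ (y_prev - b_hat) - b_hat
    J = M - np.eye(N)                    # plays the role of A M_k A^{-1} - I
    R = J.T @ J
    H = lam * H + R
    u = J.T @ eps
    # The paper solves H v = u with BiCGstab(l); we use a pseudo-inverse
    # here because (I - M) is singular for a pure permutation M.
    v = np.linalg.pinv(H) @ u
    b_hat = b_hat - v

# The bias is recovered up to a constant offset (the unobservable component).
err = b_hat - b_true
assert np.allclose(err, err.mean(), atol=1e-6)
```

In the paper's setting, M_k includes subpixel interpolation and boundary effects, and the sparse system is solved iteratively; the dense pseudo-inverse above is only a stand-in for this sketch.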


C. Gain Correction by Tensorial-RLS Method

If we apply here a procedure similar to the one used for bias correction in Subsection V-A, we can write the update equation as

Â_{k+1} = Â_k − Ĝ_k⁻¹ ∇_A ε_k,   (15)

where Ĝ_k is a Hessian matrix estimate and ∇_A ε_k is the a priori error gradient [16]. As in equation (11), the error gradient is given by (k index hidden)

∇_A ε = (2/N) (∂ϵ/∂A) ϵ.   (16)

The last equation needs the evaluation of the derivative of the vector ϵ with respect to the matrix A. This operation is achieved by differentiating each element of the vector ϵ with respect to the full matrix A [19]. The result is a row vector in which each element is a matrix, that is, a three-dimensional tensor. As a result, we refer to the algorithm that we have developed to solve this problem as the Tensorial-RLS algorithm. An approach to the development of these algorithms would be to use tensorial notation and define tensorial operations. Instead, in this work we opted to develop the gain correction by first deducing a pixel-by-pixel gain estimator. Then, in Section V-C.1, we further develop the method by grouping the pixel-by-pixel operations in vectors and matrices, resulting in a compact-representation algorithm which is more easily implemented.

The update equation for the gain estimation of each pixel by Tensorial-RLS can be written as

â_{k+1}(i) = â_k(i) − Ĝ_k⁻¹ ∇_{a(i)} ε_k,   (17)

where Ĝ_k is a Hessian matrix estimate and ∇_{a(i)} ε_k is the a priori error gradient [16]. The gains associated with the pixels of the image are lexicographically ordered and their individual values are accessed through the index i (that is, a(i) = [A]_{ii}). Strictly speaking, ∇_{a(i)} ε_k is a scalar, as it is defined as ∂ε_k/∂a(i). Consequently, Ĝ_k is not a matrix, but a simple scalar which represents the second-order partial derivative of ε_k with respect to a(i). Thus, the symbol g_k(i) will be used instead of Ĝ_k. When the gradient is applied to the error defined in equation (7), one gets the following (k index avoided for simplicity):

∇_{a(i)} ε = (2/N) (∂ϵ/∂a(i))ᵀ ϵ = −(2/N) zᵀ ( (∂A/∂a(i)) M A⁻¹ + A M (∂A⁻¹/∂a(i)) )ᵀ ϵ,   (18)

where z = (y − b). The partial derivatives will be expressed in equations (22) and (23). The Hessian matrix is given by [19]:

G_k ≜ ∇²_{a(i)} ξ_k^RLS = ∂²ξ_k^RLS/∂a²(i) = λĜ_{k−1} + ∂²ε_k/∂a²(i).   (19)

The second-order gradient of the error needed above is obtained as:

∂²ε/∂a²(i) = (2/N) [ (∂²ϵ/∂a²(i))ᵀ ϵ + (∂ϵ/∂a(i))ᵀ (∂ϵ/∂a(i)) ].   (20)

Applying the second-order gradient to the estimation error, one obtains:

∂²ϵ/∂a²(i) = −( 2 (∂A/∂a(i)) M (∂A⁻¹/∂a(i)) + A M (∂²A⁻¹/∂a²(i)) ) z.   (21)

In order to aid the visualization of the equations, we will use Ā = A⁻¹. Moreover, the derivatives of the gain matrix A and of its inverse are:

Ȧ(i) = ∂A/∂a(i) = diag(0, …, 1, …, 0),   (22)
Ǡ(i) = ∂A⁻¹/∂a(i) = diag(0, …, −a⁻²(i), …, 0),   (23)
Ǟ(i) = ∂²A⁻¹/∂a²(i) = diag(0, …, 2a⁻³(i), …, 0),   (24)

where only the ii-th elements differ from zero. Then, equation (18) can be written more compactly as

∇_{a(i)} ε = −(2/N) zᵀ ( Ȧ(i)MĀ + AMǠ(i) )ᵀ ϵ.   (25)

The Tensorial-RLS algorithm for gain estimation is shown in Table II. The "tensorial" denomination comes from the fact that the solution for the whole image needs tensorial notation, whereas the solution for individual pixels uses matrix-vector and matrix-matrix multiplications.

TABLE II
TENSORIAL-RLS ALGORITHM FOR GAIN CORRECTION
Do for k ≥ 0
  z_k = y_{k−1} − b
  ϵ_k = y_k − A_k M_k Ā_k z_k − b
  Do for 1 ≤ i ≤ N
    u_k(i) = ( Ȧ_k(i) M_k Ā_k + A_k M_k Ǡ_k(i) ) z_k
    w_k(i) = ( 2 Ȧ_k(i) M_k Ǡ_k(i) + A_k M_k Ǟ_k(i) ) z_k
    v_k(i) = w_kᵀ(i) ϵ_k + u_kᵀ(i) u_k(i)
    g_k(i) = λ g_{k−1}(i) + v_k(i)
    â_{k+1}(i) = â_k(i) − g_k⁻¹(i) u_kᵀ(i) ϵ_k

1) Vectorization of the Tensorial-RLS Algorithm for Gain Correction: Due to the sparse structure of the matrices in the Tensorial-RLS algorithm, it is possible to group the calculations in order to transform the loop (in the Tensorial-RLS algorithm of Table II) into matrix operations. This can lead to a significant improvement in the speed of the algorithms when implemented in a matrix-oriented programming language such as MATLAB.

First, we define a matrix Z = diag(z) and we redefine (k index hidden to aid visualization)

Ǡ = −diag( a⁻²(1), a⁻²(2), …, a⁻²(N) ),   (26)
Ǟ = 2 diag( a⁻³(1), a⁻³(2), …, a⁻³(N) ),   (27)

which are calculated at each iteration.

Let us first analyze the term u_k(i) in Table II. Its squared norm will be used to calculate v_k(i). However, we can compute u_k(i) for all i in one step, store the results in the columns of a matrix U_k, and then compute their squared norms. The latter operation will be represented by Σ_i^N [U_k ∘ U_k]_{ij}, meaning that the squared norms of the columns are calculated and stored in a row vector. The symbol "∘" represents the Hadamard or element-wise product, that is, p = q ∘ r is given by [p]_i = [q]_i [r]_i. The same idea can be applied to the term w_k(i) in Table II: the results for all i are calculated in one step and stored in the matrix W_k, which is further multiplied by ϵ_k. It is important to note that the construction of the matrices U_k and W_k is possible owing to the special sparsity of the matrices Ȧ_k(i), Ǡ_k(i) and Ǟ_k(i). They have only the ii-th element different from zero, thus a post-multiplication by a matrix (e.g. Ȧ_k(i)M_kĀ_k) will conserve only the i-th row, whereas a pre-multiplication (e.g. A_kM_kǠ_k(i)) will keep only the i-th column.

The vector version of the Tensorial-RLS algorithm for gain correction is shown in Table III, where the Hadamard or element-wise division p = q ⊘ r is given by [p]_i = [q]_i / [r]_i.

TABLE III
TENSORIAL-RLS ALGORITHM FOR GAIN CORRECTION (VECTOR FORM)
Do for k ≥ 0
  z_k = y_{k−1} − b
  ϵ_k = y_k − A_k M_k Ā_k z_k − b
  Z_k = diag(z_k)
  U_k = diag(M_k Ā_k z_k) + A_k M_k Ǡ_k Z_k
  W_k = 2 (I ∘ M_k) Ǡ_k Z_k + A_k M_k Ǟ_k Z_k
  v_k = W_kᵀ ϵ_k + ( Σ_i [U_k ∘ U_k]_{ij} )ᵀ
  g_k = λ g_{k−1} + v_k
  α_k = (U_kᵀ ϵ_k) ⊘ g_k
  A_k = A_{k−1} + diag(α_k)
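The derivative matrices (22)–(24) and the vectorized grouping can be checked numerically. The sketch below (our own illustration, with small illustrative sizes) compares finite differences of A⁻¹ against the closed forms, and the per-column loop against the Hadamard form used in Table III:

```python
import numpy as np

rng = np.random.default_rng(3)
N, i, h = 6, 2, 1e-5
a = rng.uniform(0.5, 2.0, N)   # positive gains, safely away from zero

def Ainv(a):
    return np.diag(1.0 / a)

# Finite differences of A^{-1} with respect to a(i).
ap, am = a.copy(), a.copy()
ap[i] += h
am[i] -= h
d1 = (Ainv(ap) - Ainv(am)) / (2 * h)
d2 = (Ainv(ap) - 2 * Ainv(a) + Ainv(am)) / h ** 2

# Closed forms (23) and (24): only the ii-th element is nonzero.
d1_exact = np.zeros((N, N)); d1_exact[i, i] = -a[i] ** -2
d2_exact = np.zeros((N, N)); d2_exact[i, i] = 2 * a[i] ** -3
assert np.allclose(d1, d1_exact, atol=1e-7)
assert np.allclose(d2, d2_exact, atol=1e-4)

# Vectorized squared column norms: per-column loop vs Hadamard product
# followed by a column sum, as in the v_k line of Table III.
U = rng.standard_normal((N, N))
v_loop = np.array([U[:, j] @ U[:, j] for j in range(N)])
v_vec = np.sum(U * U, axis=0)
assert np.allclose(v_loop, v_vec)
```

The Hadamard-and-sum form is exactly the grouping that turns the per-pixel loop of Table II into the matrix operations of Table III.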

VI. AFFINE PROJECTION ALGORITHMS

Affine projection (AP) algorithms are a class of adaptive-filtering algorithms which recycle old data in order to improve convergence as compared to stochastic-gradient-type algorithms. Also referred to as data-reusing algorithms, the AP algorithms are known to be viable alternatives to the RLS algorithms, achieving lower computational complexity in situations where the input signal is correlated. The penalty to be paid when increasing the amount of data reuse is a slight increase in the algorithm misadjustment [21]. Due to memory limitations in the implementation of the AP algorithm, we introduce here an approach different from the one usually found in the literature [16], [22], [23]. We define the objective function as

ξ_k^AP = Σ_{i=k−L}^{k} ε_i = (1/N) Σ_{i=k−L}^{k} ϵ_iᵀ ϵ_i,   (28)

TABLE IV
AP ALGORITHM FOR BIAS CORRECTION
Do for k ≥ 0
  ϵ_k = y_k − AM_kA⁻¹(y_{k−1} − b̂_k) − b̂_k
  R_k = (AM_kA⁻¹ − I)ᵀ · (AM_kA⁻¹ − I)
  Keep R_k, R_{k−1}, …, R_{k−L−1} in memory.
  Ĥ′_k = λĤ′_{k−1} + R_k − R_{k−L−1}
  u_k = (AM_kA⁻¹ − I)ᵀ ϵ_k
  v_k = BCGSTABL(Ĥ′_k, u_k)
  b̂_{k+1} = b̂_k − v_k

where L corresponds to the amount of reused data. By minimizing (28) we minimize the squared estimation error over a window of size L. The main difference between the RLS and AP algorithms is that the former considers the whole past of errors, weighted by the forgetting factor, whereas the latter considers only a window of past errors, giving the same weight to all of them. The AP algorithm usually requires less computational complexity than the RLS algorithm, brought about by the reduction in the dimension of the information matrix that is inverted. In addition, the finite memory of the AP algorithm reduces the noise enhancement and the negative effects of the slow variations of the FPN, both inherent to the RLS algorithm; see [16] for details. Following a procedure similar to the one used in Section V, it can be shown that the Hessian matrix can be estimated by

H_k ≜ ∇_b² ξ_k^AP = ∂²ξ_k^AP/∂b∂bᵀ   (29)
= Ĥ_{k−1} + ∂²ε_k/∂b∂bᵀ − ∂²ε_{k−L−1}/∂b∂bᵀ.   (30)

The matrix Ĥ_k accumulates information about the last L errors. When new information comes, the oldest error contribution has to be subtracted. In short, the RLS and AP algorithms differ in the way the Hessian matrix is estimated. Apart from that, the algorithms are basically the same (e.g. the a priori error gradient, etc.).

A. Bias Correction by AP Algorithm

The update equation for the affine projection algorithm is the same as equation (9), repeated here for convenience:

b̂_{k+1} = b̂_k − Ĥ_k⁻¹ ∇_b ε_k,   (31)

where Ĥ_k is an estimate of the Hessian matrix and ∇_b ε_k is the a priori error gradient [16]. It is straightforward to show, by substituting equations (12) and (13) into equation (29), that

Ĥ′_k = λĤ′_{k−1} + R_k − R_{k−L−1},   (32)

where R_k = (AM_kA⁻¹ − I)ᵀ · (AM_kA⁻¹ − I). Past values of R_k, up to the (k − L − 1)-th, must be kept in memory. The complete affine projection bias correction algorithm is summarized in Table IV.
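The sliding-window accumulation in (32) can be sketched with a small buffer of past R_k matrices (our own illustration, not the authors' code; λ = 1 is used so that the window content can be checked exactly):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(5)
N, L, lam = 4, 3, 1.0   # lam = 1 gives a pure sliding window

H = np.zeros((N, N))
window = deque()        # holds R_k, ..., R_{k-L}

def ap_hessian_update(H, R_new):
    """One update of eq. (32): add the newest R_k, drop the oldest one."""
    window.append(R_new)
    R_old = window.popleft() if len(window) > L + 1 else np.zeros((N, N))
    return lam * H + R_new - R_old

Rs = [rng.standard_normal((N, N)) for _ in range(10)]
for R in Rs:
    H = ap_hessian_update(H, R)

# With lam = 1, H holds exactly the last L+1 matrices R_k, ..., R_{k-L}.
assert np.allclose(H, sum(Rs[-(L + 1):]))
```

The buffer mirrors the "keep R_k, …, R_{k−L−1} in memory" step of Table IV: only L + 1 matrices are stored, regardless of how many frames have been processed.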


TABLE V
TENSORIAL-AP ALGORITHM FOR GAIN CORRECTION (VECTOR FORM)
Do for k ≥ 0
  z_k = y_{k−1} − b
  ϵ_k = y_k − A_k M_k Ā_k z_k − b
  Z_k = diag(z_k)
  U_k = diag(M_k Ā_k z_k) + A_k M_k Ǡ_k Z_k
  W_k = 2 (I ∘ M_k) Ǡ_k Z_k + A_k M_k Ǟ_k Z_k
  v_k = W_kᵀ ϵ_k + ( Σ_i [U_k ∘ U_k]_{ij} )ᵀ
  Keep v_k, v_{k−1}, …, v_{k−L−1} in memory.
  g_k = λ g_{k−1} + v_k − v_{k−L−1}
  α_k = (U_kᵀ ϵ_k) ⊘ g_k
  A_k = A_{k−1} + diag(α_k)

B. Gain Correction by Tensorial-AP Algorithm

For the gain estimation, the vector g_k in Table III plays the role of the Hessian matrix. Strictly speaking, as seen in Section V-C, the second-order partial derivatives become scalars and we only have to worry about one value per pixel. The values of g_k can be regarded as variable step sizes for each pixel. Their update rule follows the same idea as in Section VI-A: the newest v_k of the last L values is added to the accumulator g_k, whereas the oldest (i.e. v_{k−L−1}) is subtracted from it. In the experiments described in Section VII, we use the vector form of the Tensorial-RLS algorithm (Table III) as a basis for the Tensorial-AP algorithm. The difference is in how the vector v_k is updated. The complete Tensorial-AP algorithm for gain correction is shown in Table V.

C. Handling Dead Pixels, Leaking Pixels and Algorithm Breakdown

In this section we address problematic conditions which can lead to an algorithmic breakdown. We base our analysis primarily on equations (1) and (2). A breakdown would occur if we could not use equation (2) for FPN correction, repeated here for convenience:

x̂_k(i, j) = (y_k(i, j) − b̂(i, j)) / â(i, j).   (33)

Suppose we apply our FPN correction method to an IR video sequence obtained by a camera with dead pixels, i.e. a(i, j) = 0 for some {i, j}. The gain estimation algorithm would eventually converge to â(i, j) = 0 and equation (33) could not be used. Thus, we could set a minimum accepted gain, say a_min, and monitor all â(i, j) to ensure that their values are at least a_min. Another situation where the video acquisition does not agree with the observation model in (1) is the case of leaking pixels, i.e. y_k(i, j) = a(i, j)x(i, j) + b(i, j) + a(i + δ_i, j + δ_j)x(i + δ_i, j + δ_j) + b(i + δ_i, j + δ_j) for some {i, j, δ_i, δ_j}. In this case, divergence of the â(i, j) and b̂(i, j) estimates may occur. Ideally, in order to take these particularities into account, the camera sensor should be studied and its model used to derive the algorithms. However, we can prevent breakdowns by adequately choosing {a_min, a_max} and {b_min, b_max} and simply forcing â(i, j) and b̂(i, j) to lie within these ranges.

VII. RESULTS

This section presents the results obtained with the proposed algorithms and compares their performance to that of state-of-the-art NUC algorithms. First, the performance of the algorithms (meaning the fidelity of the estimated video to the uncorrupted video) is assessed through simulated data. Synthetic FPN and random noise are introduced in simulated infrared video obtained from a static image. Then, we apply the algorithms to real FPN-corrupted infrared video, where the performance of the methods is subjectively evaluated. In the experiments, pixel values can only range from 0 to 255 (the image dynamic range).

A. Simulation Results

For image quality evaluation, we use two measures: PSNR¹, for its frequent use in image quality assessment, and SSIM (Structural SIMilarity) [24], for its good consistency with subjective assessment compared to other measures. Both PSNR and SSIM indicate how close the estimated image x̂_k is to the real uncorrupted image x_k. When the two images are identical the PSNR value will be infinite, whereas the SSIM will be one. Thus, the higher both measures are, the better the image's fidelity. We will use log₁₀(SSIM) to emphasize the numerical differences between the methods.

We compared four algorithms: the LMS-based NUC developed in [25], the Kalman-filter-based method [8], the Tensorial-RLS [9] (described in detail in Section V), and the proposed Tensorial-AP described in Section VI. We have chosen the Kalman-filter-based method described in [8] as the reference method for our comparisons since it provides state-of-the-art results without any assumptions about the scene content or motion behavior. Other methods may provide similar results, but they often rely on motion constraints (i.e. only 1-D motion in [2] or non-subpixel motion in [7]), which restricts the gamut of videos they can be applied to. Also, Zuo et al. [26] proposed a gradient-descent NUC algorithm which considers motion explicitly. The method is similar to the LMS-based one developed in [25].

We generated 50 videos from portions of static images. An example can be seen in Figure 2. Each video contained 250 frames with a resolution of 128 × 128 pixels. Between consecutive frames there were translational shifts defined by random real numbers from −2 to 2. Synthetic FPN was inserted and corrupted all frames of the videos. We recall that FPN (fixed-pattern noise), as the name suggests, is time invariant. We inserted FPN according to equation (1), with the gain standard deviation randomly selected from the interval 0 ≤ σ_A ≤ 0.1 and the bias standard deviation randomly selected from the interval 0 ≤ σ_b ≤ 0.5. Additive noise, with standard deviation randomly selected from the interval 0 ≤ σ_n ≤ 0.05, was also added to each frame. We used a normal distribution for the random selection of the bias and the gain.

¹PSNR(x, y) = 10 log₁₀ ( 255² / MSE(x, y) ).
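The PSNR measure in the footnote is straightforward to compute; the sketch below (our own illustration, with an arbitrary noise level) shows it on an 8-bit test image:

```python
import numpy as np

def psnr(x, y):
    """PSNR in dB for 8-bit images: 10 log10(255^2 / MSE(x, y))."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(6)
x = rng.integers(0, 256, size=(64, 64))
y = np.clip(x + rng.normal(0.0, 5.0, x.shape), 0, 255)

# A noise standard deviation of 5 gives roughly MSE ~ 25, i.e. PSNR ~ 34 dB.
assert 30.0 < psnr(x, y) < 40.0
```

Identical images give an infinite PSNR (MSE = 0), which is why the measure is reported per frame rather than aggregated across a perfect reconstruction.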


Fig. 2. (a) Original image. (b) Image corrupted with synthetic FPN.

Fig. 4. Mean SSIM results obtained from simulated data.

Fig. 3. Mean PSNR results obtained from simulated data.

TABLE VI
AVERAGE RESULTS FOR 50 SYNTHETICALLY FPN-CORRUPTED VIDEOS. HIGHER VALUES OF PSNR AND SSIM DENOTE BETTER RESULTS

Method          PSNR [dB]   log(SSIM) [×10⁻⁴]
LMS             34.7043     −0.1226
Kalman          34.9060     −0.1242
Tensorial-RLS   35.0844     −0.1234
Tensorial-AP    36.1221     −0.0989

We have used μ = 0.1 as the step size in the LMS-based algorithm, λ = 0.999 as the forgetting factor in the RLS algorithm, and L = 3 as the number of reused inputs in the AP algorithm. The videos were processed with the four algorithms, and the fidelity of the reconstructed video with respect to the uncorrupted one was evaluated through the PSNR and SSIM measures. Figures 3 and 4 show results averaged over the 50 generated videos. The frame-number axis was not averaged, in order to show the convergence of all methods. Table VI shows the average results. Additionally, the experiments have shown that the order in which the gain and bias FPN corrections are performed (i.e. first bias then gain, or first gain then bias) does not affect the final result. As at each iteration (i.e. each new frame) the bias and gain estimates

Fig. 5. Tensorial-AP performance using true motion information and shift estimation by the LIPSE algorithm and the Brox algorithm [27].

are only slightly refined, the final result does not depend on the order of the corrections.

B. Errors in Shift Estimation

In this section, we assess the performance of the Tensorial-AP algorithm with respect to shift estimation errors and shift estimation algorithms. We generated 100 videos from portions of static images with known vertical and horizontal shifts. Then, we fed the Tensorial-AP algorithm with true motion information, with motion estimated by the LIPSE algorithm described in Appendix VIII-A, and with motion estimated by Brox's algorithm described in [27]. Since the FPN estimation and removal improve with the frame number, we applied a pre-correction before estimating the shifts between each pair of frames. Therefore, the shift estimation error also tends to decrease with time, as the noise level decreases. Figure 5 shows this behavior and the evolution of the image quality and of the motion estimation mean squared error (MSE) with time. As expected, the best performance is attained when true motion is available. However, the LIPSE algorithm performed

PIPA et al.: RECURSIVE ALGORITHMS FOR BIAS AND GAIN NUC


TABLE VII
PARAMETERS USED FOR EACH ALGORITHM IN EXPERIMENTS WITH REAL INFRARED VIDEOS

Algorithm       Parameter                 Value
LMS             Step size                 μ = 0.3
Kalman          Forgetting factor         λ = 0.9
Tensorial-RLS   Forgetting factor         λ = 0.9
Tensorial-AP    Reuse order & step size   L = 2, μ = 2


better than Brox's algorithm [27], which belongs to a family of optical-flow-based algorithms [27]–[31]. Although optical-flow-based algorithms provide excellent results for most real videos, when strong fixed-pattern noise is present the averaging nature of LIPSE (see Appendix A) attenuates the FPN and provides better results, mainly at the beginning of the simulations, when the FPN level is high, as argued in [2], [32]. Moreover, our studies focus on pure translational motion, which justifies the use of the LIPSE algorithm. For complex, non-global motion, a differential coarse-to-fine motion-estimation method should be used [27]–[31]. Figure 5 also shows the performance of the Tensorial-AP algorithm in the presence of shift-estimation errors. Convergence slows down when the motion is not accurately estimated. However, since we use only one iteration of the recursive algorithm for each new incoming frame, each update is only slightly affected. Moreover, as the noise level decreases with time, the shift estimation tends to become more accurate.
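As a concrete companion, the column-projection branch of LIPSE (derived in Appendix A) can be sketched in Python as follows. This is our own minimal illustration, not the paper's implementation: the sign convention, the wrap-around handling of integer shifts, and the search range are assumptions. Note that the projection step averages N pixels per element, which is precisely what attenuates the FPN before the shift is estimated:

```python
import numpy as np

def column_projection(frame):
    # Averaging over the N rows attenuates zero-mean FPN by about sqrt(N).
    return frame.mean(axis=0)

def subpixel_shift(prev, cur):
    """Closed-form minimizer of the MSE for the linear-interpolation model
    yhat_k(i) = (1 - delta) * y_{k-1}(i) + delta * y_{k-1}(i + 1)."""
    a, b, y = prev[:-1], prev[1:], cur[:-1]
    psi = np.sum(y * (b - a) + a * (a - b))   # numerator, cf. eq. (35)
    zeta = np.sum((a - b) ** 2)               # denominator, cf. eq. (36)
    return 0.0 if zeta == 0.0 else psi / zeta

def lipse_1d(prev, cur, max_shift=3):
    """Integer search plus subpixel refinement on one projection."""
    best_mse, best_d = np.inf, 0.0
    for k in range(-max_shift, max_shift + 1):
        prev_k = np.roll(prev, -k)            # compensate the integer part
        delta = subpixel_shift(prev_k, cur)
        yhat = (1 - delta) * prev_k[:-1] + delta * prev_k[1:]
        mse = np.mean((cur[:-1] - yhat) ** 2)
        if mse < best_mse:
            best_mse, best_d = mse, k + delta
    return best_d
```

For a pair of frames, the same procedure applied to the row projection yields the other component of the shift, giving the complete translational estimate.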


Fig. 6. (a) Observed "telephone" IR video. (b) FPN-corrected video by the LMS algorithm.


Fig. 7. (a) Observed "telephone" IR video. (b) FPN-corrected video by the Averbuch/Kalman algorithm.

C. Real IR Video Results

We shot video sequences using a FLIR Systems ThermaCAM P65 infrared camera, which has an uncooled microbolometer focal-plane-array detector. Each infrared sequence consists of 200 frames of 320 × 240 pixels at 60 frames/second. The "Noise Reduction" option was switched off, as was the "Shutter Period" option. The latter refers to the FPN correction provided by the camera manufacturer: when active, the camera shutter is closed periodically (every 3 to 15 minutes) to perform a two-point calibration. The FPN contamination was very clear in the acquired videos. In order to assess the performance of each FPN-reduction method, we applied the four algorithms under evaluation to the captured videos. Some parameters of the algorithms were adjusted empirically in order to obtain the best result from each method; the new parameters are shown in Table VII. Figures 6 to 9 show the 133rd frame of the observed video "telephone" alongside the videos corrected by each of the four algorithms under evaluation, and Figures 10 to 13 show the 190th frame of the observed video "tube segment" with the corresponding corrections. We also applied the Tensorial-AP algorithm to an infrared video sequence obtained from the internet (http://www.youtube.com/watch?v=lHw_JWLkqOo). The sequence shows aerial images of a truck and another vehicle. The


Fig. 8. (a) Observed "telephone" IR video. (b) FPN-corrected video by the Tensorial-RLS algorithm.


Fig. 9. (a) Observed "telephone" IR video. (b) FPN-corrected video by the Tensorial-AP algorithm.

images are corrupted with FPN, though of a different type: the stripe pattern is horizontal rather than vertical. The original frame at 69 seconds (left) and the processed image (right) are shown in Figure 14 as an example of the Tensorial-AP algorithm's output. We can observe on the real images that the algorithms were able to remove the FPN more or less efficiently, depending on their characteristics. Interestingly, the video of Figure 14 originally showed overlaid auxiliary information and target lines


IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 21, NO. 12, DECEMBER 2012


Fig. 10. (a) Observed "tube segment" IR video. (b) FPN-corrected video by the LMS algorithm.


Fig. 14. (a) Observed "truck" IR video. (b) FPN-corrected video by the Tensorial-AP algorithm.

TABLE VIII
AVERAGE TIME (IN SECONDS) OF EXECUTION FOR 50 SYNTHETICALLY FPN-CORRUPTED VIDEOS. LOWER VALUES ARE BETTER


Fig. 11. (a) Observed "tube segment" IR video. (b) FPN-corrected video by the Averbuch/Kalman algorithm.

Algorithm   LMS       Kalman     Tensorial-RLS   Tensorial-AP
Time (s)    26.7441   315.7993   334.2236        76.3300

As can be observed, the best perceptual results are achieved by the AP and RLS algorithms, the former having a much smaller computational complexity, as detailed in the next subsection.

D. Computational Load


Fig. 12. (a) Observed "tube segment" IR video. (b) FPN-corrected video by the Tensorial-RLS algorithm.


Fig. 13. (a) Observed "tube segment" IR video. (b) FPN-corrected video by the Tensorial-AP algorithm.

which were also removed by the Tensorial-AP algorithm. Considering that those artifacts are constant throughout the entire sequence, they match the FPN definition, and their removal is consistent. From a mathematical perspective, this result is expected, since our models do not assume any vertical or horizontal pattern for the FPN. Rather, each pixel has its own gain and bias, and no correlation between pixels is imposed.
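A toy 1-D experiment makes this concrete: any pattern that is fixed on the detector while the scene moves, be it bias FPN or an overlaid graphic, is recovered by the same mechanism. The sketch below is our own illustration with idealized, perfectly known motion, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_frames = 64, 40
scene = rng.normal(size=n + n_frames)          # long 1-D "scene"
bias = rng.normal(0.0, 0.5, size=n)            # per-pixel bias FPN
overlay = np.zeros(n)
overlay[30:34] = 3.0                           # constant "target line" overlay

# Observed frames: the scene slides by one pixel per frame, while the bias
# and the overlay stay fixed on the detector, exactly like FPN.
frames = np.stack([scene[k:k + n] + bias + overlay for k in range(n_frames)])

# With known motion, registering the frames and averaging the detector-fixed
# residual estimates bias + overlay jointly: the model cannot (and need not)
# distinguish them, since neither assumes any spatial pattern.
residuals = np.stack([frames[k] - scene[k:k + n] for k in range(n_frames)])
fixed_pattern_hat = residuals.mean(axis=0)

print(np.max(np.abs(fixed_pattern_hat - (bias + overlay))))  # ~0
```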

The computational complexity of each algorithm was also assessed by measuring execution times. Table VIII shows the average results, in seconds, for the 50 videos mentioned in Section VII-A. As expected, the LMS-based algorithm was the fastest, due to its simplicity. Tensorial-AP outperformed the Kalman and Tensorial-RLS algorithms because BiCGStab(ℓ) converged in fewer steps.

VIII. CONCLUSION

This work presented two new algorithms for NUC (bias and gain nonuniformity correction) in infrared videos. The proposed methods, called Tensorial-RLS and Tensorial-AP, are based on recursive least-squares and affine projection adaptive filters. They received the "Tensorial" denomination because their derivation involves the concept of tensors. Although the notion of tensors was employed, it was not necessary to use tensorial notation. Instead, a pixel-by-pixel version was developed, which was then grouped into a compact vectorial version. This version is easier to implement and faster when matrix-oriented software is used (e.g., in MATLAB, matrix multiplications are faster than loops). Section VII presented comparisons of the proposed algorithms with the state-of-the-art NUC algorithm proposed by Averbuch et al. in [8]. Although Averbuch named his algorithm "Kalman-filter-based," it is rather an RLS-based method, as Averbuch himself observed. In fact, when only bias FPN is present, the performances of Tensorial-RLS and of the Averbuch/Kalman algorithm are quite equivalent. However, when gain FPN is also present, Tensorial-RLS outperforms Kalman-based methods, as the former corrects both gain and bias FPN whereas the latter corrects bias only. The


affine projection algorithm (Tensorial-AP), on the other hand, showed the best video-quality results for both image-quality measures (PSNR and SSIM). Concerning speed of convergence and final misadjustment, the experiments showed the following:
1) LMS (bias only): slow convergence and low misadjustment.
2) Kalman (bias only): fast convergence and high misadjustment.
3) RLS (bias and gain): fast convergence and high misadjustment.
4) AP (bias and gain): fast convergence and low misadjustment.
Both RLS and Kalman presented high misadjustment, probably due to the additive random noise incorporated (i.e., the trade-off between convergence speed and final misadjustment). The affine projection algorithm showed good convergence speed and low misadjustment compared with RLS, and had the best combined results among the compared methods. Observing the results of the experiments with real infrared video, and based on informal subjective tests, we can rank the methods according to image quality as follows: Tensorial-RLS and Tensorial-AP (best results), followed by Kalman, and then LMS (worst result). The Tensorial-RLS and Tensorial-AP algorithms were considered equivalent in terms of subjective quality.
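For reference, the PSNR fidelity measure used in the comparisons can be sketched in a few lines of Python. This is our own minimal illustration (the function names and the 8-bit peak value are assumptions, not the paper's code); SSIM [24] requires considerably more machinery and is omitted:

```python
import numpy as np

def psnr(reference, corrected, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between two frames."""
    diff = reference.astype(np.float64) - corrected.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(peak ** 2 / mse)

def average_curve(psnr_curves):
    """Average per-frame PSNR curves over runs, keeping the frame axis
    (as in Figures 3 and 4): psnr_curves[r][k] is run r, frame k."""
    return np.mean(np.asarray(psnr_curves), axis=0)
```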

APPENDIX A
LIPSE Motion Estimation Algorithm

LIPSE stands for "linear interpolation projection-based shift estimator." The advantages of projection-based algorithms are speed and robustness to noise, especially to FPN, which can severely affect the reliability of motion estimation [32]. For detailed information on LIPSE, see [2], [32].

Let $y_{k-1}$ and $y_k$ be two consecutive frames of a video presenting only translational shifts. Each pixel of the image is denoted by $y_T(i,j)$, and the projections of rows and columns are defined, respectively, as
$$y_R(j) = \frac{1}{N}\sum_{i=1}^{N} y_T(i,j) \quad\text{and}\quad y_C(i) = \frac{1}{M}\sum_{j=1}^{M} y_T(i,j).$$
The solution is presented only for the projection of columns ($y_C(i)$, referred to as $y(i)$ from now on), since the projection of rows is analogous. Suppose that there is only subpixel motion between consecutive frames (integer shifts are compensated; see the algorithm summary below). Each element of the projection of the $k$-th frame is estimated by
$$\hat{y}_k(i) = (1 - \delta_k)\,y_{k-1}(i) + \delta_k\,y_{k-1}(i+1), \tag{34}$$
where $0 \le \delta_k < 1$ is the subpixel shift. The mean-square error (MSE) is defined as $\varphi_k = \frac{1}{M}\sum_{i=1}^{M}\left(y_k(i) - \hat{y}_k(i)\right)^2$. The value of $\delta_k$ that minimizes the MSE satisfies $\partial\varphi_k/\partial\delta_k = 0$, and the solution is $\delta_k = \psi_k/\zeta_k$, where
$$\psi_k = \sum_{i=1}^{M-1} \left[\, y_k(i)\left(y_{k-1}(i+1) - y_{k-1}(i)\right) + y_{k-1}(i)\left(y_{k-1}(i) - y_{k-1}(i+1)\right) \right] \tag{35}$$
and
$$\zeta_k = \sum_{i=1}^{M-1} \left[\, y_{k-1}^2(i) - 2\,y_{k-1}(i)\,y_{k-1}(i+1) + y_{k-1}^2(i+1) \right]. \tag{36}$$

In summary, the LIPSE algorithm is given by the following steps:
1) Compute $\delta_k = \psi_k/\zeta_k$ through equations (35) and (36) for every candidate integer shift between the projections of the consecutive frames.
2) Compute the MSE $\varphi_k$ for each candidate produced in the previous step.
3) Select the integer shift that produces the smallest MSE $\varphi_k$.
4) Form the total shift estimate as the sum of the selected integer shift and its subpixel refinement $\delta_k$.
5) Repeat the previous steps for the projections of the rows $y_R(j)$, thus obtaining the complete (column and row) shift estimate $d_k$.

ACKNOWLEDGMENT

The authors would like to thank the Research and Development Center of Petrobras (CENPES), the Federal University of Rio de Janeiro (UFRJ) and its Graduate School and Research in Engineering (COPPE), the Brazilian Army Technological Center (CTEx), the Military Institute of Engineering (IME), and the Brazilian Innovation Agency (FINEP), Brazil.

REFERENCES

[1] A. F. Milton, F. R. Barone, and M. R. Kruer, "Influence of nonuniformity on infrared focal plane array performance," Opt. Eng., vol. 24, pp. 855–862, Jan. 1985.
[2] B. M. Ratliff, "A generalized algebraic scene-based nonuniformity correction algorithm," Ph.D. dissertation, Dept. Electr. Comput. Eng., Univ. New Mexico, Albuquerque, NM, 2004.
[3] D. L. Perry and E. L. Dereniak, "Linear theory of nonuniformity correction in infrared staring sensors," Opt. Eng., vol. 32, no. 8, pp. 1853–1859, 1993.
[4] R. C. Hardie and D. R. Droege, "A MAP estimator for simultaneous superresolution and detector nonuniformity correction," EURASIP J. Appl. Signal Process., vol. 2007, no. 1, Art. ID 089354, May 2007.
[5] S. N. Torres and M. M. Hayat, "Kalman filtering for adaptive nonuniformity correction in infrared focal plane arrays," J. Opt. Soc. Amer. A, vol. 20, no. 3, pp. 470–480, 2003.
[6] S. N. Torres, J. E. Pezoa, and M. M. Hayat, "Scene-based nonuniformity correction for focal plane arrays using the method of the inverse covariance form," Appl. Opt., vol. 42, no. 29, pp. 5872–5881, Oct. 2003.
[7] R. C. Hardie, M. M. Hayat, E. E. Armstrong, and B. J. Yasuda, "Scene-based nonuniformity correction with video sequences and registration," Appl. Opt., vol. 39, no. 8, pp. 1241–1250, 2000.
[8] A. Averbuch, G. Liron, and B. Z. Bobrovsky, "Scene based nonuniformity correction in thermal images using Kalman filter," Image Vis. Comput., vol. 25, no. 6, pp. 833–851, 2007.
[9] D. R. Pipa, E. A. B. da Silva, C. L. Pagliari, and M. de M. Perez, "Joint bias and gain nonuniformity correction of infrared videos using tensorial-RLS technique," in Proc. 16th IEEE Int. Conf. Image Process., Nov. 2009, pp. 3897–3900.
[10] U. Sakoğlu, R. C. Hardie, M. M. Hayat, B. M. Ratliff, and J. S. Tyo, "An algebraic restoration method for estimating fixed-pattern noise in infrared imagery from a video sequence," Proc. SPIE, Appl. Digit. Image Process. XXVII, vol. 5558, pp. 69–79, Aug. 2004.
[11] J. G. Harris and Y.-M. Chiang, "An analog implementation of the constant average statistics constraint for sensor calibration," in Advances in Neural Information Processing Systems, vol. 9, M. C. Mozer, M. I. Jordan, and T. Petsche, Eds. Cambridge, MA: MIT Press, 1997, p. 699.
[12] M. M. Hayat, S. N. Torres, E. Armstrong, S. C. Cain, and B. Yasuda, "Statistical algorithm for nonuniformity correction in focal-plane arrays," Appl. Opt., vol. 38, no. 5, pp. 772–780, 1999.


[13] D. Scribner, K. Sarkady, M. Kruer, J. Caulfield, J. Hunt, M. Colbert, and M. Descour, "Adaptive retina-like preprocessing for imaging detector arrays," in Proc. IEEE Int. Conf. Neural Netw., vol. 3, Dec. 1993, pp. 1955–1960.
[14] B. M. Ratliff, M. M. Hayat, and R. C. Hardie, "An algebraic algorithm for nonuniformity correction in focal plane arrays," J. Opt. Soc. Amer. A, vol. 19, no. 9, pp. 1737–1747, 2002.
[15] F. Torres, C. Martin, and S. Torres, "An RLS filter for nonuniformity and ghosting correction of infrared image sequences," in Progress in Pattern Recognition, Image Analysis and Applications (Lecture Notes in Computer Science), vol. 4225, J. Martínez-Trinidad, J. C. Ochoa, and J. Kittler, Eds. Berlin, Germany: Springer-Verlag, 2006, pp. 446–454.
[16] P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementations, 3rd ed. Boston, MA: Springer-Verlag, 2008.
[17] S. Haykin, Adaptive Filter Theory, 3rd ed. Upper Saddle River, NJ: Prentice-Hall, 1996.
[18] V. K. Madisetti and D. B. Williams, The Digital Signal Processing Handbook. Boca Raton, FL: CRC Press, 1999.
[19] J. Dattorro, Convex Optimization and Euclidean Distance Geometry. Palo Alto, CA: Meboo, 2008.
[20] G. L. G. Sleijpen and D. R. Fokkema, "BiCGStab(ℓ) for linear equations involving unsymmetric matrices with complex spectrum," Electron. Trans. Numer. Anal., vol. 1, pp. 11–32, Sep. 1993.
[21] S. Werner and P. S. R. Diniz, "Set-membership affine projection algorithm," IEEE Signal Process. Lett., vol. 8, no. 8, pp. 231–235, Aug. 2001.
[22] S. G. Sankaran and A. A. L. Beex, "Convergence behavior of affine projection algorithms," IEEE Trans. Signal Process., vol. 48, no. 4, pp. 1086–1096, Apr. 2000.
[23] R. A. Soni, K. A. Gallivan, and W. K. Jenkins, "Low-complexity data reusing methods in adaptive filtering," IEEE Trans. Signal Process., vol. 52, no. 2, pp. 394–405, Feb. 2004.
[24] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
[25] D. R. Pipa, "Correção de ruído de padrão fixo em vídeo infravermelho," M.S. thesis, COPPE/UFRJ, Univ. Federal do Rio de Janeiro, Rio de Janeiro, Brazil, 2008.
[26] C. Zuo, Q. Chen, G. Gu, and X. Sui, "Scene-based nonuniformity correction algorithm based on interframe registration," J. Opt. Soc. Amer. A, vol. 28, no. 6, pp. 1164–1176, Jun. 2011.
[27] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, "High accuracy optical flow estimation based on a theory for warping," in Proc. 8th Eur. Conf. Comput. Vis., vol. 3024, Prague, Czech Republic, May 2004, pp. 25–36.
[28] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. Int. Joint Conf. Artif. Intell., Apr. 1981, pp. 674–679.
[29] P. Anandan, "A computational framework and an algorithm for the measurement of visual motion," Int. J. Comput. Vis., vol. 2, no. 3, pp. 283–310, 1989.
[30] E. P. Simoncelli, E. H. Adelson, and D. J. Heeger, "Probability distributions of optical flow," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Maui, HI, Jun. 1991, pp. 310–315.
[31] E. P. Simoncelli, "Coarse-to-fine estimation of visual motion," in Proc. 8th Workshop Image Multidimensional Signal Process., Cannes, France, Sep. 1993, pp. 128–129.
[32] S. C. Cain, M. M. Hayat, and E. E. Armstrong, "Projection-based image registration in the presence of fixed-pattern noise," IEEE Trans. Image Process., vol. 10, no. 12, pp. 1860–1872, Dec. 2001.

Daniel R. Pipa (S'12) was born in Curitiba, Brazil. He received the degree in electronics engineering from the Universidade Tecnológica Federal do Paraná (UTFPR), Paraná, Brazil, in 2004, and the M.Sc. degree in electrical engineering from the Universidade Federal do Rio de Janeiro (COPPE/UFRJ), Rio de Janeiro, Brazil, in 2009, where he is currently pursuing the Ph.D. degree in electronics. He was a Visiting Researcher with the University of California, San Diego, in 2011, under the supervision of Prof. T. Q. Nguyen. Since 2004, he has been with the Research and Development Center, Petrobras (Petróleo Brasileiro S.A.), Rio de Janeiro, researching signal, image, and video processing applications in equipment integrity.

Eduardo A. B. da Silva (M'95–SM'05) was born in Rio de Janeiro, Brazil. He received the degree in electronics engineering from the Instituto Militar de Engenharia (IME), Rio de Janeiro, Brazil, the M.Sc. degree in electrical engineering from the Universidade Federal do Rio de Janeiro (COPPE/UFRJ), Rio de Janeiro, and the Ph.D. degree in electronics from the University of Essex, Colchester, U.K., in 1984, 1990, and 1995, respectively. He was with the Department of Electrical Engineering, Instituto Militar de Engenharia, from 1987 to 1988. Since 1989, he has been with the Department of Electronics Engineering, UFRJ. He has also been with the Graduate Department of Electrical Engineering, COPPE/UFRJ, since 1996, where he was the Head of the Department of Electrical Engineering in 2002. In 2007, he was a Visiting Professor with the University of Nice Sophia-Antipolis, Nice, France. He has trained and consulted for several Brazilian cable and satellite television companies on digital television, and was on the team involved in the development of the Brazilian Digital Television System. His current research interests include digital signal, image, and video processing, focusing on signal compression, digital television, wavelet transforms, mathematical morphology, and applications to telecommunications. He has authored or coauthored over 160 peer-reviewed papers and has co-authored the book Digital Signal Processing: System Analysis and Design (Cambridge University Press, 2002). Dr. da Silva was a recipient of the British Telecom Postgraduate Publication Prize in 1995, for a paper on aliasing cancellation in subband coding. He was an Associate Editor of the IEEE Transactions on Circuits and Systems - Part I in 2002, 2003, 2008, and 2009, and of the IEEE Transactions on Circuits and Systems - Part II in 2006 and 2007, and has been an Associate Editor of Multidimensional Systems and Signal Processing (Springer) since 2006.
He was a Distinguished Lecturer of the IEEE Circuits and Systems Society in 2003 and 2004, and a Technical Program Co-Chair of ISCAS 2011. He is a member of the Board of Governors of the IEEE Circuits and Systems Society for the 2012–2014 term. He is a Senior Member of the Brazilian Telecommunication Society and a member of the Brazilian Society of Television Engineering.

Carla L. Pagliari (S'92–M'04–SM'05) received the Ph.D. degree in electronic systems engineering from the University of Essex, Colchester, U.K., in 2000. Since 1993, she has been with the Department of Electrical Engineering, Military Institute of Engineering (IME), Rio de Janeiro, Brazil. She was on the team involved in the development of the Brazilian Digital Television System. Her current research interests include image processing, digital television, image and video coding, stereoscopic and multiview systems, and computer vision. Dr. Pagliari was the Local Arrangements Chair of IEEE ISCAS 2011, held in Rio de Janeiro, Brazil. She is currently an Associate Editor of Multidimensional Systems and Signal Processing and a member of the Board of Teaching of the Brazilian Society of Television Engineering.


Paulo S. R. Diniz (S'80–M'81–SM'92–F'00) was born in Niterói, Brazil. He received the degree in electronics engineering (cum laude) and the M.Sc. degree from the Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil, in 1978 and 1981, respectively, and the Ph.D. degree from Concordia University, Montreal, QC, Canada, in 1984, all in electrical engineering. He has been with the Graduate Program of Electrical Engineering, COPPE/UFRJ, since 1984, where he is currently a Professor, and where he served as the Undergraduate Course Coordinator and as the Chairman of the Graduate Department. From 1991 to 1992, he was a Visiting Research Associate with the Department of Electrical and Computer Engineering, University of Victoria, Victoria, BC, Canada. He was also a Docent with the Helsinki University of Technology, Espoo, Finland, and, in 2002, a Melchor Chair Professor with the Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN. He has authored or co-authored several refereed papers, the books Adaptive Filtering: Algorithms and Practical Implementation, Fourth Edition (New York, NY: Springer, 2013) and Digital Signal Processing: System Analysis and Design, Second Edition (Cambridge, U.K.: Cambridge University Press, 2010) (with E. A. B. da Silva and S. L. Netto), and the monograph Block Transceivers: OFDM and Beyond (New York, NY: Morgan & Claypool, 2012) (with W. A. Martins and M. V. S. Lima). His current research interests include analog and digital signal processing, adaptive signal processing, digital communications, wireless communications, multirate systems, stochastic processes, and electronic circuits. Dr. Diniz was a recipient of the Rio de Janeiro State Scientist Award from the Governor of Rio de Janeiro, and of the Education Award of the IEEE Circuits and Systems Society in 2004. He has received best-paper awards from several conferences and from an IEEE journal. He was the Technical Program Chair of MWSCAS in Rio de Janeiro in 1995.
He was the General Co-Chair of the IEEE ISCAS in 2011, and the Technical Program Co-Chair of the IEEE SPAWC in 2008. He is on the technical committees of several international conferences, including ISCAS, ICECS, EUSIPCO, and MWSCAS. He was the Vice President of Region 9 and the Chairman of the DSP Technical Committee of the IEEE Circuits and Systems Society. He was an Associate Editor of the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing from 1996 to 1999, the IEEE Transactions on Signal Processing from 1999 to 2002, and the Circuits, Systems, and Signal Processing journal from 1998 to 2002. He was a Distinguished Lecturer of the IEEE Circuits and Systems Society from 2000 to 2001. In 2004, he was a Distinguished Lecturer of the IEEE Signal Processing Society.

