Adaptive image sequence resolution enhancement using multiscale decomposition based image fusion

Jeong Ho Shin, Jung Hoon Jung, Joon Ki Paik, and Mongi A. Abidi*

Department of Image Engineering, Graduate School of Advanced Imaging Science, Multimedia, and Film, Chung-Ang University, 221 Huksuk-Dong, Tongjak-Ku, Seoul 156-756, Korea

*Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN, USA

ABSTRACT

This paper presents a regularized image sequence interpolation algorithm that restores high-frequency details by fusing low-resolution frames. Image fusion makes it possible to use different data sets corresponding to the same scene, yielding better resolution and more information about the scene than any single data set alone. Based on a mathematical model of image degradation, we obtain an interpolated image that minimizes the residual between the high-resolution and interpolated images subject to a prior constraint. In addition, spatially adaptive regularization parameters preserve directional high-frequency components while efficiently suppressing noise. The proposed algorithm produces a better interpolated image by fusing low-resolution frames. We provide experimental results classified into non-fusion and fusion algorithms. These results show that the proposed algorithm outperforms conventional interpolation algorithms in terms of both subjective and objective criteria. More specifically, the proposed algorithm preserves high-frequency components while suppressing undesirable artifacts such as noise.

Keywords: image interpolation, resolution enhancement, multiscale decomposition, regularization, image fusion.

1. INTRODUCTION

Images, in general, contain a much greater amount of information than other types of signals. Correspondingly, a large amount of data is required for image processing, which often becomes an obstacle to efficient applications. In order to reduce the amount of data in image communications, many image compression standards, such as JPEG and MPEG, have been established. In the process of reducing image data, a drop in resolution is unavoidable. In this paper we mainly deal with image interpolation techniques that enhance image quality in the sense of resolution. The term "resolution" refers to the number of pixels in an image, which determines the physical size of the image, and at the same time it refers to the fidelity of high-frequency details in the image. For this reason, resolution is a fundamental issue in evaluating the quality of various image processing systems. Image interpolation is used to obtain a higher-resolution image from a low-resolution image, and it is therefore very important in multi-resolution or high-resolution image processing. For example, the spatial scalability function in MPEG-2 and wavelet-based image processing techniques require image interpolation. High-resolution image processing applications, such as digital HDTV, aerial photography, medical imaging, and military imaging, also need high-resolution interpolation algorithms. Recently, interpolation has also been used in converting between various image and video formats and in increasing image resolution. Conventional interpolation algorithms, such as zero-order (nearest neighbor), bilinear, cubic B-spline, and DFT-based interpolation, can be classified by their basis functions, and they focus only on enlargement of the image.1-3 Those algorithms have been developed under the assumptions that there is no mixture among adjacent pixels in the imaging sensor, no motion blur due to the finite shutter speed of the camera, no isotropic blur due to defocus, and no aliasing in the sub-sampling process.

Further author information: J. H. Shin: E-mail: [email protected]; WWW: http://ms.cau.ac.kr/shinj. J. H. Jung: E-mail: [email protected]; WWW: http://ms.cau.ac.kr/jhjung. J. K. Paik: E-mail: [email protected]; WWW: http://cau.ac.kr/paikj. M. A. Abidi: E-mail: [email protected]

Since these assumptions are not satisfied in general low-resolution imaging systems, it is not easy to restore the original high-resolution image using conventional interpolation algorithms. In order to improve their performance, a spatially adaptive cubic interpolation method has been proposed.4 Although it can preserve a number of directional edges during interpolation, it cannot easily restore the original high-frequency components lost in the sub-sampling process. As an alternative, multi-frame interpolation techniques that use sub-pixel motion information have been proposed.5-8,10-14 It is well known that image interpolation is an ill-posed problem. More specifically, we regard the sub-sampling process as a general image degradation process. Regularized image interpolation then finds the inverse solution defined by the image degradation model subject to a priori constraints.7,8 Since conventional regularized interpolation methods use isotropic smoothness as the prior constraint, their performance for images with various edges is limited. As another approach to high-resolution image interpolation, Schultz and Stevenson addressed nonlinear single-channel image expansion that preserves the discontinuities of the original image based on maximum a posteriori (MAP) estimation.9 They also proposed a video superresolution algorithm that uses a MAP estimator with an edge-preserving Huber-Markov random field (HMRF) prior.10 A similar approach to the superresolution problem was suggested by Hardie, Barnard, and Armstrong, which simultaneously estimates the image registration parameters and the high-resolution image.12 However, all of these methods use similar Gibbs priors, which represent non-adaptive smoothness constraints. Spatial image interpolation algorithms based on the theory of projections onto convex sets (POCS), which obtain high-resolution images from low-resolution image sequences, have also been studied.11,13,28,29 In general, POCS methods are simple to implement, but they suffer from theoretical and numerical shortcomings such as non-uniqueness of the solution and considerable computation. In the regularized interpolation method, deterministic or statistical information about the fidelity to the high-resolution image and statistical information about the smoothness prior are incorporated to obtain a feasible solution between the constraint sets; therefore, regularized methods provide more feasible estimates than POCS. A hybrid algorithm combining optimization approaches, namely maximum likelihood (ML), MAP, and POCS, was presented by Elad and Feuer.14 They used a regularization weight matrix with locally adaptive smoothness to obtain higher image quality. Nevertheless, the estimated image may lack high-frequency details along edges. Modified edge-based line average (ELA) techniques for scanning rate conversion have also been proposed.15 This paper proposes an adaptive image sequence resolution enhancement algorithm using multiscale decomposition (MSD) based image fusion. For every frame, a restored high-resolution image with high-frequency components along the edge directions is obtained from the low-resolution frames.
Data fusion algorithms are used in applications ranging from Earth resource monitoring, weather forecasting, and vehicular traffic control to military target classification and tracking.16 In this paper, image fusion is applied to obtain enhanced high-resolution images, and the fused images are used as the input of the adaptive regularization algorithm. This paper is organized as follows. Section 2 defines the mathematical model of the low-resolution image formation system. In Section 3, we briefly summarize the iterative algorithm with a real-time structure. We propose a spatially adaptive image interpolation algorithm based on image fusion in Section 4. Experimental results for high-resolution image interpolation are provided in Section 5. Finally, Section 6 concludes the paper.

2. DEGRADATION MODEL FOR A LOW RESOLUTION IMAGING SYSTEM

Let $x_c(p, q)$ represent a two-dimensional continuous image, and $x(m_1, m_2)$ the corresponding $N \times N$ digital image obtained by sampling $x_c(p, q)$, such that

$$x(m_1, m_2) = x_c(m_1 T_v, m_2 T_h), \quad \text{for } m_1, m_2 = 0, 1, \ldots, N-1, \qquad (1)$$

where $T_v$ and $T_h$ respectively represent the vertical and the horizontal sampling intervals. We consider the $N \times N$ digital image, $x(m_1, m_2)$, as the high-resolution original image, and the $(N/q) \times (N/q)$ image, $y(n_1, n_2)$, with $q$ times lower

Figure 1. Relationships between two images with different resolutions: (a) sampling grids for the two images, $y_{1/4}$ and $x$; (b) the array structure of photo-detectors for $x(m_1, m_2)$; and (c) the array structure of photo-detectors for $y_{1/4}(n_1, n_2)$.

resolution in both the horizontal and vertical directions, which can be represented as

$$y(n_1, n_2) = \frac{1}{q^2} \sum_{i=0}^{q-1} \sum_{j=0}^{q-1} x(q n_1 + i,\, q n_2 + j), \quad \text{for } n_1, n_2 = 0, 1, \ldots, \frac{N}{q} - 1. \qquad (2)$$
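For concreteness, the $q$-fold decimation of (2) is simply an average over non-overlapping $q \times q$ blocks of the high-resolution image. The following is a minimal NumPy sketch of this operator; the array shapes are illustrative assumptions.

```python
import numpy as np

def decimate_average(x: np.ndarray, q: int) -> np.ndarray:
    """Average non-overlapping q x q blocks of a high-resolution image, as in Eq. (2)."""
    rows, cols = x.shape
    assert rows % q == 0 and cols % q == 0, "image size must be divisible by q"
    # Group pixels into q x q blocks and average over each block.
    return x.reshape(rows // q, q, cols // q, q).mean(axis=(1, 3))

# Example: a 520 x 360 frame reduced by q = 4 to 130 x 90, as in Section 5.
x = np.random.rand(520, 360)
y = decimate_average(x, 4)
assert y.shape == (130, 90)
```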

Two different sampling grids used to obtain the image with four times lower resolution in both the horizontal and vertical directions, corresponding to the images in (1) and (2), are shown in Fig. 1a, and the corresponding array structures of photo-detectors are shown in Figs. 1b and 1c, respectively. As shown in (1) and (2), the sub-sampling process from the high-resolution image to the low-resolution image can be written as

$$y(n_1, n_2) = \sum_{m_1} \sum_{m_2} x(m_1, m_2)\, h(n_1, n_2; m_1, m_2) + \eta(n_1, n_2), \qquad (3)$$

where $y(n_1, n_2)$, $x(m_1, m_2)$, and $\eta(n_1, n_2)$ respectively represent the two-dimensional arrays of the low-resolution image, the high-resolution image, and additive noise, and $h(n_1, n_2; m_1, m_2)$ represents the space-variant point spread function (PSF), which determines the relationship between the high-resolution and the low-resolution images. In this paper we assume that the sub-sampling process is separable and space-invariant. Under the separability assumption, the sub-sampling process from the $N \times N$ high-resolution image to the $N/2 \times N/2$ low-resolution image is shown in Fig. 2. The corresponding discrete linear space-invariant degradation model can be expressed in matrix-vector form as

$$y = Hx + \eta, \qquad (4)$$

where the $(N/2)^2 \times 1$ vectors $y$ and $\eta$ respectively represent the lexicographically ordered low-resolution image and noise, and the $N^2 \times 1$ vector $x$ represents the original high-resolution image. $H$ is a $(N/2)^2 \times N^2$ block Toeplitz matrix, which can be written as

$$H = H_1 \otimes H_1, \qquad (5)$$

where $\otimes$ denotes the Kronecker product, and the $\frac{N}{2} \times N$ matrix $H_1$ represents one-dimensional (1D) low-pass filtering and sub-sampling by a factor of 2, for example,

$$H_1 = \frac{1}{2} \begin{bmatrix} 1 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 & 1 \end{bmatrix}. \qquad (6)$$
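For small $N$, the operator of (4)-(6) can be formed explicitly and checked against direct block averaging. The dense-matrix construction below is a sketch for illustration only; a practical implementation would use the separable filtering of Fig. 2 rather than materializing $H$.

```python
import numpy as np

def averaging_downsample_matrix(N: int) -> np.ndarray:
    """The (N/2) x N matrix H1 of Eq. (6): averaging of adjacent pairs and 2:1 sub-sampling."""
    assert N % 2 == 0
    H1 = np.zeros((N // 2, N))
    for n in range(N // 2):
        H1[n, 2 * n] = 0.5
        H1[n, 2 * n + 1] = 0.5
    return H1

N = 8                                 # keep N small: H is (N**2 // 4) x N**2
H1 = averaging_downsample_matrix(N)
H = np.kron(H1, H1)                   # separable 2D operator of Eq. (5)

x = np.random.rand(N, N)
y = (H @ x.ravel()).reshape(N // 2, N // 2)   # y = Hx, lexicographic ordering
# Sanity check: identical to direct 2 x 2 block averaging of x.
assert np.allclose(y, x.reshape(N // 2, 2, N // 2, 2).mean(axis=(1, 3)))
```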

Figure 2. Sub-sampling process from the $N \times N$ high-resolution image $x(m_1, m_2)$ to the $N/2 \times N/2$ low-resolution image $y(n_1, n_2)$: horizontal low-pass filtering and 2:1 decimation, followed by vertical low-pass filtering and 2:1 decimation.

On the right-hand side of (5), the first and the second $H_1$'s respectively represent the low-pass filtering and sub-sampling processes in the horizontal and the vertical directions. In Eqs. (5) and (6), $H$ is a matrix with an asymmetric structure. As shown in Fig. 1, one way to increase the spatial resolution is to increase the density of photo-detectors by reducing their size. However, the smaller each photo-detector is, the less incoming light energy it receives. The decrease in incoming light energy degrades the picture quality, since shot noise is in principle unavoidable. Therefore, with current sensor technology, there is a limit to how far the photo-detector can be shrunk while keeping shot noise invisible, and this size limit is known to be approximately 50 μm².25 It has been noted that current CCD technology has almost reached this bound.26 In other words, resolution beyond that bound should be obtained by using digital image interpolation techniques.

3. ITERATIVE REGULARIZED ALGORITHM WITH REAL-TIME STRUCTURE

In order to solve (4), the regularized iterative image interpolation algorithm finds the estimate $\hat{x}$ that satisfies the following optimization problem:

$$\hat{x} = \arg\min_{x_i} f(x_i), \qquad (7)$$

where

$$f(x_i) = \|y_i - H x_i\|^2 + \lambda \|C x_i\|^2. \qquad (8)$$

In Eq. (8), $C$ represents a high-pass filter, and $\|C x_i\|$ is a stabilizing functional whose minimization suppresses high-frequency components due to noise amplification. $\lambda$ represents the regularization parameter, which balances fidelity to the original image against smoothness of the restored image. The cost function in (8) can be rewritten as

$$f(x_i) = \tfrac{1}{2} x_i^T T x_i - b_i^T x_i, \qquad (9)$$

where

$$T = H^T H + \lambda C^T C \quad \text{and} \quad b_i = H^T y_i. \qquad (10)$$

The estimate that minimizes (9) is the solution of the linear equation

$$T x_i = b_i. \qquad (11)$$

In order to solve (11), the successive approximation describing the interpolated image, $x$, at the $(k+1)$st iteration step is given by

$$x^{k+1} = x^k + \beta \left\{ H^T \bar{y}^k - (H^T H + \lambda C^T C)\, x^k \right\}, \qquad (12)$$

where $\bar{y}^k$ represents the motion-compensated version of the $k$-th frame $y^k$, and $\beta$ controls the convergence rate.21 In dealing with iterative algorithms, the convergence and the rate of convergence are very important. Unless convergence is guaranteed, the iterative algorithm cannot be used. Even if convergence is guaranteed, a sufficiently high convergence rate is needed for practical applications. An in-depth convergence analysis has been presented elsewhere.22
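A minimal image-domain sketch of the iteration (12) is given below, assuming the factor-of-2 block-averaging degradation of Section 2 and an isotropic Laplacian for $C$; motion compensation is omitted, and the values of the regularization parameter and the step size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = 0.25 * np.array([[0., -1., 0.],
                             [-1., 4., -1.],
                             [0., -1., 0.]])

def H(x):
    """Degradation operator: 2:1 block-average decimation (Section 2)."""
    n = x.shape[0] // 2
    return x.reshape(n, 2, n, 2).mean(axis=(1, 3))

def Ht(y):
    """Adjoint of H: spread each LR pixel over its 2 x 2 block with weight 1/4."""
    return 0.25 * np.kron(y, np.ones((2, 2)))

def regularized_step(x, y, lam=0.05, beta=1.0):
    """One iteration of Eq. (12): x <- x + beta * (H^T y - (H^T H + lam C^T C) x)."""
    CtCx = convolve(convolve(x, LAPLACIAN), LAPLACIAN)   # C^T C x (kernel is symmetric)
    return x + beta * (Ht(y) - Ht(H(x)) - lam * CtCx)

# Usage sketch: start from a pixel-replicated guess and iterate a few times.
y = np.random.rand(64, 64)              # observed low-resolution frame
x = np.kron(y, np.ones((2, 2)))         # initial 128 x 128 estimate
for _ in range(10):
    x = regularized_step(x, y)
```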

4. SPATIALLY ADAPTIVE RESOLUTION ENHANCEMENT ALGORITHMS

The regularized iterative image interpolation algorithm presented in the previous section does not account for the human visual system. In this section we introduce an adaptive image interpolation algorithm that is better matched to the human visual system and present its efficient implementation. In addition, the pixel values of the fused low-resolution images carry the dominant features of the MSD images, which makes it possible to obtain better low-resolution observations from which a high-resolution image can be constructed by image fusion. The framework of the proposed resolution enhancement algorithm using image fusion is shown in Fig. 3. The MSD-based image fusion stage accepts sensor data, i.e., the LR image sequence from the LR imaging sensor, and fuses the data to obtain LR frames with more information. The regularized image resolution enhancement stage then acts on the output of the MSD-based fusion stage to interpolate the LR images, combining the data-compatibility and smoothness constraints with directional edge information.

Figure 3. The framework of the proposed resolution enhancement algorithm using image fusion: the LR image frames are registered (data alignment), preprocessed in an observation stage (DWT, DWF, steerable pyramid, etc.), and fused at the pixel level by MSD-based fusion; the fused LR frames, together with an orientation analysis of edges, drive the adaptive regularized image resolution enhancement algorithm that produces the HR frames.

Figure 4. The structure of the proposed image fusion method: registration, decomposition by the discrete wavelet frame, activity-level measurement, the maximum selection rule, and consistency verification.

4.1. Multiscale decomposition-based image fusion

The purpose of image interpolation is to recover or enhance the resolution of the original image or scene, and fusion offers an alternative way to approach it. The reason we use the fusion concept in this paper is that it can provide better resolution and more information about the scene than any single data set alone. Data fusion can be implemented at the signal, pixel, feature, or symbolic level of representation. We use pixel-level fusion, which merges multiple images on a pixel-by-pixel basis to improve the performance of many image processing tasks.16,17 From this idea, we derive a new interpolation algorithm that uses features of the MSD images as the fusion criterion. This stage provides more reliable input data by fusing the LR image frames at the pixel level. There are two approaches to image fusion: spatial-domain processing using gradients and local variances,23 and transform-domain processing using the discrete wavelet transform (DWT), the discrete wavelet frame (DWF), the steerable pyramid, etc. In this paper, the low-resolution image frames are decomposed using the DWF. Since the DWT is shift-variant, the DWF is used to alleviate misregistration problems. In order to apply the fusion algorithm to (12), the corresponding iteration process can be rewritten as

$$x^{k+1} = x^k + \beta \left\{ b^k_{\mathrm{fused}} - (H^T H + \lambda C^T C)\, x^k \right\}, \qquad (13)$$

where $b^k_{\mathrm{fused}}$ represents the fused image obtained from the low-resolution frames. To obtain the $k$-th fused frame, $b^k_{\mathrm{fused}}$ in (13), we must define an appropriate image fusion scheme, which is described in Fig. 4. First, the low-resolution frames are registered so that corresponding pixels are aligned. The frames are then decomposed using the DWF. We perform an activity-level measurement as a preliminary to choosing DWF coefficients; in our implementation, the activity associated with the center pixel is the maximum absolute value within a window. The maximum selection rule then determines the maximum value among the DWF coefficients, and a binary decision map of the same size as the DWF subbands is created to record the selection results. Finally, a majority filter is applied to the binary decision map: we median-filter the map, which records the origin of the coefficient values, to provide a constraint in the consistency verification step. Using the proposed fusion algorithm, we define the fused frame as

$$b^k_{\mathrm{fused}}(i, j) = \begin{cases} H^T y^k(i, j), & \text{if the binary decision map} = 1, \\ H^T y^{k-1}(i, j), & \text{if the binary decision map} = 0. \end{cases} \qquad (14)$$

According to the result of the image fusion algorithm, $H^T y^k(i, j)$ is chosen as the proper pixel value whenever the $k$-th binary decision map equals one, as in (14). After the image fusion process, we perform interpolation using the iterative regularized algorithm summarized in Sec. 3. In general, low-resolution images contain more low-pass components than the original high-resolution frames, so high-resolution images interpolated from such observations alone may be sub-optimal in the sense of resolution. In the proposed algorithm, however, the fused low-resolution observations retain the maximum-activity values at each pixel, which yields better low-resolution observations for constructing a higher-resolution image. The proposed interpolation algorithm uses the current and previous low-resolution frames to obtain a high-resolution frame, so the information in the low-resolution frames is reused without loss.
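The fusion rule of Fig. 4 and Eq. (14) can be sketched as follows, assuming the PyWavelets package for the undecimated wavelet frame. The wavelet, window size, and the summation of detail-band activities are illustrative choices, and the low-resolution decision map is replicated to the high-resolution grid before the selection of Eq. (14).

```python
import numpy as np
import pywt
from scipy.ndimage import maximum_filter, median_filter

def dwf_decision_map(frame_cur, frame_prev, wavelet="haar", window=3):
    """Binary decision map of Fig. 4, computed on the LR grid.

    Each registered frame is decomposed with the undecimated wavelet frame
    (swt2 keeps every subband at full image size, so coefficients stay
    spatially aligned under small misregistration).  The activity at a pixel
    is the maximum absolute detail coefficient in a window, summed over the
    detail bands; the maximum-selection rule picks the more active frame and
    a median filter performs the consistency verification.
    """
    (_, det_cur), = pywt.swt2(frame_cur, wavelet, level=1)    # frame sizes must be even
    (_, det_prev), = pywt.swt2(frame_prev, wavelet, level=1)
    act_cur = sum(maximum_filter(np.abs(d), window) for d in det_cur)
    act_prev = sum(maximum_filter(np.abs(d), window) for d in det_prev)
    decision = (act_cur >= act_prev).astype(np.uint8)         # 1 -> take current frame
    return median_filter(decision, size=window)

def fuse_backprojections(bt_cur, bt_prev, decision, q=2):
    """Eq. (14): per-pixel selection between H^T y^k and H^T y^(k-1)."""
    decision_hr = np.kron(decision, np.ones((q, q), dtype=np.uint8))
    return np.where(decision_hr == 1, bt_cur, bt_prev)

# Usage: y_cur, y_prev are registered LR frames; Ht is the adjoint from Section 3.
Ht = lambda y: 0.25 * np.kron(y, np.ones((2, 2)))
y_cur, y_prev = np.random.rand(64, 64), np.random.rand(64, 64)
b_fused = fuse_backprojections(Ht(y_cur), Ht(y_prev),
                               dwf_decision_map(y_cur, y_prev))
```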

4.2. Interpolation with spatially adaptive constraints

The characteristics of human vision are partly revealed by psychophysical experiments. According to those experiments, the human visual system is sensitive to noise in flat regions, while it becomes less sensitive to noise at sharp transitions in image intensity; in other words, it is less sensitive to noise at edges than in flat regions.20 Based on these results, various ways to subjectively improve the quality of the restored image have been proposed.18,27 The solution of (12) converges to a point between the overly rough least-squares solution and an overly smooth solution, and its location can be controlled by the regularization parameters. In many non-adaptive versions of regularized image restoration, a two-dimensional (2D) isotropic high-pass filter is used for $C$. A space-invariant high-pass filter, however, cannot efficiently restore high-frequency details in the image, and multi-frame interpolation methods have been proposed to address this problem by utilizing the additional information of temporally adjacent frames.7,21 As an alternative to non-adaptive interpolation, we propose a spatially adaptive interpolation algorithm using a set of $M$ different high-pass filters, $C_j$, for $j = 1, \ldots, M$, to selectively suppress the high-frequency component along the corresponding edge direction. For example, each pixel in an image can be classified as monotone, horizontal edge, vertical edge, or one of two diagonal edges; in this case $M = 5$, and each $C_j$ represents a high-pass filter along the given direction. Applying the proposed spatially adaptive constraints, the $k$-th regularized iteration step is given by

$$x^{k+1} = x^k + \beta \left( b - \sum_{i=1}^{M} I_i T_i x^k \right), \qquad (15)$$

where

$$b = H^T y \quad \text{and} \quad T_i = H^T H + \lambda C_i^T C_i, \qquad (16)$$

and $I_i$ represents a diagonal matrix whose diagonal elements are either zero or one. More specifically, the diagonal element of $I_i$ that corresponds one-to-one with a pixel of the image equals one if that pixel lies on the corresponding edge, and zero otherwise. The properties of $I_i$ can be summarized as

$$I_i I_j = 0, \;\; \text{for } i \neq j, \quad \text{and} \quad \sum_{i=1}^{M} I_i = I, \qquad (17)$$

where $I$ represents the $N^2 \times N^2$ identity matrix.
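A sketch of the adaptive step (15)-(16) is given below, assuming per-pixel indicator masks $I_i$ from the edge classifier of Fig. 5 and the directional high-pass kernels of Eq. (18); the block-averaging $H$ of Section 2 and the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

# High-pass kernels C_1..C_5 mirroring Eq. (18): C_1 is isotropic (monotone
# regions), the remaining kernels act along the four edge directions.
C = [0.25 * np.array(k, dtype=float) for k in (
    [[0, -1, 0], [-1, 4, -1], [0, -1, 0]],
    [[0, 0, 0], [-1, 2, -1], [0, 0, 0]],
    [[0, -1, 0], [0, 2, 0], [0, -1, 0]],
    [[0, 0, -1], [0, 2, 0], [-1, 0, 0]],
    [[-1, 0, 0], [0, 2, 0], [0, 0, -1]],
)]

def adaptive_step(x, b, masks, lam=0.05, beta=1.0):
    """One iteration of Eq. (15): x <- x + beta * (b - sum_i I_i T_i x).

    b is H^T y (or the fused b of Eq. (13)); masks[i] is the 0/1 indicator
    image I_i, mutually exclusive and summing to one as in Eq. (17); H is the
    2:1 block-averaging operator of Section 2.
    """
    n = x.shape[0] // 2
    HtHx = 0.25 * np.kron(x.reshape(n, 2, n, 2).mean(axis=(1, 3)), np.ones((2, 2)))
    correction = np.zeros_like(x)
    for Ii, Ci in zip(masks, C):
        CtCx = convolve(convolve(x, Ci), Ci[::-1, ::-1])      # C_i^T C_i x
        correction += Ii * (HtHx + lam * CtCx)
    return x + beta * (b - correction)

# Usage sketch with trivial masks (every pixel treated as monotone).
x = np.random.rand(128, 128)
b = np.zeros_like(x)                                  # placeholder for H^T y
masks = [np.ones_like(x)] + [np.zeros_like(x)] * 4
x = adaptive_step(x, b, masks)
```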

5. EXPERIMENTAL RESULTS

In order to demonstrate the performance of the proposed algorithm, we generated an image sequence of twenty 130 × 90 frames with subpixel motion. Fig. 6a shows the 520 × 360 high-resolution fighter image. Fig. 6b shows the twenty 130 × 90 subsampled images obtained from Fig. 6a using the degradation model in (3), each an artificially shifted version of Fig. 6a. The 3rd and 4th frames of the 20 LR images in Fig. 6b have a differently focused object and background, and the 11th and 12th frames are occluded by synthetic clouds. The interpolated images obtained with zero-order, bilinear, and cubic B-spline interpolation are shown in Figs. 6c, 6d, and 6e, respectively; the cubic B-spline result in Fig. 6e was produced with the `congrid(,/cubic)' function of IDL. Figs. 7 and 8 respectively show the images interpolated with the conventional non-fusion algorithm and with the proposed fusion-based adaptive regularized interpolation algorithm. In both cases, only one iteration is performed per image frame; therefore, almost real-time interpolation can be implemented with the proposed algorithm. As the five different constraints in (16), the following filters are used:

$$C_1 = \frac{1}{4}\begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}, \quad C_2 = \frac{1}{4}\begin{bmatrix} 0 & 0 & 0 \\ -1 & 2 & -1 \\ 0 & 0 & 0 \end{bmatrix}, \quad C_3 = \frac{1}{4}\begin{bmatrix} 0 & -1 & 0 \\ 0 & 2 & 0 \\ 0 & -1 & 0 \end{bmatrix},$$

$$C_4 = \frac{1}{4}\begin{bmatrix} 0 & 0 & -1 \\ 0 & 2 & 0 \\ -1 & 0 & 0 \end{bmatrix}, \quad C_5 = \frac{1}{4}\begin{bmatrix} -1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -1 \end{bmatrix}. \qquad (18)$$

Figure 5. The process of determining the direction of edges: the absolute differences between opposite neighbors of $X_{i,j}$ are tested in turn against a threshold $M$; $|X_{i-1,j} - X_{i+1,j}| > M$ indicates a vertical edge, otherwise $|X_{i,j-1} - X_{i,j+1}| > M$ a horizontal edge, otherwise $|X_{i-1,j-1} - X_{i+1,j+1}| > M$ a 45° edge, otherwise $|X_{i+1,j-1} - X_{i-1,j+1}| > M$ a 135° edge; if none of the tests succeeds, the pixel is classified as monotone.

In determining $I_i$, the differences along the four directions are compared at each pixel and at each iteration; this classification procedure is summarized in Fig. 5. To compare the performance of the algorithms, magnified images of the cockpit region are shown in Figs. 9a-9e. As the experimental results show, the proposed fusion-based adaptive algorithm gives better interpolation results in terms of both subjective and objective measures.
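A sketch of the per-pixel classification of Fig. 5, which produces the indicator masks $I_i$ used in (15), is given below; the threshold value is an illustrative assumption, and the masks must be ordered consistently with the $C_i$ actually chosen.

```python
import numpy as np

def classify_edges(x: np.ndarray, threshold: float = 0.1):
    """Per-pixel edge-direction classification of Fig. 5.

    Returns five 0/1 masks (monotone, vertical edge, horizontal edge,
    45-degree edge, 135-degree edge) that are mutually exclusive and sum to
    one, as required by Eq. (17).
    """
    p = np.pad(x, 1, mode="edge")
    # Absolute differences between opposite neighbours, following the tests of
    # Fig. 5 (the first subscript of the figure is treated as the row index).
    d1 = np.abs(p[:-2, 1:-1] - p[2:, 1:-1])   # |X[i-1,j]   - X[i+1,j]|
    d2 = np.abs(p[1:-1, :-2] - p[1:-1, 2:])   # |X[i,j-1]   - X[i,j+1]|
    d3 = np.abs(p[:-2, :-2] - p[2:, 2:])      # |X[i-1,j-1] - X[i+1,j+1]|
    d4 = np.abs(p[2:, :-2] - p[:-2, 2:])      # |X[i+1,j-1] - X[i-1,j+1]|
    vert = d1 > threshold
    horiz = ~vert & (d2 > threshold)
    diag45 = ~vert & ~horiz & (d3 > threshold)
    diag135 = ~vert & ~horiz & ~diag45 & (d4 > threshold)
    mono = ~(vert | horiz | diag45 | diag135)
    return [m.astype(float) for m in (mono, vert, horiz, diag45, diag135)]

masks = classify_edges(np.random.rand(128, 128))
assert np.allclose(sum(masks), 1.0)      # the masks partition the image (Eq. 17)
```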

6. CONCLUSIONS

In this paper we proposed a spatially adaptive image interpolation algorithm using image fusion. To obtain a higher-resolution image, the frames fused from the low-resolution image sequence are used as the input of the iterative regularization algorithm. The pixel values of the fused low-resolution images carry the dominant features of the MSD images, which makes it possible to obtain better low-resolution observations from which a high-resolution image can be constructed. Among the MSD methods, the low-resolution image frames are decomposed using the DWF; since the DWT is shift-variant, the DWF is used to alleviate misregistration problems. Based on the observation model, we proposed a regularization-based spatially adaptive interpolation algorithm using five different constraints. The experimental results show that the proposed adaptive algorithm restores high-frequency details and, at the same time, gives a higher PSNR value than the algorithms without image fusion.

Acknowledgments

This research was supported by the National Research Lab. Project of Korea.

REFERENCES

1. A. K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, 1989.
2. M. Unser, A. Aldroubi, and M. Eden, "Fast B-spline transforms for continuous image representation and interpolation," IEEE Trans. Pattern Analysis, Machine Intelligence, vol. 13, no. 3, pp. 277-285, March 1991.
3. J. A. Parker, R. V. Kenyon, and D. E. Troxel, "Comparison of interpolating methods for image resampling," IEEE Trans. Med. Imaging, vol. 2, no. 1, pp. 31-39, March 1983.
4. K. P. Hong, J. K. Paik, H. J. Kim, and C. H. Lee, "An edge-preserving image interpolation system for a digital camcorder," IEEE Trans. Consumer Electronics, vol. 42, no. 3, pp. 279-284, August 1996.
5. S. P. Kim, N. K. Bose, and H. M. Valenzuela, "Recursive reconstruction of high-resolution image from noisy undersampled frames," IEEE Trans. Acoust., Speech, Signal Processing, vol. 38, pp. 1013-1027, June 1990.
6. A. Patti, M. I. Sezan, and A. M. Tekalp, "High-resolution image reconstruction from a low-resolution image sequence in the presence of time-varying motion blur," Proc. 1994 Int. Conf. Image Processing, November 1994.
7. M. C. Hong, M. G. Kang, and A. K. Katsaggelos, "An iterative weighted regularized algorithm for improving the resolution of video sequences," Proc. 1997 Int. Conf. Image Processing, vol. 2, pp. 474-477, October 1997.
8. B. C. Tom and A. K. Katsaggelos, "An iterative algorithm for improving the resolution of video sequences," Proc. SPIE Visual Comm., Image Proc., pp. 1430-1438, March 1996.
9. R. R. Schultz and R. L. Stevenson, "A Bayesian approach to image expansion for improved definition," IEEE Trans. Image Processing, vol. 3, no. 3, pp. 233-242, May 1994.
10. R. R. Schultz and R. L. Stevenson, "Extraction of high-resolution frames from video sequences," IEEE Trans. Image Processing, vol. 5, no. 6, pp. 996-1011, June 1996.
11. J. H. Shin, J. H. Jung, and J. K. Paik, "Spatial interpolation of image sequences using truncated projections onto convex sets," IEICE Trans. Fundamentals of Electronics, Communications, Computer Sciences, to appear, June 1999.
12. R. C. Hardie, K. J. Barnard, and E. E. Armstrong, "Joint MAP registration and high-resolution image estimation using a sequence of undersampled images," IEEE Trans. Image Processing, vol. 6, no. 12, pp. 1621-1633, December 1997.
13. A. J. Patti, M. I. Sezan, and A. M. Tekalp, "High-resolution standards conversion of low resolution video," Proc. 1995 Int. Conf. Acoust., Speech, Signal Processing, pp. 2197-2200, 1995.
14. M. Elad and A. Feuer, "Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images," IEEE Trans. Image Processing, vol. 6, no. 12, pp. 1646-1658, December 1997.
15. C. J. Kuo, C. Liao, and C. C. Lin, "Adaptive interpolation technique for scanning rate conversion," IEEE Trans. Circuits, Systems, Video Technology, vol. 6, no. 3, pp. 317-321, June 1996.
16. L. A. Klein, Sensor and Data Fusion Concepts and Applications, SPIE Optical Engineering Press, 1999.
17. D. L. Hall, Mathematical Techniques in Multisensor Data Fusion, Artech House, 1992.
18. A. K. Katsaggelos, "Iterative image restoration algorithms," Optical Engineering, vol. 28, pp. 735-748, 1989.
19. R. W. Schafer, R. M. Mersereau, and M. A. Richards, "Constrained iterative restoration algorithms," Proc. IEEE, vol. 69, no. 4, pp. 432-450, April 1981.
20. G. L. Anderson and A. N. Netravali, "Image restoration based on a subjective criterion," IEEE Trans. Sys., Man, Cybern., vol. SMC-6, no. 12, pp. 845-853, December 1976.
21. J. H. Shin, Y. C. Choung, and J. K. Paik, "A general framework of image sequence interpolation," Proc. SPIE Visual Comm., Image Proc., vol. 3309, part 1, pp. 297-304, January 1998.
22. J. H. Shin, J. S. Yoon, J. K. Paik, and M. A. Abidi, "Fast superresolution for image sequence using motion adaptive relaxation parameters," Proc. 1999 Int. Conf. Image Processing, vol. 3, pp. 676-680, October 1999.
23. J. H. Shin, J. S. Yoon, and J. K. Paik, "Image fusion-based adaptive regularization for image expansion," Proc. SPIE Image, Video Comm., Proc., January 2000.
24. J. S. Lim, Two-Dimensional Signal and Image Processing, Prentice-Hall, pp. 495-497, 1990.
25. K. Aizawa, T. Komatsu, and T. Saito, "A scheme for acquiring very high resolution images using multiple cameras," Proc. 1992 Int. Conf. Acoust., Speech, Signal Processing, vol. 3, pp. 289-292, 1992.
26. T. Ando, "Trend of high-resolution and high-performance solid state imaging technology," J. ITE Japan, vol. 44, no. 2, pp. 105-109, February 1992.
27. A. K. Katsaggelos, J. Biemond, R. W. Schafer, and R. M. Mersereau, "A regularized iterative image restoration algorithm," IEEE Trans. Signal Processing, vol. 39, no. 4, pp. 914-929, 1991.
28. H. J. Trussell and M. R. Civanlar, "The feasible solution in signal restoration," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, no. 2, pp. 201-212, April 1984.
29. D. C. Youla and H. Webb, "Image restoration by the method of convex projections: Part 1 - theory," IEEE Trans. Medical Imaging, vol. MI-1, no. 2, pp. 81-94, October 1982.
30. W. T. Freeman and E. H. Adelson, "The design and use of steerable filters," IEEE Trans. Pattern Analysis, Machine Intelligence, vol. 13, no. 9, pp. 891-906, September 1991.
31. Z. Zhang and R. S. Blum, "A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application," Proc. IEEE, vol. 87, pp. 1315-1326, August 1999.

Figure 6. (a) Original image; (b) 20 synthetically subsampled images with differently focused and occluded objects (the 3rd and 4th frames are differently focused, the 11th and 12th frames are occluded); (c) zero-order interpolated image (PSNR = 27.60 dB); (d) bilinear interpolated image (PSNR = 26.40 dB); (e) cubic B-spline interpolated image (PSNR = 26.37 dB).

Figure 7. Regularized interpolated image without the fusion algorithm (PSNR = 29.17 dB).

Figure 8. Adaptively regularized interpolated image with the fusion algorithm (PSNR = 29.69 dB).

Figure 9. Magnified results: (a) the cockpit of Fig. 6c (PSNR = 27.60 dB); (b) the cockpit of Fig. 6d (PSNR = 26.40 dB); (c) the cockpit of Fig. 6e (PSNR = 26.37 dB); (d) the cockpit of Fig. 7 (PSNR = 29.17 dB); (e) the cockpit of Fig. 8 (PSNR = 29.69 dB).