JOURNAL OF MULTIMEDIA, VOL. 5, NO. 1, FEBRUARY 2010


Super Resolution for Fast Transfer of Graphics over Internet

Dr. Varsha H. Patil, Professor, University of Pune, India, [email protected]
Dr. Gajanan K. Kharate, Principal, MCREC, Nashik

Dr. Dattatraya S. Bormane and Snehal S. Kamlapur
[email protected], [email protected], [email protected]

Abstract—The Internet has become an indispensable component of today's transacting world. Though a powerful medium, the Internet cannot always quickly transfer a web page containing images in its current form. Such web pages not only take a long time to reach their destination but sometimes slow down or block other traffic on the network. To improve response time, in this paper we propose a real-time technique as an efficient solution. We propose to transmit a low-resolution (LR) image from the transmitter and display a high-resolution (HR) image at the receiver. At the transmitting end, a special filter is applied to the HR image to yield an LR image and high-frequency components. These high-frequency components are used to build a basis function, which is optimal in size and is sent along with the LR image. The HR image is reconstructed from these two at the receiver's end. With the proposed approach the quality of the image is better and the transmission time is less than that of conventional approaches. Simulation results show the validity of our approach.

Index Terms—Extrapolation, Internet, High Resolution, Super Resolution, Thumbnail, Wavelet

© 2010 ACADEMY PUBLISHER doi:10.4304/jmm.5.1.71-78

I. INTRODUCTION

Today there is an imminent need for performance-optimized and reliable delivery of HTTP content to all application users. The result is the rapid growth and deployment of web-based applications. With increasingly distributed applications, performance has become a key issue in IT management. The web is one of the quickest and most cost-effective methods of sharing data in real time, ensuring everyone has access to the latest relevant information, and images are an important part of almost all such information. Enterprises have deployed an array of network acceleration technologies in the application data path to address performance issues. Most application acceleration products rely heavily on basic technologies that are short-term solutions and end up increasing complexity in the long term, leaving the core problem unaddressed. For image-embedded web pages, a flexible solution that can cut down web traffic across multiple protocols and optimize application performance is urgently needed. Image compression is one solution for the fast transfer of content across the web. Most compression techniques, including JPEG2000, are lossy. Quantization removes some of the information during encoding, which leads to quantization error. Low-frequency data is preserved while high-frequency data is coarsely quantized [3,4,5]. This removes the high-frequency components of edges and introduces strong ringing artifacts at the decoder. Decompression cannot recover the lost high-frequency components [4,5]. Downloading images on the web inevitably involves compromises: the tradeoff is between resolution/quality and faster download time. In the case of medical images these factors are crucial. As the resolution increases, the download time increases. High-resolution, photographic-quality electronic images play a key role in medical education and patient care. Such images on the Internet are typically Graphics Interchange Format (GIF) or Joint Photographic Experts Group (JPEG) files. These formats suffer from the major drawback of considerable data loss in both color depth and image resolution. JPEG is used as a general-purpose digital image compression standard for continuous-tone still images; the compressed file presents an edge effect with perceptible data loss [5]. These images are also available in only a single resolution, with no capacity to adjust resolution as needed by the user. The Photo Compact Disc (PCD) format has higher resolution but suffers from larger file sizes, leading to longer download times. The FlashPix format (FPX) offers advantages over the GIF, JPEG and PCD formats for display of HR images over the web [17]; a Java applet can easily be downloaded for viewing FPX images. We take a different approach that improves on these formats; Section VI compares them. One common workaround for cutting down the response time of browsing web pages is to show images in low-resolution form, known as 'thumbnail images'. An enlarged, high-resolution image is only shown if the user clicks on the related thumbnail.
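The irreversible loss described above can be illustrated with a small numerical sketch. This is illustrative only (it zeroes high-frequency DFT coefficients of a step edge rather than performing actual JPEG quantization), but it shows both effects at once: the discarded band is gone for good, and the reconstructed edge exhibits ringing.

```python
import numpy as np

# Illustrative sketch (not actual JPEG): discarding high-frequency
# coefficients loses edge detail that no decompressor can recover.
N = 256
signal = np.zeros(N)
signal[N // 2:] = 1.0                 # an ideal step edge

spectrum = np.fft.fft(signal)
cutoff = 16                           # keep only the lowest frequencies
coarse = spectrum.copy()
coarse[cutoff:N - cutoff + 1] = 0.0   # "coarse quantization" of the high band

recovered = np.fft.ifft(coarse).real

# The reconstructed edge overshoots and oscillates (Gibbs ringing),
# and the zeroed band stays zero: the information is simply gone.
print(f"max overshoot near edge: {recovered.max() - 1.0:.3f}")
```

Running this prints an overshoot of roughly 9%, the classical ringing artifact that low-pass truncation produces around a discontinuity.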
This approach still requires the high-resolution image to be stored on the web server and downloaded to the user's client machine on demand. Another solution is to enlarge the image at the client side. When enlarging a thumbnail with digital imaging software such as Adobe Photoshop on the client machine instead of downloading from a server, the output image will be distorted if the specified size does not preserve the aspect ratio of the input image [4]. Even with low-pass filtering, resizing an image can introduce artifacts, because information is always lost when the size of an image is reduced. Internet Explorer 6 has taken steps to improve web browser flexibility with Automatic Image Resizing: if pictures are too large to display in the browser window, automatic resizing makes them fit, so the user no longer needs to scroll horizontally or vertically to view large pictures. This resizing only zooms images out, however; it is not suitable for zooming in. Image interpolation is widely used for resizing, but it produces a blurred image with blocking effects when the image is zoomed with a change in aspect ratio. To save storage space and communication bandwidth (that is, download time) while retaining image quality, it would be advantageous if a low-resolution image of optimized size were sent across the web and the high-resolution image were constructed from it on the user's machine. Having recognized these needs, we provide a technique that meets the performance requirements for enhancing image resolution. Our technique is based on extrapolation-based super-resolution theory. Super-resolution (SR), also known as high resolution, means that the pixel density within an image is high. The resolution enhancement technique we propose, given today's technology capabilities, is a far more elegant and efficient approach to image-embedded web pages: the server sends the low-resolution image along with the basis function, and on the client's demand the high-resolution image is reconstructed. We believe this advancement reflects the pace at which application-performance techniques are evolving in today's business ecosystem.
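The claim that downsampling destroys information that interpolation cannot restore is easy to check numerically. The sketch below is a toy 1-D analogue (a single hypothetical image row, not the paper's method): a sharp edge is shrunk by pixel averaging and enlarged again by linear interpolation, and the edge comes back blurred.

```python
import numpy as np

# Sketch: shrinking loses information that interpolation cannot restore.
x = np.concatenate([np.zeros(8), np.ones(8)])     # a sharp 1-D edge
small = (x[0::2] + x[1::2]) / 2.0                 # downsample by 2 (averaging)

# enlarge back to the original length by linear interpolation
pos = np.linspace(0, small.size - 1, x.size)
enlarged = np.interp(pos, np.arange(small.size), small)

# the residual is nonzero around the edge: it is blurred, not restored
print(f"max reconstruction error: {np.abs(enlarged - x).max():.3f}")
```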
The remainder of this paper is organized as follows. Section II describes the concepts of super-resolution, Section III covers the proposed technique, and the implementation is described in Section IV. Section V focuses on experimental results, Section VI is devoted to comparison with existing approaches, and the conclusion and future scope are given in Section VII.

II. THE SUPER RESOLUTION IMAGE RECONSTRUCTION

Super-resolution imaging is the technique of generating a high-resolution image from one or more low-resolution images. Algorithmic advances in super-resolution technology, in tandem with hardware development, can meet the demand for such high-resolution images. As there are limitations on hardware-based solutions for SR imaging, one promising approach is to use signal-processing techniques to obtain an HR image (or sequence) from observed low-resolution (LR) image(s) [12,13]. Hence, resolution enhancement by super-resolution image reconstruction from multiple shifted low-resolution images, using computational, mathematical, and statistical techniques, has recently received a great deal of attention. The major advantage of this approach is that it may cost less, and existing LR imaging systems can still be utilized. While most methods proposed by researchers perform super-resolution from multiple low-resolution images of the same scene, some of the research work has addressed generating a high-resolution image from a single low-resolution image, with the help of a set of one or more training images from scenes of the same or different types [12-16]. This is commonly referred to as the single-image super-resolution problem. More recently, researchers have proposed learning-based methods. Some make use of a training set of images, while others require no training set but instead rely on strong image priors that are either hypothesized or learned from data. Simple resolution enhancement methods based on smoothing and interpolation techniques for noise reduction have been commonly used in image processing. Smoothing is usually achieved by applying spatial filters such as Gaussian, Wiener, and median filters. Commonly used interpolation methods include bicubic interpolation and cubic spline interpolation [6,7,9]. Interpolation methods usually perform better than simple smoothing methods. However, both are based on generic smoothness priors and hence are indiscriminate: they smooth edges as well as regions with little variation, causing blurring and checkerboard effects [8,9].

III. PROPOSED TECHNIQUE

Extrapolation of the spectrum of an object beyond the diffraction limit of the imaging system is called super-resolution [1]. Extrapolation means extending a signal outside a known interval.
We use a single-image super-resolution technique based on extrapolation, which is well suited to web-based image transfer. Our aim is to transmit a low-resolution image across the web instead of the original high-resolution image. This requires decomposing the high-resolution image into two components: a low-resolution image and a basis function. Here the low-resolution image is obtained by applying a suitable filter: the output of filtering is an LR image, and the filtered-out component is used to build the basis function. Our approach follows these steps:

Step 1: Apply a filter to the high-resolution image to yield a low-resolution image and the high-frequency components used to construct the basis function.
Step 2: Transmit the low-resolution image as a thumbnail to the client, along with the basis function.
Step 3: Reconstruct the high-resolution image from the low-resolution image and the basis function by a JavaScript that runs within the user's web browser.

The foundation upon which the proposed approach is built is discussed next.

Extrapolation

The general goal of extrapolation is to estimate a signal, f(t), from a limited known segment, g(t). If there is a transform that can describe g(t) from f(t), g(t) = T{f(t)}, then its inverse could estimate f(t) from g(t), f(t) = T⁻¹{g(t)} [1,2]. Extrapolation in the spatial coordinates could improve the spectral resolution of an image, whereas frequency-domain extrapolation could improve the spatial resolution [2]. Such problems arise in power spectrum estimation, resolution of closely spaced objects in radio astronomy, radar target detection, geological exploration, and many more. A band-limited signal f(x) can be determined completely from knowledge of it over an arbitrary finite interval [-α, α]. This follows from the fact that a band-limited function is an analytic function, because of its Taylor series:

f(x + ∆) = f(x) + Σ_{n=1}^{∞} (∆ⁿ/n!) dⁿf(x)/dxⁿ   (1)

This series is convergent for all x and ∆. By letting x ∈ [-α, α] and x + ∆ > α, (1) can be used to extrapolate f(x) anywhere outside the interval [-α, α]. The foregoing ideas can also be applied to a space-limited function, that is, f(x) = 0 for |x| > α, whose Fourier transform is given over a finite frequency band. This means, theoretically, that a finite object imaged by a diffraction-limited system can be perfectly resolved by extrapolation in the Fourier domain. Super-resolution imaging is based on extrapolation of the spectrum of an object beyond the diffraction limit of the imaging system [2]. The formation of an image is given by the equation:

g(x) = ∫ h(ξ − x) f(ξ) dξ   (2)

where g is the image, h is the point spread function (the inverse Fourier transform of the optical transfer function) and f is the original object. From this equation we have the Fourier description:

G(u) = H(u) F(u)   (3)

Computing F as the quotient of G and H in (3) implies a division by zero: since H is zero beyond the diffraction-limit cut-off, the reconstruction of any information about F beyond the cut-off is impossible. On this basis, so the argument goes, super-resolution can be dismissed as either a theoretical or practical concept. Consider, however, the imaging in incoherent light of a compact object, that is, an object that must be positive and wholly contained within some finite interval. The object has the properties:

f(x) > 0, x ∈ X;  f(x) = 0, x ∉ X   (4)

Here X is usually referred to as the region of support. The description of (4) can also be expressed more succinctly as:



f(x) rect(x/X)   (5)

where we have assumed the support X coincides with the standard definition of the rect function, with no loss of generality. We now divide the Fourier spectrum of f(x) into two parts: Fb(u) is the portion of the spectrum below the diffraction-limit cut-off, and Fa(u) is the portion above the cut-off. From simple Fourier theorems and the multiplication by the rect function we have:

F(u) = [Fa(u) + Fb(u)] * sinc(Xu)   (6)

The important characteristic of this equation is the convolution of the two portions of the spectrum of F with the sinc function, which is the Fourier transform of the rect function. Since the sinc function is infinite in extent, components of the spectrum above the diffraction-limit cut-off are introduced into the spectrum below the cut-off by the convolution with the sinc function. In other words, the compactness of the object causes information about that object to be present in the region of the spectrum that is passed by the optical transfer function. Clearly, if we can find a way to use that information, there is a basis for realizing super-resolution. Finally, the image formation process takes place, and from (3) we have the Fourier description of the image:

G(u) = H(u){[Fa(u) + Fb(u)] * sinc(Xu)}   (7)

Here Fa(u) serves as the basis for constructing the high-resolution image at the client machine; it is sent along with the low-resolution image.

IV. IMPLEMENTATION
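Before turning to the implementation details, the division-by-zero obstacle of Eq. (3) is worth verifying numerically, since it is exactly the gap the basis function is meant to fill. In the sketch below (an idealized DFT stand-in, not the paper's optical model), the OTF H passes frequencies below a cut-off and is exactly zero above it, so inverse filtering recovers the object spectrum only inside the passband and the image carries no information beyond it.

```python
import numpy as np

# Sketch of Eq. (3) with an ideal diffraction-limited OTF.
N = 128
rng = np.random.default_rng(0)
f = rng.standard_normal(N)                     # stand-in object
F = np.fft.fft(f)

freqs = np.fft.fftfreq(N)
H = (np.abs(freqs) <= 0.15).astype(float)      # ideal OTF: 1 in band, 0 outside
G = H * F                                      # image spectrum, Eq. (3)

passband = H > 0
F_rec = np.zeros_like(F)
F_rec[passband] = G[passband] / H[passband]    # divide only where H != 0

# Inside the passband the object spectrum is recovered exactly;
# outside it, G is identically zero and carries no information.
print(np.allclose(F_rec[passband], F[passband]))   # True
print(np.allclose(G[~passband], 0.0))              # True
```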

In this section we provide a detailed implementation of each of the three steps of our approach.

Filtering

Considering the basis for super-resolution, we reverse-formulate the problem for this application. If we omit noise, constructing the low-resolution image from the high-resolution image is a decomposition of the image into an LR image and a high-frequency component using a filter. We used the wavelet transform for this purpose, designing the filter from the mother wavelet db25. After filtering, we get a low-resolution image represented by (8), and the basis function representing the component Fa(u) is constructed from the high-frequency component, i.e., the portion of the spectrum above the diffraction-limit cut-off in (6).

G(u) = H(u){Fb(u) * sinc(Xu)}   (8)

The characteristic of (7) is the convolution of the two portions of the spectrum of F with the sinc function, the Fourier transform of the rect function. Since the sinc function is infinite in extent, Fa(u) represents the components of the spectrum above the diffraction-limit cut-off that are introduced into the spectrum below the cut-off by the convolution with the sinc function. In other words, the nature of the low-resolution image causes information about the object to be present in the region of the spectrum that is passed by the low-pass filter. We can use Fa(u) as a basis function to realize super-resolution. Finally, the image formation process takes place as in (3), with the Fourier description of the image as in (7). The high-frequency component is used to extrapolate the spectrum of the object beyond the diffraction limit. The actual image data is stored at the web server, decomposed into an LR image and a basis function. The database can be designed to save images in this decomposed format, which may lead to storage optimization.

The Wavelet

We have developed a method, well suited to increasing the resolution of images, that is based on the wavelet transform. Wavelets are functions that satisfy certain mathematical requirements and are used in representing data or other functions. The basic idea of the wavelet transform is to represent an arbitrary signal S as a superposition of a set of such wavelets or basis functions. These basis functions are obtained from a single prototype wavelet, called the mother wavelet, by dilation (scaling) and translation (shifts). The wavelet transform of a one-dimensional signal can be defined as follows:

c(a, b) = ∫_R s(t) (1/√a) Ψ((t − b)/a) dt   (9)

The indices c(a, b) are called the wavelet coefficients of the signal s(t), with a the dilation and b the translation; Ψ(t) is the transforming function, the mother wavelet, so called because the wavelets derived from it analyze the signal at different resolutions (1/a). Low frequencies are examined with low temporal resolution, while high frequencies are examined with more temporal resolution. A wavelet transform combines both low-pass and high-pass filtering in the spectral decomposition of signals. The wavelet transform has been identified as an effective tool for time-frequency representation of signals. It can decompose a digital image into frequency sub-images, each represented with proportional frequency resolution.
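Equation (9) can be evaluated directly by numerical integration. The sketch below uses the Mexican-hat mother wavelet purely for illustration (the paper's own filter is built from db25): for a signal that is itself a wavelet centred at t = 3, the coefficient magnitude at matching scale peaks at translation b = 3.

```python
import numpy as np

# Direct numerical evaluation of Eq. (9) with an illustrative wavelet.
def psi(t):
    # Mexican-hat mother wavelet (illustrative choice, not db25)
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

t = np.linspace(-20.0, 20.0, 4001)
dt = t[1] - t[0]
s = psi(t - 3.0)                       # signal: a wavelet centred at t = 3

def c(a, b):
    # c(a, b) = ∫ s(t) (1/√a) Ψ((t − b)/a) dt, via a Riemann sum
    return np.sum(s * psi((t - b) / a) / np.sqrt(a)) * dt

shifts = np.arange(-5.0, 10.0, 0.5)
coeffs = [abs(c(1.0, b)) for b in shifts]
print(shifts[int(np.argmax(coeffs))])  # the coefficient peaks at b = 3.0
```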

Fig 1 (a) Wavelet Decomposition and Reconstruction

The resulting band-pass representation provides the solution space for many image processing problems: a signal can be decomposed into a lower-frequency subspace and higher-frequency subspaces. We decompose the image using the wavelet transform; our technique employs the spectral-frequency signals to increase the resolution of the image. This method is illustrated in Fig. 1(a-c), where X represents the HR image and LL0 is the available LR image. It should be noted that, in this method, the synthesis wavelet filter pairs achieve the interpolative reconstruction; as a consequence, selecting a mother wavelet that better models the regularity of natural images yields better results. In wavelet analysis, a signal is split into an approximation and a detail. The approximation is then itself split into a second-level approximation and detail, and the process is repeated. For an n-level decomposition, there are n+1 possible ways to decompose or encode the signal. The filtering part of the reconstruction process also bears some discussion, because it is the choice of filters that is crucial in achieving perfect reconstruction of the original signal. The downsampling of the signal components performed during the decomposition phase introduces a distortion called aliasing.
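The decomposition and reconstruction loop of Fig. 1 can be sketched with a single-level 2-D Haar transform. Haar is used here only because its filters are two taps long; the paper itself uses db25, which a wavelet library such as PyWavelets would supply. LL plays the role of the thumbnail-like approximation, while LH, HL and HH hold the high-frequency detail.

```python
import numpy as np

# One level of a 2-D Haar DWT and its exact inverse.
def haar_dwt2(img):
    a = (img[0::2, :] + img[1::2, :]) / 2.0    # rows: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0    # rows: difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0       # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0       # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0       # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0       # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    a = np.zeros((LL.shape[0], 2 * LL.shape[1]))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH  # undo column step
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.zeros((2 * a.shape[0], a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d  # undo row step
    return img

rng = np.random.default_rng(2)
x = rng.standard_normal((8, 8))
print(np.allclose(haar_idwt2(*haar_dwt2(x)), x))   # True: perfect reconstruction
```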

The Basis Function

Fig. 1(b) Filters in Wavelet Decomposition and Reconstruction

The thumbnail is constructed using only the low-frequency components. The remaining, high-frequency components are used to construct the basis function. We use concepts similar to compression here. These contents represent a smooth image with high entropy. It is observed that for smooth images the basis function size is negligible compared with images that have more high-frequency components. We transform the components using the wavelet transform and encode them. The basis function is the function representing the high-frequency components; it is transmitted across the web. In the current advanced application-delivery environment, the system directs a client's request to a back-end web server, determined by an administrator-selected load-balancing algorithm or layer-7 content-switching policies. On a client request, the server transmits the thumbnail, that is, the low-resolution image, and the basis function, along with JavaScript that runs within the user's web browser.

Reconstruction of the High Resolution Image

At the client system the user gets the high-resolution image, reconstructed within a short span of time using JavaScript. The reconstruction is based on (7), a convolution of the LR image (the thumbnail) and the basis function. Initially the approximate contents from the thumbnail are used to construct an image of the original resolution. Such an image has the approximate contents spread over the image and no high-frequency details. Then the basis is used to add the high-frequency contents to the image. In the case of compression, which is mostly lossy, the high-frequency components lost during compression are not recovered in the decompression process. In our proposed process, we add the high-frequency components to the LR image using convolution to reconstruct the original HR image, intelligently extending the high-frequency components to make edges look sharper. Perfect reconstruction is achieved by carefully choosing closely related filters for the decomposition and reconstruction phases: the low- and high-pass decomposition filters (L and H) and their associated reconstruction filters (L' and H').
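The difference between reconstructing from the thumbnail alone and reconstructing with the basis added back can be sketched in 1-D. This toy analogue uses an orthonormal Haar pair standing in for L, H, L' and H' (the paper's actual filters come from db25): without the detail the edge is smeared; with it, reconstruction is exact.

```python
import numpy as np

# Toy 1-D analogue of thumbnail-plus-basis reconstruction.
def analyze(x):                       # one level of an orthonormal Haar DWT
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def synthesize(a, d):
    x = np.empty(2 * a.size)
    x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

x = np.zeros(16)
x[7:] = 1.0                           # signal with a sharp edge
a, d = analyze(x)

smooth_only = synthesize(a, np.zeros_like(d))   # thumbnail alone
full = synthesize(a, d)                         # thumbnail + basis

print(np.allclose(full, x))                     # True: exact reconstruction
print(np.abs(smooth_only - x).max())            # 0.5: edge smeared without detail
```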

Fig. 2(b) Decomposition of image

Using the approximate component, the thumbnail is constructed, as shown in Fig. 3 for the Woman2 image of Fig. 2(a).

Fig 1 (c) Reconstruction using decomposed components

The wavelet coefficients of natural images have two important properties: persistence and non-Gaussianity. Persistence refers to the observation that the magnitudes of wavelet coefficients corresponding to the same spatial location tend to propagate from lower resolution scales through to higher resolution scales (Fig. 1(c)).

V. EXPERIMENTAL RESULTS

We implemented the proposed approach and tested it on standard 256 × 256 color images: Lenna, Woman2, Peppers, Airplane, Sailboat, and Text.

Fig. 2 (a) Original Image

Fig. 2(a) shows one of the test images, the original Woman2 image. Fig. 2(b) shows the decomposition of the image into approximation and detail components: the upper-left corner is the low-frequency component used to construct the thumbnail image, and the remaining components are the high-frequency components used to build the basis function from which the resultant image of Fig. 4 is reconstructed.


Fig. 3 Thumbnail of size 53 x 53

Each of the three high-frequency components (horizontal, vertical and diagonal) is used to build the basis function. The basis function is expressed in terms of these wavelet coefficients, taking advantage of the well-known sparsity of the wavelet representation of filtered high-frequency components. For image quality measurement there are two commonly used techniques: objective evaluation and subjective evaluation. We have used both objective (MSE, PSNR, MSSIM) and subjective (Mean Opinion Score, MOS) measures of image quality. The Peak Signal to Noise Ratio (PSNR) is adopted as the error metric for the objective measure. PSNR is defined as:

PSNR = 10 log₁₀ [ 255² / ( (1/N) Σ_{i=0}^{N−1} (x_i − x̃_i)² ) ]   (10)

Here x is the original image, x̃ is the reconstructed HR image, and N is the total number of pixels. The PSNR for the test images for interpolation, JPEG and the proposed approach is given in Table 1. For the subjective test, the original and reconstructed images were shown to a set of observers and grades were obtained. The test was conducted for three approaches: interpolation at the client side, the conventional compression approach, and the proposed approach. Images reconstructed with the proposed approach received grade 1, that is, excellent, from 90% of the observers (Table 2).
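Equation (10) translates directly to code. The sketch below uses synthetic 8-bit data rather than the paper's test images, so the resulting value is only illustrative.

```python
import numpy as np

# PSNR as in Eq. (10), for 8-bit images (peak value 255).
def psnr(x, x_rec):
    mse = np.mean((x.astype(float) - x_rec.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0**2 / mse)

rng = np.random.default_rng(3)
original = rng.integers(0, 256, size=(256, 256))
noisy = np.clip(original + rng.normal(0.0, 5.0, original.shape), 0, 255)
print(f"{psnr(original, noisy):.2f} dB")   # roughly 34 dB for sigma = 5
```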



TABLE 1
OBJECTIVE TEST RESULTS: PSNR (dB) COMPARISON

Image      Compression   Bilinear   Bicubic   Proposed
Lenna         25.55        26.68     27.06      31.74
Woman2        29.23        30.28     30.76      33.46
Peppers       26.21        28.00     28.35      34.07
Airplane      24.24        26.33     26.79      30.95
Sailboat      24.33        25.04     25.39      29.94
Text          27.22        28.13     28.45      32.05

We found the subjective measure the most suitable for all images except the text image; it is based on the human visual system, which allows a better correlation with the response of the human observer. We conducted the experiment with 50 observers, who evaluated the quality of both images with grades 1 (Excellent), 2 (Good), 3 (Fair), 4 (Poor) and 5 (Bad), as in Table 2.

TABLE 2
SUBJECTIVE TEST RESULTS

Image      Bicubic   Bilinear   Compression   Proposed
Lenna         2          3           2            1
Woman2        2          3           2            1
Peppers       3          3           3            2
Airplane      2          3           2            1
Sailboat      2          3           2            1
Text          3          4           3            2

While most of the proposed super-resolution techniques are based on multiple low-resolution images of the same scene [17,18], the focus of our technique is generating a high-resolution image from a single low-resolution image. Experiments have shown that the proposed approach is competitive with the state of the art in resolution enhancement, with the O(n log n) time complexity of the convolution operation. In the conventional web approach the HR image is downloaded on demand to the client's machine from the server, and the time required to download the image is obviously longer. With our approach the size of the thumbnail plus basis is on average one eighth of the original, and the transfer of the image across the web is approximately six times faster than the conventional approach.

VI. STUDY OF SOME EXISTING APPROACHES

There are three existing solutions for displaying photo-quality images on the Internet. The first is to use a PCD-aware Internet browser, such as the newest Mosaic for Windows, downloaded from the National Center for Supercomputing Applications web site. The second is to use PCD on the Web, which includes a Java PCD applet on the web page to be interpreted by Java-enabled Internet browsers (such as Netscape Navigator 3.0 and later and Microsoft Internet Explorer 3.0 and later). The third utilizes FlashPix technology, a creation of Live Picture (Campbell, CA) in collaboration with Hewlett-Packard (Palo Alto, CA), Eastman Kodak, and Microsoft (Redmond, WA). FlashPix offers a better solution for work with large photographic-quality images [17,20]. In our work, we compared our technique with the FlashPix technology and with a built-in Java PCD applet for posting high-resolution photographic-quality images on the World Wide Web (WWW); results are provided in Table 3. Both implementations display photographic images with similar on-line toolbars to adjust display features interactively [21].


Fig. 4 Reconstructed Image

Using FlashPix technology, a photographic image can be converted to a format well suited for display and retrieval on the Web. Smaller low-resolution images in GIF or JPEG format can be created as an online preview album linked to the FPX images. This method reduces the long loading times associated with viewing high-resolution images on the Web. It allows the delivery of high-resolution photographic images to Internet browsers with a FlashPix plug-in at about the same speed as lower-resolution GIF or JPEG images. Users can zoom in on a region of interest for greater detail, a feature that is lacking in the GIF and JPG formats [20,21].

TABLE 3
COMPARISON WITH OTHER METHODS [17,20,21]

Feature              PCD          FPX           GIF/JPEG    Our Method
Functional Feature   PCD Applet   FPX Plug-in   No Applet   Applet
File Size            Large        Medium        Small       All
Color Capacity       Rich         Medium        Low         Rich
Bandwidth            Slow         Fast          Medium      Medium
Performance          Better       Good          Slow        Best
Zooming Control      Yes          Yes           No          Yes

The Java PCD applet allows the downloading of PCD-formatted images without any file conversion. However, the complicated installation of CGI software on the web server makes it difficult for novices to create web-based applications. Because of its consistent behavior across platforms, the Java applet is the dominant trend for the next generation of applications on the Internet. To enhance processing speed, Java requires code optimization, powerful computers, and faster data transmission protocols. In the meantime, PCD file sizes need to be reduced in order to speed up download times. The FlashPix technology provides a highly efficient means of transferring high-resolution medical images across networks. Once a software plug-in tool is installed into the Internet browser, it improves the data transmission rate for high-resolution images. It allows web authors greater flexibility in the types of large high-quality images to be included in on-line catalogs and albums, and it enhances medical education by making available high-resolution images that were previously available only in JPEG and GIF formats.

VII. CONCLUSION

In this paper, we have proposed a faster, real-time approach for transferring images across the web. We model the high-resolution image as a composition of a low-resolution image and a basis function. Simulation results have shown that it is better than the conventional approach with respect to both quality and transfer time. Transferring the low-resolution image over the web reduces the size of the image requested by the user and the number of packets required to complete the transmission by a considerable amount. This reduces the load on all parts of the network between the application host site and the end user, delivering a significant increase in performance, since the user receives the image data, and in turn the web page, at a faster rate. Our approach avoids the download time, and consequently limits the network traffic, inherent in current image-embedded web page browsing. We expect that this technique could become an integrated component of a comprehensive solution that optimizes and accelerates all aspects of application delivery. The proposed technique offers performance benefits for all users, especially those on limited-bandwidth links.
We are currently working on two extensions of this work: incorporating more detail in the image by increasing its resolution by a higher factor using further decomposition of the wavelet components, and extending our method to work better when more than one low-resolution image is available.

ACKNOWLEDGEMENT

This work is supported in part by the University of Pune under Grant BCUD/578. We would like to acknowledge the technical support of K. K. Wagh Institute of Engineering Education, Nashik, and Bharati Vidyapeeth, Pune (MS), India. We thank Dr. Gajanan Kharate for guidance.

REFERENCES

[1] P. Sementilli, B. Hunt, and M. Nadar, "Analysis of the limit to super-resolution in incoherent imaging," J. Opt. Soc. Amer. A, vol. 10, pp. 2265-2276, 1993.



[2] M. Elad and A. Feuer, "Restoration of a single superresolution image from several blurred, noisy and undersampled measured images," IEEE Transactions on Image Processing, vol. 6, no. 12, pp. 1646-1658, 1997.
[3] A. M. Darwish, M. S. Bedair, and S. I. Saheen, "Adaptive resampling algorithm for image zooming," IEE Proc. Vis. Image Signal Process., vol. 144, no. 4, pp. 207-212, Aug. 1997.
[4] M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies, "Image coding using wavelet transform," IEEE Trans. Image Processing, vol. 1, no. 2, pp. 205-220, Apr. 1992.
[5] A. Averbuch, D. Lazar, and M. Israeli, "Image compression using wavelet transform and multiresolution decompositions," vol. 5, no. 1, pp. 4-15, Jan. 1996.
[6] V. R. Algazi, G. E. Ford, and R. Potharlanka, "Directional interpolation of images based on visual properties and rank order filtering," vol. 4, pp. 3005-3008, 1991.
[7] S. W. Lee and J. K. Paik, "Image interpolation using adaptive fast B-spline filtering," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, vol. 5, pp. 177-180, 1993.
[8] S. Carrato, G. Ramponi, and S. Marsi, "A simple edge-sensitive image interpolation filter," in Proc. IEEE Int. Conf. Image Processing, vol. 3, pp. 711-714, 1996.
[9] K. Ratakonda and N. Ahuja, "POCS based adaptive image magnification," in Proc. IEEE Int. Conf. Image Processing, vol. 3, pp. 203-207, 1998.
[10] D. Calle and A. Montanvert, "Superresolution inducing of an image," in Proc. IEEE Int. Conf. Image Processing, vol. 3, pp. 232-235, 1998.
[11] M. G. Albanesi, "Wavelets and human visual perception in image compression," University of Pavia, Via Ferrata 1, I-27100 Pavia, Italy.
[12] M. T. Merino and J. Nunez, "Super-resolution of remotely sensed images with variable-pixel linear reconstruction," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 5, pp. 1446-1457, May 2007.
[13] G. K. Chantas, N. P. Galatsanos, and N. A. Woods, "Super-resolution based on fast registration and maximum a posteriori reconstruction," IEEE Transactions on Image Processing, vol. 16, no. 7, July 2007.
[14] P. Vandewalle, S. Susstrunk, and M. Vetterli, "A frequency domain approach to registration of aliased images with application to super-resolution," EURASIP Journal on Applied Signal Processing, vol. 2006, Article ID 71459, pp. 1-14, DOI 10.1155/ASP/2006/71459.
[15] G. H. Costa and J. C. M. Bermudez, "Statistical analysis of the LMS algorithm applied to super-resolution image reconstruction," IEEE Transactions on Signal Processing, vol. 55, no. 5, May 2007.
[16] Balaji N. and Kenneth E., "A computationally efficient super resolution algorithm for video processing using partition filters," IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 5, pp. 621-634, May 2007.
[17] W. S. Warner, "Recent developments in the USDA's 35mm aerial photography programme," J. Photographic Sci., vol. 44, no. 3, pp. 70-72, 1996.
[18] L. H. Liedholm, A. B. Linne, and L. Agelii, "The development of an interactive education program for heart failure patients: the Kodak Photo CD portfolio concept," Patient Education and Counseling, vol. 29, pp. 199-206, 1996.
[19] R. A. Older, "Using the Photo CD in academic radiology," Amer. J. Roentgenol., vol. 166, no. 2, pp. 453-456, 1996.
[20] G. K. Wallace, "The JPEG still picture compression standard," Commun. ACM, vol. 34, pp. 30-44, 1991.
[21] T. Hamid, "Wavelet-based recording stores data at high resolution," Vision Syst. Des., vol. 2, no. 9, pp. 12-13, 1997.


[22] S. C. Park, M. K. Park, and M. G. Kang, "Super-resolution image reconstruction: a technical overview," IEEE Signal Processing Magazine (special issue on super-resolution imaging), May 2003, pp. 21–36.
[23] H. Ur and D. Gross, "Improved resolution from sub-pixel shifted pictures," CVGIP: Graph. Models Image Process., vol. 54, pp. 181–186, 1992.
[24] S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, "Advances and challenges in super-resolution," Int. J. Imag. Syst. Technol., Wiley Periodicals, vol. 14, no. 2, pp. 47–57, Oct. 2004.
[25] F. M. Candocia and J. C. Principe, "Super-resolution of images based on local correlations," IEEE Transactions on Neural Networks, vol. 10, no. 2, March 1999, pp. 372–380.


AUTHORS' BIOGRAPHIES

Dr. Varsha Hemant Patil holds a doctorate in Computer Engineering and is currently working as Professor in the Computer Engineering Department, University of Pune, India. She has 19 years of teaching experience. Her book Discrete Mathematics (international edition) has been published by McGraw-Hill, and she has also authored books on Data Structures and the Theory of Computation. She has 25 papers to her credit.

Dr. Dattatraya S. Bormane holds a Ph.D. in Electronics Engineering and has 20 years of teaching experience. He is currently working as Principal of Rajarshi Shahu College of Engineering, Pune (MS, India). He has 35 papers to his credit. His areas of interest include digital signal processing, image processing, and pattern recognition.

Prof. Dr. Gajanan K. Kharate was born in Akola, India, on 1 July 1963. He received the B.E. in Electronics from Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Amravati University, in 1987, the M.E. in Electronics from Walchand College of Engineering, Sangli, Shivaji University, Kolhapur, in 1997, and the Ph.D. from the University of Pune in 2007. He is working as Principal of Matoshri College of Engineering and Research Centre, Nashik (MS), India, and has been teaching engineering since 1987. He is Chairman of the Board of Studies in Electronics Engineering, University of Pune, and the author of five books. His recent work concentrates on developing wavelet-based image compression algorithms. He has 20 conference and journal papers to his credit.


Snehal M. Kamlapur is a postgraduate in Computer Engineering from the University of Pune, India. She has 11 years of teaching experience and is currently working as Assistant Professor in the Computer Engineering Department of K. K. Wagh Institute of Engineering Education and Research, Nashik (MS, India). She is the author of books titled Artificial Intelligence and Distributed Systems. She has 12 papers to her credit.
