Edge-Preserving Texture Suppression Filter Based on Joint Filtering Schemes
Zhuo Su, Student Member, IEEE, Xiaonan Luo, Zhengjie Deng, Yun Liang, and Zhen Ji, Member, IEEE
Abstract—Obtaining a texture-smoothing and edge-preserving filtered output is significant for image decomposition. Although edges and textures show a salient difference in human vision, automatically distinguishing them is a difficult task, because they have similar intensity differences and gradient responses. The state-of-the-art edge-preserving smoothing (EPS) based decomposition approaches struggle to obtain a satisfactory result. We propose a novel edge-preserving texture suppression filter that exploits the joint bilateral filter as a bridge to achieve both texture smoothing and edge preserving. We develop the iterative asymmetric sampling and the local linear model schemes to produce a degenerative image that suppresses the texture, and apply an edge correction operator to achieve edge preserving. An efficient accelerated implementation is introduced to improve the performance of the filtering response. The experiments demonstrate that our filter produces satisfactory outputs with both texture-smoothing and edge-preserving properties when compared with the results of other popular EPS approaches in signal, visual, and time analysis. Finally, we extend our filter to a variety of image processing applications.
Index Terms—Degenerative scheme, edge preserving, image smoothing, oscillation, texture suppression.
I. INTRODUCTION
In image filtering, how to distinguish edges, textures, and smooth transitions is challenging and significant to many applications. Classical linear filtering (e.g., Gaussian filtering) can smooth an image effectively but causes serious edge blurring [1], see Fig. 1(b). The state-of-the-art edge-preserving smoothing (EPS) approaches [2], [3], [5], [8] emphasize the importance of edge sharpness and have been successfully applied to many applications, e.g., tone mapping, detail manipulation, non-photorealistic rendering, etc. However, if the input image contains various textures, these EPS approaches mistreat some textures as edges and preserve them instead of smoothing them, see Fig. 1(c)-(e). Paris et al. [9] pointed out that these approaches usually depend on the variance of pixel intensity [10], the gradient magnitude [3], or extreme values [11] to preserve the edge. Because this problem produces serious interference in specified applications, suppressing the textures in EPS filtering is necessary. Subr et al. and Farbman et al. exploited the weighted least squares (WLS) optimization framework [3], and successively constructed the envelope of local extrema [12] and the diffusion map [13] to solve this problem and further demonstrated its applicability.
In this paper, we propose a novel edge-preserving texture suppression filter. It takes the joint bilateral filter (JBF) [14], [15] as a bridge, combining different filtering schemes, to suppress the textures to the largest extent while retaining the edge sharpness. Our filtered output has both properties of texture smoothing and edge preserving. In the following, we summarize the state-of-the-art EPS approaches and give an overview of our solution.
Manuscript received July 19, 2012; accepted September 13, 2012. Date of publication December 28, 2012; date of current version March 13, 2013. This work was supported by the NSFC-Guangdong Joint Fund (U0935004, U1135003), the National Key Basic Research and Development Program of China 973 (2013CB329505), the National Key Technology R&D Program (2011BAH27B01), the National Science Fund of China (61232011, 61262050, 61202293), and the Scholarship Award for Excellent Doctoral Student granted by the Ministry of Education 2012. The associate editor coordinating the review of this manuscript and approving it for publication was Eckehard G. Steinbach.
Z. Su and X. Luo are with the National Engineering Research Center of Digital Life, State-Province Joint Laboratory of Digital Home Interactive Applications, School of Information Science & Technology, Sun Yat-sen University, Guangzhou, China (e-mail: [email protected]; [email protected]).
Z. Deng is with the School of Information Science and Technology, Hainan Normal University, Haikou, China (e-mail: [email protected]).
Y. Liang is with the School of Information, South China Agricultural University, Guangzhou, China (e-mail: [email protected]).
Z. Ji is with the Department of Computer Science, Shenzhen University, Shenzhen, China (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TMM.2012.2237025
A. Related Work
The bilateral-based filter was first presented by Tomasi and Manduchi [10]; it is a classical and effective edge-preserving smoothing filter. A popular formulation constructs the coefficients of a Gaussian low-pass filter according to the spatial and range distances of pixels, and then composes a non-linear filtering mask to implement a translation-variant spatial filter [9]. Since then, several extensions of the bilateral filter have been presented. Choudhury et al. [16] presented a trilateral filter to improve the filtering effect in high-contrast cases. Eisemann et al. [15] and Petschnigg et al. [14] successively introduced the cross/joint bilateral filter by modifying the input of the range function. Takeda et al. [17] presented a high-order bilateral filter by applying kernel regression theory. And Baek et al. [8] summarized the bilateral-based filter as a spatially varying high-dimensional Gaussian filter. On the other hand, considering that the naive implementation of the bilateral filter is time consuming, several accelerating methods have been presented. Durand and Dorsey [18] accelerated BF by using a piecewise-linear approximation in the intensity domain and appropriate sub-sampling. Pham and Vliet [19] exploited the separability of the Gaussian to apply 1D BF to each spatial direction. Paris et al. [2], [20], [21] developed a data structure called the bilateral grid (BG) and took advantage of signal processing results to accelerate the filtering process. Adams et al. [22], [23] successively presented two novel data structures based on the Gaussian KD-tree and the permutohedral lattice to
Fig. 1. The performance of the state-of-the-art image smoothing approaches applied to the Barbara image, with two magnified local oscillation regions: a portion of the scarf and a portion of the tablecloth. (a) Original Barbara image. (b) Gaussian filtering [1]. (c) Hybrid median filtering [1]. (d) Paris's bilateral filtering [2]. (e) Farbman's weighted least squares filtering [3]. (f) Chao's improved anisotropic diffusion bilateral filtering [4]. (g) Kass's local mode filtering [5]. (h) Fattal's edge-avoiding wavelets filtering [6]. (i) Xu's L0 gradient minimization filtering [7]. (j) Our JIAS filtering. The definition of the parameters is clarified in the corresponding references, and a detailed explanation is presented in Section IV-C.
further decrease the time and memory cost. Weiss [24] applied 3D histograms, described by a square-box spatial kernel, for acceleration. Yoshizawa et al. [25] used the Fast Gauss Transform (FGT) to accelerate the computation. Porikli [26] and Yang et al. [27], [28] further improved the existing BF and developed a variety of spatial and range kernels. Recently, some real-time implementations were proposed with recursive approximation [29], the domain transform [30], and adaptive manifolds [31].
The optimization-based filter is an effective edge-preserving smoothing approach. Farbman et al. [3] proposed an edge-preserving multi-scale decomposition based on the weighted least squares optimization framework. Bhat et al. [32] formulated the edge-preserving smoothing problem in a variational framework and solved a 2D version of the screened Poisson equation. Subr et al. [12] considered an oscillatory model of texture regions and removed image details by identifying and fitting envelopes to local extreme intensities. Subsequently, Farbman et al. [13] introduced the idea of replacing Euclidean distances with diffusion distances, which are calculated through edge-aware diffusion maps. Bhat et al. [33] further summarized a gradient-domain solution framework called GradientShop, which is based on weighted least squares optimization as well. He et al. [34] presented an explicit filter with a guided image based on a local linear model. And recently, Xu et al. [7] presented a novel image smoothing framework by solving an L0 gradient minimization.
The diffusion-based filter is known as the partial differential equation based filter, and anisotropic diffusion [35] is a famous
approach for edge-preserving smoothing, which considers the local gradient information of each pixel in the image. Weickert et al. [36] proposed two efficient numerical approaches to accelerate the diffusion, which are in accordance with the fully discrete scale-space framework and based on an additive operator splitting (AOS). Chao et al. [4], [37] proposed a diffusion model incorporating both the local gradient and the gray-level variance to preserve edges and fine details.
The histogram-based edge-preserving filter was first introduced by Weijer and Boomgaard [11]. They proposed a local mode filtering motivated from both the local histogram with a tonal scale and the robust statistics viewpoint. Felsberg et al. [38] presented an approach using B-spline look-up tables for the histogram-based smoothing filter, called channel smoothing. Subsequently, Kass and Solomon [5] provided another smoothed local histogram filter for edge-preserving smoothing.
The decomposition-based filter is a novel approach for multi-scale edge-preserving smoothing. Fattal [6] presented a new family of second-generation wavelets constructed by a robust data-prediction lifting scheme. Paris et al. [39] presented a local Laplacian filter for edge-aware image processing on the basis of the Laplacian pyramid [1].
However, most of the above edge-preserving smoothing approaches share the drawback that they mistreat some textures as edges, resulting in outputs that are neither fully preserved nor fully smoothed. We choose some state-of-the-art approaches and demonstrate their performance in Fig. 1.
B. Overview of Our Approach
As illustrated in Fig. 1, the state-of-the-art EPS approaches struggle to produce a satisfactory filtered output if an image contains textures. However, considering the result of Gaussian filtering (GF), we can see an obvious smoothing of the texture. That is, although linear filtering produces edge blurring, it performs a sound texture suppression at the same time. In other words, if the texture-smoothing characteristic of linear filtering and the edge-preserving characteristic of an EPS approach can be combined, we can obtain both properties of texture smoothing and edge preserving. However, edge preservation and texture suppression are incompatible operations, so a simple additive combination is not feasible. In our opinion, by exploiting the framework of the joint bilateral filter (JBF) [9], we can achieve our purpose elegantly. The pipeline of our approach is shown in Fig. 2.
Fig. 2. The pipeline of our approach, including three major stages: image degeneration, edge correction, and JBF filtering. Essentially, taking advantage of the JBF framework [9] as a bridge, we obtain both properties of texture-smoothing and edge-preserving.
JBF imports a joint image to improve the filtered effect. Because of this characteristic, achieving both properties of texture smoothing and edge preserving becomes feasible, but how to construct a suitable joint image is a key problem. On the one hand, to achieve the texture suppression, we develop two schemes based on iterative asymmetric sampling and a local linear model. On the other hand, to prevent the edge degeneration caused by texture suppression, we adopt a gradient minimization to correct the edge. In addition, we exploit Paris et al.'s high-dimensional linear convolution [2] to improve the computational efficiency. In Section IV, we discuss the parameter settings and their relationship. Through experiments with signal, visual, and time analysis, we demonstrate that our approach achieves this purpose even better than previous EPS approaches. Furthermore, some image applications are presented in our experiments as well, including edge detection, detail enhancement, texture copy, tone mapping, non-photorealistic rendering, etc.
Our main contributions are as follows:
1) Propose a novel filter with both properties of texture smoothing and edge preserving.
2) Develop the iterative asymmetric sampling and local linear model degenerative schemes to suppress the texture.
3) Exploit postprocessing operators for edge correction and filtering acceleration.
4) Demonstrate the performance of our edge-preserving texture suppression filter and illustrate applications of the filtered results.
II. JBF FRAMEWORK WITH JOINT FILTERING SCHEMES
The joint bilateral filter (JBF) is an extension of the bilateral filter (BF), introduced in [14], [15]. The major distinction between BF and JBF is that the latter takes a specified image, called the joint image, related to the input image, as the input data of the range function of BF. The formulation is the following
$$J_p = \frac{1}{W_p}\sum_{q\in\Omega_p} G_{\sigma_s}(\|p-q\|)\,G_{\sigma_r}(|\tilde{I}_p-\tilde{I}_q|)\,I_q, \qquad W_p = \sum_{q\in\Omega_p} G_{\sigma_s}(\|p-q\|)\,G_{\sigma_r}(|\tilde{I}_p-\tilde{I}_q|) \tag{1}$$
where $\mathcal{S}$ and $\mathcal{R}$ denote the spatial domain and the range domain of the image space, respectively. The pixel $p$ is the center of the neighborhood $\Omega_p \subset \mathcal{S}$, and $q \in \Omega_p$ is a neighbor pixel with $\|p-q\| \le r$, where $r$ denotes the radius of the spatial neighborhood. In addition, both the spatial and the range filtering functions are constructed by the Gaussian basis function $G_\sigma$, where the spatial $\sigma_s$ controls the filtering smoothness and the range $\sigma_r$ controls the sensitivity of the edge preservation. Because of the flexibility in choosing the joint image $\tilde{I}$, JBF (1) is an ideal framework for achieving both properties of texture smoothing and edge preserving. He et al. [34] proposed a guided image filter (GuiF) whose performance is similar to JBF. Ideally, constructing suitable weighted constraints in an optimization framework [3], [12] could also achieve the aforementioned task. However, in our experiments, we found that the approximate gradient fitting, which is more likely to lead to a global gray-level deviation, is not comparable to JBF; see the gray arrow in Fig. 7(a).
From the viewpoint of image decomposition, an image can be decomposed into low- and high-frequency signals, i.e., $I = I_{low} + I_{high}$. The low-frequency signal can be produced by applying a linear filter to the original image in the spatial domain, i.e., $I_{low} = f * I$, where $*$ is the spatial convolution operator. Since the textures are high-frequency signals, texture suppression is a process of smoothing the high-frequency signal, such as removing all high-frequency signals. After filtering, the output low-frequency signal is called the degenerative image (corresponding to the original image), and the textures are suppressed as a consequence. Considering the degenerative image as a specified joint image, we focus on its construction in the following.
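For illustration only, the joint filtering of (1) can be written compactly in its brute-force form. The following Python/NumPy sketch is not the accelerated implementation used in our experiments (that is built on the bilateral grid, Section III-B, in MATLAB); the function name and default parameter values are illustrative, and a single-channel float image in [0, 1] is assumed.

import numpy as np

def joint_bilateral_filter(I, J, radius=5, sigma_s=3.0, sigma_r=0.1):
    # Brute-force JBF: spatial weights on pixel distance, range weights on the
    # joint image J, values taken from the input image I, as in (1).
    H, W = I.shape
    I_pad = np.pad(I, radius, mode='reflect')
    J_pad = np.pad(J, radius, mode='reflect')
    out = np.zeros_like(I)
    norm = np.zeros_like(I)
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(x**2 + y**2) / (2.0 * sigma_s**2))      # G_{sigma_s}
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            I_shift = I_pad[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            J_shift = J_pad[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            w = spatial[dy + radius, dx + radius] * \
                np.exp(-(J - J_shift)**2 / (2.0 * sigma_r**2))  # G_{sigma_r} on the joint image
            out += w * I_shift
            norm += w
    return out / norm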
Fig. 3. (a) Input image. (e) The filtered results of JIAS (top) and JLLM (bottom) corresponding to the two marked regions in (a). From (b) to (d) are the results of the JIAS approach in different steps: (b) the degenerative image constructed by the IAS scheme; (c) the edge correction of the result of IAS; (d) the final result of JIAS produced by JBF. From (f) to (h) are the corresponding results of the JLLM approach. Note these magnified local subfigures. In the degenerative images (b) and (f), the textures are smoothed effectively, but the edges are blurred. The edge correction improves this, but some block-like effects appear in (c) and (g). Through the final JBF operator, we obtain a sound tradeoff between texture smoothing and edge preserving.
A. Iterative Asymmetric Sampling Degenerative Scheme
Signal sampling is an accelerating scheme usually applied in EPS filtering, e.g., Paris's bilateral filtering [2] and local Laplacian filter [39], Chen's bilateral grid [21], Fattal's multiscale decomposition [40], etc. Besides acceleration, the sampling operation can control the amount of information. Intuitively, image downsampling is a process of reducing information, and upsampling one of adding it. From the viewpoint of signal sampling, restoring an image from low resolution to high resolution is an ill-conditioned problem that merely regains the salient edges. Although the edge and the texture behave similarly in the variance of intensity or gradient, textures usually present a gathering of variance in local regions. As illustrated by the gray signal of Fig. 7, the obvious difference between edges and textures is that textures have salient oscillations. We can conclude that the textures cannot be restored if the sampling scope is over the range of the oscillations. Exploiting this conclusion, we develop an asymmetric sampling approach. For flexibility, we introduce an iteration similar to Lev and Zucker's approach [41]. Therefore, the course of iterative asymmetric sampling (IAS) is formulated as
$$D^{(t)} = f_\sigma * \big(\uparrow_{1/s}(\downarrow_s D^{(t-1)})\big), \quad t = 1,\dots,n, \quad D^{(0)} = I \tag{2}$$
where $\uparrow_{1/s}$ and $\downarrow_s$ are the discrete up- and down-sampling operations with sampling rates $1/s$ and $s$, and $n$ denotes the number of iterations. $f_\sigma$ is a smoothing operator used to improve the texture suppression and anti-aliasing. In our implementation, we apply bicubic and bilinear interpolation as the up- and down-sampling operations, respectively, and $f_\sigma$ is the same as the spatial filtering kernel $G_{\sigma_s}$ in JBF. Finally, the output $D = D^{(n)}$ is taken as the degenerative image. The pseudo code of this scheme is given in Algorithm 1, and the outputs are shown in Fig. 3(b).

Algorithm 1: Iterative Asymmetric Sampling Degenerative Scheme
Input: $I$: image, $n$: number of iterations
Output: $D$: degenerative image
1: $D \leftarrow I$  (initialize the sampling image)
2: for $t = 1$ to $n$ do
3:   downsample the scalar image $D$ by rate $s$ (Gaussian pyramid)
4:   upsample the low-resolution image back to the original size
5:   smooth with $f_\sigma$ and update the degenerative image $D$
6: end for
7: take $D$ as the final output image
8: return $D$
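As a concrete illustration of Algorithm 1, the sketch below implements the iterative asymmetric sampling idea with scikit-image's rescaling and Gaussian smoothing. The sampling rate, smoothing width, interpolation orders, and the function name are illustrative choices, not the exact settings of our MATLAB implementation.

import numpy as np
from skimage.transform import rescale
from skimage.filters import gaussian

def ias_degenerate(I, n_iter=3, rate=0.5, sigma=2.0):
    # Iterative asymmetric sampling: repeated down-/up-sampling so that local
    # oscillations (textures) cannot be restored, followed by a smoothing pass.
    D = I.astype(np.float64)
    h, w = D.shape
    for _ in range(n_iter):
        low = rescale(D, rate, anti_aliasing=True)            # downsample: reduce information
        up = rescale(low, (h / low.shape[0], w / low.shape[1]))  # upsample back to original size
        D = gaussian(up, sigma=sigma)                          # smoothing operator f_sigma
    return D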
B. Local Linear Model Degenerative Scheme
Although IAS can achieve texture suppression, it is time consuming as the number of iterations or the image size increases (see Table II). He et al. [34] used the linear regression of local patches and a box-filter implementation to improve the filtering efficiency. Here, we exploit their implementation to construct the degenerative image. For each pixel $k$, we obtain the degenerative image by setting all the pixel values to zero except the pixels with indices $i$ in the patch $\omega_k$, and those pixels are defined as the following
$$D_i = a_k I_i + b_k, \quad \forall i \in \omega_k \tag{3}$$
According to the solution of the linear regression, we obtain the explicit expressions of the linear coefficients $a_k$ and $b_k$ [34]
$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \epsilon}, \qquad b_k = (1 - a_k)\,\mu_k \tag{4}$$
where $\sigma_k^2$ denotes the variance of the pixel intensities in the patch $\omega_k$, $|\omega|$ denotes the number of pixels of the patch, and $\mu_k$ denotes the mean value of $I$ restricted to the patch $\omega_k$. The smoothing factor $\epsilon$ is used to control the texture suppression. The local linear model (LLM) can be extended to the entire image by averaging the coefficients of all patches that cover a pixel, $\bar{a}_i = \frac{1}{|\omega|}\sum_{k:\,i\in\omega_k} a_k$ and $\bar{b}_i = \frac{1}{|\omega|}\sum_{k:\,i\in\omega_k} b_k$, which overcomes the overlap of the patches. Then the joint image is $\tilde{I} = D$ with $D_i = \bar{a}_i I_i + \bar{b}_i$. The pseudo code of this scheme is given in Algorithm 2, and the outputs are shown in Fig. 3(f).
Algorithm 2: Local Linear Model Degenerative Scheme
Input: $I$: image, $\epsilon$: regularization factor, $r$: filter kernel radius
Output: $D$: degenerative image
1: for all pixels $k$ in $I$ do
2:   $\mu_k \leftarrow$ mean of $I$ by box filter in the kernel $\omega_k$
3:   $\nu_k \leftarrow$ mean of $I \cdot I$ by box filter in the kernel $\omega_k$
4:   $\sigma_k^2 \leftarrow \nu_k - \mu_k^2$  (variance of the pixel intensity in the kernel $\omega_k$)
5:   $a_k \leftarrow \sigma_k^2 / (\sigma_k^2 + \epsilon)$  (linear coefficient)
6:   $b_k \leftarrow (1 - a_k)\,\mu_k$  (linear coefficient)
7: end for
8: for all pixels $i$ in $I$ do
9:   $\bar{a}_i \leftarrow$ mean of all $a_k$ with $i \in \omega_k$
10:  $\bar{b}_i \leftarrow$ mean of all $b_k$ with $i \in \omega_k$
11:  $D_i \leftarrow \bar{a}_i I_i + \bar{b}_i$  (local linear model for the input image)
12: end for
13: return $D$
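The scheme of Algorithm 2 maps directly onto box filtering. The following sketch uses SciPy's uniform filter and is the standard self-guided form of the local linear model; the default radius and smoothing factor are illustrative values only.

import numpy as np
from scipy.ndimage import uniform_filter

def llm_degenerate(I, radius=8, eps=0.04):
    # Local linear model degenerative scheme: per-patch linear regression of I
    # against itself, followed by averaging the coefficients over overlapping patches.
    I = I.astype(np.float64)
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size=size)          # box-filter mean of I
    mean_II = uniform_filter(I * I, size=size)     # box-filter mean of I^2
    var_I = mean_II - mean_I**2                    # per-patch intensity variance sigma_k^2
    a = var_I / (var_I + eps)                      # linear coefficient a_k, (4)
    b = (1.0 - a) * mean_I                         # linear coefficient b_k, (4)
    mean_a = uniform_filter(a, size=size)          # average coefficients over patches
    mean_b = uniform_filter(b, size=size)
    return mean_a * I + mean_b                     # D_i = a_bar_i * I_i + b_bar_i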
III. EDGE CORRECTION AND FILTERING ACCELERATION
The textures can be suppressed by the degenerative schemes; however, the side effect of edge blurring appears as well. To reduce the edge degeneration, we adopt a gradient-reconstructing operation to correct the salient sharp edges. In addition, JBF is a nonlinear filter whose brute-force implementation is time consuming; therefore, we exploit the bilateral grid implementation [2] to accelerate our filtering process.
A. Edge Correction Using Gradient Minimization
Since the edges are blurred in the degenerative procedure, to obtain the edge-preserving effect, the edges of the degenerative image that correspond to the original input image should be corrected to prevent the blurring. In terms of visual effect, a sharp variance of neighboring pixels reflects the most obvious edge feature. Therefore, we try to reconstruct the sharp edges in the degenerative image to overcome the serious distortions caused by degeneration. But reconstructing the step edges is a challenging problem. Recently, Xu et al. [7] proposed an image smoothing approach via L0 gradient minimization (L0GM), which counts the non-zero gradients and treats edge-preserving smoothing as an L0-norm regularized optimization problem. On its own, this approach is not suitable for texture suppression because it merely treats the non-edge regions as constants; as illustrated in Fig. 1(i), that would lead to an obvious intensity offset. However, inspired by Xu's approach, we can take advantage of the L0-norm to describe the feature of sharp edges and use it to reconstruct the salient edges in the degenerative image. Considering the solved image $S$ as a 2D spatial signal, the count of sharp edges is defined as the following [7]
$$C(S) = \#\{p : |\partial_x S_p| + |\partial_y S_p| \neq 0\} \tag{5}$$
where $\#\{\cdot\}$ denotes the operator counting the number of pixels $p$ with non-zero gradients, that is, the L0-norm of the gradient, and $\partial_x$ and $\partial_y$ denote the $x$- and $y$-direction first-order forward difference operators, respectively. According to (5), we minimize the following optimization problem [7]
$$\min_S \sum_p (S_p - D_p)^2 + \lambda\, C(S) \tag{6}$$
where the quadratic term is used to preserve the similarity of the image structure to the degenerative image $D$, and $\lambda$ is used to control the number of step edges: the smaller $\lambda$ is, the more step edges are contained in the output. Xu et al. give a special alternating optimization strategy with half-quadratic splitting [42] to approximately solve the above equation [7]. We demonstrate the effectiveness of edge correction based on L0GM in Fig. 4. As illustrated in Fig. 4(d), compared with the 1D signals of the original step edge, the degeneration, and the correction, the signal of the corrected result regains the sharpness of the edge.
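For reference, the sketch below follows the publicly described half-quadratic splitting scheme of [7] for the L0 minimization in (6), written in NumPy for a single-channel image; the parameter defaults (lambda, kappa, beta_max) and the function name are illustrative, not the exact values used in our pipeline.

import numpy as np

def l0_edge_correct(D, lam=0.02, kappa=2.0, beta_max=1e5):
    # Alternating minimization of (6) with auxiliary gradient variables (h, v).
    S = D.astype(np.float64)
    N, M = S.shape
    # Frequency responses of the circular forward-difference kernels.
    fx = np.zeros((N, M)); fx[0, 0], fx[0, -1] = -1.0, 1.0
    fy = np.zeros((N, M)); fy[0, 0], fy[-1, 0] = -1.0, 1.0
    denom_grad = np.abs(np.fft.fft2(fx))**2 + np.abs(np.fft.fft2(fy))**2
    F_D = np.fft.fft2(S)
    beta = 2.0 * lam
    while beta < beta_max:
        # Subproblem 1: sparse gradient field, hard-thresholded by lam / beta.
        h = np.roll(S, -1, axis=1) - S
        v = np.roll(S, -1, axis=0) - S
        mask = (h**2 + v**2) < lam / beta
        h[mask] = 0.0
        v[mask] = 0.0
        # Subproblem 2: quadratic problem solved in the Fourier domain.
        div = (np.roll(h, 1, axis=1) - h) + (np.roll(v, 1, axis=0) - v)
        S = np.real(np.fft.ifft2((F_D + beta * np.fft.fft2(div)) /
                                 (1.0 + beta * denom_grad)))
        beta *= kappa
    return S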
TABLE I THE PARAMETER ASSIGNMENTS ARE RECORDED CORRESPONDING TO THE FILTERED OUTPUTS IN FIG. 8
Fig. 4. Edge correction. (a) Input. (b) Edge degeneration (ED). (c) Result of Edge Correction (EC). (d) 1D signal comparison. The degenerative edge is corrected to recover the sharpness comparable to the original input.
B. Acceleration Using High Dimensional Linear Convolution
In our implementation, we consider applying the bilateral grid (BG) [2], [21] to accelerate JBF, for it can be efficiently implemented on the GPU and works seamlessly for color images. In this approach, signal sampling is exploited to convert the two-dimensional nonlinear BF into a high-dimensional linear convolution, which accelerates the filtering process. By adding an additional dimension for the range value, a new coordinate $(p_x, p_y, \tilde{I}_p)$ is defined for each pixel $p$ of the input image $I$. Therefore, (1) can be rewritten as the following three-dimensional linear convolution
Fig. 5. Parameter relationship for (a) JIAS and (b) JLLM. We measure the filtered outputs by Wang's SSIM [43].
$$(\tilde{w}\,\tilde{i},\ \tilde{w}) = g_{\sigma_s,\sigma_r} \otimes (w\,i,\ w) \tag{7}$$
where $\otimes$ is defined as the high-dimensional separable Gaussian convolution, and $\tilde{w}\,\tilde{i}$ and $\tilde{w}$ denote the 3D weighted intensity function and the weight function in the product space $\mathcal{S}\times\mathcal{R}$, respectively. $w$ is the 2D normalized weight function defined in $\mathcal{S}$. By the nonlinear slicing and division operations [2], the output of JBF with the degenerative image $\tilde{I}=D$ is the following
$$J_p = \frac{(\tilde{w}\,\tilde{i})(p, \tilde{I}_p)}{\tilde{w}(p, \tilde{I}_p)} \tag{8}$$
IV. PERFORMANCE AND DISCUSSION
In this section, we first discuss the parameter settings in our approaches. Then we analyze the relationship with the state-of-the-art EPS approaches and give comparisons for the texture suppression problem. We demonstrate the effectiveness of our joint iterative asymmetric sampling (JIAS) and joint local linear model (JLLM) approaches in signal performance, visual effect, and time efficiency. All the experiments are run on a PC with an Intel Core i5-2450M 2.5 GHz CPU, an NVIDIA 610M GPU, 4 GB of DDR3 RAM, and MATLAB R2010a.
A. Parameters and Relationship
Our joint filtering schemes involve the JBF, IAS, LLM, and L0GM approaches, and each operation has its corresponding
Fig. 6. Comparison with median filters. (a) Pinwheel image. (b) Photoshop median filtering. (c) Kass's median filtering [5]. (d) Our JIAS filtering.
parameters. How to assign the parameters appropriately is a key problem in our JIAS and JLLM approaches. Firstly, we adopt an adaptive scheme in terms of the image size to determine the radius $r$ of the spatial filter in JBF, IAS, and LLM; in our experiments, this radius is evaluated by rounding off a value determined by the image size. The corresponding spatial $\sigma_s$ in JBF and IAS is specified in terms of $r$ as well. The default sampling rate is $s = 1/2$, according to the pattern of the Gaussian pyramid. In the EC stage, we use the value of $\lambda$ recommended by Xu's implementation [7] for natural images. After the above adjustment, we reduce the parameters of JIAS and JLLM to the triples $(n, \lambda, \sigma_r)$ and $(\epsilon, \lambda, \sigma_r)$: $n$ and $\epsilon$ are used to smooth the texture, $\lambda$ is used to correct the edge, and $\sigma_r$ is a balance factor between texture smoothing and edge preserving. To analyze the relationship of the parameters, we take SSIM [43] to measure the filtered results of JIAS and JLLM with 100 groups of parameters, respectively. The measurement records are shown in Fig. 5. In our experiments, we can obtain a satisfactory output by adjusting $n$ or $\epsilon$ only. Fig. 3 shows the output of each stage with the corresponding parameters.
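As an illustration of how such a parameter study can be scripted, the snippet below evaluates SSIM over a grid of smoothing and edge-correction values with scikit-image; the grid values and the filter callback are placeholders for whichever pipeline (JIAS or JLLM) is being evaluated, not our measurement code.

import numpy as np
from skimage.metrics import structural_similarity

def parameter_sweep(I, filter_fn, smooth_values, lambda_values):
    # SSIM between the input and the filtered output for each parameter pair,
    # analogous to the records plotted in Fig. 5.
    scores = np.zeros((len(smooth_values), len(lambda_values)))
    for i, s in enumerate(smooth_values):
        for j, lam in enumerate(lambda_values):
            out = filter_fn(I, s, lam)            # e.g. a JIAS- or JLLM-style pipeline
            scores[i, j] = structural_similarity(
                I, out, data_range=I.max() - I.min())
    return scores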
Fig. 7. Comparison of the 256th row pixel intensity of images in Fig. 3. (a) Plots of 1D signal of input image (gray), WLS (green), IAS (blue), corrected edge (purple), and our JIAS approach (red). (b) Compare the outputs of BF (green) and JLLM (red). Through the magnified windows, we note that our JIAS and JLLM approaches have a sound performance in both the texture suppression and edge preservation.
Fig. 8 shows the filtered outputs corresponding to the parameter assignments in Table I.
Compared with the state-of-the-art approaches [2], [3], [7], our JIAS and JLLM adopt joint filtering schemes that use JBF as a bridge to integrate the advantages of various approaches. For edge preserving, our approach integrates the intensity estimation of BF and the gradient counting of L0GM to be aware of the edge; for texture smoothing, both the IAS and LLM schemes are intended to construct band-limited low-pass filters. We distinguish the edge and the texture by their geometric characteristics in the spatial domain, rather than by the scale of the intensity or gradient variance. In our opinion, although both the edge and the texture are intensity variances, they have an obvious distinction in spatial distribution: the edge can be represented as a sole intensity variance, while the texture can be represented as a neighborhood of dense intensity variances in the spatial domain; for example, the scarf and the clothes of Barbara in Fig. 1, and the 1D signal in Fig. 7. Either the sampling operation of IAS or the local fitting of LLM can suppress the texture in the spatial domain, that is, the texture is removed by the change of spatial distances among texture elements. The removed textures cannot be restored, but the salient edge can be corrected from the blurred edge structure (see Fig. 4). On the basis of this property, we can adjust the parameter settings to distinguish the texture from the edge and to balance texture smoothing and edge preserving. Fig. 1 shows the comparison of various approaches and demonstrates that our approach assembles their advantages.
The median filter (MF) is a non-linear filter with sound noise reduction [24]. But in dealing with texture, it produces artifacts. Kass et al. [5] pointed out this problem in their work and proposed an improved median filter to overcome this defect; however, it produces apparent gray-level transitions.
We compare our JIAS output with Photoshop's MF and Kass's MF in Fig. 6.
B. Signal Analysis
Fig. 7 provides an explicit comparison with the state-of-the-art approaches and reflects the advantages of our approaches. Usually, the step-like shape is the salient edge, the oscillation is the texture, and the slope is the intensity transition. The 512×512 Barbara image is chosen as the test image. Because its 256th pixel row contains a sharp step, multi-scale oscillations, and a slope signal, it is suitable for analyzing the filtering performance. The original signal is plotted as the gray curve in Fig. 7. The degenerative result (blue), the corrected result (purple), and the final result (red) of JIAS and JLLM are plotted as well, and their corresponding images are presented in Fig. 3. We magnify each subfigure in detail.
Fig. 7(a) shows the comparison of our JIAS and WLS. Both WLS and our JIAS have sound texture suppression in Block a1, but WLS has an intensity offset problem. Note that the result of WLS is slightly lower than that of JIAS and has an obvious deviation at the intensity transition (gray arrow). This variance would change the tone appearance visually (see Fig. 1(e)) and may cause unexpected influence in some applications, e.g., tone mapping (see Fig. 13(c)). The comparable edge-preserving effect of WLS and JIAS is presented in Block a2. But in Block a3, the result of WLS exhibits unexpectedly incomplete smoothing of the strong oscillation. By contrast, the result of the JIAS approach achieves a sound texture smoothing. Fig. 7(b) shows the comparison of our JLLM and BF. The result of BF presents obvious edge preserving in Block b2, but the textures which should be smoothed are mistreated as edges
Fig. 8. Comparison of the results of the state-of-the-art EPS approaches with those of our JLLM and JIAS on different types of images (first row): artifacts (a), textures (b), gray (c), and color natural images (d). We extract eight samples as inputs from these four types of images (two for each) and mark their positions by red and blue boxes in the corresponding images. In the extracted subfigures, from top to bottom, are the results of the Gaussian filter (GF) [1], median filter (MF) [1], bilateral filter (BF) [10], weighted least squares filter (WLS) [3], guided filter (GuiF) [34], improved anisotropic diffusion (IAD) [4], L0 gradient minimization (L0GM) [7], and our JIAS and JLLM.
Fig. 9. The visual performance with different parameter settings, emphasizing the effect of varying the smoothing and edge-correction parameters. The results of JIAS are shown in the top row, (a)–(d), and the results of JLLM are shown in the bottom row, (e)–(h). We can see various degrees of smoothing in Lena's hair and well-preserved edges at the shoulders.
in Blocks b1 and b3. This mistreated output is usually difficult to manipulate and produces irregular ruins visually, as illustrated in Fig. 1(d). With our LLM scheme, the texture is suppressed, along with an obvious edge blurring. However, with the edge correction, our JLLM achieves texture smoothing as well as edge preserving.
C. Visual Analysis
To analyze the visual performance of the state-of-the-art approaches and our approaches, we choose the Gaussian filter (GF), the median filter (MF), Paris's bilateral filter (BF) [2], Farbman's weighted least squares filter (WLS) [3], He's guided filter (GuiF) [34], Chao's improved anisotropic diffusion (IAD) [4], Kass's local mode filtering (LMF) [5], Fattal's edge-avoiding wavelets filtering (EAW) [6], and Xu's L0 gradient minimization (L0GM) [7], and present the comparisons in Fig. 1, Fig. 8, and Fig. 9.
In Fig. 1, we choose the Barbara image to test the comparative approaches individually because it contains complicated textures and edges. In Fig. 1(b), the Gaussian filter [1] produces a sound texture smoothing but causes serious edge blurring. In Fig. 1(c), the median filter [1] produces a strange artifact similar to that in Fig. 6(b). From (d) to (i), the textures are smoothed incompletely, although the edges are well preserved. Note that WLS, IAD, LMF, and L0GM produce over-flattened regions in the filtered output. Our JIAS balances texture smoothing and edge preserving and avoids the defects of the above comparative approaches.
We test 10 samples with a variety of groups of parameters in Fig. 8. The tested samples are divided into 4 types, including artifacts,
textures, gray, and color natural images. Fig. 8 illustrates the filtered results of the extracted subfigures, and Table I lists all the corresponding parameter assignments.
The artifacts in Fig. 8(a) are made up of noise (top-left), a stripe with gray transition (top-right), a chessboard (bottom-left), and a ripple (bottom-right). GF, IAD, JIAS, and JLLM present a sound noise reduction. MF, WLS, and L0GM do not remove the extreme (black) noise, and BF and GuiF leave a few residual artifacts. Except for GF, the other approaches are good at edge preserving in the stripe and chessboard. For the ripple, MF, BF, WLS, and GuiF produce irregular filtered outputs. IAD and L0GM have similar outputs, which flatten the larger gaps. Our JIAS and JLLM flatten not only the larger gaps but also the smaller ones.
The textures in Fig. 8(b) are made up of branches and leaves (top-left), bark (top-right), weave (bottom-left), and crystal (bottom-right). The results of MF, BF, WLS, and GuiF exhibit a neither-texture-smoothing-nor-edge-preserving phenomenon, which looks like a slight haze over the output. The results of IAD and L0GM are similar, containing a few trivial patches. In our results, all the details are removed, so they seem smoother than those of IAD and L0GM.
The gray (Statue) and color (Lena) natural test images are shown in Fig. 8(c) and (d), respectively. MF, BF, WLS, and GuiF still produce outputs with the haze phenomenon, and IAD and L0GM produce some trivial patches in the filtered output, but our JIAS and JLLM do not.
In Fig. 9, we present our JIAS and JLLM with different parameter settings. According to the range of the parameter settings in Fig. 5, we adjust the smoothing factor $n$ (JIAS) or $\epsilon$ (JLLM) and the edge correction factor $\lambda$, and observe the performance
TABLE II TIME COST OF EACH STAGE IN OUR JIAS AND JLLM WITH DIFFERENT SIZES OF 8-bit GRAY IMAGES. ALL THE RECORDS ARE EVALUATED IN MATLAB 2010A (UNIT: sec.)
of the output. Note that, as the parameters vary, Lena's hair is smoothed to different degrees while the boundary of the shoulder remains preserved.
Fig. 10. Edge detection. All the subfigures are from the results of Fig. 1 and the results are obtained by the Canny operator [1] with threshold 0.1. (a) A portion of Barbara image. (b) Most of textures are treated as edges. Some residual textures are retained in (c) to (e), which are obtained by the BF, WLS, and LE approaches, respectively. The result of our JIAS approach has clearer edges and less residual textures in (f).
D. Time Analysis
To demonstrate the efficiency of our approaches, we record the time cost of each step in our JIAS and JLLM for specific sizes of the Barbara image and specific parameter assignments. The timing is measured under MATLAB 2010a, and Table II shows the records in detail. We found that LLM is more efficient than IAS, especially as the number of iterations and the image size increase. In addition, the major cost lies in the edge correction. As mentioned previously, we adopt Chen's bilateral grid [21] as our JBF implementation. Since we only apply the framework of JBF, our approach does not depend on a specific data structure or algorithm. Generally speaking, an improved accelerating implementation is feasible, e.g., the O(1) approaches [27]. For medium-size images (about 1 million pixels), our JIAS and JLLM obtain a sound time response in processing.
V. APPLICATIONS
Image decomposition based on edge-preserving smoothing is a fundamental process for many image applications. Paris et al. [9] summarized a variety of applications based on the bilateral filter in their course, including denoising, high-dynamic-range tone mapping, data fusion, etc. And Farbman et al. [3] took advantage of multi-scale operators to further improve the ability of edge-preserving decomposition and applied it to detail enhancement, etc. A sound suppression of local textures during edge-preserving smoothing has a clear benefit in applications. Subr et al. [12] demonstrated a local texture suppression based image decomposition for several applications. Inspired by their work, we apply our JIAS and JLLM approaches to different texture-related applications, including edge detection, detail enhancement, texture transfer, and so on.
A. Edge Detection
Detecting salient edges is a basic operation in image processing and computer vision. Although many classical edge detection approaches have been proposed [1], it is still hard to detect the edges exactly. The main reason is that edges and textures have similar characteristics of intensity variance, see Fig. 10(a). If we apply edge detection to the original image, many textures are treated as edges and retained, see Fig. 10(b). Applying the popular edge-preserving smoothing approaches highlights, to a certain extent, some salient edges, but causes interference from residual textures as well. The detection results of BF [10] and WLS [3] are presented in Fig. 10(c) and (d), respectively. As pointed out by the red arrows, some residual textures are obviously treated as edges. Subr et al. [12] proposed a WLS-based local extrema approach to suppress the textures, eliminating the interference of textures during detection, see Fig. 10(e). Unfortunately, some residual textures still lie near the edges. With our JIAS, the degenerative image suppresses most textures, and the edge correction and JBF filtering eliminate the residual textures near the salient edges. Therefore, a clear and edge-continuous result is obtained, see Fig. 10(f).
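A minimal sketch of this use, assuming a texture-suppressing filter function is available (here the placeholder eps_texture_filter), with scikit-image's Canny detector; the threshold values are illustrative.

from skimage import io, color, feature

def detect_salient_edges(path, eps_texture_filter, sigma=1.0):
    # Suppress textures first, then run Canny so that mostly salient edges remain.
    gray = color.rgb2gray(io.imread(path))
    smoothed = eps_texture_filter(gray)         # e.g. a JIAS- or JLLM-style pipeline
    return feature.canny(smoothed, sigma=sigma,
                         low_threshold=0.05, high_threshold=0.1)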
B. Noise Reduction
Besides the edge-preserving property, an EPS filter also provides sound noise reduction. We took the Pepper image as the test sample and added zero-mean Gaussian noise [1]. In our experiments, we found that the haze effect appeared in the output of BF when we increased the corresponding smoothing parameter, and some extreme noise was retained in the outputs of WLS and IAD. Our JIAS can remove the added Gaussian noise, but the appearance tends to be non-photorealistic. Fig. 11 shows the comparison of noise reduction.
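A small sketch of this test setup, assuming a texture-suppressing filter callback; the noise level and seed are illustrative, not the values used for Fig. 11.

import numpy as np

def noisy_pepper_test(I, eps_texture_filter, sigma_noise=0.05, seed=0):
    # Add zero-mean Gaussian noise to a clean image and denoise it with the filter.
    rng = np.random.default_rng(seed)
    noisy = np.clip(I + rng.normal(0.0, sigma_noise, I.shape), 0.0, 1.0)
    return noisy, eps_texture_filter(noisy)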
Fig. 11. Noise reduction. (a) Input. (b) Zero-mean Gaussian noised image. (c) BGF [21]. (d) WLS [3]. (e) Anisotropic diffusion [4]. (f) JIAS.
Fig. 12. Multi-scale detail manipulation. (a) Input image is obtained from Kodak’s true color dataset. (b) Fine-scale filtered images are obtained by JLLM (left) and JIAS (right) approaches, which are mapped by pseudo color. And the detail enhancements of our JLLM and JIAS approaches are in (c) and (d), respectively. All the operations are under the YUV color space.
C. Multi-Scale Detail Enhancement
Edge-preserving smoothing is an important operation for image detail manipulation. In particular, a multi-scale decomposition can provide more subtle results in detail enhancement. On the basis of the WLS optimization framework [3], Farbman et al. emphasized the ability of detail manipulation based on edge-preserving multi-scale decomposition. Subsequently, Fattal took advantage of second-generation wavelets to construct an edge-avoiding decomposition [6] and obtained a sound enhancement effect. Subr et al. [12] considered the property of local extrema suppression to manipulate the details. And Paris et al. [39] applied the multi-scale local Laplacian filter to be aware of the details for manipulation. Inspired by multi-scale decomposition, our JLLM and JIAS approaches can also be applied to detail manipulation. Fig. 12(b) shows the fine-scale filtered results of Fig. 12(a) by JLLM (left) and JIAS (right), respectively, displayed in a pseudo-color visualization. Note that the edges are well preserved and the textures are well smoothed. The detail-enhanced results of JLLM and JIAS are shown in Fig. 12(c) and (d), respectively.
D. Tone Mapping
High-dynamic-range (HDR) image tone mapping is an important application in image enhancement. Durand and Dorsey [18] first applied the bilateral filter to tone mapping, demonstrating the effectiveness of an edge-preserving smoothing approach for HDR images. But limited by the Gaussian property of BF, halo artifacts commonly appear in the results of bilateral-based approaches, see Fig. 13(b). Choudhury et al. [16] proposed a trilateral filter to improve the unsatisfactory effects. Subsequently, Farbman et al. [3] exploited the gradient constraint property of the WLS optimization framework to alleviate the halo artifacts. Furthermore, He et al. [34] proposed a guided filter (GuiF) that not only has a bilateral-like property but also has a gradient constraint property similar to WLS.
Fig. 13. High dynamic range tone mapping. (a) Input over-exposure image. (b) The result of bilateral-based approach [18] has obvious halo artifacts. (c) The result of WLS approach [3] has obvious intensity reduction. (d) The result of Guided Filter [34] has slight halo artifacts near the textures. (e) The result of LLF approach [39]. (f) The result of our JIAS approach has no halo artifact near the textures and demonstrates a sound detail appearance.
Fig. 13(c) and (d) show the results of WLS-based and GuiF-based tone mapping. Note that the reduction of intensity in the WLS-based mapping is still obvious, although gamma correction can improve the intensity. The result of GuiF is more colorful than that of the WLS-based approach, but a slight halo appears and the tone shifts. Recently, Paris et al. [39] applied the local Laplacian filter (LLF) to balance the tone and detail, and obtained a sound effect.
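To make the base/detail style of tone mapping concrete, the following is a Durand-Dorsey-like sketch [18] on the log-luminance channel that uses a texture-suppressing filter as the base-layer extractor; the compression factor and function names are illustrative assumptions, not the operator we used to produce Fig. 13(f).

import numpy as np

def tone_map(log_luminance, eps_texture_filter, compression=0.4):
    # Compress the filtered base layer, keep the detail layer, then re-expose.
    base = eps_texture_filter(log_luminance)
    detail = log_luminance - base
    compressed = compression * (base - base.max()) + detail
    return np.exp(compressed)          # back from the log domain, peak near 1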
Fig. 15. Non-photorealistic rendering. (a) The input image is from the RetargetMe dataset [45]. (b) The bilateral filter based approach [46]. (c) The median filter based approach [1]. Both BF and MF produce a granular effect on the spindrift and the wave in (b) and (c). On the contrary, our JLLM approach does not produce a granular effect in (d). We recommend the electronic version for high resolution.
Fig. 14. Texture transfer. We transfer the cracks from a part of Mona Lisa (a) to the target image (b). The filtered image by our JLLM approach is shown in the left part of (c). The netty cracks are extracted in the YIQ color space, and they are shown in the right part of (c). Finally, these cracks are applied to the target image in (d).
Fig. 13(e) shows the result of LLF without detail enhancement. With our JIAS, the halos are overcome completely, and the result is more colorful than those of both WLS and LLF, see Fig. 13(f).
E. Texture Transfer
In most ancient oil paintings, cracks appear as a historical trace. These cracks have the characteristics of textures: netty, dense, and subtle. In the digital edition of an ancient painting, these cracks are usually eliminated by the shrinkage of the image size, leading to a loss of the historical sense. Therefore, how to enhance or restore the historical sense of a painting is an interesting topic. We can consider separating these netty textures from the original image and then transferring them to a non-cracked image to create the historical sense. Inspired by the structure-texture decomposition [44], we implement this transfer process. In Fig. 14, we exploit the local texture suppression of our JLLM approach, separate the structure and texture components, and obtain the texture transfer result. Fig. 14(a) shows a portion of the netty textures from the Mona Lisa painting. The extracted image is filtered by our JLLM approach, and the texture detail is obtained by subtracting the filtered result from the original image. Fig. 14(c) shows the filtered result (left) and the texture detail (right). By applying the separated textures to Chardin's work (Fig. 14(b)), the final result is obtained in Fig. 14(d).
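A schematic version of this transfer, assuming a texture-suppressing filter (the placeholder llm_filter) and working on the luminance channel only; the blending weight is an illustrative choice.

import numpy as np

def transfer_cracks(source_luma, target_luma, llm_filter, weight=1.0):
    # Separate the crack texture from the source as I - filter(I) and add it to the target.
    structure = llm_filter(source_luma)            # texture-suppressed structure layer
    cracks = source_luma - structure               # netty crack detail layer
    return np.clip(target_luma + weight * cracks, 0.0, 1.0)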
Fig. 16. Hatching to tone. (b) The result of the median filter [1]. Both the result of the local extrema approach [12] in (c) and that of our JIAS approach in (d) yield a good estimation of the tone while preserving the edges of the hatched regions. But our result is clearer at the thin edges than that of LE, such as the edges around the shoulder area of the monster sculpture, and the tone appearance is closer to the original image.
F. Non-Photorealistic Rendering
Because the edge-preserving smoothing approaches emphasize the importance of both edge preserving and local smoothing, they satisfy the needs of non-photorealistic rendering (NPR) image applications, which need to outline the edge contours and flatten local regions. Winnemöller et al. [46] proposed a bilateral-based real-time image abstraction, and Bhat et al. also emphasized NPR applications in their screened Poisson equation [32] and GradientShop [33]. As previously mentioned, if the image contains textures, traditional edge-preserving smoothing approaches produce unexpected
Fig. 17. Failure cases of JIAS and JLLM. (a) Input image. (b) The filtered result of JIAS: with over-smoothing of the texture, it is likely to produce block-like artifacts. (c) The filtered result of JLLM: with under-smoothing of the texture, irregular residual textures appear.
preserving or smoothing effects, leading to serious interference in the applications. As illustrated in Fig. 15(b) and (c), the NPR results of both the bilateral-based and median-based approaches have a grain-like effect in the richly textured regions, e.g., the spindrift and the wave in Fig. 15(a). The reason for this phenomenon is that the textures are mistreated as edges and preserved incompletely. However, owing to the suppression of the textures in our JLLM approach, the grain-like effect is suppressed at the same time (Fig. 15(d)).
In addition, for hatching or stippling images, it is hard to construct their NPR effects. The reason is that the content of the image consists of sketched strokes, represented as texture-like drawing. With the traditional edge-preserving smoothing approaches, it is difficult to distinguish the edges and the textures. The classical median filter [1] can achieve a certain texture smoothing for NPR, but the shapes of the contents may be changed, see Fig. 16(b). Subr et al. took advantage of their local extrema (LE) approach [12] to obtain a sound NPR result, see Fig. 16(c). We apply our JIAS approach and obtain a result comparable to that of LE.
VI. CONCLUSION AND FUTURE WORK
How to obtain both properties of texture smoothing and edge preserving is a challenging problem in filtering. In this paper, we fill the gap between the linear filter and the EPS filter by taking the JBF framework with joint filtering schemes. We propose a novel edge-preserving texture suppression filter that obtains a satisfactory result with both properties of texture smoothing and edge preserving. We develop the iterative asymmetric sampling and the local linear model degenerative schemes to suppress the textures, respectively. Then an edge correction operator is used to achieve edge preserving for the degenerative image. JBF is a bridge linking the properties of texture smoothing and edge preserving, and provides a feasible way to balance them better. In addition, we apply an effective accelerating approach to improve the time performance.
Strength and weakness. In our JIAS and JLLM approaches, we integrate both properties of texture smoothing and edge preserving, and avoid complicated models that recognize the textures explicitly. Our approach merely exploits the corresponding spatial characteristics of the edges and the textures. Generally speaking, the advantages of both the linear filter and the EPS filters are merged in a natural way. But our approaches still
have some weaknesses. Slight edge blurring still exists, and some parameters in specific applications are still set empirically. A failure case is shown in Fig. 17. In our experiments, we found that for animal fur or hair the filtered output would be unacceptable. In addition, the filtered outputs still lack a determined quality measurement as well as visual dependence. In future work, we will consider the structure-texture decomposition as a choice of degeneration in our pipeline. We hope to further reduce the number and the sensitivity of the parameters in the implementation. In addition, we will continue to extend our approaches to more applications, e.g., texture replacement, etc.
REFERENCES
[1] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. Upper Saddle River, NJ: Prentice-Hall, 2008.
[2] S. Paris and F. Durand, "A fast approximation of the bilateral filter using a signal processing approach," Int. J. Comput. Vis., vol. 81, no. 1, pp. 224–252, 2009.
[3] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Trans. Graphics, vol. 27, no. 3, pp. 67–76, 2008.
[4] S. M. Chao and D. M. Tsai, "An improved anisotropic diffusion model for detail- and edge-preserving smoothing," Pattern Recogn. Lett., vol. 31, no. 13, pp. 2012–2023, 2010.
[5] M. Kass and J. Solomon, "Smoothed local histogram filters," ACM Trans. Graphics, vol. 29, no. 4, pp. 100:1–100:10, 2010.
[6] R. Fattal, "Edge-avoiding wavelets and their applications," ACM Trans. Graphics, vol. 28, no. 3, pp. 22:1–22:10, 2009.
[7] L. Xu, C. Lu, Y. Xu, and J. Jia, "Image smoothing via L0 gradient minimization," in Proc. SIGGRAPH Asia, 2011.
[8] J. Baek and D. Jacobs, "Accelerating spatially varying Gaussian filters," ACM Trans. Graphics, vol. 29, no. 6, pp. 169:1–169:10, 2010.
[9] S. Paris, P. Kornprobst, J. Tumblin, and F. Durand, "Bilateral filtering: Theory and application," Found. Trends Comput. Graphics Vis., vol. 4, no. 1, pp. 1–73, 2009.
[10] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Proc. 6th Int. Conf. Comput. Vis., 1998, pp. 839–846.
[11] J. V. D. Weijer and R. V. D. Boomgaard, "Local mode filtering," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recogn., 2001, vol. 2, pp. 428–433.
[12] K. Subr, C. Soler, and F. Durand, "Edge-preserving multiscale image decomposition based on local extrema," ACM Trans. Graphics, vol. 28, no. 5, pp. 147:1–147:9, 2009.
[13] Z. Farbman, R. Fattal, and D. Lischinski, "Diffusion maps for edge-aware image editing," ACM Trans. Graphics, vol. 29, no. 6, pp. 145:1–145:10, 2010.
[14] G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, "Digital photography with flash and no-flash image pairs," ACM Trans. Graphics, vol. 23, no. 3, pp. 664–672, 2004.
[15] E. Eisemann and F. Durand, "Flash photography enhancement via intrinsic relighting," ACM Trans. Graphics, vol. 23, no. 3, pp. 673–678, 2004.
[16] P. Choudhury and J. Tumblin, "The trilateral filter for high contrast images and meshes," in Proc. 14th Eurographics Workshop on Rendering, 2003, pp. 186–196.
[17] H. Takeda, S. Farsiu, and P. Milanfar, "Kernel regression for image processing and reconstruction," IEEE Trans. Image Process., vol. 16, no. 2, pp. 349–366, 2007.
[18] F. Durand and J. Dorsey, "Fast bilateral filtering for the display of high-dynamic-range images," ACM Trans. Graphics, vol. 21, no. 3, pp. 257–266, 2002.
[19] T. Q. Pham and L. J. V.
Vliet, “Separable bilateral filtering for fast video preprocessing,” in Proc. IEEE Int. Conf. Multimedia and Expo, 2005. [20] S. Paris and F. Durand, “A fast approximation of the bilateral filter using a signal processing approach,” in Proc. Comput. Vis. Conf., 2006, vol. 3954, pp. 568–580. [21] J. Chen, S. Paris, and F. Durand, “Real-time edge-aware image processing with the bilateral grid,” ACM Trans. Graphics, vol. 26, no. 3, 2007.
[22] A. Adams, N. Gelfand, J. Dolson, and M. Levoy, “Gaussian kd-trees for fast high-dimensional filtering,” ACM Trans. Graphics, vol. 28, no. 3, pp. 21:1–21:12, 2009. [23] A. Adams, J. Baek, and M. A. Davis, “Fast high-dimensional filtering using the permutohedral lattice,” in Proc. Comput. Graphics Forum, 2010, vol. 29, no. 2, pp. 753–762. [24] B. Weiss, “Fast median and bilateral filtering,” ACM Trans. Graphics, vol. 25, no. 3, pp. 519–526, 2006. [25] S. Yoshizawa, A. Belyaev, and H. Yokota, “Fast gauss bilateral filtering,” in Proc. Comput. Graphics Forum, 2010, vol. 29, no. 1, pp. 60–74. [26] F. Porikli, “Constant time o(1) bilateral filtering,” in Proc. IEEE Conf. Comput. Vis. Pattern Recogn., 2008, pp. 1–8. [27] Q. X. Yang, K. H. Tan, and N. Ahuja, “Real-time o(1) bilateral filtering,” in Proc. IEEE Conf. Comput. Vis. Pattern Recogn., 2009, pp. 557–564. [28] Q. X. Yang, S. N. Wang, and N. Ahuja, “SVM for edge-preserving filtering,” in Proc. IEEE Conf. Comput. Vis. Pattern Recogn., 2010, pp. 1775–1782. [29] Q. X. Yang, “Recursive bilateral filtering,” in Proc. ECCV, 2012, pp. 399–413. [30] E. S. L. Gastal and M. Oliveira, “Domain transform for edge-aware image and video processing,” ACM Trans. Graphics, vol. 30, no. 4, pp. 69:1–69:12, 2011. [31] E. S. L. Gastal and M. M. Oliveira, “Adaptive manifolds for real-time high-dimensional filtering,” ACM Trans. Graphics, vol. 31, no. 4, pp. 33:1–33:13, 2012. [32] P. Bhat, B. Curless, M. Cohen, and C. Zitnick, “Fourier analysis of the 2D screened Poisson equation for gradient domain problems,” in Proc. ECCV, 2008, pp. 114–128. [33] P. Bhat, C. L. Zitnick, M. Cohen, and B. Curless, “Gradientshop: A gradient-domain optimization framework for image and video filtering,” ACM Trans. Graphics, vol. 29, no. 2, pp. 1–14, 2010. [34] K. M. He, J. Sun, and X. O. Tang, “Guided image filtering,” in Proc. ECCV, 2010, vol. 6311, pp. 1–14. [35] P. Perona and J. Malik, “Scale space and edge detection using anisotropic diffusion,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, no. 7, pp. 629–639, Jul. 1990. [36] J. Weickert, Anisotropic Diffusion in Image Processing. Stuttgart, Germany: Teubner-Verlag, 1998. [37] S. M. Chao, D. M. Tsai, W. Y. Chiu, and W. C. Li, “Anisotropic diffusion-based detail-preserving smoothing for image restoration,” in Proc. IEEE 17th Int. Conf. Image Process., 2010, pp. 4145–4148. [38] M. Felsberg, P. E. Forsséen, and H. Scharr, “Channel smoothing: Efficient robust smoothing of low-level signal features,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 2, pp. 209–222, Feb. 2006. [39] S. Paris, S. W. Hasinoff, and J. Kautz, “Local Laplacian filters: Edge-aware image processing with a Laplacian pyramid,” ACM Trans. Graphics, vol. 30, no. 4, pp. 68:1–68:12, 2011. [40] R. Fattal, M. Agrawala, and S. Rusinkiewicz, “Multiscale shape and detail enhancement from multi-light image collections,” ACM Trans. Graphics, vol. 26, no. 3, Jul. 2007, Art. ID 51. [41] A. Lev, S. W. Zucker, and A. Rosenfeld, “Iterative enhancemnent of noisy images,” IEEE Trans. Syst., Man Cybern., vol. SMC-7, no. 6, pp. 435–442, Jun. 1977. [42] Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM J. Imaging Sci., vol. 1, no. 3, pp. 248–272, 2008. [43] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004. [44] J. F. Aujol, G. Gilboa, T. Chan, and S. 
Osher, “Structure-texture image decomposition-modeling, algorithms and parameter selection,” Int. J. Comput. Vis., vol. 67, no. 1, pp. 111–136, 2006. [45] M. Rubinstein, D. Gutierrez, O. Sorkine, and A. Shamir, “A comparative study of image retargeting,” ACM Trans. Graphics, vol. 29, no. 5, pp. 160:1–160:10, 2010. [46] H. Winnemöller, S. C. Olsen, and B. Gooch, “Real-time video abstraction,” ACM Trans. Graphics, vol. 25, no. 3, pp. 1221–1226, 2006.
Zhuo Su (S’13) is a Ph.D. candidate of the National Engineering Research Center of Digital Life, School of Information Science and Technology, Sun Yat-sen University. He received the Master and Bachelor degree in software engineering from School of Software, Sun Yat-sen University in 2010 and 2008. His research interests include image processing and analysis, computer vision, and computer graphics.
Xiaonan Luo is a professor in the School of Information Science and Technology, Sun Yat-sen University. He is the director of the National Engineering Research Center of Digital Life and the director of the Digital Home Standards Committee on Interactive Applications of the China Electronics Standardization Association. He won the National Science Fund for Distinguished Young Scholars granted by the National Natural Science Foundation of China. His research interests include image processing, computer graphics and CAD, and mobile computing.
Zhengjie Deng received the Bachelor and Master degrees in computer science from Fudan University in 2002 and 2005, respectively, and the Ph.D. degree in computer applications technology from Sun Yat-sen University in 2011. His research interests include graphics deformation, image processing, computer animation, and modeling.
Yun Liang received the M.S. and Ph.D. degrees in information science and technology from Sun Yat-sen University in 2005 and 2011, respectively. She is a researcher at South China Agricultural University and the National Engineering Research Center of Digital Life in China. Her research interests include image processing, computer vision, and pattern analysis.
Zhen Ji (M’04) received the B.E. and Ph.D. degrees from Xi’an Jiaotong University, Xi’an, China, in 1994 and 1999, respectively. He is currently a Professor with the Department of Computer Science, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China. In 2001, 2003, and 2004, he was an Academic Visitor with the Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool, U.K. Since 2002, he has been the Director of the Texas Instruments DSPs Laboratory, Shenzhen University. His current research interests include digital image processing, computational intelligence, bioinformatics, and digital signal processors.