
Hierarchized block wise image approximation by greedy pursuit strategies

arXiv:1308.5876v1 [cs.CV] 27 Aug 2013

Laura Rebollo-Neira, Ryszard Macioł and Shabnam Bibi Mathematics Department, Aston University, Birmingham, B4 7ET, UK

Abstract—An approach for the effective implementation of greedy selection methodologies to approximate an image partitioned into blocks is proposed. The method is specially designed for approximating partitions of a transformed image. It evolves by selecting, at each iteration step, i) the elements for approximating each of the blocks partitioning the image and ii) the hierarchized sequence in which the blocks are approximated to reach the required global condition on sparsity.

I. INTRODUCTION

The representation of an image by a piece of data of lower dimensionality than the pixel intensity values is called a sparse representation of a compressible image. Approximations using a redundant set, called a dictionary, may attain high sparsity, thereby benefiting applications that range from denoising to compression [1]–[9]. Those approximation techniques which evolve by selecting dictionary elements, called atoms, are referred to as greedy strategies. The motivation of this Communication is to consider an effective implementation of the Orthogonal Matching Pursuit (OMP) strategy [10] to approximate an image in the wavelet domain. For this to be computationally effective, with respect to speed and storage demands, the transformed image should be partitioned into small blocks. Approximations in the wavelet domain have been shown to be useful for image compression [5]–[7]. Indeed, as will be illustrated here, the approximation of images which are compressible in the wavelet domain turns out to be significantly sparser when carried out in that domain. In addition, although some other artifacts may be introduced, approximations by blocking in the wavelet domain avoid visually unpleasant blocking artifacts in the intensity image. The letter is organized as follows: Sec. II discusses the need for a dedicated version of the OMP strategy to operate on a transformed image. Sec. III proposes a particular version, which is termed hierarchized block wise OMP. The advantage of the proposed approach is illustrated by numerical tests in Sec. IV. The conclusions are summarized in Sec. V.

II. THE NEED FOR DEDICATED PURSUIT STRATEGIES TO OPERATE ON PARTITIONS

Approximating an image partitioned into blocks implies having to consider some 'communication' between the blocks. The need for communication appears when trying to decide up to what error each block is to be approximated. As the following discussion suggests, an appropriate stopping criterion for the recursive approximation of blocks in the wavelet domain needs to be a global measure.

Firstly let us introduce the notational convention: $\mathbb{R}$ represents the set of real numbers. Boldface fonts are used to indicate Euclidean vectors or matrices, whilst standard mathematical fonts indicate the components, e.g., $\mathbf{c} \in \mathbb{R}^N$ is a vector of components $c(i),\, i=1,\ldots,N$, and $\mathbf{I} \in \mathbb{R}^{N_x \times N_y}$ a matrix of elements $I(i,j),\, i=1,\ldots,N_x,\, j=1,\ldots,N_y$. In particular, $\mathbf{I} \in \mathbb{R}^{N_x \times N_y}$ will indicate an intensity image of $N_x \times N_y$ pixels and $\tilde{\mathbf{I}} \in \mathbb{R}^{N_x \times N_y}$ its corresponding wavelet transform. For square matrices, i.e., when $N_x = N_y$, to shorten notation the range of indices is indicated as $i,j = 1,\ldots,N_x$.

Consider, without loss of generality, a real, orthogonal, and separable wavelet transform, which maps an intensity image $\mathbf{I} \in \mathbb{R}^{N_x \times N_y}$ into the transformed image $\tilde{\mathbf{I}} \in \mathbb{R}^{N_x \times N_y}$. Thus, each of the points $\tilde{I}(n,m),\, n=1,\ldots,N_x,\, m=1,\ldots,N_y$, is a Frobenius inner product $\tilde{I}(n,m) = \langle \boldsymbol{\Psi}_{u_n} \otimes \boldsymbol{\Psi}_{v_m}, \mathbf{I}\rangle_F$, where the operation $\otimes$ indicates the tensor product. The subscript $u_n$ in $\boldsymbol{\Psi}_{u_n}$ is used to identify an ordered pair consisting of the scale and translation parameters of the corresponding mother wavelet $\Psi$. Accordingly, for $n=1,\ldots,N_x$, $m=1,\ldots,N_y$,

$$\tilde{I}(n,m) = \sum_{i,j=1}^{N_x,N_y} \Psi_{u_n}(i)\, I(i,j)\, \Psi_{v_m}(j). \qquad (1)$$
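To make (1) concrete, the sketch below is an illustration added for this presentation (it is not part of the original implementation) and assumes a single-level orthogonal Haar transform, for which the separable transform reduces to the matrix product $\tilde{\mathbf{I}} = \mathbf{W}\,\mathbf{I}\,\mathbf{W}^{\top}$; the multi-level CDF97 transform used later in the numerical tests plays the same role.

```python
import numpy as np

def haar_matrix(N):
    """Single-level orthogonal Haar analysis matrix W (N even), so that
    the separable 2D transform of eq. (1) reads I_tilde = W @ I @ W.T."""
    W = np.zeros((N, N))
    for k in range(N // 2):
        W[k, 2 * k] = W[k, 2 * k + 1] = 1 / np.sqrt(2)        # approximation rows
        W[N // 2 + k, 2 * k] = 1 / np.sqrt(2)                  # detail rows
        W[N // 2 + k, 2 * k + 1] = -1 / np.sqrt(2)
    return W

# toy intensity image and its separable transform
rng = np.random.default_rng(0)
I = rng.random((16, 16))
W = haar_matrix(16)
I_tilde = W @ I @ W.T

# each entry is a Frobenius inner product <Psi_un (x) Psi_vm, I>_F, as in eq. (1)
n, m = 3, 5
assert np.isclose(I_tilde[n, m], np.sum(np.outer(W[n], W[m]) * I))
```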

We shall assume for simplicity a uniform partition and consider that a transformed image, $\tilde{\mathbf{I}}$, is the composition of $Q$ identical and disjoint blocks $\tilde{\mathbf{I}}_q,\, q=1,\ldots,Q$. Hence, $\tilde{\mathbf{I}} = \cup_{q=1}^{Q} \tilde{\mathbf{I}}_q$, where every $\tilde{\mathbf{I}}_q$ is a 2D block of size $N_b \times N_b$. It is clear from (1) that points of the transformed image corresponding to a particular block of size $N_b \times N_b$ contain some global information about the intensity image. This suggests that a suitable stopping criterion for the approximation of a partition in the wavelet domain needs to be a global measure. Consequently, even if the approximation of each block, as such, is carried out independently of the others, a greedy selection strategy implemented by approximating blocks of the transformed image should aim at selecting i) the elements for approximating each of the blocks in the partition and ii) the hierarchized sequence in which the blocks should be approximated to reach the global condition required by the algorithm. The method introduced in the next section, which we term hierarchized block wise OMP (HBW-OMP), implements the selections i) and ii) simultaneously.

III. HIERARCHIZED BLOCK WISE OMP

Given a redundant 2D dictionary $\mathcal{D}$ of $M$ atoms, $\mathcal{D} = \{\mathbf{D}_n \in \mathbb{R}^{N_b \times N_b}\}_{n=1}^{M}$, each of the $Q$ blocks $\tilde{\mathbf{I}}_q \in \mathbb{R}^{N_b \times N_b}$ in the partition of the transformed image is approximated by an atomic decomposition $\tilde{I}_{k_q,q}(i,j),\, i,j = 1,\ldots,N_b$, of the form

$$\tilde{I}_{k_q,q}(i,j) = \sum_{n=1}^{k_q} c_{k_q,q}(n)\, D_{\ell_n^q}(i,j), \qquad q = 1,\ldots,Q.$$
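As a small aside (the helper below is mine, not taken from the paper), with a separable dictionary such as the one used in Sec. IV each 2D atom is an outer product of two 1D atoms, so the atomic decomposition of a block is simply a weighted sum of outer products:

```python
import numpy as np

def synthesize_block(D, selected, c):
    """Reconstruct one block from its atomic decomposition (sketch).

    D        : Nb x M matrix whose columns are 1D atoms (2D atoms are outer products)
    selected : list of k_q index pairs (a, b) identifying the chosen 2D atoms D_a (x) D_b
    c        : the k_q coefficients c_{k_q,q}(n)
    """
    Nb = D.shape[0]
    block = np.zeros((Nb, Nb))
    for coeff, (a, b) in zip(c, selected):
        block += coeff * np.outer(D[:, a], D[:, b])   # c_{k_q,q}(n) * D_{l^q_n}(i, j)
    return block
```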

For each $q$, the atoms $\mathbf{D}_{\ell_n^q},\, n=1,\ldots,k_q$, are selected from the dictionary $\mathcal{D}$ as follows. On setting $\mathbf{R}_{0,q} = \tilde{\mathbf{I}}_q,\, q=1,\ldots,Q$, at each iteration the algorithm selects the atom $\mathbf{D}_{\ell_{k_q+1}^q}$ that maximizes the absolute value of the Frobenius inner products $\langle \mathbf{D}_n, \mathbf{R}_{k_q,q}\rangle_F,\, n=1,\ldots,M,\, q=1,\ldots,Q$, i.e.,

$$\ell_{k_q+1}^q = \arg\max_{\substack{n=1,\ldots,M \\ q=1,\ldots,Q}} |\langle \mathbf{D}_n, \mathbf{R}_{k_q,q}\rangle_F|, \quad \text{with} \quad \mathbf{R}_{k_q,q} = \tilde{\mathbf{I}}_q - \sum_{n=1}^{k_q} c_{k_q,q}(n)\, \mathbf{D}_{\ell_n^q}. \qquad (2)$$

For each $q$ and $k_q$, the coefficients $c_{k_q,q}(n),\, n=1,\ldots,k_q$, in (2) are such that $\|\mathbf{R}_{k_q,q}\|_F$ is minimum, where $\|\cdot\|_F$ is the norm defined through the Frobenius inner product. This is ensured by requesting that $\mathbf{R}_{k_q,q} = \tilde{\mathbf{I}}_q - \hat{P}_{\mathcal{V}_q^{k_q}}\tilde{\mathbf{I}}_q$, where $\hat{P}_{\mathcal{V}_q^{k_q}}$ is the orthogonal projection operator onto $\mathcal{V}_q^{k_q} = \mathrm{span}\{\mathbf{D}_{\ell_n^q}\}_{n=1}^{k_q}$. The implementation discussed in [11] provides us with the representation of $\hat{P}_{\mathcal{V}_q^{k_q}}\tilde{\mathbf{I}}_q$ given by

$$\hat{P}_{\mathcal{V}_q^{k_q}}\tilde{\mathbf{I}}_q = \sum_{n=1}^{k_q} \mathbf{D}_{\ell_n^q}\, \langle \mathbf{B}_n^{k_q,q}, \tilde{\mathbf{I}}_q\rangle_F = \sum_{n=1}^{k_q} c_{k_q,q}(n)\, \mathbf{D}_{\ell_n^q}.$$

For $q$ fixed, the matrices $\mathbf{B}_n^{k_q,q},\, n=1,\ldots,k_q$, are biorthogonal to the selected atoms $\mathbf{D}_{\ell_n^q},\, n=1,\ldots,k_q$, and span the identical subspace $\mathcal{V}_q^{k_q}$. These matrices can be effectively calculated by recursive biorthogonalization and Gram-Schmidt orthogonalization with one re-orthogonalization step.
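A minimal sketch of one such recursive biorthogonalization step, written here for vectorized (raveled) 2D atoms, is given below. The function name and interface are my own, not the authors' code; it implements the standard update in which the dual of the newly selected atom is the normalized orthogonal component of that atom, and the previous duals are corrected so that biorthogonality is preserved.

```python
import numpy as np

def add_atom(D_sel, Q_orth, B_dual, d_new):
    """One step of recursive biorthogonalization (sketch).

    D_sel  : list of previously selected atoms (1D arrays, vectorized 2D atoms)
    Q_orth : orthonormal basis of their span (built by Gram-Schmidt)
    B_dual : current duals B_n, biorthogonal to D_sel and spanning the same subspace
    d_new  : newly selected atom
    Returns the updated lists, so that <B_n, D_m> = delta_{n,m} still holds.
    """
    # Gram-Schmidt with one re-orthogonalization step, as mentioned in the text
    gamma = d_new - sum(q * (q @ d_new) for q in Q_orth)
    gamma = gamma - sum(q * (q @ gamma) for q in Q_orth)     # re-orthogonalize
    # (a production version would guard against gamma ~ 0, i.e. a dependent atom)
    b_new = gamma / (gamma @ gamma)                           # new dual vector
    # correct the previous duals so that biorthogonality is preserved
    B_dual = [b - b_new * (d_new @ b) for b in B_dual] + [b_new]
    Q_orth = Q_orth + [gamma / np.linalg.norm(gamma)]
    return D_sel + [d_new], Q_orth, B_dual
```

With the duals in hand, the coefficients of the orthogonal projection of a vectorized block follow as c[n] = B_dual[n] @ block.ravel(), which is how the coefficients in (2) are obtained in the next paragraph.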

The coefficients in (2) are obtained from the inner products $c_{k_q,q}(n) = \langle \mathbf{B}_n^{k_q,q}, \tilde{\mathbf{I}}_q\rangle_F,\, n=1,\ldots,k_q$. For a given number $K$, the algorithm iterates until the condition $\sum_{q=1}^{Q} k_q = K$ is met. In other words, the algorithm stops when the maximum total number of atoms allowed for the image approximation is reached.

The difference between OMP and the hierarchized block wise (HBW) version discussed here lies in a) the sequence in which the blocks are partially approximated at each step and b) the stopping criterion. Standard OMP would be applied independently to each block, up to a given error which does not depend on the approximation of the other blocks. As condition (2) states, HBW-OMP adds a hierarchized selection of the blocks to be approximated at each iteration. Thus, at each step, the maximum in (2) changes only for the selected block. This implies that, by storing the maximum for each block, the selection of blocks introduces only the overhead of finding the maximum element of an array of size $Q$. As far as storage is concerned, HBW-OMP has to store all the stepwise outputs of the algorithm, which include the matrices $\mathbf{B}_n^{k_q,q},\, n=1,\ldots,k_q$, for each of the $Q$ blocks in the partition. While the approximation is carried out in a block wise manner, the relation introduced by the condition $\sum_{q=1}^{Q} k_q = K$ inhibits the complete approximation of each block at once.

Other pursuit strategies can also be adapted to this selection process. In particular, the HBW implementation of the Matching Pursuit (MP) method [12] differs from that of OMP in that, for each $q$ and $k_q$, the coefficients $c_{k_q,q}(n),\, n=1,\ldots,k_q$, in (2) are calculated simply as $c_{k_q,q}(n) = \langle \mathbf{D}_{\ell_n^q}, \mathbf{R}_{k_q-1,q}\rangle_F$. It is appropriate to highlight also the difference between the proposed approach and the Block OMP/MP (BOMP/BMP) approach introduced in [13]. While BOMP/BMP selects blocks of atoms at each iteration step, our proposal keeps selecting the atoms as in OMP/MP, i.e., one by one.
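To fix ideas, the following is a minimal sketch of the HBW-OMP iteration. It is an illustration written for this presentation, not the authors' MATLAB/C++ code: for brevity the orthogonal projection is computed by a least-squares solve in place of the recursive biorthogonalization of [11], and a separable 2D dictionary is assumed, so that all the Frobenius inner products for one block are obtained at once as $\mathbf{D}^{\top}\mathbf{R}\mathbf{D}$.

```python
import numpy as np

def hbw_omp(blocks, D, K):
    """Hierarchized block-wise OMP (sketch, not the authors' code).

    blocks : list of Q wavelet-domain blocks, each an Nb x Nb array
    D      : Nb x M matrix of normalized 1D atoms; the 2D dictionary is the
             separable tensor product, so <D_a (x) D_b, R>_F = (D.T @ R @ D)[a, b]
    K      : total number of atoms allowed for the whole image, sum_q k_q = K
    """
    Q = len(blocks)
    R = [b.astype(float).copy() for b in blocks]           # residuals R_{k_q, q}
    sel = [[] for _ in range(Q)]                            # selected (a, b) pairs per block
    coef = [np.empty(0)] * Q
    corr = [np.abs(D.T @ r @ D) for r in R]                 # |<atom, residual>| per block
    best = np.array([c.max() for c in corr])                # stored per-block maxima

    for _ in range(K):
        q = int(np.argmax(best))                            # hierarchized block selection, eq. (2)
        if best[q] <= 1e-12:                                # all residuals numerically zero
            break
        a, b = np.unravel_index(np.argmax(corr[q]), corr[q].shape)
        sel[q].append((a, b))
        # orthogonal projection onto the span of the selected (vectorized) atoms;
        # a least-squares solve stands in for the recursive biorthogonalization of [11]
        A = np.column_stack([np.outer(D[:, i], D[:, j]).ravel() for i, j in sel[q]])
        coef[q], *_ = np.linalg.lstsq(A, R[0].ravel() * 0 + blocks[q].ravel(), rcond=None)
        R[q] = blocks[q] - (A @ coef[q]).reshape(blocks[q].shape)
        corr[q] = np.abs(D.T @ R[q] @ D)                    # only the selected block changes
        best[q] = corr[q].max()
    return sel, coef
```

The HBW-MP variant would keep the same hierarchized selection but skip the projection, simply appending the coefficient $\langle \mathbf{D}_{\ell_n^q}, \mathbf{R}_{k_q-1,q}\rangle_F$ and subtracting that single term from the residual.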

IV. NUMERICAL TESTS

The dictionary used for the tests is a separable mixed dictionary consisting of two components, $\mathcal{D}_1$ and $\mathcal{D}_2$. The component $\mathcal{D}_1$ is a Redundant Discrete Cosine (RDC) dictionary given by

$$\mathcal{D}_1 = \left\{ w_i \cos\!\left(\frac{\pi(2j-1)(i-1)}{2M}\right),\; j=1,\ldots,N \right\}_{i=1}^{M},$$

with $w_i,\, i=1,\ldots,M$, normalization factors. The number $N$ is set equal to the number of pixels in one of the block sides, and $M = 2N$ so as to have redundancy two. $\mathcal{D}_2$ is the standard Euclidean basis, also called the Dirac basis (DB), i.e., $\mathcal{D}_2 = \{e_i(j) = \delta_{i,j},\; j=1,\ldots,N\}_{i=1}^{N}$.



Fig. 1. First row: Planet, Butterfly and Chest X-Ray images of sizes 512 × 512, 448 × 600, and 592 × 728 pixels, respectively. Second row: Flower, Spider Web and Artificial high resolution images from [18], all of size 1512 × 2264 pixels.

The required 2D dictionary is the tensor product $\mathcal{D} \otimes \mathcal{D}$, where $\mathcal{D} = \mathcal{D}_1 \cup \mathcal{D}_2$. This small dictionary, called RDCDB in [8], is chosen only for simplicity. Extending the RDCDB dictionary by adding wavelet atoms, for instance [14], or using other dictionaries, such as those arising from learning processes [15]–[17], would increase sparsity with all the approaches. However, the purpose of these numerical tests is only to illustrate the advantage of implementing matching pursuit strategies on partitions in the proposed HBW manner.

The set of six grey-level intensity images used for illustrating the proposed HBW strategy is shown in Fig. 1. The images in the first row, Planet, Butterfly and Chest X-Ray, are of sizes 512 × 512, 448 × 600 and 592 × 728 pixels. The images in the second row, Flower, Spider Web and Artificial, are high resolution images taken from [18]. The Spider Web and Artificial images are both only fractions of the much larger images in [18], which we have cropped to the same size as the Flower, i.e., 1512 × 2264 pixels. All the images of Fig. 1 are approximated, in both the intensity and wavelet domains, by partitions of 8 × 8 pixels. The transformed image is obtained using the CDF97 WT. The sparsity of an approximation is measured by the Sparsity Ratio (SR), defined as $\mathrm{SR} = \frac{N_x N_y}{K}$, with $N_x N_y$ the total number of pixels and $K$ the number of nonzero coefficients required to approximate the whole image.
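A sketch of how such a dictionary can be assembled is given below (again an illustration; the function name and layout are mine). The 2D atoms required by the block approximation are the tensor (outer) products of pairs of columns of the returned matrix.

```python
import numpy as np

def rdcdb_dictionary(N):
    """Build the 1D RDCDB dictionary for blocks of side N (sketch).

    Returns an N x (2N + N) matrix whose columns are the M = 2N redundant
    discrete cosine atoms followed by the N Dirac (standard basis) atoms.
    """
    M = 2 * N                                          # redundancy two
    j = np.arange(1, N + 1)[:, None]                   # component index j = 1..N
    i = np.arange(1, M + 1)[None, :]                   # atom index i = 1..M
    D1 = np.cos(np.pi * (2 * j - 1) * (i - 1) / (2 * M))
    D1 /= np.linalg.norm(D1, axis=0)                   # the normalization factors w_i
    D2 = np.eye(N)                                     # Dirac basis
    return np.hstack([D1, D2])

D = rdcdb_dictionary(8)          # 8 x 24, matching the 8 x 8 block partition
atom_2d = np.outer(D[:, 3], D[:, 20])   # example 2D atom: outer product of two columns
```

With this D, the block correlations required in (2) are obtained at once as np.abs(D.T @ R @ D), as in the HBW-OMP sketch of Sec. III.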

TABLE I
SR resulting from approximating (in the intensity domain) the six images of Fig. 1, up to PSNR = 45.0 dB, by the methods: MP, OMP, their corresponding HBW versions, WT and DCT.

I   MP     HBW-MP   OMP    HBW-OMP   WT      DCT
P   28.1   29.0     30.4   31.0      40.7    27.1
B   10.6   11.2     12.3   12.7       9.6    10.7
C   22.6   23.2     23.6   24.4      34.2    20.7
F   41.0   42.8     42.6   43.9     119.5    40.0
S   34.8   35.7     36.5   37.0      70.0    33.5
A   26.4   27.5     29.7   30.7      24.9    19.4


TABLE II
Same description as in Table I, but in this case MP, OMP and the corresponding HBW versions are implemented in the wavelet domain. The results of the WT and DCT are repeated here to facilitate the comparison.

I   MP     HBW-MP   OMP    HBW-OMP   WT      DCT
P   39.1   53.0     43.9   62.9      40.7    27.1
B   10.6   12.0     12.7   14.4       9.6    10.7
C   32.5   53.2     35.0   60.7      34.2    20.7
F   48.6  156.6     50.5  181.5     119.5    40.0
S   41.1   87.9     43.3   99.2      70.0    33.5
A   25.0   30.1     28.4   35.0      24.9    19.4

The approximation of all the images is carried out to achieve the required PSNR of 45.0 dB, using the RDCDB dictionary defined above and the greedy strategies OMP, MP and their corresponding HBW versions. The SRs arising when these techniques are implemented in the intensity domain are displayed in the first four columns of Table I. The rows, from top to bottom, correspond to the Planet (P), Butterfly (B), Chest (C), Flower (F), Spider Web (S) and Artificial (A) images (column I). For comparison purposes, the results produced by conventional WT and DCT approaches are displayed in the last two columns of the table. The WT results are obtained by applying the CDF97 WT to the whole image and reducing coefficients by iterative thresholding until the PSNR of 45.0 dB is reached. The results for the orthogonal DCT are obtained from a partition of 8 × 8 pixels, also by thresholding of coefficients.

From Table I it is clear that: a) except for the Butterfly and Artificial images, in the intensity domain the results produced by a conventional WT approach significantly outperform the other approaches; b) whilst the block independent OMP and MP methods produce smaller SRs than their corresponding HBW versions, the difference is not very significant. However, as can be observed in Table II, the situation is reversed when the greedy techniques are implemented with the RDCDB dictionary in the wavelet domain. In this domain: a) for all the images, the HBW versions of MP and OMP, with the simple RDCDB dictionary, significantly outperform the conventional WT and DCT approaches; b) except for the Butterfly and Artificial images, the difference in the SR produced by the HBW approaches with respect to the block independent versions is massive.

Putting aside the Butterfly and Artificial images, the common feature of the other images in Fig. 1 is the very significant difference in the SR produced by a conventional WT approximation with respect to the conventional DCT. This is an indication that the representation of these images is particularly sparse in the wavelet domain. It is clear from Table II that the sparser the approximation by the WT, the lower the performance of OMP with respect to the proposed HBW version.

Finally, some information with regard to processing times is relevant. The tests presented here were run on a laptop with a 2.2 GHz Intel processor and 3 GB of RAM. The processing times (averages of 10 independent runs in MATLAB) for approximating the first three images of Fig. 1 in the wavelet domain are: for the Planet, 2.04 secs with OMP and 2.78 secs with HBW-OMP; for the Butterfly, 7.13 and 11.56 secs; and for the Chest, 4.3 and 5.1 secs, respectively. Using a C++ MEX file for both approaches, the corresponding times are reduced by up to a factor of ten. Thus, the larger images (second row of Fig. 1) were approximated using MEX files. The running times corresponding to those images (left to right) are 5.71, 5.93 and 6.3 secs with OMP and 6.76, 9.7 and 21.9 secs with HBW-OMP.

In order to reduce processing time, the approximation of a large image can be realized by dividing the image into segments to be processed independently. Since each of the segments will contain different information, the sparsity of the segments may differ from one another. For comparison with the approximation of the whole image at once, it is necessary to make the sparsity of all the segments uniform. A possible way of achieving this is to randomize the position of the blocks in the whole image, to ensure that each segment contains blocks from different regions of the image. The implementation is carried out as follows: i) perform a random permutation of the small blocks in the transformed domain; ii) divide the resulting image into segments; iii) apply HBW-OMP to each segment; iv) place the blocks back in their original positions. Through this simple procedure, and taking 12 segments of 540 × 560 pixels each, the time to approximate the Artificial image is reduced to 6.1 secs while the PSNR does not change significantly (45.0 dB).

V. CONCLUSIONS

An approach for the HBW implementation of greedy methodologies operating on partitions has been proposed. The proposal was motivated by the convenience of approximating some images in the wavelet domain. Certainly, if an image is characterized by large smooth regions, for instance, blocking artifacts are likely to be noticeable when it is approximated in the intensity domain, even at high PSNR. Additionally, if an image is compressible in the wavelet domain, it can be expected to have a sparser approximation in that domain. This was illustrated through the OMP and MP methodologies.

The numerical examples were chosen to highlight the fact that, for images which are highly compressible in the wavelet domain, approximations by partitions in such a domain may achieve much higher sparsity, provided that the proposed HBW version of greedy pursuit strategies is applied.

Acknowledgements
We are sincerely grateful to the Referees for their constructive criticism, which has been of much help in presenting the proposal in its present form. Support from EPSRC, UK, is acknowledged. The MATLAB and C++ MEX files for the implementation of HBW-OMP with a separable dictionary (HBWOMP2D) are available at [19] (section Examples).

REFERENCES

[1] J. Mairal, M. Elad and G. Sapiro, "Sparse Representation for Color Image Restoration", IEEE Trans. Image Process., 17, 53–69 (2008).
[2] M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, Springer (2010).
[3] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. S. Huang, and S. Yan, "Sparse Representation for Computer Vision and Pattern Recognition", Proc. of the IEEE, 98, 1031–1044 (2010).
[4] J.-L. Starck, F. Murtagh and J. M. Fadili, Sparse Image and Signal Processing, Cambridge University Press (2010).
[5] Y. Yuan and D. M. Monro, "Improved matching pursuits image coding", ICASSP, 2, 201–204 (2005).
[6] K. Skretting and K. Engan, "Image compression using learned dictionaries by RLS-DLA and compared with K-SVD", ICASSP, 1517–1520 (2011).
[7] R. Maciol, Y. Yuan and I. T. Nabney, "Colour Image Coding with Matching Pursuit in the Spatio-frequency Domain", ICIAP, Lecture Notes in Computer Science, 6978, 306–317 (2011).
[8] J. Bowley and L. Rebollo-Neira, "Sparsity and 'Something Else': An Approach to Encrypted Image Folding", IEEE Signal Processing Letters, 8, 189–192 (2011).
[9] L. Rebollo-Neira, J. Bowley, A. Constantinides and A. Plastino, "Self contained encrypted image folding", Physica A, 391, 5858–5870 (2012).
[10] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition", Proc. of the 27th ACSSC, 1, 40–44 (1993).
[11] L. Rebollo-Neira and D. Lowe, "Optimized orthogonal matching pursuit approach", IEEE Signal Process. Letters, 9, 137–140 (2002).
[12] S. G. Mallat and Z. Zhang, "Matching Pursuits with Time-Frequency Dictionaries", IEEE Trans. Signal Process., 41, 3397–3415 (1993).
[13] Y. Eldar, P. Kuppinger and H. Bölcskei, "Block-Sparse Signals: Uncertainty Relations and Efficient Recovery", IEEE Trans. Signal Process., 58, 3042–3054 (2010).
[14] L. Rebollo-Neira and J. Bowley, "Sparse representation of astronomical images", Journal of the Optical Society of America A, 30, 758–768 (2013).
[15] M. Yaghoobi, T. Blumensath and M. E. Davies, "Dictionary learning for sparse approximations with the majorization method", IEEE Trans. Signal Process., 109, 1–14 (2009).
[16] K. Skretting and K. Engan, "Recursive Least Squares Dictionary Learning Algorithm", IEEE Trans. Signal Process., 58, 2121–2130 (2010).
[17] I. Tošić and P. Frossard, "Dictionary Learning", IEEE Signal Process. Magazine, 28, 27–38 (2011).
[18] http://www.imagecompression.info
[19] http://www.nonlinear-approx.info