IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 15, NO. 1, JANUARY 2006
A Range/Domain Approximation Error-Based Approach for Fractal Image Compression
Riccardo Distasi, Michele Nappi, and Daniel Riccio
Abstract—Fractals can be an effective approach for several applications other than image coding and transmission: database indexing, texture mapping, and even pattern recognition problems such as writer authentication. However, fractal-based algorithms are strongly asymmetric: while the decoding phase is computationally light, the coding process is much more time consuming. Many different solutions have been proposed for this problem, but there is not yet a standard for fractal coding. This paper proposes a method to reduce the complexity of the image coding phase by classifying the blocks according to an approximation error measure. It is formally shown that, by postponing range/domain comparisons and comparing both kinds of block against a preset block instead, it is possible to drastically reduce the number of operations needed to encode each range. The proposed method has been compared with three other fractal coding methods, showing under which circumstances it performs better in terms of bit rate and/or computing time.
Index Terms—Classification, feature vector, fractal image compression.
I. INTRODUCTION
DIGITAL libraries and medical systems for remote diagnosis are growing in popularity, and using the Internet is part of the daily routine for many people worldwide. Image compression techniques reduce both the memory space necessary for storage and the transmission time required for transferring images. Lately, the interest of researchers has focused on indexing [11] and compression techniques based on fractal theory [7], developing several post-processing strategies for this approach [14], [16]. In spite of the manifold advantages offered by fractal compression, such as high decompression speed, high compression ratio, and resolution independence, the greatest disadvantage is the high computational cost of the coding phase. At present, fractal coding cannot compete with other techniques (wavelets, etc.) if compression per se is involved. However, most of the interest in fractal coding is due to its side applications in fields such as image database indexing [11] and face recognition [13]. These applications both utilize some sort of coding, and they can reach a good discriminating power even in the absence of high PSNR from the coding module. The latter is often the computational bottleneck, so it is desirable to make it as efficient as possible. In the case of fractal coding, we should try to reduce the search time, because full exhaustive search is prohibitively costly.
Manuscript received May 15, 2003; revised November 16, 2004. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Charles D. "Chuck" Creusere. The authors are with the Dipartimento di Matematica e Informatica, Università di Salerno, 84084 Fisciano (SA), Italy (e-mail: [email protected]; [email protected]; [email protected]). Digital Object Identifier 10.1109/TIP.2005.860334
Many different solutions have been proposed for this problem [1], [3], [6], [9], [17], [18], for instance, modifying the partitioning process or providing new classification criteria or heuristic methods for the range/domain matching problem. All these approaches can be grouped in two classes: classification methods and feature vectors [5]. In this paper, we present a new approach, namely deferring range/domain comparison (DRDC), based on feature vectors. The main idea is to defer the comparisons between ranges and domains; rather, a preset block is used as a temporary replacement. The preset block is computed as the average of the ranges present in the image. The coding process is divided into two phases. In the first phase, where the domain codebook is created, all the domains are extracted from the image; then each of them is compared with the preset block by solving a mean square error problem. The preset block/domain approximation error is computed and stored in a KD-tree data structure. In the second phase, the ranges have to be encoded; each one of them is compared with the preset block, thus obtaining the preset block/range approximation error, in the same way as we did for domains. Using this data, we find the domains that are likely to encode the current range with the best accuracy. This criterion is formally justified by the lemma in Section II, which proves that a generic range block is accurately coded by domains with equal or similar approximation error. In this way, for each range we have to perform a much smaller number of range/domain comparisons, and the time spent for coding is significantly reduced. Two reasons make the approximation error convenient as a discriminating criterion.
•
It can be managed even when using a low-bitrate quantization of its components.
• It preserves a sufficient amount of information to be discriminating enough.
In order to show the effectiveness of DRDC, we compared it with Saupe's method [20] and two other recently published strategies: Mass Center [15] and Cardinal's algorithm [8]. The algorithms chosen for comparison have significant affinities with our method, since they also use feature vectors or domain classification in the coding process. The inputs of Cardinal's algorithm are both the range and the domain block sets. First, it computes feature vectors for ranges and domains; then it applies Saupe's operator [20]. Subsequently, the vector space is recursively partitioned until each partition contains a sufficiently small number of ranges and domains.
1057-7149/$20.00 © 2006 IEEE
The partitioning of a vector space into two subspaces is performed by means of a cutting hyperplane. Cardinal suggests four different ways to determine the cutting plane: heuristic, random, optimal, and by a KD-tree. Finally, range/domain comparisons are made inside each partition, storing the best transformation for each range. Owing to the partitioning process, this method reaches a reasonable PSNR value, improving it slowly as the time spent in the coding phase increases. The mass center features are based on the mass distribution within the blocks. Given a range block to be approximated, a good candidate domain for the best approximation is one whose shape is similar to that of the range. The shape of a block is determined by the distribution of pixel mass within the block, so that similar distributions correspond to similar shapes. Both DRDC and Saupe's method have been combined with the Mass Center (MC) technique [15], obtaining faster hybrids without a significant loss of PSNR. DRDC has been found to deliver a significant speed-up, alone or hybridized, and, in many cases, also an improvement of the PSNR figure. These performance aspects make DRDC a good candidate for incorporation in fractal-based face recognition and image indexing applications. The rest of the paper is organized as follows. In Section II, we present the DRDC strategy, while Section III describes the domain tree data structure. Finally, Section IV illustrates the experimental results.
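As a high-level sketch of the scheme just outlined (helper names are illustrative, a scalar error magnitude stands in for the error feature vector, and a sorted list stands in for the KD-tree described later), the two coding phases might look like:

```python
from bisect import bisect_left

def fit(block, target):
    """Least-squares fit target ~ s*block + o*ones; return (s, o, rms_error)."""
    n = len(block)
    mb = sum(block) / n
    mt = sum(target) / n
    var = sum((b - mb) ** 2 for b in block)
    s = sum((b - mb) * (t - mt) for b, t in zip(block, target)) / var if var else 0.0
    o = mt - s * mb
    err = sum((t - (s * b + o)) ** 2 for b, t in zip(block, target)) ** 0.5
    return s, o, err

def encode(ranges, domains):
    # Preset block: average of all range blocks, as in eq. (1).
    n = len(ranges[0])
    preset = [sum(r[i] for r in ranges) / len(ranges) for i in range(n)]
    # Phase 1: key every domain by its preset-block approximation error.
    keyed = sorted((fit(preset, d)[2], d) for d in domains)
    keys = [k for k, _ in keyed]
    # Phase 2: for each range, compare only domains with nearby error keys
    # (the paper descends a KD-tree instead of slicing a sorted list).
    code = []
    for r in ranges:
        key = fit(preset, r)[2]
        i = bisect_left(keys, key)
        candidates = [d for _, d in keyed[max(0, i - 2):i + 3]]
        best = min(candidates, key=lambda d: fit(d, r)[2])
        s, o, _ = fit(best, r)
        code.append((domains.index(best), s, o))
    return code
```

Each code entry is a (domain index, scaling, offset) triple; only the small candidate neighborhood is ever compared against the range, which is the source of the speed-up.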
Fig. 1. First phase of the DRDC process.
II. DRDC TECHNIQUE
In order to reduce the computational cost of exhaustive search while still preserving a good image quality, we scan the domains in the codebook to compute error vectors that will help us to choose the most promising candidate domains for encoding a given range. The fundamental idea of the proposed algorithm consists in deferring the comparisons between ranges and domains, utilizing as a temporary replacement a preset block against which both ranges and domains are compared. After computing the preset block $\bar{B}$ as the average of the $N$ range blocks

$\bar{B} = \frac{1}{N} \sum_{i=1}^{N} R_i$   (1)

the method performs two phases.
•
In the first phase, all domains are compared with $\bar{B}$, and the approximation error is computed and stored in a KD-tree, as shown in Fig. 1.
• In the second phase, all ranges to be encoded are compared with $\bar{B}$, and the approximation error is computed but not stored: it is used as a feature vector and serves as the search key for locating the best fitting domain for the given range. This is illustrated in Fig. 2.
To see how it is done, consider a function $\Delta : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$, where $B \in \mathbb{R}^n$ and $\hat{D} \in \mathbb{R}^n$ represents the domain after contraction, defined as follows:

$\Delta(B, \hat{D}) = B - (s\hat{D} + o\mathbf{1})$   (2)

where $B$, $\hat{D}$, and $\mathbf{1}$ are vectors of length $n$.
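The least-squares computation behind (2) can be sketched in pure Python (block layout and the function name are illustrative; blocks are flat lists of pixel values):

```python
def approx_error(block, ref):
    """Least-squares fit block ~ s*ref + o*ones and return (s, o, delta),
    where delta = block - (s*ref + o*ones) is the approximation error vector."""
    n = len(block)
    mr = sum(ref) / n
    mb = sum(block) / n
    var = sum((x - mr) ** 2 for x in ref)
    # Closed-form least-squares solution for scaling s and offset o.
    s = sum((x - mr) * (y - mb) for x, y in zip(ref, block)) / var if var else 0.0
    o = mb - s * mr
    delta = [y - (s * x + o) for x, y in zip(ref, block)]
    return s, o, delta
```

In the first phase `ref` would be the preset block and `block` a contracted domain; in the second phase `block` would be a range. A block that is an exact affine transform of `ref` yields a zero error vector.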
Fig. 2. Second phase of the DRDC process.
The values $s$ and $o$ can be easily calculated by solving a minimum square error problem, while the value of the function $\Delta$ is a real-component vector that represents the approximation error between $B$ and $\hat{D}$. From this point of view, the range/domain best matching problem can be reformulated
as follows: for each range $R$, find

$D^* = \arg\min_{D \in \mathcal{D}} E(R, D), \qquad E(R, D) = \min_{s, o} \left\| R - (sD + o\mathbf{1}) \right\|$   (3)

a quantity that must be computed for each $R$ and $D$. In order to reduce the time needed for this operation, range/domain comparisons are performed with respect to a preset block $\bar{B}$, so that in the following phases, for each range $R$, we consider only the domains $D$ whose approximation error $\Delta(D, \bar{B})$ is nearest to the approximation error $\Delta(R, \bar{B})$. In other words, for a fixed image block $\bar{B}$, this method performs two phases. The correctness of this process is formally justified by the following lemma.

Lemma 1: Let $D$ and $R$ be, respectively, a domain block and a range block, and let $\bar{B}$ be a preset block of the same dimension as $D$ and $R$. If $\Delta(R, \bar{B}) = \Delta(D, \bar{B})$, then $E(R, D) \le \left|1 - s_R/s_D\right| \cdot \left\|\Delta(R, \bar{B})\right\|$.

Proof: Expressing $R$ in terms of $\bar{B}$, we have

$R = s_R \bar{B} + o_R \mathbf{1} + \Delta(R, \bar{B})$   (4)

while, in terms of $\bar{B}$, we obtain for $D$

$D = s_D \bar{B} + o_D \mathbf{1} + \Delta(D, \bar{B})$   (5)

Comparing the right-hand sides of (4) and (5), we obtain

$\bar{B} = \frac{1}{s_D}\left( D - o_D \mathbf{1} - \Delta(D, \bar{B}) \right)$

and, therefore, we can write $R$ as follows:

$R = \frac{s_R}{s_D} D + \left( o_R - \frac{s_R}{s_D} o_D \right)\mathbf{1} + \Delta(R, \bar{B}) - \frac{s_R}{s_D}\Delta(D, \bar{B})$   (6)

Now, we consider the best approximation of $R$ by $D$, obtained through the solution of a mean square error problem such as

$E(R, D) = \min_{s, o} \left\| R - (sD + o\mathbf{1}) \right\|$   (7)

with the particular choice

$s = \frac{s_R}{s_D}, \qquad o = o_R - \frac{s_R}{s_D} o_D$   (8)

Since $E(R, D)$ is by definition the minimum error in a least-squares sense, it must be true that

$E(R, D) \le \left\| \Delta(R, \bar{B}) - \frac{s_R}{s_D}\Delta(D, \bar{B}) \right\|$   (9)

Last, since $\Delta(R, \bar{B}) = \Delta(D, \bar{B})$, by (9) this implies

$E(R, D) \le \left| 1 - \frac{s_R}{s_D} \right| \cdot \left\| \Delta(R, \bar{B}) \right\|$   (10)

TABLE I
PERCENTAGE OF ITEMS FALLING IN THE RANGE [−16, 16] FOR LENA, ZELDA, AND GOLDHILL
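The bound (10) can be verified numerically: building $R$ and $D$ that share the same error vector with respect to $\bar{B}$, the optimal range/domain error never exceeds $|1 - s_R/s_D| \cdot \|\Delta\|$ (all block values below are arbitrary illustrative numbers):

```python
def lsq_error(block, ref):
    """Minimum of ||block - (s*ref + o*ones)|| over s, o (Euclidean norm)."""
    n = len(block)
    mr, mb = sum(ref) / n, sum(block) / n
    var = sum((x - mr) ** 2 for x in ref)
    s = sum((x - mr) * (y - mb) for x, y in zip(ref, block)) / var if var else 0.0
    o = mb - s * mr
    return sum((y - (s * x + o)) ** 2 for x, y in zip(ref, block)) ** 0.5

preset = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]       # the preset block B-bar
delta = [0.5, -0.3, 0.2, -0.1, 0.4, -0.7]     # the shared error vector
sR, oR, sD, oD = 1.2, 10.0, 0.8, -2.0
# R and D as in (4) and (5), with identical error term.
R = [sR * b + oR + e for b, e in zip(preset, delta)]
D = [sD * b + oD + e for b, e in zip(preset, delta)]
bound = abs(1 - sR / sD) * sum(e * e for e in delta) ** 0.5
```

With these values, `lsq_error(R, D)` stays below `bound`, as (10) predicts.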
III. THE FUNCTION $\Delta^*$ AND THE DOMAIN TREE DATA STRUCTURE
Because of the data structure used, it is convenient to deal with integer rather than real component vectors. We then define $\Delta^*$, namely a slightly different version of the function $\Delta$, with

$\Delta^*(B, \hat{D}) = \varphi\left(\Delta(B, \hat{D})\right)$ (applied componentwise)   (11)

where $\varphi$ is defined as follows:

$\varphi(x) = \begin{cases} 0 & \text{if } x < -\delta \\ \left\lfloor (x + \delta)\,\dfrac{2^m - 1}{2\delta} + \dfrac{1}{2} \right\rfloor & \text{if } -\delta \le x \le \delta \\ 2^m - 1 & \text{if } x > \delta \end{cases}$   (12)
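A direct transcription of (12), with $\delta = 16$ and $m = 5$ bits as used in the sequel (the function name and the exact rounding rule are illustrative):

```python
def quantize(x, delta=16.0, m=5):
    """Clamp-and-quantize one error component into {0, ..., 2**m - 1}."""
    top = (1 << m) - 1          # 31 for m = 5
    if x < -delta:
        return 0                # below the interval: lowest code
    if x > delta:
        return top              # above the interval: highest code
    # Inside [-delta, delta]: uniform quantization over 2**m levels.
    return int((x + delta) * top / (2 * delta) + 0.5)
```

For example, `quantize(-20.0)` and `quantize(20.0)` saturate at 0 and 31, while values inside the interval spread over the whole 5-bit codomain.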
The function $\varphi$ maps all real values external to the interval $[-\delta, \delta]$ into the values $0$ or $2^m - 1$, depending on whether they are less than $-\delta$ or greater than $\delta$, while all values internal to $[-\delta, \delta]$ are mapped into the set $\{0, 1, \ldots, 2^m - 1\}$. The value of $\delta$ depends on the kind of data considered, because the nature of the data influences the behavior of the function $\Delta$. However, when the set of values falling outside $[-\delta, \delta]$ has many elements, the efficiency of the algorithm degrades considerably, since those values are all mapped on the two extremes $0$ and $2^m - 1$ of the discrete interval; therefore, much information about the morphological features of the blocks is lost. The solution for this problem consists in a suitable choice of the preset block $\bar{B}$. Although we could choose it arbitrarily, we have statistically observed that, selecting the mean range block (1), the function $\Delta$ assumes, with high probability, values internal to the interval $[-16, 16]$. This is shown in Table I. Then, by quantizing to 5 bits, the codomain of the function can be mapped into the set $\{0, \ldots, 31\}$ without an excessive loss of information. Therefore, even if it requires more operations, the use of the preset block $\bar{B}$ is beneficial from the point of view of decoded image quality. This makes sense because, for the way in which it is calculated, the preset block minimizes the differences with all the other image blocks.

A. Domain Tree: A Data Structure for Feature Vectors
On the basis of Lemma 1 in Section II, the range/domain matching problem can be reformulated in terms of exact correspondence between the range's and the domain's feature vectors. However, since in the first phase of the coding process we need to store the feature vectors of all domains, we must select a data structure that can manage this information efficiently [4].

Fig. 4. Configuration of the nodes of the domain tree in the worst case.
Fig. 3. Graphical representation of the bit shuffle of the error features.

In fact, considering the domain feature vector set $F = \{\Delta^*(D_1, \bar{B}), \ldots, \Delta^*(D_n, \bar{B})\}$, for each range block we need to carry out $n$ comparisons of $k$-component vectors, which, for great values of $n$ and/or $k$, is an inconvenient solution. However, by modifying the feature vector structure and allowing an acceptable loss of precision, we can solve the problem in a more effective way. Given a generic feature vector $v = (v_1, \ldots, v_k)$ computed with respect to the preset block, each component $v_i$ is a nonnegative integer value in the interval $[0, 2^m - 1]$; then the corresponding binary representation requires exactly $m$ bits. Given the following representation of the feature vector:
$v = \begin{pmatrix} b_{1,1} & b_{1,2} & \cdots & b_{1,m} \\ b_{2,1} & b_{2,2} & \cdots & b_{2,m} \\ \vdots & \vdots & \ddots & \vdots \\ b_{k,1} & b_{k,2} & \cdots & b_{k,m} \end{pmatrix}$

where $b_{i,j}$ is the $j$th most significant bit of component $v_i$, we carry out a row shuffle in the matrix, obtaining a new vector

$\tilde{v} = \left( b_{1,1}, b_{2,1}, \ldots, b_{k,1}, b_{1,2}, \ldots, b_{k,m} \right)$   (13)

This process can be seen in Fig. 3. The advantage of this new representation is the following: given any two vectors, a large difference between their prefixes implies a meaningful overall difference, since the most significant bits have greater weight in comparisons. After their calculation through the function $\Delta^*$ and after the bit-shuffle operation, the feature vectors are memorized in the domain tree. This is a binary tree in which every node consists of two references, one to the left child and one to the right. Insertion, search, and nearest-neighbor search operations are defined on it. The domain tree is structured so that every bit of the feature vector
corresponds to one tree level, from the most significant bit in the root to the least significant one in the leaves. It is clear, therefore, that feature vectors with the same prefix correspond to the same path from level 0 downwards. The domain tree has several interesting properties. Specifically, we consider the number of nodes in the worst, best, and average case, to show the effective feasibility of the data structure. In the following, we refer to the memory needed by DRDC when using a binary tree, while in Section IV we consider a more powerful approach based on KD-trees.
•
Worst Case: We determine the number of nodes in the tree by induction. If there is only one feature vector in the tree, then there is only one path, and it is as long as the height of the tree. Inserting another feature vector in the structure, since the first level (the root) of the tree is full, the two paths must share at least the root. In a generalized manner, the worst case with $n$ feature vectors occurs when the first $\lfloor \log_2 n \rfloor$ levels are full and all the underlying paths are disjoint (Fig. 4) [4]. In the common case, this corresponds to 24.20 MB of RAM.
• Best Case: The best case occurs when all the paths of the feature vectors share their upper nodes and are disjoint only in the lowest part of the tree (Fig. 5) [4]. In this case, the node count corresponds to 768 KB of RAM.
• Average Case: From the practical application of the data structure, we have estimated that the RAM required is 12-13 MB.
Other properties of the domain tree are:
• all the leaves are on the same level;
• elimination of copies: when there are several domain blocks with the same feature vector, it is sufficient to insert the feature vector only the first time;
• feature vector reconstruction.
Moreover, we have observed that feature vectors falling inside leaves that are close are necessarily close according to the Euclidean norm; however, the fact that two feature vectors are
Fig. 6. Comparison with respect to time/PSNR between the KD-tree-based DRDC and MC-DRDC methods and the domain tree-based version of the DRDC strategy, on the image Zelda 512 × 512, 8 bits.
Fig. 5. Configuration of the nodes of the domain tree in the best case.
in distant leaves is not sufficient to guarantee that they are distant according to the Euclidean norm. In any case, the nature of the data and the large number of domain blocks in the structure make this a negligible issue.

IV. EXPERIMENTAL RESULTS
The performance of the algorithm has been assessed from different points of view. The results reported include the bit rate reached, the PSNR obtained, the number of range/domain comparisons carried out, and the memory usage. The bit rates have been calculated as the total size in bits of the fractal code divided by the number of pixels of the image. Because of the partial reversibility of the coding process, the fractal compression of the image adds noise to the original signal. Less added noise means greater image quality and, therefore, a better algorithm. Noise is usually measured by the peak signal-to-noise ratio (PSNR), which, in decibels, can be computed as follows:

$\mathrm{PSNR} = 10 \log_{10} \frac{255^2 \, W H}{\sum_{x, y} \left[ f(x, y) - g(x, y) \right]^2}$

where $W$ and $H$ are the image width and height, 255 is the maximum pixel value, $f(x, y)$ is the pixel value in the original image, and $g(x, y)$ is the corresponding pixel in the decoded image. The number of range/domain comparisons is merely the average number of domains to be examined in order to obtain a good encoding for a given range block. We partition the original image in 16 × 16, 8 × 8, and 4 × 4 pixel blocks using a quadtree, while the size of the approximation error vector is always 4 × 4, obtained by averaging pixels of the original vector, even if there are more effective ways, as shown in [2], [19]. Considering a 512 × 512 image and using standard quantization, when all the ranges are 4 × 4 we have a lower bound of about 1.56 bpp, while in the case of 16 × 16 ranges the lower bound is about 0.1 bpp.

In order to assess the performance of DRDC, we compared it with the algorithms introduced in Section I, using several 8-bpp 512 × 512 sample images, including Lena, Zelda, and Goldhill, drawn from the BragZone image collection [12]. The experiments were aimed at assessing the speed-up provided by the DRDC method, while making sure that memory usage remained at acceptable levels. Overall, it can be said that the speed-up is consistent over a wide range of circumstances. As for memory usage, in the worst case the domain tree is equivalent to the classic KD-tree, while in the best case it is more conservative. DRDC, in its pure form and hybridized with MC, has been compared with well-known existing similar techniques, such as several variants of Cardinal's and Saupe's algorithms.

A. Memory Usage
A comparison among the KD-tree-based DRDC and MC-DRDC methods and the domain tree-based DRDC method is shown in Fig. 6 with respect to the time/PSNR ratio on the 512 × 512 image Zelda. It is evident that the former perform better than the latter in terms of time and PSNR. However, one of the difficulties with fractal coding is that its faster implementations tend to be a little memory-hungry. Therefore, it is interesting to consider the methods under exam from the point of view of memory usage, showing in what circumstances the domain tree results in memory savings with respect to the other spatial access methods. As stated in [10], representing $n$ records in a KD-tree requires $n$ nodes. Since each node contains exactly $k$ keys as well as two pointers, it is easy to deduce that the memory requirement for
Fig. 8. Comparison on Goldhill 512 × 512, 8 bits, with compression ratio fixed at 8.53.
Fig. 7. Lena image decoded at different compression ratios: (a) original image, (b) compression ratio of 6.98 with a PSNR of 37.29, (c) compression ratio of 16.70 with a PSNR of 33.95, and (d) compression ratio of 34.03 with a PSNR of 30.43.
this data structure is of the order of $n$ nodes. Therefore, if the vectors that describe the domains have size $k$, the space required is $O(k \cdot |\mathcal{D}|)$, where $\mathcal{D}$ is the domain pool. A similar line of reasoning can be applied to Cardinal's algorithm. However, the KD-tree variant uses the tree to store not only the domains' characteristic vectors, but also the ranges'. As a result, the memory required jumps up to $O(k \cdot (|\mathcal{D}| + |\mathcal{R}|))$, where $\mathcal{R}$ is the set of ranges. As for the three remaining cases, consider a generic data structure used for memorizing the characteristic vectors of both ranges and domains. Initially, the structure will contain data about the whole vector space. As the space gets partitioned, each block will be represented in one partition only; therefore, the total occupation will not change, remaining at $O(k \cdot (|\mathcal{D}| + |\mathcal{R}|))$. Finally, the Mass Center algorithm only considers two features related to a block's mass center [15]. The values for these features are quantized in such a way as to obtain a fixed number of classes. Since there are only two features, it is possible to use a random access data structure, namely, a two-dimensional array with one element per class. Each element is a pointer to the list of all the domains contained in the corresponding class. Therefore, the overall memory use is the sum of a term accounting for the array and a term accounting for the linked lists. It should be noted that, even if it appears to be the most convenient memory-wise, the Mass Center method only uses two features. Therefore, its classification effectiveness is somewhat reduced, and the resulting performance in terms of PSNR is not as good as that obtained by other methods that use longer feature vectors (DRDC and Saupe, in particular). As for a KD-tree versus domain tree [4] comparison, in the worst case the asymptotic memory occupation is the same for both, but the hidden constants are smaller for KD-trees. For an image with $n$ domains and error vectors of length $k$, for a domain tree of $k \cdot m$ levels, in the worst case, we have
a comparable figure. On the other hand, in the best case scenario it is the opposite: here the domain tree has a smaller memory occupation than that required by the corresponding KD-tree. Finally, Fig. 7 shows some Lena images decoded at different bit rates using KD-tree-based DRDC. By looking at the pictures, it is possible to see that the perceived distortion is rather low even at high compression ratios. In the following experimental results we have adopted the KD-tree-based DRDC method.

B. Cardinal's Algorithm
A comparison with the four variants of Cardinal's algorithm (heuristic, random, optimal, and KD-tree), as shown in Figs. 8 and 10, shows the particular behavior of DRDC: although it gets a good transformation set very quickly, an increase in the number of range/domain comparisons does not imply a proportional increase of the PSNR, which tends to stabilize near an asymptotic value. This happens because most domains with similar approximation errors fall in nearby locations in the KD-tree, where the algorithm carries out the search. For this reason, selecting a domain block and subsequently all its neighbors, the proposed algorithm gets domains that are quite similar, causing a stabilization of the PSNR. Furthermore, while Cardinal's algorithm starts with a small PSNR value, improving it as the time spent in coding grows, Fig. 10 shows how approximation error-based methods get a better PSNR immediately and tend to retain it as the bit rate increases. In fact, the experimental results show a gap of about 2 dB for high bit rate values between our algorithm and all four variants of Cardinal's algorithm on the image Zelda, used for the comparison illustrated in Fig. 9. A possible explanation for this phenomenon is that, in order to reduce the number of range/domain comparisons, Cardinal's algorithm carries out the search only inside the range's partition, even if the best approximating domains for some ranges are not necessarily found there.
On the contrary, in the proposed
Fig. 9. PSNR/compression ratio comparison on Zelda 512 × 512, 8 bits, with compression time forced to be constant.
Fig. 11. Time/PSNR comparison on Lena.
Fig. 12. Average time/PSNR comparison with four variants of Cardinal’s algorithm, on 12 different standard images from the Waterloo BragZone, 8 bits, with compression ratio fixed at 10.01.
method the search for the best matching domain is carried out in the whole codebook, at least potentially, so that any domain can be selected. Examining Figs. 12 and 13, it can be seen that DRDC, both in pure form and combined with the Mass Center technique, obtains better performance than Cardinal's algorithm in terms of both speed and bitrate/PSNR, over the whole experimental set of images. Another point is that, while pure DRDC gives the best bitrate/PSNR curve, it is somewhat slower than the MC-DRDC hybrid, which instead gives the best speed-up.
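All PSNR figures quoted in these comparisons follow the definition given at the beginning of Section IV; a minimal implementation over 8-bit grayscale images stored as flat pixel lists (the function name is illustrative):

```python
import math

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between two equal-size pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")     # identical images: no added noise
    return 10.0 * math.log10(peak * peak / mse)
```

A uniform error of one gray level, for example, gives 10·log10(255²) ≈ 48.13 dB, while the coders compared here operate in roughly the 28-38 dB band.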
Fig. 10. Comparison on Zelda 512 × 512, 8 bits, with compression ratio fixed at 12.51.

C. Mass Center and Saupe
The results shown in Fig. 11 illustrate that although the MC method is faster, it reaches lower PSNR values. This is because it uses only two features for range and domain classification. By doing so, it can utilize a direct access structure for information storage, which results in a data management technique faster than the tree-based nearest-neighbor search used by the Saupe and DRDC strategies. However, it is obvious that only two features are less discriminating, so that tree-based methods perform better than MC. Furthermore, looking at Fig. 11, the difference in behavior between the proposed method and Saupe's strategy becomes evident: the former is faster, but eventually converges to the same PSNR values as Saupe's method, even if slowly. Identical observations can be made for the hybrids MC-DRDC and MC-Saupe, noticing that these are a good compromise between MC and DRDC/Saupe in terms of the time/PSNR ratio. Fig. 14 was drawn by averaging the results on the BragZone image collection [12]. Indeed, there is little variation from image to image, so the qualitative behavior emerging from this average is consistent with the data from tests on individual images: pure and hybrid DRDC are the fastest alternatives for PSNR values of about 28.5 dB or lower. These relatively low PSNR values are sufficient for most of the already mentioned applications, such as image indexing and face recognition. The main difference between pure and hybrid DRDC is that the former yields somewhat higher PSNR values, while the hybrid version trades a few tenths of a decibel of PSNR for much faster operation (about twice as fast).

Fig. 13. Average PSNR/compression ratio comparison with four variants of Cardinal's algorithm, on 12 different standard images from the Waterloo BragZone, with compression time forced to be constant.
Fig. 14. Average time/PSNR comparison on 12 different standard images from the Waterloo BragZone.

V. CONCLUSION
In this paper, we proposed a new classification method for fractal image compression called DRDC. It is based on the approximation error, which is computed by deferring range/domain comparisons with respect to a preset block. We also described two different versions of the DRDC method: one based on the domain tree and the other implemented with the KD-tree data structure, showing the differences in terms of time/PSNR ratio and memory usage. Especially for KD-tree-based DRDC, the experimental results have shown a significant reduction of the number of operations required to generate a good fractal code for a given image and, consequently, a performance improvement of the coding process. Indeed, for applications where speed is more important than extreme accuracy, it could be said that hybrid DRDC is the candidate of choice. In future work, we will investigate the behavior of the KD-tree-based DRDC method when a dimensionality reduction strategy is applied on the error vector space, to speed up the coding phase even further.

REFERENCES
[1] B. Bani-Eqbal, "Speeding up fractal image compression," in Proc. IS&T/SPIE Symp. Electronic Imaging: Science and Technology, vol. 2418, San Jose, CA, Sep. 1995, pp. 67-74.
[2] C. Aggarwal, "On the effects of dimensionality reduction on high dimensional search," in Proc. ACM PODS Conf., 2001, pp. 1-11.
[3] C. Popescu, A. Dimca, and H. Yan, "A nonlinear model for fractal image coding," IEEE Trans. Image Process., vol. 6, no. 3, pp. 373-382, Mar. 1997.
[4] D. Riccio and M. Nappi, "Defering range/domain comparison in fractal image compression," presented at the Int. Conf. Image Analysis and Processing, Mantova, Italy, Sep. 2003.
[5] D. Saupe and R. Hamzaoui, "Complexity reduction methods for fractal image compression," presented at the I.M.A. Conf. Image Processing: Mathematical Methods and Applications, Sep. 1994, pp. 1-24.
[6] H. Hartenstein, M. Ruhl, and D. Saupe, "Region-based fractal image compression," IEEE Trans. Image Process., vol. 9, no. 7, pp. 1171-1184, Jul. 2000.
[7] H. Hartenstein, M. Ruhl, D. Saupe, and E. R. Vrscay, "On the inverse problem of fractal compression," in Ergodic Theory, Analysis, and Efficient Simulation of Dynamical Systems, B. Fiedler, Ed. New York: Springer-Verlag, 2001.
[8] J. Cardinal, "Fast fractal compression of greyscale images," IEEE Trans. Image Process., vol. 10, no. 1, pp. 159-163, Jan. 2001.
[9] J. F. L. De Oliveira, G. V. Mendoça, and R. J. Dias, "A modified fractal transformation to improve the quality of fractal coded images," in Proc. IEEE Int. Conf. Image Processing, Oct. 1998, pp. 4-7.
[10] J. L. Bentley, "Multidimensional binary search trees used for associative searching," Commun. ACM, vol. 18, no. 9, pp. 509-517, Sep. 1975.
[11] R. Distasi, M. Nappi, and M. Tucci, "FIRE: Fractal Indexing with Robust Extensions for image databases," IEEE Trans. Image Process., vol. 12, no. 3, pp. 373-384, Mar. 2003.
[12] J. Kominek, Waterloo BragZone and Fractals Repository (2004, Feb. 3). [Online]. Available: http://links.uwaterloo.ca/bragzone.base.html
[13] H. E. Komleh, V. Chandran, and S. Sridharan, "Face recognition using fractal," in Proc. Int. Conf. Image Processing, vol. 3, Oct. 2001, pp. 58-61.
[14] K. G. Nguyen and D. Saupe, "Adaptive post-processing for fractal image compression," in Proc. IEEE Int. Conf. Image Processing, Sep. 2000.
[15] M. Polvere and M. Nappi, "Speed-up in fractal image coding: comparison of methods," IEEE Trans. Image Process., vol. 9, no. 6, pp. 1002-1008, Jun. 2000.
[16] R. Hamzaoui, H. Hartenstein, and D. Saupe, "Local iterative improvement of fractal image codes," Image Vis. Comput., vol. 18, no. 6/7, pp. 565-568, 2000.
[17] R. Hamzaoui, D. Saupe, and M. Hiller, "Distortion minimization with fast local search for fractal image compression," J. Vis. Commun. Image Represent., vol. 12, pp. 450-468, Dec. 2001.
[18] R. Hamzaoui, D. Saupe, and M. Hiller, "Fast code enhancement with local search for fractal image compression," in Proc. IEEE Int. Conf. Image Processing, vol. 2, 2000, pp. 156-159.
[19] S. Roberts and E. Richard, Independent Component Analysis: Principles and Practice. Cambridge, U.K.: Cambridge Univ. Press, 2000.
[20] Y. Fisher, Fractal Image Compression: Theory and Application. New York: Springer-Verlag, 1994.
Riccardo Distasi was born in Naples, Italy. He received the Laurea degree and the Ph.D. degree in computer science from the University of Salerno, Salerno, Italy, in 1992 and 2001, respectively. He has been a grantee with the Italian National Research Council since 1994. He was an Assistant Professor of computer science at the Second University of Naples for two years. He is currently an Assistant Professor with the Department of Mathematics and Computer Science, University of Salerno. His research interests include signal processing, image indexing/compression, and multimedia databases.
Michele Nappi was born in Naples, Italy, in 1965. He received the Laurea degree (cum laude) in computer science from the University of Salerno, Salerno, Italy, in 1991, the M.Sc. degree in information and communication technology from I.I.A.S.S., E.R. Caianiello, Vietri sul Mare, Salerno, and the Ph.D. degree in applied mathematics and computer science from the University of Padova, Padova, Italy. He is currently an Assistant Professor of computer science at the University of Salerno. His research interests include pattern recognition, image processing, compression and indexing, multimedia databases, and visual languages. Dr. Nappi is a member of IAPR.
Daniel Riccio was born in Cambridge, U.K., in 1978. He received the Laurea degree (cum laude) in computer science from the University of Salerno, Salerno, Italy, in 2002. He is currently pursuing the Ph.D. degree at the University of Salerno. His research interests include biometrics, fractal image compression, and indexing. Mr. Riccio has been a member of GIRPR since 2004.