Pattern Recognition Letters 25 (2004) 1787–1797 www.elsevier.com/locate/patrec
An adjustable algorithm for color quantization

Zhou Bing a,*, Shen Junyi b, Peng Qinke c

a Department of Computer Science, Northeastern University at QinHuangDao, 066004 QinHuangDao, China
b Institute of Computer Software, Xi'an Jiaotong University, 710049 Xi'an, China
c Institute of System Engineering, Xi'an Jiaotong University, 710049 Xi'an, China

Received 1 October 2003; received in revised form 4 June 2004
Available online 28 September 2004
Abstract

Color quantization is an important technique in digital image processing. Generally it involves two steps: the first is to choose a proper color palette; the second is to reconstruct the image by replacing the original colors with the most similar palette colors. A problem arises when choosing the palette colors: how to retain both the colors with different illumination intensities (which we call color layers) and the colors that present the essential details of the image. This is an important and difficult problem. In this paper, we propose a novel algorithm for color quantization which considers both color layers and essential details by assigning weights to pixel numbers and color distances. The algorithm can also tune the quantization results by choosing proper weights. Experiments show that our algorithm is effective for adjusting quantization results and also yields very good quantization quality.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Color quantization; Cluster feature; Octree; Digital image processing; Weighted product
* Corresponding author. E-mail address: [email protected] (Z. Bing).

1. Introduction

Color quantization is one of the basic digital image processing techniques. Its task is: (1) based upon image processing requirements, choose K colors from a true-color image to build a proper palette, and (2) relying on the characteristics of the human vision system (HVS), reconstruct the image using the palette so that it is as close to the original image as possible and gives the best visual effect. In general, the color quantization process has two steps. The first step is to select an appropriate palette, and the second step is to obtain a reconstructed image by replacing the original color elements with the palette color elements. The quality of the reconstructed image mainly depends on the first step. Color quantization needs to consider several factors, such as the least distortion, the
0167-8655/$ - see front matter © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.patrec.2004.07.005
Fig. 1. An example of a true-color image.
algorithm complexity, the characteristics of the HVS, and so on. So far, for the choice of the color palette, there is no satisfactory solution to the problem of how to preserve the color layers and certain essential details at the same time. For example, in Fig. 1 there are many green leaves and some small pink flowers. The green leaves have different illumination intensities and form several color layers. The pink flowers are very different from the green leaves; they are the essential details of the image. We can observe that color layers are generally manifested by similar colors with different illumination intensities. Those similar colors have large pixel numbers, which makes them easy to distinguish. The essential details are represented by colors with small pixel numbers that are sharply different from the other colors, which makes them very noticeable. From the above description, we find a contradiction: similar colors with large numbers of pixels represent the color layers, while sharply different colors representing essential details have only small numbers of pixels. Therefore, if the palette is chosen according to pixel numbers, the color layers can be retained but certain essential details may be lost. If, however, the palette is chosen according to the magnitude of color differences, the essential details are retained but the color layers will be lost, because similar colors are merged. Hence, it is difficult to balance the color layers and the essential details. Moreover, one user may want the beautiful color layers preserved, while another wishes to keep the important essential details. These different requirements call for adjustability in a quantization algorithm. In this paper, we propose a novel algorithm for color quantization. The algorithm considers the requirements of both color layers and essential details together, and can tune their weights to obtain an optimized color palette according to the requirements of different quantization tasks.

This paper has five sections. Section 2 introduces some common color quantization algorithms. In Section 3, we describe our new algorithm. Section 4 shows the experiments. The last section gives our concluding remarks.
1 For interpretation of colors in the figures, the reader is referred to the web version of this article.

2. Related work
Some earlier color quantization methods, such as uniform quantization (Heckbert, 1982), the popularity algorithm (Heckbert, 1982), the median-cut algorithm (Heckbert, 1982), center-cut (Joy and Xiang, 1993), the octree algorithm (Gervautz and Purgathofer, 1988), and clustering-based methods, focus on selecting an appropriate palette. A brief introduction to these algorithms is given below.

Uniform quantization divides the color space into subspaces directly and chooses a group of colors with evenly distributed red, green, and blue components. For example, when the color number K is 256, and considering the sensitivity of human eyes to different colors, uniform quantization divides the red and green components into eight levels each (three binary splits) and the blue component into four levels (two splits), which gives 8 * 8 * 4 = 256 colors. Uniform quantization builds a palette that has no relationship with the colors of the original image. This method is simple and fast. However, since a real image does not contain all of the evenly distributed colors, the reconstructed image usually differs greatly from the original image.

The popularity algorithm simply chooses the K colors with the highest frequencies in the histogram as the color palette. This method chooses a different palette for each image, so the quality of the reconstructed images is improved. However, its time and space complexity is high, and some colors are discarded because of their low frequencies, so the essential details cannot be preserved properly.

The key idea behind the median-cut algorithm is to divide the color space of the original image into K boxes, each containing an equal number of pixels, and to use the average color of each box as one of the K palette colors. Colors with high pixel numbers can occupy a box of their own, so these colors obtain good representative colors. This is not true for colors with low pixel numbers: when several such colors are grouped into one box, they cannot obtain good representative colors. Therefore, median cut preserves the color layers very well but loses some essential details.

The center-cut algorithm, like median cut, is also a partitioning algorithm. It repeatedly splits the color set whose bounding box has the longest side until K sets are generated, and uses the centers of the K sets as palette colors. In this algorithm, the splitting positions are always close to the colors with more pixels and far away from the colors with fewer pixels, so it also cannot preserve the essential details very well.

The octree method is an agglomerative method based on a predetermined subdivision of the RGB color space into levels of octants. It scans the pixels sequentially and uses the first K distinct colors as initial colors. While the number of colors is larger than K, it merges the color with the lowest frequency into the closest color. This method may lose some colors because of their low frequencies, and thus distorts the essential details.

Clustering-based algorithms, such as the K-means algorithm, the maximum-distance clustering algorithm, and the LBG algorithm, extract the quantized colors via various clustering procedures. They usually use the minimal distance as the metric for cluster merging. Since close colors are merged, the color layers are discarded.

Generally speaking, the above algorithms cannot preserve both color layers and essential details very well, since they do not consider color layers and essential details together. In this paper, we propose a novel color quantization algorithm. Our method begins with N (N > K) initial colors obtained from a clustering-based algorithm, and then selects the color with the most pixels as the base-color. For each of the other colors, the weighted product of its pixel number and its distance to the base-color is computed. Next, the products are sorted in descending order, and the first K − 1 colors together with the base-color are chosen as the initial palette. Finally, the remaining N − K colors are merged with the closest colors in the initial palette to produce the final palette. This algorithm is described in detail in Section 3.
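As a concrete illustration of the frequency-based family discussed above, the popularity algorithm can be sketched in a few lines. This is an illustrative sketch only, assuming the image is available as a flat list of RGB tuples; the function and variable names are our own:

```python
from collections import Counter

def popularity_palette(pixels, k):
    """Choose the k most frequent colors of the image as the palette.

    pixels: iterable of (r, g, b) tuples; k: desired palette size.
    Rare colors are discarded, which is why essential details can be lost.
    """
    histogram = Counter(pixels)
    return [color for color, _count in histogram.most_common(k)]

# A tiny "image" dominated by two greens plus one rare pink pixel.
pixels = [(50, 120, 60)] * 6 + [(60, 140, 70)] * 3 + [(230, 110, 150)]
print(popularity_palette(pixels, 2))  # [(50, 120, 60), (60, 140, 70)]
```

With k = 2 the single pink pixel is dropped, illustrating how frequency-based selection sacrifices essential details.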
Recently, some new algorithms have made progress in speed and visual effect, but their basic ideas are similar to those of the traditional algorithms. Zhao and Wang (2000) is an improvement of the K-means algorithm. Cheng and Yang (2001) is still based on color distance, but it can group the colors quickly by adopting a distance-projection method. The method proposed in (Atsalakis et al., 2002) divides an image into small windows and quantizes the major colors of these windows. In (Li et al., 2003), the authors propose an algorithm very different from the above: it uses edge detection to find the color clusters, and a proper color number K is then decided. However, this algorithm needs three parameters to guide the procedure of finding the color number, and these parameters, compared to the color number K, are complex and indirect.

Some other algorithms concentrate on obtaining the best visual effect for the reconstructed images (Kolpatzik and Bouman, 1995; Akarun et al., 1996; Ketterer et al., 1998; Buhmann et al., 1998; Lo et al., 2003). They combine dithering or half-toning techniques to eliminate degradations of the quantized image, such as contouring artifacts. Since quantization and digital half-toning are independent processes that usually use different quality criteria, the final integrated result is often not very good. In (Ketterer et al., 1998; Buhmann et al., 1998), the authors propose a method that uses a uniform quality criterion by integrating model-based half-toning into the quantization cost function, which leads to a significant improvement in image display quality. In (Lo et al., 2003), a 3D frequency diffusion filter is used to improve the quantization effect. However, we believe that the choice of the color palette is the most important issue in color quantization: if some important colors are discarded, even the best dithering or half-toning techniques cannot restore them.

Color quantization has been widely used in fields such as image segmentation, image retrieval, and image compression. In order to satisfy the different requirements of different quantization tasks, quantization algorithms should be adaptable and adjustable. Our algorithm can produce different quantized images through different weights set by users.

As we know, the RGB color space is not a perceptually uniform color space. In (Gu et al., 2002), the authors propose a color quantization algorithm integrated with Gamma correction, which has been shown to improve the visual effect of quantized images. In this paper, the RGB color space is used to simplify computation, although our algorithm is also suitable for uniform color spaces, such as the L*a*b* color space. Some other interesting issues in color quantization include the restoration of color-quantized images (Li et al., 2003) and the quantization of video sequences (Cheung and Chan, 2003).
3. Color quantization algorithm

Our algorithm differs from the traditional two-step color quantization algorithms in that it has three steps. In the first step, we use a clustering-based algorithm to get N (N > K) initial colors. In the second step, we choose the most proper K colors from the initial color set according to the requirements of the quantization task. The third step is to obtain a reconstructed image by replacing the original color elements with the palette color elements. The clustering-based algorithm used in the first step is based upon the definitions of the Cluster Feature and the CF-tree (Tian et al., 1996). We introduce these concepts first.

3.1. Cluster Feature and CF-tree

A Cluster Feature is defined as a triple CF = (N, LS, SS). N is the number of data points in a cluster. LS is the linear sum of the N data points, i.e., LS = X_1 + X_2 + ... + X_N; it stands for the center of a cluster. SS is the square sum of the N data points, i.e., SS = X_1^2 + X_2^2 + ... + X_N^2; it represents the size of a cluster: the smaller SS is, the more compact the cluster. A CF summarizes the information of the data points in a cluster, so that the data in a cluster can be represented by a CF instead of the collection of data points. This CF summary is not only efficient, because it stores much less than all the data points in the cluster, but also accurate, because it is sufficient for calculating all the measurements needed for making clustering decisions in BIRCH (Tian et al., 1996).

A CF-tree is a height-balanced tree with two parameters: the branching factor B and the threshold T. The branching factor is the maximum number of children of a non-leaf node. The threshold limits the diameter of a cluster: if the diameter of a data cluster is larger than T, its points cannot be in the same cluster. The tree size is a function of T. The larger T is, the
smaller the tree is. We can change the tree size by modifying T. When memory is insufficient, we can set a larger value for T so that a leaf node contains more points, and then rebuild a smaller tree. In a CF-tree, each non-leaf node contains at most B entries. Each entry has the form [CFi, childi], where i = 1, 2, . . ., B and childi is a pointer to the ith child node. A leaf node contains at most L entries, each of the form [CFi], where i = 1, 2, . . ., L. BIRCH requires a node to fit in a page of size P (e.g., 1024 bytes). Once the dimension d of the data space is given, the sizes of leaf and non-leaf entries are known, and B and L can then be determined by P.

3.2. Getting initial colors

The BIRCH algorithm is used to cluster high-dimensional data sets. A color can be treated as a point in 3-d space, so we can use a Cluster Feature to express a cluster of colors. We call it a Color Cluster Feature (CCF), and the corresponding CF-tree is called a CCF-tree. Each child entry of a leaf node in a CCF-tree represents a color cluster.
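The CF bookkeeping described above can be sketched as follows. This is an illustrative fragment, not code from BIRCH itself; the class and method names are our own, specialized to the 3-dimensional color case:

```python
class ClusterFeature:
    """CF = (N, LS, SS): point count, linear sum, and square sum of a cluster."""

    def __init__(self, point):
        self.n = 1
        self.ls = list(point)                  # linear sum, one entry per dimension
        self.ss = sum(x * x for x in point)    # scalar square sum

    def merge(self, other):
        """Merging two clusters is component-wise addition of their CFs."""
        self.n += other.n
        self.ls = [a + b for a, b in zip(self.ls, other.ls)]
        self.ss += other.ss

    def centroid(self):
        """Cluster center: LS / N."""
        return [s / self.n for s in self.ls]

cf = ClusterFeature((100, 116, 105))
cf.merge(ClusterFeature((102, 118, 107)))
print(cf.n, cf.centroid())  # prints: 2 [101.0, 117.0, 106.0]
```

Because a merged cluster's CF is just the sum of the two constituent CFs, clustering decisions never need to revisit the underlying pixels.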
We use the CCF-tree to get the initial colors. First, we set the number of initial colors N = 2K, where K is the number of required quantized colors, and let T be zero. Second, we take the first N distinct colors as initial clusters and record the minimal color distance minColDis that is larger than T. When the present color number is larger than N, we set T = minColDis and rebuild the CCF-tree. During the rebuilding procedure, if two colors have a distance less than T, they are merged. Next, we read in new pixels and repeat the above procedure until all pixels are processed. Finally, the center of each entry is used as an initial color. The procedure for building a CCF-tree is shown in Fig. 2; the function split_father(father, newChild) is shown in Fig. 3; the function rebuild() is shown in Fig. 4. The advantage of this clustering method is that the first N colors can be re-generated when the CCF-tree is rebuilt, which eliminates the influence of the initial color conditions. After clustering, we get an initial color set Q with M (N > M > K) initial colors.
Input: an image, a predefined value K, N = 2K.
Output: an initial color set Q with M colors, where N > M > K.
Method:
Step 1. set T and minColDis to 0;
Step 2. read the new color of a pixel;
Step 3. find the closest entry in a leaf node and calculate the distance;
Step 4. if the distance is smaller than T, merge the new color with the entry and go to Step 10;
Step 5. record the minimal color distance minColDis that is larger than T;
Step 6. if the child number of the leaf node is smaller than L, add the new color to the leaf node as a new entry and go to Step 10;
Step 7. generate a new leaf node newChild and add the new color to newChild;
Step 8. if the child number of the leaf node's father is smaller than B, add newChild to the father and go to Step 10;
Step 9. call split_father(father, newChild);
Step 10. if the number of all entries in leaf nodes is larger than N, let T be minColDis and call rebuild();
Step 11. repeat Steps 2–10 until all pixels are handled.
Fig. 2. The procedure of building a CCF-tree.
function split_father(father, newChild)
Step 1. generate a new node newBranch and add newChild to newBranch;
Step 2. if father is the root, generate a new root newRoot, add father and newBranch to newRoot, and go to Step 5;
Step 3. if the child number of father's father is smaller than B, add newBranch to father's father and go to Step 5;
Step 4. call split_father(father's father, newBranch);
Step 5. return;
Fig. 3. The function of split_father(father, newChild).
function rebuild()
Step 1. read an entry of a leaf node from the old tree;
Step 2. find the closest leaf-node entry in the new tree and calculate the distance;
Step 3. if the distance is smaller than T, merge the old entry with the closest entry in the new tree and go to Step 8;
Step 4. if the child number of the leaf node in the new tree is smaller than L, add the old entry to that leaf node and go to Step 8;
Step 5. generate a new child newChild and add the old entry to newChild;
Step 6. if the child number of the leaf node's father in the new tree is smaller than B, add newChild to the father and go to Step 8;
Step 7. call split_father(father, newChild);
Step 8. repeat Steps 1–7 until all entries of the old tree are handled;
Step 9. free the memory of the old tree;
Fig. 4. The function of rebuild( ).
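Stripped of the tree machinery (so without the B and L node-splitting of Figs. 2–4), the core of the procedure — merge a new color into its nearest cluster when the distance is within T, otherwise start a new cluster, and raise T and rebuild whenever more than N clusters exist — can be approximated by a flat sketch. All names here are our own, and a plain list stands in for the CCF-tree:

```python
import math

def dist(c1, c2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def center(c):
    # A cluster is kept as (count, sum_r, sum_g, sum_b); the center is the mean.
    return (c[1] / c[0], c[2] / c[0], c[3] / c[0])

def _insert(clusters, new, t):
    """Merge `new` into its nearest cluster if within t, else append it."""
    if clusters:
        i = min(range(len(clusters)),
                key=lambda j: dist(center(clusters[j]), center(new)))
        if dist(center(clusters[i]), center(new)) <= t:
            n, r, g, b = clusters[i]
            clusters[i] = (n + new[0], r + new[1], g + new[2], b + new[3])
            return clusters
    clusters.append(new)
    return clusters

def initial_colors(pixels, n_init):
    t = 0.0          # current merge threshold; 0 merges only identical colors
    clusters = []    # flat stand-in for the CCF-tree leaves
    for p in pixels:
        clusters = _insert(clusters, (1, p[0], p[1], p[2]), t)
        while len(clusters) > n_init:
            # The smallest inter-center distance above T becomes the new
            # threshold; then every cluster is re-inserted (Fig. 4's rebuild).
            centers = [center(c) for c in clusters]
            t = min(dist(a, b) for i, a in enumerate(centers)
                    for b in centers[i + 1:] if dist(a, b) > t)
            old, clusters = clusters, []
            for c in old:
                clusters = _insert(clusters, c, t)
    return clusters
```

For small inputs this reproduces the intended behavior: identical colors merge immediately at T = 0, and the rebuild step coarsens the threshold until at most N clusters remain. A real implementation keeps the tree so each insertion costs a logarithmic number of node visits rather than a linear scan.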
3.3. Choosing palette
Before describing how to choose palette colors from the initial color set Q, we give some definitions.

Definition 1. Function D(c1, c2) is the distance between two color vectors c1 and c2, such as the Euclidean distance.

Definition 2. Function P(c) is the pixel number of color c.

Definition 3. The base-color cbase of an initial color set Q is the color that satisfies

∀c (c ∈ Q, c ≠ cbase) → (P(c) < P(cbase)).

Definition 4. The weighted product of a color c, V(c), is defined as

V(c) = (P(c))^wp · (D(c, cbase))^wd,

where wp is the weight of the number of pixels and wd is the weight of the color distance.

Given the base-color cbase, our method calculates the weighted products of the other colors and selects the K − 1 largest products. The corresponding K − 1 colors, together with the base-color, form an initial palette. The remaining M − K colors in
Q are merged with the closest colors in the initial palette to produce the final palette. If a final palette color c is made up of two colors c1 and c2, c is calculated by the following equation:

c = (P(c1)·c1 + P(c2)·c2) / (P(c1) + P(c2)).
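Putting the definitions of Section 3.3 together, the palette-selection step can be sketched as follows. This is our own illustrative code, not the authors' implementation; a color is represented as a (pixel_count, rgb) pair, as the clustering phase would produce:

```python
import math

def dist(c1, c2):
    """Definition 1: Euclidean distance between two color vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def choose_palette(colors, k, wp=1.0, wd=1.0):
    """colors: list of (pixel_count, rgb) pairs of length M, with M > k.

    Returns a final palette of k colors.
    """
    # Definition 3: the base-color is the color with the most pixels.
    base_entry = max(colors, key=lambda c: c[0])
    base_count, base = base_entry
    rest = [c for c in colors if c is not base_entry]
    # Definition 4: V(c) = P(c)^wp * D(c, c_base)^wd, sorted in descending order.
    rest.sort(key=lambda c: (c[0] ** wp) * (dist(c[1], base) ** wd), reverse=True)
    palette = [(base_count, base)] + rest[:k - 1]
    # Merge each remaining color into its closest palette entry using the
    # pixel-weighted average c = (P(c1)*c1 + P(c2)*c2) / (P(c1) + P(c2)).
    for n2, c2 in rest[k - 1:]:
        i = min(range(len(palette)), key=lambda j: dist(palette[j][1], c2))
        n1, c1 = palette[i]
        merged = tuple((n1 * a + n2 * b) / (n1 + n2) for a, b in zip(c1, c2))
        palette[i] = (n1 + n2, merged)
    return [c for _n, c in palette]
```

Raising wp biases the sort toward heavily populated colors and thus preserves color layers; raising wd biases it toward colors far from the base-color and thus preserves essential details.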
During this procedure, we can modify the values of wp and wd to satisfy the different requirements on color layers and essential details. If more color layers are needed, wp is set to a larger value; this causes the colors with large pixel numbers to be chosen, and hence the color layers are preserved. On the contrary, giving wd a larger value helps preserve the essential details.

3.4. Performance studies

In the first phase of getting the initial colors, one leaf of the CCF-tree can hold L color clusters, so there are ⌈2K/L⌉ leaves in total, and the height of the CCF-tree is log_B⌈2K/L⌉ + 1. Finding the closest color cluster for a pixel, starting from the root, passes
Fig. 5. Original image 1.
log_B⌈2K/L⌉ + 1 nodes, and each node needs B comparisons to find the closest entry. For the whole image, the time complexity is O(C · B · (log_B⌈2K/L⌉ + 1)), where C is the pixel number of the image. During this phase, rebuilding of the CCF-tree occurs. In one rebuilding procedure, all the color clusters of the leaves are re-inserted into the tree, so its time complexity is O(2K · B · (log_B⌈2K/L⌉ + 1)). The number of rebuildings, R, is highly related to the features of the image and hard to predict. The time complexity of phase one is therefore O(C · B · (log_B⌈2K/L⌉ + 1) + R · 2K · B · (log_B⌈2K/L⌉ + 1)).
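To make these terms concrete, the per-pixel cost can be evaluated for plausible parameters. The values K = 256, B = 8, L = 16 and a 512 × 512 image are our own illustrative choices, and we take the ceiling of the logarithm to obtain an integer height:

```python
import math

K, B, L = 256, 8, 16           # quantized colors, branching factor, leaf capacity
C = 512 * 512                  # pixel count of a 512 x 512 image
leaves = math.ceil(2 * K / L)  # number of leaves, ceil(2K/L)
height = math.ceil(math.log(leaves, B)) + 1
per_pixel = B * height         # B comparisons at each of `height` nodes
print(leaves, height, per_pixel, C * per_pixel)  # 32 3 24 6291456
```

So under these assumptions each pixel costs about two dozen distance comparisons, and the C-dependent term dominates unless R, the number of rebuilds, is very large.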
Fig. 6. Evaluating the impact of different weights on the quantization results.
Choosing K colors from M initial colors needs M comparisons of pixel numbers to find the base-color, then M − 1 calculations of the weighted products and M²/2 comparisons of these products for sorting. So the complexity is O(2M − 1 + M²/2).
4. Experimental results

The proposed algorithm has been implemented in C and Java on a PC with an Intel P4 1.7 GHz CPU and 512 MB of memory. To evaluate the impact of different weights on the quantization results, we compare the results
Table 1. The colors selected with different weights for Fig. 5 (R, G, B)

          wp = 1, wd = 1    wp = 1, wd = 2
Color1    54, 58, 50        54, 58, 50
Color2    100, 116, 105     239, 245, 243
Color3    76, 89, 81        208, 205, 217
Color4    125, 137, 133     186, 183, 193
Color5    157, 161, 165     157, 161, 165
Color6    208, 205, 217     221, 231, 228
Color7    37, 33, 15        125, 137, 133
Color8    239, 245, 243     194, 208, 201
Color9    186, 183, 193     179, 195, 186
Color10   114, 136, 114     165, 182, 171
Color11   221, 231, 228     232, 189, 134
Color12   194, 208, 201     100, 116, 105
Color13   129, 150, 129     150, 172, 145
Color14   165, 182, 171     202, 132, 120
Color15   89, 107, 86       129, 150, 129
Color16   150, 172, 145     138, 156, 147
Fig. 7. The effects of wp and wd on the image of Fig. 1.
gained by our algorithm with those of the octree and K-means algorithms. As described in Section 2, the octree method can be considered a kind of popularity algorithm, which chooses the palette colors according to the frequencies of colors; it can preserve color layers but may discard essential details. K-means is a clustering method, which chooses colors according to color distances; unlike the octree method, it preserves essential details very well but not color layers. In this test, in order to make the results obvious, we set the color number K to 16. The results are shown in Figs. 5 and 6. From the results, we can see that the octree method discards the colors of the red flowers, while the K-means method preserves the essential details of the red flowers but renders the green background worse. For our algorithm, when wp and wd are equal, the colors of the red flowers are preserved but the effect is worse than that of K-means; as the value of wd increases, the colors of the red flowers are enhanced. The colors selected with the different weights are listed in Table 1, where the colors that differ between the two weight settings show the effect of the weights. The effects of wp and wd on the image of Fig. 1 are shown in Fig. 7, and the corresponding colors are listed in Table 2.

We then compare our algorithm with some other algorithms: the octree method, the center-cut method, and the K-means method. The color number K is 64, and wp and wd are set to 1. In Fig. 8, (a) is the original image, (b) is the result of octree, (c) is the result of center-cut, (d) is the result of K-means, and (e) is the result of our algorithm. The following results are obtained when the color number K is 256 and wp and wd are 1: (a) is the original image, (b) is the result of octree, (c) is the

Table 2. The colors selected with different weights for the image of Fig. 1 (R, G, B)
          wp = 1, wd = 1    wp = 2, wd = 1
Color1    5, 99, 19         5, 99, 19
Color2    21, 13, 2         21, 13, 2
Color3    33, 153, 71       33, 153, 71
Color4    66, 176, 108      66, 176, 108
Color5    110, 202, 147     110, 202, 147
Color6    235, 239, 242     2, 69, 6
Color7    142, 216, 175     235, 239, 242
Color8    102, 199, 145     142, 216, 175
Color9    168, 216, 201     102, 199, 145
Color10   179, 231, 208     8, 43, 4
Color11   8, 43, 4          168, 216, 201
Color12   2, 69, 6          179, 231, 208
Color13   10, 116, 67       10, 116, 67
Color14   175, 200, 141     4, 93, 45
Color15   47, 149, 104      7, 130, 8
Color16   220, 231, 175     47, 149, 104
Fig. 8. The results with K = 64.
Fig. 9. The results with K = 256.
Table 3. The average quantization errors

Test    Color number    Octree    Center-cut    K-means    Our algorithm
No. 1   64              14.26     12.08         13.78      13.36
No. 2   256             9.68      5.96          7.86       6.57
result of center-cut, (d) is the result of K-means, and (e) is the result of our algorithm (Fig. 9). The average quantization errors of these experiments are shown in Table 3. From the pictures and Table 3, we can see that the center-cut method, the octree method, and our new method are more or less in the same performance class.

5. Conclusion

This paper discusses some commonly used color quantization algorithms and then presents a novel algorithm for color quantization. The algorithm first uses a clustering-based method to get N (N > K) initial colors. The color with the most pixels is then used as the base-color. For each of the other colors, the weighted product of its pixel number and its distance to the base-color is computed. After that, the products are sorted in descending order, and the first K − 1 colors together with the base-color form an initial palette. Finally, the remaining N − K colors are merged with the closest colors of the initial palette to form the final palette. This algorithm can meet the requirements of different color quantization tasks by tuning the weights of pixel numbers and color distances, obtaining a color palette with the best balance of color layers and essential details. The experiments show that our algorithm is effective for adjusting the quantization results and also achieves good quantization quality.

References

Akarun, L., Ozdemir, D., Yalcun, O., 1996. Joint quantization and dithering of color images. In: Proceedings of the
International Conference on Image Processing (ICIP'96), pp. 557–560.
Atsalakis, A., Kroupis, N., Soudris, D., Papamarkos, N., 2002. A window-based color quantization technique and its embedded implementation. In: Proceedings of IEEE ICIP 2002, pp. 365–368.
Buhmann, J.M., Fellner, D., Held, M., Ketterer, J., Puzicha, J., 1998. Dithered color quantization. In: Proceedings of EUROGRAPHICS'98, Lisboa (Computer Graphics Forum, Vol. 17(3)).
Cheng, S.-C., Yang, C.-K., 2001. A fast and novel technique for color quantization using reduction of color space dimensionality. Pattern Recogn. Lett. 22, 845–856.
Cheung, W.-F., Chan, Y.-H., 2003. Color quantization of compressed video sequences. IEEE Trans. Circuits Syst. Video Technol. 13 (3), 270–276.
Gervautz, M., Purgathofer, W., 1988. A simple method for color quantization: Octree quantization. In: Proceedings of CG International '88, pp. 219–230.
Gu, E.-d., Xu, D.-q., Chen, C., 2002. Fast color quantization algorithm integrated with Gamma correction. J. Comput. Aided Des. Comput. Graph. 14 (4), 356–360.
Heckbert, P., 1982. Color image quantization for frame buffer display. Comput. Graph. 16 (2), 297–307.
Joy, G., Xiang, Z., 1993. Center-cut for color image quantization. Visual Comput. 10 (1), 62–66.
Ketterer, J., Puzicha, J., Held, M., Fischer, M., Buhmann, J.M., Fellner, D., 1998. On spatial quantization of color images. In: Proceedings of the European Conference on Computer Vision, Freiburg.
Kolpatzik, B., Bouman, C., 1995. Optimized universal color palette design for error diffusion. J. Electron. Imaging 4, 131–143.
Li, X.-L., Yuan, T.-Q., Yu, N.-H., Yuan, Y., 2003. Adaptive color quantization based on perceptive edge protection. Pattern Recogn. Lett. 24, 3165–3176.
Lo, K.C., Chan, Y.H., Yu, M.P., 2003. Colour quantization by three-dimensional frequency diffusion. Pattern Recogn. Lett. 24, 2325–2334.
Tian, Z., Raghu, R., Miron, L., 1996. BIRCH: An efficient data clustering method for very large databases. In: Proceedings of SIGMOD '96, Montreal, Canada, pp. 103–114.
Zhao, Y.-W., Wang, W.-L., 2000. A new clustering algorithm for color quantization and its applications. J. Comput. Aided Des. Comput. Graph. 12 (5), 340–343.