Mining Dichromatic Colours from Video

Vassili A. Kovalev

Centre for Vision, Speech and Signal Processing, School of Electronics and Physical Sciences, University of Surrey, Guildford, Surrey GU2 7XH, United Kingdom
[email protected]
Abstract. It is commonly accepted that the most powerful approaches to increasing the efficiency of visual content delivery are personalisation and adaptation of visual content according to users' preferences and their individual characteristics. In this work, we present the results of a comparative study of colour contrast and of the characteristics of colour change between successive video frames for normal vision and for the two most common types of colour blindness: protanopia and deuteranopia. The results were obtained by colour mining from three videos of different kinds, including their original and simulated colour-blind versions. Detailed data are provided regarding the reduction of colour contrast, the decrease in the number of distinguishable colours, and the reduction of the inter-frame colour change rate in dichromats.
1 Introduction
With the advent of the digital video revolution, the volume of video data is growing enormously. Visual content is produced by many different sources: the broadcasting and film industry, recent video communication systems and camera phones, systems providing access to the content of vast film and video archives at broadcasters, museums, industries and production houses, automated video surveillance systems, video-based business communications and, finally, remote education and e-learning systems [1]. The television broadcasting industry, home video systems, and mobile services are slowly but surely moving to end-to-end digital video production, transmission and delivery. Lately, it has been commonly recognised that the most powerful approaches to increasing the efficiency of visual content delivery are personalisation [1], [2], [3], [4], [5] and adaptation of visual content according to users' preferences and individual characteristics [6], [7], [8]. As a result, there has been enormous growth in research and development of new technologies and industrial standards in this area (see [6] for an overview). For example, while MPEG-7 can already be used to describe user preferences for visual content filtering, searching and browsing, the new MPEG-21 standard (part 7, "Digital Item Adaptation" [9]) expands these possibilities further to implement user-centred adaptation of multimedia content to the usage environment. In particular [7], [2], it addresses the customisation of the presentation of multimedia content based on the user's
preferences for content display/rendering, quality of service, and configuration and conversion with regard to multimedia modalities. It also facilitates the adaptation of visual content to a user's colour vision deficiency, such as colour blindness and low-vision impairments, as well as to audition and audio accessibility characteristics. In the last decade, data mining research and development has provided various techniques for automatically searching large stores of data to discover patterns and new knowledge (e.g., [10], [11]). These techniques have been used successfully for video data mining in a number of applications. For instance, it was demonstrated that for specific kinds of video data, such as sport video, it is possible to automatically identify high-level features for semantic video indexing [12] and to detect gradual transitions in various video sequences for temporal video segmentation [13]. As a result of an extensive study of the colour and texture properties of video sequences [14], new descriptors have been suggested that provide efficient extraction, representation, storage, and similarity retrieval in video. The colour descriptors include a histogram descriptor coded using the Haar transform, a colour structure histogram, a dominant colour descriptor, and a colour layout descriptor. The texture descriptors include one that characterises homogeneous texture regions and another that represents the local edge distribution [14]. All these descriptors are included in the MPEG-7 standard. Another kind of colour descriptor included in the standard is discussed in [15]; these descriptors were developed to reliably capture the colour properties of multiple images or groups of frames.
One family of such descriptors, called alpha-trimmed average histograms, combines individual frame or image histograms using a specific filtering operation to generate robust colour histograms that can eliminate the adverse effects of brightness/colour variations, occlusion, and edit effects on the colour representation [15]. Based on so-called colour correlograms [16] or colour co-occurrence matrices [17], which capture the spatial structure of colour in still images, the authors of [18] have recently suggested colour descriptors involving a colour adjacency histogram and a colour vector angle histogram. The colour adjacency histogram represents the spatial distribution of colour pairs at colour edges in an image, thereby incorporating spatial information into the descriptor. The colour vector angle histogram represents the global colour distribution of smooth pixels in an image. Thus, colour appears to play a key role in the delivery of visual content and, in certain circumstances, may even be vital for the correct perception and interpretation of visual data in professional applications. And yet, 8% of the population suffers from a specific colour vision deficiency known as dichromasia [19], [20], [21]. These are people who have some sort of colour blindness and can usually distinguish only two hues. They are the protanopic and deuteranopic viewers, who have difficulties in seeing red and green, respectively. Such people are collectively known as dichromats. A small fraction of people can see only a single hue; these are the truly colour-blind people [19]. An important issue then arises concerning the video as seen by these viewers, the way it appears to them, and whether the use of colour conveys to them the same information it
conveys to normal viewers [22], [20]. Several studies have been carried out to answer this question, and indeed we now know with reasonably high confidence how the world looks through the eyes of such viewers (e.g., [21]). Recently, quite a few studies have addressed the way colour-coded information should be processed and displayed so that it is equally useful to all viewers (e.g., [22], [23], [7], [24]). "An image is a thousand words", and an image conveys information through the relative colours and contrasts it contains. If a person does not see the contrasts caused by the use of different colours, that person may miss a significant part of the information conveyed by the image. Thus, methods and algorithms for converting colours for dichromats generally depend on the joint colour appearance of real video data as seen by dichromats compared with people with normal vision. However, the statistical properties of colour contrast and temporal colour change have not received much attention yet. In our previous work (e.g., [24], [25]) we investigated the influence of colour blindness on image retrieval results and suggested an adaptive colour conversion technology for still images and other static media. The problem of converting colours was posed as an optimisation task with different cost function terms (achieved colour contrast, similarity to normal vision, etc.). However, several points concerning specific aspects of temporal colour change still need to be investigated before videos can be converted. Industrial re-production of high-quality videos adapted for dichromatic users presupposes the pre-calculation of some kind of dynamic look-up table based on a colour analysis performed prior to the conversion. The computational expense of calculating an optimal conversion look-up table is generally a function of the number of different colours present in the media as well as of their temporal change.
Once computed, the specific look-up table can be used for colour conversion at virtually no cost. In this work, we present the results of a comparative study of colour contrast and of the characteristics of colour change between successive video frames for normal vision and for the two most common types of colour blindness: protanopia and deuteranopia. Blue blindness, known as tritanopia, is not considered here because it is extremely rare [19]. The results were obtained by colour mining from three videos of different kinds, including their original and simulated colour-blind versions. To the best of our knowledge, this is the first work on mining dichromatic colours from video.
2 Materials

2.1 The Videos
In this study we used the following three videos, which represent relatively different styles of spatial colour appearance:
– a lyrical-humorous love story film "The Love and Doves" by Mosfilm, containing a similar proportion of in-door and out-door scenes, referred to here as V1–Ordinary,
Table 1. General technical characteristics of videos used for the analysis

Video          frame size (pixels)       number of frames           number of
               horizontal   vertical     included in the analysis   key frames
V1–Ordinary    640          368          150,600                     842
V2–Animation   512          368          120,000                    2324
V3–Nature      700          516           78,600                     936
– a commonly known animation movie "The Lion King" by Walt Disney Pictures (V2–Animation),
– a popular documentary movie "Cats", part 2, about the wild life of the cat family, from National Geographic (V3–Nature).

In all three cases, video frames with captioning textual data, as well as frames containing no information (e.g., black background only), were excluded from the analysis. General technical characteristics of the mined videos are given in Table 1.

2.2 Simulating the Colour Blindness
To enable mutual comparisons, the three original videos were also converted to their protanopic and deuteranopic versions. The construction of dichromatic versions of colours was based on the LMS specification (the longwave-, middlewave- and shortwave-sensitive cones) of the primaries of a standard video monitor [23], [26]. The conversion from trichromatic to dichromatic colours itself was done using the dichromat package implemented by Thomas Lumley in R, a language and software environment for statistical computing and graphics [27], [28]. Example frames from the three colour versions of all three videos are provided in Fig. 1.
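The paper delegates this conversion to the R dichromat package; the following is a minimal Python sketch of the same idea, assuming the commonly cited Viénot–Brettel–Mollon (1999) reduction, where one LMS channel is reconstructed from the other two. The numeric constants below come from that literature, not from this paper.

```python
import numpy as np

# RGB -> LMS matrix and reduction coefficients from Vienot, Brettel & Mollon
# (1999). These constants are an assumption here; the paper itself relies on
# the R 'dichromat' package for the conversion.
RGB2LMS = np.array([[17.8824,   43.5161,  4.11935],
                    [3.45565,   27.1554,  3.86714],
                    [0.0299566, 0.184309, 1.46709]])
LMS2RGB = np.linalg.inv(RGB2LMS)

def simulate_dichromacy(rgb, kind="protan"):
    """Map linear-RGB pixels (shape (..., 3), components in [0, 1]) to a
    dichromatic approximation by collapsing one LMS dimension."""
    lms = rgb @ RGB2LMS.T
    L, M, S = lms[..., 0], lms[..., 1], lms[..., 2]
    if kind == "protan":     # protanope: L channel rebuilt from M and S
        L = 2.02344 * M - 2.52581 * S
    elif kind == "deutan":   # deuteranope: M channel rebuilt from L and S
        M = 0.494207 * L + 1.24827 * S
    out = np.stack([L, M, S], axis=-1) @ LMS2RGB.T
    return np.clip(out, 0.0, 1.0)
```

Note that under this reduction pure blue is almost unchanged, while saturated reds and greens shift strongly, which matches the qualitative behaviour described later in the paper.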
3 Methods

3.1 Colour Space Conversion
Everywhere in this work the perceived difference between colours was measured using the Euclidean metric in the Lab colour space [29]. The Lab and Luv colour spaces are standardised by the Commission Internationale de l'Eclairage (CIE) and are known to be approximately perceptually uniform [29], [30]. The Lab space describes the colours of objects and therefore requires the specification of a reference "white light" colour. The most commonly used are illuminant D65, which represents standard indirect daylight, illuminant D50, which is close to direct sunlight, and illuminant A, which corresponds to the light from a standard incandescent bulb. In this work we employed illuminant D65. The original video frame colours were converted from the sRGB colour space (i.e., the RGB space of standard PC monitors) to Lab in two steps. First, the r, g, b
Fig. 1. Example frames taken from three videos (by rows) used in this study as seen by people with normal vision (left column) and individuals suffering from protanopia (central column) and deuteranopia (right column).
components were converted into the standard CIE XYZ space, which is capable of representing all visible colours but is not perceptually uniform. In the second step we converted XYZ to the uniform Lab colours. Specifically, for a given RGB colour whose components are in the nominal range [0.0, 1.0], the following equations were used:

$$[X\ Y\ Z] = [r\ g\ b]\, M,$$

where

$$r = \begin{cases} R/12.92, & R \le t \\ \left((R+0.055)/1.055\right)^{2.4}, & R > t \end{cases} \quad g = \begin{cases} G/12.92, & G \le t \\ \left((G+0.055)/1.055\right)^{2.4}, & G > t \end{cases} \quad b = \begin{cases} B/12.92, & B \le t \\ \left((B+0.055)/1.055\right)^{2.4}, & B > t \end{cases} \qquad (1)$$

with threshold $t = 0.04045$. For the reference white D65, the $3 \times 3$ conversion matrix $M$, also known as "Adobe RGB (1998)", is

$$M = \begin{bmatrix} 0.576700 & 0.297361 & 0.0270328 \\ 0.185556 & 0.627355 & 0.0706879 \\ 0.188212 & 0.075285 & 0.9912480 \end{bmatrix}.$$

For the given reference white $(X_r, Y_r, Z_r) = (0.3127, 0.3290, 0.3583)$, the Lab colour components are calculated as

$$L = 116 f_y - 16, \quad a = 500(f_x - f_y), \quad b = 200(f_y - f_z), \qquad (2)$$

where

$$f_x = \begin{cases} \sqrt[3]{x_r}, & x_r > \varepsilon \\ \dfrac{k x_r + 16}{116}, & x_r \le \varepsilon \end{cases} \quad f_y = \begin{cases} \sqrt[3]{y_r}, & y_r > \varepsilon \\ \dfrac{k y_r + 16}{116}, & y_r \le \varepsilon \end{cases} \quad f_z = \begin{cases} \sqrt[3]{z_r}, & z_r > \varepsilon \\ \dfrac{k z_r + 16}{116}, & z_r \le \varepsilon \end{cases}$$

$$x_r = \frac{X}{X_r}, \quad y_r = \frac{Y}{Y_r}, \quad z_r = \frac{Z}{Z_r}, \quad \varepsilon = 0.008856, \quad k = 903.3.$$
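A minimal sketch of this sRGB-to-Lab pipeline, using the matrix $M$ and the constants from the text. One assumption: the reference white is expressed here as the D65 tristimulus values (0.95047, 1.0, 1.08883), which correspond to the quoted chromaticities.

```python
import numpy as np

# Row-vector convention, as in Eq. (1): [X Y Z] = [r g b] M
M = np.array([[0.576700, 0.297361, 0.0270328],
              [0.185556, 0.627355, 0.0706879],
              [0.188212, 0.075285, 0.9912480]])
# D65 white as tristimulus values (assumption: this is the white the
# chromaticities quoted in the text refer to).
WHITE = np.array([0.95047, 1.0, 1.08883])
EPS, KAPPA, T = 0.008856, 903.3, 0.04045

def srgb_to_lab(rgb):
    """Convert sRGB values (shape (..., 3), components in [0, 1]) to Lab."""
    rgb = np.asarray(rgb, dtype=float)
    # Linearisation of Eq. (1): piecewise gamma expansion with threshold t
    lin = np.where(rgb <= T, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = lin @ M
    rel = xyz / WHITE                         # x_r, y_r, z_r
    # Piecewise f functions of Eq. (2)
    f = np.where(rel > EPS, np.cbrt(rel), (KAPPA * rel + 16.0) / 116.0)
    fx, fy, fz = f[..., 0], f[..., 1], f[..., 2]
    return np.stack([116.0 * fy - 16.0,
                     500.0 * (fx - fy),
                     200.0 * (fy - fz)], axis=-1)

def delta_e(c1, c2):
    """Perceived colour difference: Euclidean distance in Lab."""
    return np.linalg.norm(srgb_to_lab(c1) - srgb_to_lab(c2), axis=-1)
```

As a sanity check, sRGB white maps to approximately (100, 0, 0) and black to (0, 0, 0), as expected for a correctly normalised Lab conversion.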
Finally, the perceived difference between any given pair of colours $c_i$ and $c_j$ was measured as the Euclidean distance in the Lab space:

$$d(c_i, c_j) = \sqrt{(L_i - L_j)^2 + (a_i - a_j)^2 + (b_i - b_j)^2}. \qquad (3)$$

3.2 Measuring Colour Contrast
Colour contrast $C_{cntr}$ was measured within each frame of every video, including its original, protanopic, and deuteranopic versions (i.e., 3 videos × 3 versions = 9 videos in total). The contrast values $C_{cntr}$ were computed by calculating the perceived colour difference (3) between pairs of neighbouring pixels $p_i$ and $p_j$ situated $d = \{1, 2, 3\}$ pixels apart. The scale for the original r, g, b pixel intensity values was 0–255. All possible pixel pairs, without repetition, were considered for the given distance range, following a computational procedure similar to the one used for calculating the colour co-occurrence matrices discussed in [17], [24]. In particular, this means that each non-border pixel participates in 4 pairs for d = 1, 6 pairs for d = 2, and 7 pairs for d = 3. The mean contrast values computed over the whole frame for these three inter-pixel distances were then used as the final contrast characteristics of the frame. The primary goal was to quantify the difference in contrast between the normal and colour-blind versions of each video.
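The per-frame contrast measure can be sketched as follows; `mean_colour_contrast` is a hypothetical helper that takes a frame already converted to Lab. Only the d = 1 neighbourhood (4 unordered pairs per pixel) is listed; the d = 2 and d = 3 neighbourhoods used in the paper would add further displacement vectors.

```python
import numpy as np

# Displacements generating the 4 unordered neighbour pairs per pixel at d = 1.
OFFSETS_D1 = [(0, 1), (1, 0), (1, 1), (1, -1)]

def mean_colour_contrast(lab, offsets=OFFSETS_D1):
    """Mean Euclidean Lab distance over all neighbour pixel pairs of a frame.

    lab: array of shape (H, W, 3) holding Lab pixel values."""
    H, W, _ = lab.shape
    dists = []
    for dy, dx in offsets:
        # Overlapping slices so that a[i, j] pairs with the pixel displaced
        # by (dy, dx); each unordered pair is counted exactly once.
        a = lab[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
        b = lab[max(0, dy):H - max(0, -dy), max(0, dx):W - max(0, -dx)]
        dists.append(np.linalg.norm(a - b, axis=-1).ravel())
    return float(np.concatenate(dists).mean())
```

A uniform frame yields zero contrast, while a frame of alternating colour stripes yields the mean of the pairwise Lab distances over all horizontal, vertical, and diagonal pairs.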
3.3 Measuring Colour Change Between Video Frames
The absolute colour change $C_{chang\text{-}A}$ between two successive video frames was calculated as the sum $C_{chang\text{-}A} = N_{gone} + N_{come}$. The first term is the number of distinct colours that are present in the first frame but disappear in the second, $N_{gone} = |N_{F1}| - |N_{F1} \cap N_{F2}|$. Conversely, the second term is the number of new colours that appear in the second frame without being present in the first, $N_{come} = |N_{F2}| - |N_{F1} \cap N_{F2}|$. The frequencies of colours (i.e., the number of pixels with a given colour) were not considered here, with one exception: they were used for thresholding, so that a colour was considered present in a video frame only if its frequency exceeded 4, which is equivalent to a minimal frame patch of 2 × 2 pixels. Clearly, the range of $C_{chang\text{-}A}$ depends on the number of distinguishable colours, i.e., on the colour quantisation scheme. In this work we used a uniform resolution of 6 bits for each of the R, G, and B colour planes; thus, the maximal number of different colours was $N_{col}^{max} = 2^{18} = 262144$. As an additional feature we also employed the relative colour change $C_{chang\text{-}R}$ between two frames, calculated as the absolute number of changed colours $C_{chang\text{-}A}$ normalised by the total number of different colours appearing in the two frames (in per cent):

$$C_{chang\text{-}R} = 100 \times C_{chang\text{-}A} \,/\, |N_{F1} \cup N_{F2}|.$$

Note that in the above expressions the intersection and union set operations are used in a simplified manner for brevity, with $N_{F1}$ and $N_{F2}$ denoting the sets of colours of the two frames.
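The colour change computation can be sketched as follows (function names are illustrative; colours are quantised to 6 bits per channel, and a colour counts as present only when it covers more than 4 pixels, as described above):

```python
import numpy as np
from collections import Counter

MIN_FREQ = 5  # a colour is "present" only if it covers more than 4 pixels

def frame_colours(rgb8, bits=6):
    """Set of quantised colours present in one frame.

    rgb8: uint8 array of shape (H, W, 3); colours are reduced to `bits`
    bits per channel, giving at most 2**(3*bits) = 262144 colours for bits=6."""
    q = (rgb8 >> (8 - bits)).reshape(-1, 3).astype(np.int32)
    codes = (q[:, 0] << (2 * bits)) | (q[:, 1] << bits) | q[:, 2]
    counts = Counter(codes.tolist())
    return {c for c, n in counts.items() if n >= MIN_FREQ}

def colour_change(frame1, frame2):
    """Absolute (C_chang-A) and relative (C_chang-R, per cent) colour change."""
    s1, s2 = frame_colours(frame1), frame_colours(frame2)
    absolute = len(s1 - s2) + len(s2 - s1)   # N_gone + N_come
    union = s1 | s2
    relative = 100.0 * absolute / len(union) if union else 0.0
    return absolute, relative
```

Two identical frames give a change of (0, 0.0); two frames painted in two entirely different colours give an absolute change of 2 and a relative change of 100%.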
4 Results

4.1 Colour Contrast
Colour contrast $C_{cntr}$ was measured for every frame of each of the 9 videos. General characteristics of colour contrast for inter-pixel distance d = 1 are summarised in Table 2 in the form of descriptive statistics calculated over all the frames of each video. Note that the local contrast (i.e., the colour contrast computed for d = 1) normally plays the most important role in distinguishing colour borders. The contrast reduction score given in the last column of Table 2 is the ratio of the mean contrast of the normal version to the mean contrast of the corresponding dichromatic version of the video. As can be immediately inferred from the results reported in Table 2, the reduction of colour contrast for deuteranopia is notably higher than in the case of protanopia for all three types of videos. Most likely, the magnitude of the contrast reduction score for any particular video depends on the amount of scenes containing various hues of red and green, which are seen as shades of yellow by protanopes and deuteranopes. Similarly, the obvious asymmetry of the temporal colour contrast distribution detected in videos V2 and V3, with characteristically high skewness values, is rather an individual feature of these videos. It can be explained by a relatively large proportion of scenes with predominantly high or low colour contrast. Mean colour contrast values for inter-pixel distances d = 2 and d = 3 were always higher than those reported in Table 2 for d = 1, with an approximately linear increase with d. For instance, the mean contrast values of video V1 measured at distances d = 2 and d = 3 were equal to 472.6 and 577.1 for normal vision, 389.3 and 473.4 for protanopia, and 329.3 and 400.1 for deuteranopia
Table 2. Descriptive statistics of colour contrast calculated over all video frames

                              colour contrast C_cntr for inter-pixel distance d = 1
Video          version        mean    STD    skewness   contrast reduction score
V1–Ordinary    normal vision  323.4   101.6  -0.021     —
               protanopia     266.1    93.5   0.058     1.22
               deuteranopia   225.7    85.5   0.085     1.43
V2–Animation   normal vision  180.1    63.6   0.949     —
               protanopia     137.9    52.3   1.077     1.31
               deuteranopia   116.9    46.1   1.305     1.54
V3–Nature      normal vision  259.0    86.8   0.781     —
               protanopia     208.7    75.5   0.834     1.24
               deuteranopia   176.9    64.6   0.836     1.46
Fig. 2. Examples of video frames with extreme values of colour contrast taken from two videos: V1–Ordinary (top row) and V2–Animation (bottom row). (a), (c) Colour texture frames with high colour contrast. (b), (d) Low contrast frames with domination of homogeneous colour regions.
respectively. Such behaviour was found to be well predictable, considering the spatial correlation of colour in video frames.
The descriptive statistics of colour contrast given in Table 2 provide general, mean-wise information about the contrast in videos of different kinds, as well as the contrast reduction ratio for colour-blind viewers. More detailed, frame-wise information is presented in Table 3 in the form of a linear regression of the video frame contrast of the protanopic ($C_{cntr}^{pro}$) and deuteranopic ($C_{cntr}^{deu}$) versions of the videos on the contrast of the normal version ($C_{cntr}^{nrm}$) for d = 1. As usual, the statistical significance of the regression is measured by the squared correlation coefficient $R^2$, which quantifies the goodness of the linear fit. Note that in this particular case the regression significance is equivalent to the significance score of a paired Student's t-test.

Table 3. Linear regression of dichromatic colour contrast to the norm

Video          version       regression equation (d = 1)                 significance, R²
V1–Ordinary    protanopia    C_cntr^pro = 0.914 · C_cntr^nrm − 29.6      0.987
               deuteranopia  C_cntr^deu = 0.822 · C_cntr^nrm − 40.0      0.953
V2–Animation   protanopia    C_cntr^pro = 0.806 · C_cntr^nrm − 7.3       0.963
               deuteranopia  C_cntr^deu = 0.682 · C_cntr^nrm − 5.8       0.887
V3–Nature      protanopia    C_cntr^pro = 0.867 · C_cntr^nrm − 15.8      0.990
               deuteranopia  C_cntr^deu = 0.731 · C_cntr^nrm − 12.5      0.961
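Regressions of this kind amount to ordinary least-squares fits over per-frame values. A minimal sketch, with hypothetical per-frame contrast vectors as inputs:

```python
import numpy as np

def fit_contrast_regression(c_norm, c_dichromat):
    """Ordinary least-squares line c_dichromat ~ slope * c_norm + intercept,
    together with the squared correlation coefficient R^2 reported in Table 3.

    c_norm, c_dichromat: 1-D arrays of per-frame contrast values."""
    slope, intercept = np.polyfit(c_norm, c_dichromat, 1)
    r2 = float(np.corrcoef(c_norm, c_dichromat)[0, 1] ** 2)
    return float(slope), float(intercept), r2
```

Applied to the paper's frame-wise contrast data, this would reproduce rows of Table 3 such as slope 0.914, intercept −29.6, R² = 0.987 for the protanopic version of V1.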
Table 4. Descriptive statistics of the number of different frame colours

                              number of distinguishable colours in frame, N_F
Video          version        mean     STD     skewness  colour reduction  relative mean
                                                         score             to 2^18, per mil
V1–Ordinary    normal vision  2445.4    908.7  -0.162    —                  9.33
               protanopia      499.8    152.9   0.094    4.89               1.91
               deuteranopia    473.7    142.4   0.095    5.16               1.81
V2–Animation   normal vision  2077.0   1179.1   1.102    —                  7.92
               protanopia      449.6    242.8   0.937    4.62               1.72
               deuteranopia    389.7    232.3   0.989    5.33               1.49
V3–Nature      normal vision  3862.3   1602.4   0.034    —                 14.73
               protanopia      814.9    424.1   1.358    4.74               3.11
               deuteranopia    689.2    344.5   1.300    5.60               2.63
Finally, specific examples of video frames with extreme values of colour contrast are shown in Fig. 2. Fig. 2(a) represents a colour texture that appears highly contrasted to normal vision ($C_{cntr}^{nrm}$ = 480.8) but loses 102.7 contrast units (21.4%) when observed by protanopes ($C_{cntr}^{pro}$ = 378.1) and 178.6 units (37.1%) when observed by deuteranopes ($C_{cntr}^{deu}$ = 302.2). The animation movie frame presented in Fig. 2(c) demonstrates similar properties, with a starting colour contrast for normal vision as high as $C_{cntr}^{nrm}$ = 702.4. On the contrary, the frame depicted in Fig. 2(b) has a very low colour contrast in normal vision, $C_{cntr}^{nrm}$ = 87.1, which is almost preserved at the same level for protanopic and deuteranopic observers ($C_{cntr}^{pro}$ = 83.4 and $C_{cntr}^{deu}$ = 81.2). This is because it is mostly occupied by shades of blue, which are not distorted much in protanopic and deuteranopic vision. Although the animation frame provided in Fig. 2(d) appears quite colourful, its contrast is also low because large homogeneous colour regions dominate.
4.2 Colour Change Between Video Frames
At first, we calculated the number of different colours (at 18-bit resolution) in each video frame, $N_F$, for all three versions of the three videos involved in this study. The resulting descriptive statistics are summarised in Table 4. The colour reduction score is the ratio of the mean number of colours in the normal and dichromatic versions of the videos. As can easily be noticed from Table 4, the number of distinguishable colours is reduced dramatically in the case of dichromatic vision. The reduction score reaches values in the range of 4.6–5.6 and is substantially higher in deuteranopes than in protanopes: 5.16 vs. 4.89 for an ordinary video combining both in-door and out-door scenes, 5.33 vs. 4.62 for the animation video, and 5.60 vs. 4.74 for the video about nature. No obvious differences in the colour reduction score were observed depending on the type of video. The linear regression equations describing the relationships between the number of different frame colours for normal vision, protanopia, and deuteranopia are included in Table 5, together with the squared correlation coefficient. Note the low values of the regression coefficients and the relatively low significance scores $R^2$, suggesting a comparatively weak dependence.

Table 5. Regression of the number of different dichromatic and normal colours

Video          version       regression equation                 significance, R²
V1–Ordinary    protanopia    N_F^pro = 0.132 · N_F^nrm + 177.2   0.615
               deuteranopia  N_F^deu = 0.120 · N_F^nrm + 179.3   0.590
V2–Animation   protanopia    N_F^pro = 0.158 · N_F^nrm + 121.2   0.589
               deuteranopia  N_F^deu = 0.145 · N_F^nrm + 89.4    0.539
V3–Nature      protanopia    N_F^pro = 0.201 · N_F^nrm + 40.2    0.574
               deuteranopia  N_F^deu = 0.163 · N_F^nrm + 60.3    0.574
In the second step, we calculated the number of colours that changed between pairs of successive video frames. The results are reported in Table 6 in the same way as for the distinguishable colours in the previous table. The rate of colour change
Table 6. Characteristics of inter-frame colour change

                              number of changed colours       colour change  mean, relative to colours
Video          version        mean     STD    skewness        reduction      in two frames, %
V1–Ordinary    normal vision   571.8   293.8  2.35            —              23.7
               protanopia       37.7    24.2  7.15            15.2            8.0
               deuteranopia     35.0    22.9  7.47            16.3            7.9
V2–Animation   normal vision   463.5   470.7  4.35            —              21.7
               protanopia       39.1    50.7  6.72            11.9            8.5
               deuteranopia     35.8    48.1  6.68            12.9            8.9
V3–Nature      normal vision  1125.9   866.0  2.50            —              14.73
               protanopia       81.3    88.4  4.30            13.8            3.11
               deuteranopia     66.5    71.2  4.34            16.9            2.63
reduction caused by dichromasia was found to be as high as 11.9–16.9, depending on the particular video and the type of dichromasia. Again, relative to normal vision, the reduction rate was worse in the case of deuteranopia than protanopia for every particular video (16.3 vs. 15.2, 12.9 vs. 11.9, and 16.9 vs. 13.8, respectively).
5 Conclusions
The results of mining dichromatic colours from video reported in this study allow us to draw the following conclusions.

1. The colour contrast of video frames perceived by subjects suffering from dichromasia is lower compared to people with normal vision. For the three videos of different types used in this study, the mean contrast reduction score varied in the range of 1.22–1.54. The reduction of colour contrast for deuteranopia was always higher than in the case of protanopia.

2. Most likely, the magnitude of the colour contrast reduction score for any particular video depends on the amount of scenes containing various hues of red and green, which are seen as shades of yellow by protanopes and deuteranopes.

3. Mean colour contrast values for inter-pixel distances d = 2 and d = 3 were always higher than those obtained for d = 1, with an approximately linear increase with d.

4. The number of different colours in video frames is reduced dramatically in observers with dichromatic vision. In this particular study, the reduction score reached values in the range of 4.6–5.6 and was substantially higher in deuteranopes than in protanopes: 5.16 vs. 4.89 for an ordinary video combining both in-door and out-door scenes, 5.33 vs. 4.62 for an animation video, and 5.60 vs. 4.74 for a video about nature. No obvious differences in the colour reduction score were found depending on the type of video.

5. For a colour quantisation scheme with a maximum of 262,144 distinguishable colours, the mean number of colours changed between two successive frames varied in the range of 463–1126 colours for normal vision, 38–81 for protanopia, and 35–67 for deuteranopia. Again, relative to normal vision, the reduction of the inter-frame colour change rate was worse in the case of deuteranopia than protanopia for every particular video (16.3 vs. 15.2, 12.9 vs. 11.9, and 16.9 vs. 13.8, respectively).
Acknowledgments This work was supported by the grant number GR/R87642/01 from the UK Research Council and partly by the EU project INTAS 04-77-7036.
References

1. Hanjalić, A.: Content-Based Analysis of Digital Video. Kluwer Academic Publishers, Boston (2004)
2. Tseng, B.L., Lin, C.Y., Smith, J.R.: Using MPEG-7 and MPEG-21 for personalizing video. IEEE Trans. Multimedia 11(1) (2004) 42–52
3. Wu, M.Y., Ma, S., Shu, W.: Scheduled video delivery: a scalable on-demand video delivery scheme. IEEE Trans. Multimedia 8(1) (2006) 179–187
4. Feiten, B., Wolf, I., Oh, E., Seo, J., Kim, H.K.: Audio adaptation according to usage environment and perceptual quality metrics. IEEE Trans. Multimedia 7(3) (2005) 446–453
5. Smeulders, A.W.M., Worring, M., Santini, S., Gupta, A., Jain, R.: Content-based image retrieval at the end of the early years. IEEE Trans. Pattern Analysis Mach. Intel. 22(12) (2000) 1349–1380
6. Vetro, A., Timmerer, C.: Digital item adaptation: overview of standardization and research activities. IEEE Trans. Multimedia 7(3) (2005) 418–426
7. Nam, J., Ro, Y.M., Huh, Y., Kim, M.: Visual content adaptation according to user perception characteristics. IEEE Trans. Multimedia 7(3) (2005) 435–445
8. Ghinea, G., Thomas, J.P.: Quality of perception: user quality of service in multimedia presentations. IEEE Trans. Multimedia 7(4) (2005) 786–789
9. ISO: Information Technology. Multimedia Framework. Part 7: Digital Item Adaptation. (2004) ISO/IEC 21000-7
10. Bozdogan, H., ed.: Statistical Data Mining and Knowledge Discovery. Chapman & Hall/CRC Press, Boca Raton, Florida (2004)
11. Abbass, H.A., Sarker, R.A., Newton, C.S., eds.: Data Mining: A Heuristic Approach. Idea Group Publishing, Hershey, London (2002)
12. Zhu, X., Wu, X., Elmagarmid, A.K., Feng, Z., Wu, L.: Video data mining: semantic indexing and event detection from the association perspective. IEEE Trans. Knowl. Data Eng. 17(5) (2005) 665–677
13. Joyce, R.A., Liu, B.: Temporal segmentation of video using frame and histogram space. IEEE Trans. Multimedia 8(1) (2006) 130–140
14. Manjunath, B.S., Ohm, J.R., Vasudevan, V.V., Yamada, A.: Color and texture descriptors. IEEE Trans. Circ. Syst. Video Technol. 11(6) (2001) 703–715
15. Ferman, A.M., Tekalp, A.M., Mehrotra, R.: Robust color histogram descriptors for video segment retrieval and identification. IEEE Trans. Image Proc. 11(5) (2002) 497–508
16. Huang, J., Kumar, S., Mitra, M., Zhu, W.J., Zabih, R.: Image indexing using color correlograms. In: 16th IEEE Conf. on Computer Vision and Pattern Recognition, San Juan, Puerto Rico (1997) 762–768
17. Kovalev, V., Volmer, S.: Color co-occurrence descriptors for querying-by-example. In: Int. Conf. on Multimedia Modelling, Lausanne, Switzerland, IEEE Computer Society Press (1998) 32–38
18. Lee, H.Y., Lee, H.K., Ha, Y.H.: Spatial color descriptor for image retrieval and video segmentation. IEEE Trans. Multimedia 5(3) (2003) 358–367
19. Viénot, F., Brettel, H., Ott, L., M'Barek, A.B., Mollon, J.: What do color-blind people see? Nature 376 (1995) 127–128
20. Rigden, C.: The eye of the beholder: designing for colour-blind users. British Telecom Engineering 17 (1999) 2–6
21. Brettel, H., Viénot, F., Mollon, J.: Computerized simulation of color appearance for dichromats. Journal of the Optical Society of America 14 (1997) 2647–2655
22. Viénot, F., Brettel, H., Mollon, J.: Digital video colourmaps for checking the legibility of displays by dichromats. Color Research Appl. 24(4) (1999) 243–252
23. Meyer, G.W., Greenberg, D.P.: Color-defective vision and computer graphics displays. IEEE Computer Graphics and Applications 8(5) (1988) 28–40
24. Kovalev, V.A.: Towards image retrieval for eight percent of color-blind men. In: 17th Int. Conf. on Pattern Recognition (ICPR'04), Volume 2, Cambridge, UK, IEEE Computer Society Press (2004) 943–946
25. Kovalev, V., Petrou, M.: Optimising the choice of colours of an image database for dichromats. In: Perner, P., Imiya, A., eds.: Machine Learning and Data Mining in Pattern Recognition. Volume LNAI 3587, Springer Verlag (2005) 456–465
26. Walraven, J., Alferdinck, J.W.: Color displays for the color blind. In: IS&T/SID Fifth Color Imaging Conference: Color Science, Systems and Applications, Scottsdale, Arizona (1997) 17–22
27. Becker, R.A., Chambers, J.M., Wilks, A.R.: The New S Language. Chapman and Hall, New York (1988)
28. Everitt, B.: A Handbook of Statistical Analyses Using S-Plus. 2nd edn. Chapman & Hall/CRC Press, Boca Raton, Florida (2002)
29. Hunt, R.W.G.: Measuring Color. 2nd edn. Science and Industrial Technology. Ellis Horwood, New York (1991)
30. Sharma, G.: Digital Color Imaging Handbook. Volume 11 of Electrical Engineering & Applied Signal Processing. CRC Press LLC, New York (2003)