COLOR EDGE DETECTOR USING JOINTLY HUE, SATURATION AND INTENSITY Thierry Carron - Patrick Lambert Laboratoire d’Automatique et de MicroInformatique Industrielle LAMII / CESALP - Université de Savoie - BP 806 -74016 Annecy Cedex FRANCE (CNRS-GdR 134 Signal and Image Processing) email:
[email protected] and
[email protected]
ABSTRACT In the Hue Saturation Intensity (HSI) space, a Hue difference taking the Hue relevance into account is defined. A Hue gradient operator is built up from this Hue difference. Two color edge detectors are then proposed; these detectors mix the H, S and I gradients in different ways. Experimental results are presented.
INTRODUCTION In color image processing, a pixel is often defined by three values corresponding to the tristimuli R (Red), G (Green) and B (Blue). The RGB system is the classical physical color system used to digitize color images. Many processing techniques have been developed in the RGB space, such as filtering, edge detection or region segmentation [1][2][3]. Using the RGB system ensures that no distortion of the initial information is introduced, and the algorithms can be extended to the general case of multi-spectral images. However, the RGB representation is far from the human concept of color. In the RGB space, color features are highly correlated, and it is impossible to evaluate the similarity of two colors from their distance in this space. Various kinds of color features can be calculated from the components R, G and B, each with its own characteristics. For instance, Hue (H), Saturation (S) and Intensity (I) are better suited to the segmentation of color images [4][5][6]. Along this line, Nevatia [7] has proposed edge detection techniques based on searching for local discontinuities in a chromatic feature space [8]. In this paper, we propose to use Hue, Saturation and Intensity jointly, according to their relevance, in order to define a new color edge detector. The paper is organized as follows: Section 1 describes the proposed HSI representation. A definition of the difference between Hue features is introduced in section 2. In section 3, we present the color edge operator. Some experiments on color images are then reported in section 4.
1. HSI REPRESENTATION
1.1. RGB - HSI transform
There are many different HSI transformations; one of them is given below. To map the input values into the HSI space, the RGB values are first converted into the YC1C2 tristimulus values:

  [ Y  ]   [ 1/3    1/3    1/3  ]   [ R ]
  [ C1 ] = [  1    -1/2   -1/2  ] · [ G ]
  [ C2 ]   [  0   -√3/2   √3/2  ]   [ B ]

The YC1C2 tristimulus values are then transformed into HSI coordinates by means of the following equations:

  I = Y
  S = √(C1² + C2²)
  If C2 ≤ 0 then H = cos⁻¹(C1 / S)
  Else H = 2·π − cos⁻¹(C1 / S)
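As a concrete illustration, a minimal NumPy sketch of this transform is given below, assuming floating-point RGB inputs in [0, 255]. The function name rgb_to_hsi, the handling of the undefined Hue on the achromatic axis (S = 0) and the final normalization of H to [0, 255] are our own choices, not part of the original definition.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Map an (..., 3) RGB array to (H, S, I) through the YC1C2 tristimulus values.

    Sketch of the transform of section 1.1; H is returned normalized to
    [0, 255] as in the rest of the paper, S and I keep the RGB scale.
    """
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # Linear RGB -> YC1C2 transform
    y  = (r + g + b) / 3.0
    c1 = r - (g + b) / 2.0
    c2 = np.sqrt(3.0) / 2.0 * (b - g)

    i = y
    s = np.sqrt(c1 ** 2 + c2 ** 2)

    # Hue: angle of the (C1, C2) point in the chrominance plane, in [0, 2*pi)
    with np.errstate(divide="ignore", invalid="ignore"):
        h = np.arccos(np.clip(c1 / s, -1.0, 1.0))
    h = np.where(c2 <= 0, h, 2.0 * np.pi - h)
    h = np.where(s > 0, h, 0.0)        # Hue is undefined on the achromatic axis

    h = h / (2.0 * np.pi) * 255.0      # normalize to [0, 255] as in the paper
    return h, s, i
```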
Intensity (I) represents the average grey level. Hue (H) represents the color feature; it has the property of being relatively unaffected by the shadows caused by the light source. Saturation (S) measures the degree of purity of the Hue. It should be noticed that the Hue feature has an angular representation (from 0 to 255 after normalization): values near 0 or 255 represent red pixels, values near 85 represent green pixels and values near 170 represent blue pixels. Figure 1 shows the localization of the basic colors on the Hue component (Red 0, Yellow 42, Green 85, Cyan 128, Blue 170, Magenta 213).
fig. 1: Hue component
This transformation has been chosen for two reasons. First, a simple reciprocal transformation (HSI to RGB) can be calculated. Second, the color features, especially Saturation, are less sensitive to non-linear effects [6][9] than the color features defined by the classical transformation: S = 1 − 3·Min(R, G, B) / (R + G + B).
1.2. Noise sensitivity and relevance of the Hue feature
Because of quantization errors, non-linear effects and noise on the RGB components, the Hue feature is obviously inaccurate. Moreover, the noise sensitivity of the Hue component is not homogeneous in the chrominance plane C1C2. This characteristic is illustrated below. We consider two uniform images whose RGB values correspond to the same Hue value but to different Saturation levels; they are represented by point P1 (low Saturation) and point P2 (high Saturation) in the chrominance plane. Before calculating the HSI features, the RGB components are corrupted by additive, white, independent Gaussian noise. Figure 2 shows the resulting Hue spread in the chrominance plane C1C2.
fig. 2: Hue spread in the C1C2 plane (the spread ∆H(P1)max at low Saturation is much larger than ∆H(P2)max at high Saturation)
It can be shown that the Hue angle range is smaller in the high chromatic region than in the low chromatic region [5]. This variable noise sensitivity can also be shown by observing the dependence of the Hue noise distribution on the Saturation level (figure 3). Hue is obtained from homogeneous RGB components corrupted by additive, white, independent Gaussian noise; the components have the same standard deviation and the initial Hue value is 100.
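This noise-sensitivity experiment is easy to reproduce. The short Monte-Carlo sketch below is our own illustration: it reuses a hypothetical rgb_to_hsi function such as the one sketched in section 1.1, and the noise level and the two test colors (same Hue, different Saturation) are chosen arbitrarily.

```python
import numpy as np

def hue_spread(rgb, sigma=3.0, n=10000):
    """Estimate the Hue standard deviation when an RGB color is corrupted by
    additive white independent Gaussian noise of standard deviation 'sigma'."""
    rng = np.random.default_rng(0)
    noisy = np.asarray(rgb, float) + rng.normal(0.0, sigma, size=(n, 3))
    h, _, _ = rgb_to_hsi(noisy)   # rgb_to_hsi: the HSI conversion sketched in section 1.1
    return h.std()

# Two colors with the same Hue but different Saturation levels
low_sat  = (120, 140, 120)   # P1: close to the achromatic axis
high_sat = ( 40, 220,  40)   # P2: highly saturated

print("Hue std, low Saturation :", hue_spread(low_sat))
print("Hue std, high Saturation:", hue_spread(high_sat))
# The low-Saturation color yields a much wider Hue spread, as in figure 2.
```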
fig. 3: Dependence of the Hue noise distribution on the Saturation level (distributions around the initial Hue value 100, for Saturation levels 20, 40, 60, 80 and 90)
If Saturation is low (near the achromatic axis), Hue is very noisy or unstable and hence irrelevant. Conversely, if Saturation is high, Hue is very relevant; its sensitivity to image noise is then even lower than that of Intensity. Figure 4 shows the evolution of the ratio between the Hue variance and the noise variance on each RGB component; the same study is also carried out for Intensity.
fig. 4: Ratio between the Hue or Intensity variance and the RGB noise variance, as a function of Saturation
Because of this variation of the Hue relevance as a function of Saturation, it is difficult to define a difference between Hue features.

2. DEFINITION OF A SIGNED DIFFERENCE BETWEEN HUE FEATURES
2.1. Difference between two Hue values
The definition of the difference between the Hue features of two pixels must take into account:
❏ the circular representation of Hue;
❏ the fact that the relations between the colors (Red, Yellow, ...) and their numerical values are arbitrary: for instance, there is no reason to have a greater numerical difference between Red and Blue than between Red and Yellow.
The problem of the circular representation is easily solved by calculating a difference modulo 255. Let (H1 − H2)255 denote this difference modulo 255 between two Hue features H1 and H2 (the signed representative of smallest absolute value). A solution to the second problem consists of thresholding the absolute value of this difference. The threshold ∆Hmax is defined as the Hue difference between two consecutive vertices of the initial RGB cube. So we can define a signed Hue difference δ(H1, H2) by:

  If |(H1 − H2)255| ≤ ∆Hmax then δ(H1, H2) = (H1 − H2)255
  Else, if (H1 − H2)255 ≥ 0 then δ(H1, H2) = +∆Hmax, else δ(H1, H2) = −∆Hmax
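A compact sketch of this signed, thresholded difference is given below (our own illustration; the numerical value 42.5 is our computation of the Hue difference between two consecutive vertices of the RGB cube, e.g. Red and Yellow, after normalization to [0, 255]).

```python
import numpy as np

DELTA_H_MAX = 42.5  # Hue difference between two consecutive vertices of the RGB cube

def signed_hue_difference(h1, h2, dh_max=DELTA_H_MAX):
    """delta(H1, H2): signed Hue difference, circular (modulo 255) and
    thresholded at +/- dh_max, as defined in section 2.1."""
    d = (np.asarray(h1, float) - np.asarray(h2, float)) % 255.0
    d = np.where(d > 127.5, d - 255.0, d)   # signed difference of smallest magnitude
    return np.clip(d, -dh_max, dh_max)      # threshold at +/- dh_max
```

The clipping implements the threshold directly: Hue values 250 and 10 differ by +15 (the short way around the circle), while Red (0) against Blue (170) saturates at ∆Hmax.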
2.2. Measure of the Hue relevance
To calculate a relevant difference between the Hue features of two pixels, we have to take into account the relevance of each Hue value. We have therefore defined a function α(S), ranging from 0 to 1, measuring the relevance of Hue with respect to the Saturation level. An example of α(S) is given in figure 5.
fig. 5: Function measuring the Hue relevance (α ≈ 0 below S0, where Hue is irrelevant; α ≈ 1 above S0, where Hue is relevant)
The evolution of this function is controlled by two parameters, S0 and β:
❏ S0 is the Saturation level corresponding to the medium relevance of Hue (α(S0) = 0.5);
❏ β is a parameter controlling the slope of α at S0.
Hence, in order to obtain the joint relevance of two pixels, respectively characterized by (H1, S1, I1) and (H2, S2, I2), we calculate the geometrical mean of the coefficients α(S1) and α(S2) measuring the Hue relevance. Let p(S1, S2) = √(α(S1)·α(S2)) denote this geometrical mean. Figure 6 gives an example of the evolution of this coefficient (S1 and S2 range from 0 to 255 after normalization).
fig. 6: Evolution of p(S1, S2) as a function of S1 and S2
The purpose of this coefficient is to moderate the difference between two Hue features, in order to yield a relevant difference. As shown in figure 6, the difference is fully taken into account only if the two Saturation levels are jointly high. The relevant signed difference between two Hue features, ∆H(H1, H2), is then defined by:
  ∆H(H1, H2) = p(S1, S2) · δ(H1, H2)
where δ(H1, H2) is the thresholded signed difference (modulo 255) defined in section 2.1.
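A possible implementation of this weighting is sketched below. The paper does not give a closed form for α(S), so the logistic (sigmoid) shape used here, parameterized by S0 and β with arbitrary default values, is only an assumption consistent with figure 5; signed_hue_difference is the δ function sketched after section 2.1.

```python
import numpy as np

def alpha(s, s0=50.0, beta=0.1):
    """Hue-relevance coefficient alpha(S) in [0, 1].
    Assumed sigmoid shape: ~0 for low Saturation, ~1 for high Saturation,
    value 0.5 at S = s0, slope at s0 controlled by beta."""
    return 1.0 / (1.0 + np.exp(-beta * (np.asarray(s, float) - s0)))

def joint_relevance(s1, s2, **kw):
    """p(S1, S2): geometrical mean of the two Hue-relevance coefficients."""
    return np.sqrt(alpha(s1, **kw) * alpha(s2, **kw))

def relevant_hue_difference(h1, s1, h2, s2, **kw):
    """Delta_H(H1, H2) = p(S1, S2) * delta(H1, H2), as defined in section 2.2."""
    return joint_relevance(s1, s2, **kw) * signed_hue_difference(h1, h2)
```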
2.3. Hue gradient operator
Gradient operators using differences between features generally perform a local averaging in order to reduce the effects of noise. For example, the grey level Sobel operator uses the two masks:

  Gx :  +1   0  -1        Gy :  +1  +2  +1
        +2   0  -2               0   0   0
        +1   0  -1              -1  -2  -1

where the center of each mask is placed on the current pixel (xc, yc). It is possible to extend this operator to define a Hue gradient, whose values are calculated as a sum of differences ∆H(). The Hue Sobel component along the x direction is then defined by:

  GxH(xc, yc) = ∆H(H-1,-1, H1,-1) + 2·∆H(H-1,0, H1,0) + ∆H(H-1,1, H1,1)   (Eq. 1)

where Hi,j denotes the Hue of the pixel (xc + i, yc + j). A similar formula is used for the vertical component GyH. Based upon this definition of the Hue gradient, it is possible to build up a color edge detector mixing the Hue, Saturation and Intensity components.

3. DEFINITION OF A COLOR EDGE DETECTOR
3.1. Strategies of a color gradient operator
Two strategies have been investigated, depending on the way the different features are mixed.
❏ First strategy: the Hue information is regarded as a complement to the Intensity and Saturation information. The color difference may then be defined by:
  ∆C = |∆H| + |∆S| + |∆I|
The absolute values are used to avoid compensations between the different differences. In this case, the difference is enhanced between regions where Hue is relevant.
❏ Second strategy: the Hue information is regarded as more important than the Intensity or Saturation information. The color difference is then defined by:
  ∆C = |∆H| + (1 - p)·|∆S| + (1 - p)·|∆I|
In this case, the Intensity and Saturation differences are effective only if the Hue difference is irrelevant (p ≅ 0).
Both strategies were used to build up a color gradient operator based upon the Sobel technique.

3.2. A color Sobel operator using Hue as a complement of information
3.2.1. Evaluation of the modulus
The Hue gradient of the pixel (xc, yc) is calculated as defined by Eq. 1. The Intensity and Saturation gradients are calculated with classical Sobel operators. The horizontal and vertical components of the color gradient (GxC and GyC) are then given by:
  GxC = |GxH| + |GxS| + |GxI|
  GyC = |GyH| + |GyS| + |GyI|
The modulus of the color gradient is then given by:
  GC(xc, yc) = √( GxC(xc, yc)² + GyC(xc, yc)² )
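A sketch of this first strategy on whole images is given below. It is our own illustration, assuming that H, S and I are 2-D NumPy arrays produced by the transform of section 1.1, reusing relevant_hue_difference from the sketches above, and using scipy.ndimage.sobel for the classical Intensity and Saturation components; treating axis 0 as the x direction is an arbitrary convention.

```python
import numpy as np
from scipy.ndimage import sobel

def hue_sobel(h, s):
    """Hue Sobel components (Eq. 1), built from the relevance-weighted
    difference Delta_H of section 2.2 (relevant_hue_difference above)."""
    def shift(a, di, dj):
        # value of a at (x + di, y + dj); wrap-around borders, enough for a sketch
        return np.roll(np.roll(a, -di, axis=0), -dj, axis=1)

    def component(axis):
        g = np.zeros_like(h, dtype=float)
        for w, j in ((1, -1), (2, 0), (1, 1)):
            if axis == 0:   # x direction: compare columns x-1 and x+1 (Eq. 1)
                h1, s1 = shift(h, -1, j), shift(s, -1, j)
                h2, s2 = shift(h, +1, j), shift(s, +1, j)
            else:           # y direction: compare rows y-1 and y+1
                h1, s1 = shift(h, j, -1), shift(s, j, -1)
                h2, s2 = shift(h, j, +1), shift(s, j, +1)
            g += w * relevant_hue_difference(h1, s1, h2, s2)
        return g

    return component(0), component(1)

def color_sobel_complement(h, s, i):
    """First strategy (section 3.2): Hue used as a complement of information."""
    gxh, gyh = hue_sobel(h, s)
    gxc = np.abs(gxh) + np.abs(sobel(s, axis=0)) + np.abs(sobel(i, axis=0))
    gyc = np.abs(gyh) + np.abs(sobel(s, axis=1)) + np.abs(sobel(i, axis=1))
    return np.sqrt(gxc ** 2 + gyc ** 2)   # modulus of the color gradient
```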
3.2.2. Evaluation of the direction
One way to define the direction of the gradient is to choose the direction of the greatest squared gradient among GH, GS and GI (the squared gradient |G|² being the sum of the two squared gradient components). For example, if |GH(xc, yc)|² > |GS(xc, yc)|² > |GI(xc, yc)|², then the direction is estimated by:
  ΦC(xc, yc) = tan⁻¹( GyH(xc, yc) / GxH(xc, yc) )
The direction of the color gradient is thus defined over a domain of magnitude π.

3.3. A color Sobel operator privileging Hue
In this strategy, the Saturation and Intensity information is moderated by a coefficient (1 - p) acting inversely to the Hue relevance. The x Sobel Intensity component is then given by:
  GxI(xc, yc) = (1 - p(S-1,-1, S1,-1))·(I-1,-1 - I1,-1) + 2·(1 - p(S-1,0, S1,0))·(I-1,0 - I1,0) + (1 - p(S-1,1, S1,1))·(I-1,1 - I1,1)
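Following the same conventions as the earlier sketches, this second strategy can be written as below (again an assumption-level illustration, covering both the x and y components and the Saturation feature discussed in the next paragraph; the (1 - p) weighting is applied inside a hand-rolled Sobel so that it uses the same pixel pairs as Eq. 1).

```python
import numpy as np

def weighted_sobel(f, s, axis=0):
    """Sobel component of the feature f (Intensity or Saturation), with each
    pixel-pair difference moderated by (1 - p(S1, S2)) as in section 3.3."""
    def shift(a, di, dj):
        # value of a at (x + di, y + dj); wrap-around borders, enough for a sketch
        return np.roll(np.roll(a, -di, axis=0), -dj, axis=1)

    g = np.zeros_like(f, dtype=float)
    for w, j in ((1, -1), (2, 0), (1, 1)):
        (di1, dj1), (di2, dj2) = ((-1, j), (1, j)) if axis == 0 else ((j, -1), (j, 1))
        weight = 1.0 - joint_relevance(shift(s, di1, dj1), shift(s, di2, dj2))
        g += w * weight * (shift(f, di1, dj1) - shift(f, di2, dj2))
    return g

def color_sobel_privileging_hue(h, s, i):
    """Second strategy: Hue dominates; Saturation and Intensity act where Hue is irrelevant."""
    gxh, gyh = hue_sobel(h, s)   # Hue Sobel from the previous sketch
    gxc = np.abs(gxh) + np.abs(weighted_sobel(s, s, 0)) + np.abs(weighted_sobel(i, s, 0))
    gyc = np.abs(gyh) + np.abs(weighted_sobel(s, s, 1)) + np.abs(weighted_sobel(i, s, 1))
    return np.sqrt(gxc ** 2 + gyc ** 2)
```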
The y Sobel component is calculated in the same way, and the same formulas are used for the Saturation gradient components. The modulus and the direction of the color gradient are estimated as defined in section 3.2. We noticed that this operator is robust against the shadows caused by the light source. The choice between the two strategies depends on the context.

4. EXPERIMENTAL RESULTS
We present experimental results obtained with real color images. The first one is composed of different colored objects whose sides are exposed to different illuminations. The second one is a map of France in which some adjacent regions differ only in Saturation. In the first image, the grey level Sobel finds an edge at the ridges of a single cube because of the difference in Intensity levels (arrows 1 on the yellow cube). The color Sobel privileging Hue does not find this edge: if Hue is really relevant, this Sobel is not sensitive to the shadow caused by the light source. The color Sobel using Hue as a complement gives enhanced edges compared with those of the grey level Sobel (arrow 2). In the second image, we notice that the grey level Sobel and the color Sobel privileging Hue miss a region (arrow 3). This region is included in a region which has similar
Intensity and Hue levels but a different Saturation level; it can therefore only be found by using the color Sobel mixing all the component gradients. Because of the lower noise sensitivity of Hue, we also notice that edges are better defined with the color Sobel privileging Hue than with the grey level Sobel (arrow 4).

CONCLUSION
In color image processing, Hue is closely related to the human perception of colors. Furthermore, its sensitivity to image noise may be lower than that of Intensity. However, Hue is difficult to process because of its variable relevance. In this paper we have defined a difference between two Hue values which takes the Saturation level into account. By using this definition, it is possible to build up several gradient operators, depending on the relative weight given to Hue with respect to Saturation and Intensity.

REFERENCES
[1] A. Cumani, "Edge detection in multispectral images", Graphical Models and Image Processing, vol. 53, no. 1, pp. 40-51, Jan. 1991.
[2] H. C. Lee and D. Cok, "Detecting boundaries in a vector field", IEEE Trans. on Signal Processing, vol. 39, no. 5, May 1991.
[3] P. E. Trahanias and A. N. Venetsanopoulos, "Color edge detection using vector order statistics", IEEE Trans. on Image Processing, vol. 2, no. 2, April 1993.
[4] Y-I. Ohta, T. Kanade and T. Sakai, "Color information for region segmentation", Computer Graphics and Image Processing, vol. 13, pp. 222-241, 1980.
[5] T. Miyawaki, S. Ishibashi and F. Kishino, "A region segmentation method using color information", Actes du 1er colloque sur les chaînes professionnelles de l'image, IMAGECOM 90, Bordeaux, France, Nov. 1990, pp. 288-292.
[6] T. Carron, P. Lambert and P. Morel, "Analyse d'une segmentation en régions utilisant conjointement les informations de chrominance et d'intensité", Proceedings of the 14th GRETSI conference, Juan-les-Pins, France, Sept. 1993, pp. 1157-1160.
[7] R. Nevatia, "A color edge detector and its use in scene segmentation", IEEE Trans. on Systems, Man, and Cybernetics, vol. SMC-7, no. 11, Nov. 1977, pp. 820-826.
[8] J. T. Allen and T. Huntsberger, "Comparing color edge detection and segmentation methods", Proceedings of IEEE Southeastcon 1989, pp. 722-728.
[9] J. Kender, "Saturation, hue and normalized color: calculation, digitization effects, and use", Master's thesis, Dept. of Computer Science, Carnegie-Mellon University, 1976.