Contrast image correction method

Journal of Electronic Imaging 19(2), 1 (Apr–Jun 2010)



Raimondo Schettini
Francesca Gasparini
Silvia Corchs
Fabrizio Marini
Università degli Studi di Milano-Bicocca
Dipartimento di Informatica, Sistemistica e Comunicazione
Viale Sarca 336
20126 Milano, Italy
E-mail: [email protected]


Alessandro Capra
Alfio Castorina
STMicroelectronics Catania
Advanced System Technology—Catania Lab
Italy


Abstract. A method for contrast enhancement is proposed. The algorithm is based on a local and image-dependent exponential correction. The technique aims to correct images that simultaneously present overexposed and underexposed regions. To prevent halo artifacts, the bilateral filter is used as the mask of the exponential correction. Depending on the characteristics of the image (piloted by histogram analysis), an automated parameter-tuning step is introduced, followed by stretching, clipping, and saturation-preserving treatments. Comparisons with other contrast enhancement techniques are presented. A Mean Opinion Score (MOS) experiment on grayscale images gives the greatest preference score to our algorithm. © 2010 SPIE and IS&T. [DOI: 10.1117/1.3386681]

1 Introduction
Let us consider a scene of a room illuminated by a window that looks out on a sunlit landscape. A human observer inside the room can easily see individual objects in the room, as well as features in the outdoor landscape, because the eye adapts locally as we scan the different regions of the scene. If we attempt to photograph our view, the result can be disappointing: either the window is overexposed and we cannot see outside, or the interior of the room is underexposed and looks black. Several methods for adjusting image contrast have been developed in the field of image processing. In general, we can discriminate between two classes of contrast corrections: global corrections and local corrections. Global contrast corrections can produce disappointing results when both shadow and highlight details have to be adjusted simultaneously. The advantage of local contrast corrections, on the other hand, is that they provide a method to map one input value to many different output values, depending on the values of the neighboring pixels, allowing in this way simultaneous shadow and highlight adjustments.

Of the global contrast enhancement techniques, we found gamma correction and histogram equalization to be the most common. Gamma correction is a simple exponential correction and is common in most codes for image processing. Histogram equalization techniques have also been used by different authors: based on the image's original gray-level distribution, the image's histogram is reshaped into a different one with a uniform distribution property in order to increase the contrast. Numerous improvements have also been made to simple equalization by incorporating models of perception.1–3 Different algorithms for local contrast correction have also been proposed. Moroney4 uses nonlinear masking in order to perform local contrast correction. This correction can simultaneously lighten shadows and darken highlights, and it is based on a simple pixel-wise gamma correction of the input data. However, one of the limitations of Moroney's algorithm (common to other local corrections as well) is the introduction of "halo" artifacts (due to the smoothing across scene boundaries) and also the shrinking of the dynamic range of the scene. The adaptive histogram equalization (AHE) methods use local image information to enhance the image. In Ref. 5, several AHE techniques are reviewed and compared. The author also proposed a new AHE method based on a "modified cumulation function" that introduces two parameters. Arici et al.6 presented a general framework based on histogram equalization where contrast enhancement is posed as an optimization problem that minimizes a cost function. The authors also introduce penalty terms into the optimization problem in order to handle noise robustness and black/white stretching. Both these methods can achieve different levels of contrast enhancement, from histogram equalization to no contrast enhancement, through the use of different adaptive parameters. Depending on the image content, these parameters have to be manually set.

Paper 09135R received Jul. 28, 2009; revised manuscript received Jan. 11, 2010; accepted for publication Mar. 2, 2010; published online Dec. xx, xxxx. 1017-9909/2010/19(2)/1/0/$25.00 © 2010 SPIE and IS&T.


The retinex model introduced by Land and McCann7 has been applied to many image processing tasks, in particular image enhancement. This model aims to predict the sensory response of lightness. The fundamental concept behind retinex computation of lightness at a given image pixel is the comparison of the pixel's value to that of other pixels; the main difference between the various retinex algorithms is the way in which the comparison pixels are chosen, including the order in which they are chosen. The original way of defining comparisons is by following a path, or set of paths, from pixel to neighboring pixel through the image. Jobson et al.8 and Rahman et al.9–11 developed the retinex concept into a full-scale automatic image enhancement algorithm. Their method, called multiscale retinex with color restoration (MSRCR), combines color constancy with local contrast/lightness enhancement to transform digital images into renditions that approach the realism of direct scene observation. A different multiresolution approach for contrast enhancement is presented by Starck et al.12 Using the curvelet transform, the authors address the multiscale enhancement problem and apply their curvelet-based enhancement technique to edge detection and segmentation.

Rizzi et al.13 presented an algorithm for unsupervised enhancement of digital images with simultaneous global and local effects, called automatic color equalization (ACE). Inspired by some adaptation mechanisms of human vision, ACE realizes a local filtering effect by taking into account the color spatial distribution in the image. ACE has proven to achieve an effective color-constant correction and a satisfactory tone equalization, performing global and local image corrections simultaneously. However, the computational cost of the algorithm is very high.
Fairchild and Johnson14 formulated a model called the image color appearance model (iCAM). The objective in formulating iCAM was to provide traditional color appearance capabilities, spatial vision attributes, and color difference metrics in a simple model for practical applications such as high-dynamic-range tone mapping.

Rendering high-dynamic-range images (HDRIs) on low-contrast media, a process known as tone mapping, is also related to the problem of contrast enhancement. Photographers use dodging and burning to locally adjust print exposure in a darkroom, inspiring an early paper by Chiu et al.15 that constructs a locally varying attenuation factor by repeatedly clipping and low-pass filtering the scene. Their method works well in smoothly shaded regions. Larson et al.16 presented a tone reproduction operator that preserves visibility in high-dynamic-range scenes. They introduced a new histogram adjustment technique based on the population of local adaptation luminance in a scene. Tumblin and Turk17 devised a hierarchy that closely follows artistic methods for scene renderings. Each level of the hierarchy is made from a simplified version of the original scene consisting of sharp boundaries and smooth shadings. They named the sharpening and smoothing method low-curvature image simplifiers (LCIS). The technique was shown to be effective in converting high-contrast scenes to low-contrast, highly detailed display images. A survey of early methods and algorithms able to deal with high-dynamic-range images is presented in Battiato et al.18 More recently, other authors have explored similar ideas; for example, Meylan and Süsstrunk19 proposed a method to render high-dynamic-range images based on the center-surround retinex model. Their method uses an adaptive filter, whose shape follows the image's high-contrast edges, thus reducing halo artifacts common to other methods. Another well-known tone mapping approach is the fast bilateral filtering of Durand and Dorsey.20 Their method is based on anisotropic diffusion to enhance boundaries while smoothing nonsignificant intensity variations. This strategy is similar to that of Tumblin and Turk,17 but the use of the bilateral filter enables a speed-up compared to the partial-derivative filter proposed by Tumblin and Turk, as well as enhanced stability. Pattanaik and colleagues21 developed a computational model of adaptation and spatial vision for tone reproduction. Their model is based on a multiscale representation of pattern, luminance, and color processing in the human visual system.

In our work, a local contrast correction is developed starting from Moroney's technique, where instead of the Gaussian filter, which produces the halo artifacts, the bilateral filter of Tomasi and Manduchi22 is used. Bilateral filtering smoothes images while preserving edges by means of a nonlinear combination of nearby image values. The bilateral filter combines gray levels or colors based on both their geometric closeness and their photometric similarity, and it prefers near values to distant values in both the spatial and intensity domains. We also introduce an image dependency to tune the strength of the correction with a parameter automatically evaluated from the global statistics of the image. Last, in order to improve the overall image enhancement, we introduce a histogram clipping procedure, also based on the image properties and automatically piloted by the histogram analysis, and an algorithm for color saturation gain. Preliminary results of this work have been presented in Ref. 23.

The paper is organized as follows. In Sec. 2, our algorithm is presented. In Sec. 3, we compare and discuss its performance with respect to other image enhancement methods available in the literature. A reliable way of assessing the quality of an image is by subjective evaluation; therefore, a psychovisual test (Mean Opinion Score, or MOS) is performed to evaluate the quality of the correction. Last, Sec. 4 summarizes the conclusions.


2 Local Contrast Correction (LCC) Method

2.1 Local Exponential Correction in LCC
The method we propose for contrast enhancement is based on a local and image-dependent exponential correction. The simplest exponential correction, better known as gamma correction, is common in most image processing codes and consists of elaborating the input image through a constant power function with exponent γ. Let us assume for simplicity a gray image I(i,j) with 8-bit range. The gamma correction transforms each pixel I(i,j) of the input image into the output O(i,j) according to the following rule:

O(i,j) = 255 [I(i,j)/255]^γ,    (1)

where γ is a positive number that usually varies between 0 and 3. This correction gives good results for totally underexposed or overexposed images. However, when both underexposed and overexposed regions are simultaneously present in an image, this correction is not satisfactory. In the image presented in Fig. 1 (left), the region of the window is well illuminated, whereas the rest of the photo is too dark to see the details. Applying gamma correction (with γ = 0.35), the underexposed regions become lighter as desired (making the details noticeable), but the central region of the photo becomes overexposed, as seen in Fig. 1 (right).

Fig. 1 (a) Original image with simultaneous underexposed and overexposed regions. (b) Gamma correction with γ = 0.35.

The extension of Eq. (1) to a color image is straightforward, applying the rule to each of the components in the RGB space or only to the luminance Y in the YCbCr space. In what follows, we continue considering the case of a gray image. For an image that has only certain regions with the correct illumination and other regions that are not well exposed, a local correction is needed that allows for simultaneous shadow and highlight adjustments. As we are interested in a local correction, the exponent of the gamma correction will not be a constant but instead will be chosen as a function that depends on the point (i,j) to be corrected and on its neighboring pixels N(i,j). Equation (1) thus becomes:

O(i,j) = 255 [I(i,j)/255]^γ[i,j,N(i,j)].    (2)

Moroney4 suggested the following expression for the exponent:

γ[i,j,N(i,j)] = 2^[(128 − mask(i,j))/128],    (3)

where mask(i,j) is an inverted Gaussian low-pass filtered version of the intensity of the input image. Mask values greater than 128, corresponding to dark pixels with dark neighbors in the original image, give rise to exponents γ smaller than 1 and therefore, from Eq. (2), to an increase of the luminance. Values smaller than 128, corresponding to bright pixels with bright neighbors in the original image, result in exponents γ greater than 1 and a decrease of the luminance. Mask values equal to 128 produce an exponent equal to one, and no modification of the original input is obtained. The greater the distance from the mean value 128, the stronger the correction. We note that white and black pixels, as in the gamma correction, remain unaltered independently of the value of the exponent. In this work, Eq. (3) becomes:

γ[i,j,N(i,j)] = α^[(128 − BFmask(i,j))/128],    (4)

where BFmask(i,j) is an inverted low-pass version of the intensity of the input image, filtered with a bilateral filter,22 and α is a parameter depending on the image properties. The creation of the mask and the choice of the parameter are extensively reported in the following sections.

2.2 Bilateral Filter in LCC
The Gaussian low-pass filtering computes a weighted average of pixel values in the neighborhood, in which the weights decrease with distance from the center. The assumption is that near pixels are likely to have similar values, and it is therefore appropriate to average them together. However, the assumption of slow spatial variations fails at edges, which are consequently blurred by low-pass filtering. In order to avoid averaging across edges, while still averaging within smooth regions, Tomasi and Manduchi22 presented the bilateral filter. Their idea was to do in the intensity range of an image what traditional filters do in its spatial domain. Two pixels can be close to one another, that is, occupy nearby spatial locations, or they can be similar to one another, that is, have nearby values. The goal of using the bilateral filter to calculate the mask in Eq. (4), instead of the Gaussian filter proposed by Moroney, is to decrease the halo effects that appear on certain images when only the spatial Gaussian filter is used. In particular, regions with high-intensity gradients are candidates to show the halo effect. One of the principal characteristics of the bilateral filter is to smooth images while preserving edges; therefore, we expect the halo effect to decrease. The LCC algorithm we propose is a local exponential correction, applied to the intensity, as follows:

O(i,j) = 255 [I(i,j)/255]^{α^[(128 − BFmask(i,j))/128]}.    (5)

The BFmask(i,j) is defined over a window of size (2K + 1) × (2K + 1) and is given by the bifiltered image of the inverted version of the input, Iinv(i,j) = 255 − I(i,j):

BFmask(i,j) = [1/k(i,j)] Σ_{p=i−K}^{i+K} Σ_{q=j−K}^{j+K} exp{−[(i − p)² + (j − q)²]/(2σ1²)} × exp{−[Iinv(i,j) − Iinv(p,q)]²/(2σ2²)} × Iinv(p,q),    (6)

where k(i,j) is the normalization factor given by:

k(i,j) = Σ_{p=i−K}^{i+K} Σ_{q=j−K}^{j+K} exp{−[(i − p)² + (j − q)²]/(2σ1²)} × exp{−[Iinv(i,j) − Iinv(p,q)]²/(2σ2²)}.    (7)

In Eqs. (6) and (7), σ1 is the standard deviation of the Gaussian function in the spatial domain, and σ2 is the standard deviation of the Gaussian function in the intensity domain. The parameter σ1 is based on the desired amount of low-pass filtering: a large σ1 blurs more, that is, combines values from more distant image locations. Similarly, σ2 is set to achieve the desired amount of combination of pixel values. As σ2 increases, the bilateral filter approaches the Gaussian low-pass filter. The dimension K of the window depends on the shape of the spatial Gaussian, through the following relation:

K = ⌊2.5 × σ1⌋,    (8)

where ⌊ ⌋ indicates "the integer part of." If σ1 is too high, the image is too blurred, the exponent tends to a constant, and Eq. (5) tends to the simple gamma correction of Eq. (1). Otherwise, if the window is too small, the image is not blurred, and the correction does not take into account the local properties of the image.

Fig. 2 Original image to be elaborated. Dimension: 511 × 341.

In Fig. 2, an image is presented that shows a high-intensity gradient and thus is a candidate to exhibit the halo effect. The LCC output (with α = 2) and the corresponding bifilter mask are shown in Fig. 3 (left and right, respectively). The Moroney correction and the corresponding spatial Gaussian mask are shown in Fig. 4 (left and right, respectively). Comparing these two figures, we observe that the halo artifacts present in Fig. 4 are significantly reduced in Fig. 3.

Fig. 4 (a) Moroney correction of Fig. 2. (b) Gaussian mask used.

Different authors have followed the ideas of Tomasi and Manduchi. In particular, Durand and Dorsey20 provided a theoretical framework for bilateral filtering and also accelerated the method using a piecewise-linear approximation in the intensity domain and subsampling in the spatial domain. A fast approximation of the bilateral filter using a signal processing approach has been presented by Paris and Durand.24 In our work, in order to make the bilateral filter algorithm faster, the spatial Gaussian in Eq. (6) is simply replaced by a box filter of dimension 3σ1. The elaborated images do not show important differences, and the times are greatly reduced (by a factor of 7 in the case of Fig. 2).
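The mask of Eqs. (6) and (7) and the correction of Eqs. (4) and (5) can be sketched directly. The following is our own slow reference implementation, not the authors' optimized code; the default σ1, σ2, and the test values are illustrative choices only:

```python
import numpy as np

def bf_mask(image, sigma1=10.0, sigma2=40.0):
    """Bilateral filter of the inverted intensity, Eqs. (6)-(8).

    sigma1 is the spatial and sigma2 the intensity standard deviation;
    the defaults here are our own illustrative choices.
    """
    inv = 255.0 - image.astype(np.float64)      # Iinv(i,j) = 255 - I(i,j)
    K = int(2.5 * sigma1)                       # window half-size, Eq. (8)
    padded = np.pad(inv, K, mode="edge")
    yy, xx = np.mgrid[-K:K + 1, -K:K + 1]
    spatial = np.exp(-(yy**2 + xx**2) / (2.0 * sigma1**2))  # spatial Gaussian
    out = np.empty_like(inv)
    h, w = inv.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * K + 1, j:j + 2 * K + 1]
            intensity = np.exp(-(inv[i, j] - window)**2 / (2.0 * sigma2**2))
            weights = spatial * intensity       # terms of Eqs. (6) and (7)
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out

def lcc(image, alpha=2.0, sigma1=10.0, sigma2=40.0):
    """Local contrast correction, Eqs. (4) and (5), on an 8-bit gray image."""
    mask = bf_mask(image, sigma1, sigma2)
    gamma = alpha ** ((128.0 - mask) / 128.0)   # Eq. (4)
    out = 255.0 * (image.astype(np.float64) / 255.0) ** gamma  # Eq. (5)
    return np.clip(out, 0, 255).astype(np.uint8)
```

On a flat image the mask is constant, so the correction reduces to the plain gamma of Eq. (1); for instance, a uniform value of 50 gives mask 205, exponent 2^(−77/128) ≈ 0.66, and brightens to about 87, while white pixels stay white, as noted in the text.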

2.3 α Parameter Optimization
In this section, we evaluate the most convenient value of α in Eq. (5) for the elaboration of a given image. In fact, it is preferable to perform different contrast corrections depending on the characteristics of the single shot. For low-contrast images, where a stronger correction is needed, α should be high, say between 2 and 3, while for better-contrasted images, α should diminish toward 1, which corresponds to no correction. In order to develop an automatic tool that can be applied to every image, a formulation of an estimated value of α is needed. Let us rewrite Eq. (5) in terms of expected values:

E[O(i,j)] = 255 × E{[I(i,j)/255]^α^[(128 − BFmask(i,j))/128]},    (9)

where E indicates the expected value. Starting from the Moroney formula [Eq. (3)], where the threshold for the mask inversion was set equal to 128, we set the mean value of a gray image equal to 128. Within this assumption, Eq. (9) becomes:

255 × E{[I(i,j)/255]^α^[(128 − BFmask(i,j))/128]} = 128.    (10)

As we know that 0 ≤ mask ≤ 255, we can consider the two extreme conditions, and from Eq. (10), we obtain the following two estimated values of α:

α ≅ ln(Ī/255)/ln(0.5)  when BFmask = 255,    (11)

α ≅ ln(0.5)/ln(Ī/255)  when BFmask = 0,    (12)

where Ī is the expected value, or mean value, of the intensity of the input image. If Ī < 128, that is, a dark image, mask values are toward 255, because the mask is associated with the negative of the intensity image, and thus Eq. (11) can be used for the estimation of α. Otherwise, if Ī > 128, that is, a bright image, α can be estimated with Eq. (12). The parameter α given by Eqs. (11) and (12) can also be considered as a criterion to decide whether it is worthwhile to apply the contrast enhancement method to the original image. That is, if α is close to one (α < 1.2), we can argue that the original image does not need to be elaborated. As an example, we show in Fig. 5 an original dark photo and the corresponding elaborated LCC images for different values of the α parameter. The value of α in this case is evaluated from Eq. (11) and results equal to 2.6.

Fig. 3 (a) LCC output of Fig. 2. (b) Bilateral filtered mask used.

Fig. 5 (a) Original image; dimension 320 × 240. (b) LCC output with α = 1.5. (c) LCC output with α = 2. (d) LCC output with α = 2.6.

The entire enhancement procedure of our method is organized as a chain of algorithms, as indicated in Fig. 6. In the following sections, we explain the remaining modules corresponding to stretching, clipping, and color saturation.
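The selection rule of Eqs. (11) and (12) can be sketched as follows. This is our own illustrative helper (the name `estimate_alpha` is ours); pure-black input, where the logarithm diverges, is not handled:

```python
import math
import numpy as np

def estimate_alpha(image):
    """Estimate alpha from the mean intensity, Eqs. (11)-(12)."""
    mean = float(np.mean(image))
    if mean < 128:   # dark image: mask values tend toward 255, use Eq. (11)
        return math.log(mean / 255.0) / math.log(0.5)
    else:            # bright image: mask values tend toward 0, use Eq. (12)
        return math.log(0.5) / math.log(mean / 255.0)

# A mid-gray image needs no correction: alpha -> 1.
print(round(estimate_alpha(np.full((4, 4), 127.5)), 2))  # -> 1.0
```

An estimate below the threshold mentioned above (α < 1.2) would then flag the image as not needing elaboration.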


Fig. 6 Flowchart of the local contrast correction method proposed here.

2.4 Contrast Enhancement Chain: Stretching, Clipping, and Saturation Gain in LCC

2.4.1 Stretching and clipping
From a deeper analysis of the intensity histogram before and after the proposed local correction, we find that despite a better occupation of the gray levels, the overall contrast enhancement is not satisfying. Also, especially for low-quality images with compression artifacts, the noise in the darker zones is enhanced. These effects, which make the processed image grayish, are intrinsic to the mathematical formulation of Eq. (5) adopted for the local correction. To overcome this undesirable loss in image quality, a further step of contrast enhancement, consisting of a stretching and clipping procedure, and an algorithm to increase the saturation are introduced in this section. The main characteristic of the contrast procedure we propose is that it is image dependent: stretching, and thus clipping, is piloted by the image histogram properties and is not fixed a priori. To determine the strength of the stretching, and thus the number of bins to be clipped, we consider how the darker regions occupy the intensity histogram before and after the LCC algorithm. The idea is that pixels belonging to a dark area, such as a dark object, that usually occupy a narrow and peaked group of bins at the beginning of the intensity histogram will populate more or less the same bins after a contrast enhancement algorithm. On the other hand, pixels of an underexposed background, which create a more spread histogram peak, must populate an even more widespread region of the histogram after the same algorithm. To evaluate how these dark pixels are distributed, the algorithm proceeds as follows:

1. The RGB input image is converted to the YCbCr space. (This space is chosen because it is common in the case of JPEG compression, but other spaces where the luminance and the chrominance components are separated can be adopted.)
2. The percentage of dark pixels in the image is computed. (A pixel is considered dark if its luminance Y is less than 35 and its chroma radius [(Cb − 128)² + (Cr − 128)²]^(1/2) is less than 20.)
3. If there are dark pixels, the number of stretched bins, bstr, is evaluated as the difference between the bin corresponding to 30% of dark pixels in the cumulative histogram of the LCC output intensity, b_output30%, and the bin, b_input30%, corresponding to the same percentage in the cumulative histogram of the input intensity: bstr = b_output30% − b_input30%. In the case of dark regions that must be recovered, this percentage of pixels, which experience has suggested be set at 30%, generally falls in the first bins, under a narrow peak of the histogram, together with the rest of the dark pixels, and thus most of these pixels are repositioned at almost their initial values. In the case of underexposed regions, however, the same percentage generally falls under a more widespread peak, and thus only part of the dark pixels are recovered.
4. If there are no dark pixels, the stretching is done to obtain a clipping of 0.2% of the darker pixels.
5. In any case, the maximum number of bins to be clipped is set to 50.
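The dark-pixel test of step 2 and the bin computation of step 3 can be sketched as follows. This is our own illustrative code, using the thresholds stated above (Y < 35, chroma radius < 20, 30% percentile, 50-bin cap):

```python
import numpy as np

def is_dark(y, cb, cr):
    """Step 2: a pixel is dark if Y < 35 and its chroma radius < 20."""
    chroma_radius = np.sqrt((cb - 128.0)**2 + (cr - 128.0)**2)
    return (y < 35) & (chroma_radius < 20)

def bin_at_percentile(intensity, fraction):
    """Bin where the cumulative histogram reaches `fraction` of all pixels."""
    hist = np.bincount(intensity.ravel().astype(np.int64), minlength=256)
    cumulative = np.cumsum(hist)
    return int(np.searchsorted(cumulative, fraction * intensity.size))

def stretched_bins(y_input, y_lcc, fraction=0.30):
    """Step 3: bstr = b_output30% - b_input30%, capped at 50 bins (step 5)."""
    bstr = bin_at_percentile(y_lcc, fraction) - bin_at_percentile(y_input, fraction)
    return min(bstr, 50)
```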

For the brighter pixels, the stretching is done to obtain a clipping of 0.2%, with a maximum of 50 bins.
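Once the number of bins to clip on each side is known, the stretch itself can be realized as a linear remap. The exact formula is not given in the paper; the following is an assumed, conventional implementation:

```python
import numpy as np

def stretch_clip(intensity, low_bins, high_bins):
    """Linearly stretch [low_bins, 255 - high_bins] to [0, 255].

    Pixels outside the retained range are clipped to 0 or 255.
    This remap is our assumption; the paper only specifies how many
    bins are clipped, not the stretching formula itself.
    """
    lo, hi = float(low_bins), float(255 - high_bins)
    out = (intensity.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)
```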

2.4.2 Color saturation
To minimize the change of the color saturation between the input and the output images, the following formulation is applied to the RGB channels, as suggested by Sakaue et al.25 The transformed values R′, G′, B′ are obtained as follows:

R′ = (1/2)[(Y′/Y)(R + Y) + R − Y],
G′ = (1/2)[(Y′/Y)(G + Y) + G − Y],    (13)
B′ = (1/2)[(Y′/Y)(B + Y) + B − Y],

where Y′ is the corrected luminance obtained after the (LCC + clipping) correction module, as indicated in the previous section.
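Equation (13) maps directly to a per-pixel transform. The sketch below is our own; inputs are float luminance arrays, and the guard against division by zero at black pixels is a detail the paper does not specify:

```python
import numpy as np

def restore_saturation(rgb, y_old, y_new):
    """Apply Eq. (13) channel-wise: C' = ((C + Y) * Y'/Y + C - Y) / 2."""
    ratio = y_new / np.maximum(y_old, 1e-6)       # Y'/Y, guarded at Y = 0
    out = np.empty_like(rgb, dtype=np.float64)
    for c in range(3):                            # R, G, B channels
        channel = rgb[..., c].astype(np.float64)
        out[..., c] = 0.5 * ((channel + y_old) * ratio + channel - y_old)
    return np.clip(out, 0, 255)
```

Note that when Y′ = Y the transform is the identity (each channel is returned unchanged), which is the saturation-preserving property the formulation is chosen for.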

3 Results and Discussion
In this section, we first present the results obtained when applying the LCC algorithm to two different kinds of images, and we show how the modules of stretching, clipping, and saturation further improve the correction depending on the image characteristics. In the second part of the results, the whole algorithm (LCC + clipping + saturation) is applied to different images, and the results are compared with other image enhancement techniques.

In Fig. 7, a photo is shown where the darker part of the histogram (first 50 bins) corresponds to a dark region of the scene that we want to enhance. On the other hand, in Fig. 8, the dark areas (first 30 bins) are black regions of the image that we want to preserve (the black clothing). After applying the LCC module to Figs. 7 and 8, the new histograms are more widespread than the originals but are moved and concentrated around the middle values of the range. Equation (5) applied to both images has the same effect of making these dark regions too bright. Moreover, the LCC-corrected images show desaturated colors (middle row of Figs. 7 and 8). The next step of histogram stretching and clipping is different for these two figures. For Fig. 7, it is necessary to recover the dark pixels only partially, while for Fig. 8, all the dark pixels of the clothing must be recovered. In the bottom row of Figs. 7 and 8, the final elaborated images (LCC + clipping + saturation) and their corresponding histograms are shown. A correct redistribution of the histogram is obtained, and a good contrast correction is observed in both images.


As already mentioned, a great variety of algorithms for image enhancement is available in the literature. We recall that our method is essentially a contrast correction one, with only a module for color saturation at the end of the chain (as indicated in Sec. 2.4.2). In order to compare our algorithm's performance with other methods, we first concentrate on the case of grayscale images. Figures 9 and 10 show two example input images and the results of applying our method, the Moroney correction, and the retinex method. The latter has been evaluated in its Frankle-McCann version, where four iterations have been used for the calculations.26 Retinex and our method improve both the global and local contrast very well, highlighting details that were hidden in the original image. However, we observe that the retinex results present an overenhancement effect, and we consider our correction the more natural one. The Moroney correction is globally darker with respect to the other results. Experimental results for other test grayscale images have shown similar tendencies.


3.1 Quality Assessment

The improvement in images after enhancement is often very difficult to measure. To date, no objective criteria capable of giving a meaningful assessment of image enhancement for every image exist in the literature. Full-reference quality assessment metrics cannot be applied, since these methods evaluate the departure of the output image from an original that is supposed to be free of distortions. In our present case of contrast enhancement, no such original "distortion-free" images exist, and the elaborated images should be an enhanced version of the input. There exist some objective metrics in the literature that aim to estimate brightness and contrast in the image, such as entropy (H), absolute mean brightness error (AMBE), and measure of enhancement (EME).27–29 AMBE is the absolute difference between the input and output mean brightness, and EME approximates an average contrast in the image by dividing the image into nonoverlapping blocks, defining a measure based on the minimum and maximum intensity values in each block, and averaging them. Absolute values of these metrics (H, AMBE, and EME) should be carefully analyzed, since they do not necessarily correlate with an improvement of image quality in terms of contrast enhancement. Let us note that, within this context, we are implicitly associating the concept of image quality with the naturalness of the image. For example, with respect to the EME metric, high values of EME should indicate regions with high local contrast, while EME values of nearly zero should correspond to homogeneous regions. If an algorithm introduces noise in such homogeneous regions, a higher EME value will be obtained, and this is certainly not in correspondence with an image quality improvement, as can be seen in the example shown in Fig. 11, where the retinex processing amplified noise and the corresponding EME value equals 31.70. This value is higher than that of the other two algorithms compared here (EME_Identity = 25.09, EME_LCC = 18.09, EME_Moroney = 23.76).

AMBE represents the distance from the mean brightness of the original. In an enhancement procedure, we do not always want to preserve the original brightness. In fact, preserving the original brightness does not always mean preserving the natural look of the image. If the original images are strongly underexposed and/or overexposed, we expect a high AMBE value, indicating that the quality could have been improved. For example, our LCC correction applied to the image shown in Fig. 7 gives an AMBE value of 51.22. On the other hand, in the case of correctly exposed images or night photos, we expect our algorithm not to significantly modify the mean brightness. (See the values of the metric for Fig. 11: AMBE_LCC = 18.89, AMBE_Moroney = 23.86, AMBE_Retinex = 48.58.)

With respect to entropy, high values of H indicate a better occupation of all intensity levels. This will be associated with a visually pleasing image if the acquisition conditions were reasonably well balanced. Starting from images with sharply peaked histograms (underexposed and/or overexposed or night images), obtaining a flat histogram could increase noise. (See the values of the metric for Fig. 11: H_Identity = 5.86, H_LCC = 6.01, H_Moroney = 6.58, H_Retinex = 7.61.) Thus, a higher value of H (ideal flat histogram) does not always represent a visually pleasing image.

Fig. 7 (a) Original image. (b) Histogram of original image. In this case, the wide dark peak at the beginning of the histogram corresponds to the background (see text). (c) LCC output with α = 2.5. (d) Histogram of LCC output with α = 2.5. (e) Final elaborated image: LCC + clipping + saturation. (f) Histogram of final elaborated image.
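The three metrics just discussed can be sketched in a few lines. This is our reading of the definitions in Refs. 27–29, not code from those papers; in particular, the block count and the 20·log10(Imax/Imin) form of EME are assumptions:

```python
import numpy as np

def ambe(original, enhanced):
    """Absolute Mean Brightness Error: |mean(input) - mean(output)|."""
    return abs(original.astype(np.float64).mean()
               - enhanced.astype(np.float64).mean())

def entropy(img):
    """Shannon entropy H (bits) of the 8-bit gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def eme(img, blocks=8, eps=1e-4):
    """Measure of Enhancement: average over non-overlapping blocks of
    20*log10(Imax/Imin); eps guards against division by zero."""
    img = img.astype(np.float64)
    h, w = img.shape
    bh, bw = h // blocks, w // blocks
    vals = []
    for i in range(blocks):
        for j in range(blocks):
            b = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            vals.append(20.0 * np.log10((b.max() + eps) / (b.min() + eps)))
    return float(np.mean(vals))
```

Note how a perfectly uniform image yields H = 0 and EME ≈ 0, while noise injected into flat regions inflates both, which is precisely the pitfall discussed above.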




Fig. 8 (a) Original image. (b) Histogram of original image. In this case, the narrow dark peak at the beginning of the histogram is due to the black suits present in the photo (see text). (c) LCC output. (d) Histogram of LCC output. (e) Final elaborated image: LCC + clipping + saturation. (f) Histogram of final elaborated image.

Similar conclusions have been pointed out by Arici et al.6 while evaluating the performance of their method with the same metrics.

Given all the considerations cited earlier about the objective metrics, we note that a reliable way of assessing the quality of an image is by subjective evaluation. Therefore, we have implemented the Mean Opinion Score (MOS) test to evaluate the performance of our algorithm and to compare it with the Moroney and retinex methods. The MOS test, using paired comparison, was implemented for 10 images (indoor, outdoor, daylight, night, portrait) presenting underexposed and/or overexposed regions, judged by 12 viewers. The preference score index is calculated for each of the preceding methods, and the greatest one (0.79) is obtained for our approach, agreeing with our visual inspection of Figs. 9 and 10. With respect to the other two methods, the Moroney technique is positioned second for the preference score (0.62), and the retinex model (0.46) is third. The psychovisual test and results are detailed in the Appendix.

Let us now elaborate one of these images (for example, the one in Fig. 9) in its color version. In Fig. 12, we compare our results (now taking into account the saturation module of Sec. 2.4.2) with the Moroney correction and with the retinex approach in its Frankle-McCann26 version. The strength of detail enhancement is observed in all three elaborated images. The retinex shows a color change that makes the result overenhanced and unnatural. Our algorithm achieves a good contrast enhancement (at both local and global scales) while affecting the colors in a more pleasing way. The Moroney correction is darker compared to the other two methods. This shows the effectiveness of our automated α parameter tuning and of our saturation-preserving treatment. Comparisons have also been made between our method and the MSRCR of Jobson et al.8 (not shown in this paper), and conclusions similar to those of the retinex comparison presented here can be drawn.

Fig. 9 (a) Original image. (b) Our proposed method. (c) Moroney correction. (d) Retinex.

Fig. 11 (a) Original image (EME = 25.09, H = 5.86). (b) Our proposed method (EME = 18.09, AMBE = 18.89, H = 6.01). (c) Moroney correction (EME = 23.76, AMBE = 23.86, H = 6.58). (d) Retinex (EME = 31.70, AMBE = 48.58, H = 7.61).

4 Conclusions

In this work, we presented a local contrast correction algorithm that allows for simultaneous shadow and highlight adjustments, starting from a simple pixel-wise gamma correction automatically piloted by an analysis of image statistics. In order to prevent the introduction of halo artifacts, an edge-preserving filter based on bilateral low-pass techniques has been adopted. The proposed method, compared with other solutions well known in the literature, properly enhances the dynamic range in both low-light and high-light regions of an image while avoiding the common quality losses due to halo artifacts, desaturation, and grayish appearance. A Mean Opinion Score test has been carried out to evaluate the performance of different contrast correction methods for the case of grayscale images. The highest score was obtained by our proposed algorithm. Our method has shown to be robust with respect to a heuristic choice of parameters. However, a further improvement could be achieved with an optimization procedure that estimates their values starting from a proper image database.
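The edge-preserving bilateral filter just mentioned can be illustrated by a brute-force sketch (our own illustration; the parameter values are arbitrary, and a real implementation would use a fast approximation such as that of Ref. 24):

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=30.0):
    """Brute-force bilateral filter on a 2-D float image.

    Each output pixel is a weighted mean of its neighborhood, with
    weights combining spatial distance (sigma_s) and intensity
    difference (sigma_r). Pixels across a large intensity jump get
    near-zero weight, so edges are preserved while flat regions are
    smoothed -- the property that suppresses halo artifacts.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * range_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

Using this filter in place of an ordinary Gaussian when building the correction mask keeps the mask from smoothing across scene boundaries, which is exactly where halos would otherwise appear.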

Fig. 10 (a) Original image. (b) Our proposed method. (c) Moroney correction. (d) Retinex.

Fig. 12 (a) Original image. (b) Our proposed method. (c) Moroney correction. (d) Retinex.







Table 1 Mean Opinion Score experiment: preference scores with standard error of the mean (SEM).

Appendix

A psychovisual experiment (Mean Opinion Score) was performed using a paired comparison in which viewers were asked to select an image from a pair, based on their preference.30 During the tests, the following algorithms were compared:

• Identity algorithm (the original scene)
• Our algorithm (LCC + clipping + saturation)
• Moroney4
• Retinex26

1 Test Method

Ten badly exposed scenes (grayscale images) were used for the test. We used four different versions of each scene, one for each algorithm. On each trial of the experiment, subjects viewed a pair of images. A pair was formed by two versions of the same scene, and the subject was asked to indicate which image was the preferred one. Each version of a scene was compared with all the other versions of the same scene, for a total of 60 pairs (10 scenes × 6 combinations). In all the experiments, the images were shown in a random order, different for each subject. During a preliminary test, each subject was implicitly trained on the kind of images he/she was going to evaluate.

2 Test Condition

To perform the psychovisual test, the images to be judged were shown on a web-based interface. We adopted five 19-in CRT COMPAQ S9500 display monitors. All the monitors were calibrated (D65, gamma 2.2, luminance 100 cd/m²). Their resolution was 1600 × 1200 pixels, corresponding to 110 dpi (using 18 in as the physical diagonal of the screen, as indicated by the manufacturer). The refresh rate was 75 Hz. The ambient light levels (typical office illumination) were kept constant between the different sessions. The distance between the observer and the monitors was about 60 cm (corresponding to about 46 pixels per degree of visual angle). All the original scenes utilized for the psychovisual tests were cropped and undersampled to fit a 600 × 600 pixel box.



3 Viewers

The panel of subjects involved in this study was recruited from the Computer Science Department. The subject pool consisted of 12 students inexperienced with image quality assessment. Subjects had normal or corrected-to-normal visual acuity. Each subject was individually briefed about the modality of the experiment in which he/she was involved. During the test, we evaluated a total of 720 pairs (12 viewers × 10 scenes × 6 combinations). The preference score is defined as the percentage of times an algorithm was preferred out of the total number of evaluated pairs in which it was present (10 scenes × 12 viewers × 3 combinations). The standard error of the mean (SEM) along the algorithm dimension was also estimated. The preference scores for each algorithm and the corresponding SEM are shown in Table 1.


                    Identity    Our algorithm    Moroney    Retinex
Preference score    0.1306      0.7944           0.6167     0.4583
SEM                 ±0.0241     ±0.0259          ±0.0626    ±0.0452

References


1. W. Frei, "Image enhancement by histogram hyperbolization," Comput. Graph. Image Process. 6, 286–294 (1977).
2. Mokrane, "A new image contrast enhancement technique based on a contrast discrimination model," CVGIP: Graph. Models Image Process. 54, 171–180 (1992).
3. Y. Wang, Q. Chen, and B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization method," IEEE Trans. Consum. Electron. 45, 68–75 (1999).
4. N. Moroney, "Local colour correction using nonlinear masking," in IS&T/SID Eighth Color Imaging Conference, pp. 108–111 (2000).
5. A. Stark, "Adaptive image contrast enhancement using generalizations of histogram equalization," IEEE Trans. Image Process. 9, 889–896 (2000).
6. T. Arici, S. Dikbas, and Y. Altunbasak, "A histogram modification framework and its application for image contrast enhancement," IEEE Trans. Image Process. 18, 1921–1935 (2009).
7. E. H. Land and J. J. McCann, "Lightness and retinex theory," J. Opt. Soc. Am. 61, 1–11 (1971).
8. D. Jobson, Z. Rahman, and G. Woodell, "A multiscale retinex for bridging the gap between color images and the human observation of scenes," IEEE Trans. Image Process. 6, 965–976 (1997).
9. Z. Rahman, D. Jobson, and G. Woodell, "Multiscale retinex for color image enhancement," in Proc. Int. Conf. Image Processing, vol. 3, pp. 1003–1006 (1996).
10. Z. Rahman, D. J. Jobson, and G. A. Woodell, "Retinex processing for automatic image enhancement," J. Electron. Imaging 13, 100–110 (2004).
11. Z. Rahman, D. J. Jobson, G. A. Woodell, and G. D. Hines, "Image enhancement, image quality, and noise," in Photonic Devices and Algorithms for Computing VII, K. M. Iftekharuddin and A. A. S. Awwal, Eds., Proc. SPIE 5907, 164–178 (2005).
12. J. Starck, F. Murtagh, E. Candès, and D. Donoho, "Gray and color image contrast enhancement by the curvelet transform," IEEE Trans. Image Process. 12, 706–716 (2003).
13. A. Rizzi, C. Gatta, and D. Marini, "A new algorithm for unsupervised global and local color correction," Pattern Recogn. Lett. 24, 1663–1677 (2003).
14. M. Fairchild and G. Johnson, "Meet iCAM: an image color appearance model," presented at IS&T/SID 10th Color Imaging Conference, Scottsdale, AZ (2002).
15. K. Chiu, M. Herf, P. Shirley, S. Swamy, C. Wang, and K. Zimmerman, "Spatially nonuniform scaling functions for high contrast images," in Proc. Graphics Interface '93, pp. 245–254 (1993).
16. G. Larson, H. Rushmeier, and C. Piatko, "A visibility matching tone reproduction operator for high dynamic range scenes," IEEE Trans. Vis. Comput. Graph. 3, 291–306 (1997).
17. J. Tumblin and G. Turk, "LCIS: a boundary hierarchy for detail-preserving contrast reduction," in SIGGRAPH 99, pp. 83–90 (1999).
18. S. Battiato, A. Castorina, and M. Mancuso, "High dynamic range imaging for digital still camera: an overview," J. Electron. Imaging 12, 459–469 (2003).
19. L. Meylan and S. Süsstrunk, "High dynamic range image rendering with a Retinex-based adaptive filter," IEEE Trans. Image Process. 15, 2820–2830 (2006).
20. F. Durand and J. Dorsey, "Fast bilateral filtering for the display of high-dynamic-range images," in SIGGRAPH (2002).
21. S. Pattanik, J. Ferwerda, M. Fairchild, and D. Greenberg, "A multiscale model of adaptation and spatial vision for realistic image display," in SIGGRAPH 98, pp. 287–298 (1998).
22. C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Proc. 1998 IEEE Int. Conf. Computer Vision, pp. 836–846 (1998).
23. A. Capra, A. Castorina, S. Corchs, F. Gasparini, and R. Schettini, "Dynamic range optimization by local contrast correction and histogram image analysis," in IEEE Int. Conf. Consumer Electronics, ICCE 2006, pp. 309–310 (2006).
24. S. Paris and F. Durand, "A fast approximation of the bilateral filter using a signal processing approach," in European Conf. Computer Vision (ECCV '06) (2006).





25. S. Sakaue, A. Tamura, M. Nakayama, and S. Maruno, "Adaptive gamma processing of the video cameras for the expansion of the dynamic range," IEEE Trans. Consum. Electron. 41, 555–562 (1995).
26. J. Frankle and J. McCann, "Method and apparatus for lightness imaging," U.S. Patent No. 4,384,336 (1983).
27. Beghdadi and A. L. Négrate, "Contrast enhancement technique based on local detection of edges," Comput. Vis. Graph. Image Process. 46, 162–174 (1989).
28. S.-D. Chen and A. Ramli, "Minimum mean brightness error bi-histogram equalization in contrast enhancement," IEEE Trans. Consum. Electron. 49, 1310–1319 (2003).
29. S. Agaian, K. Panetta, and A. Grigoryan, "Transform-based image enhancement algorithms with performance measure," IEEE Trans. Image Process. 10, 367–382 (2001).
30. P. Engeldrum, Psychometric Scaling: A Toolkit for Imaging Systems Development, Imcotek Press, Winchester, MA (2000).


Raimondo Schettini is an associate professor at the University of Milano-Bicocca (Italy). He is vice director of the Department of Informatics, Systems, and Communication and head of the Imaging and Vision Lab. He has been associated with the Italian National Research Council (CNR) since 1987. He has been team leader on several research projects and has published more than 190 refereed papers on image processing, analysis, and reproduction and on image content-based indexing and retrieval. He is an associate editor of the Pattern Recognition journal. He was a co-guest editor of three special issues about Internet imaging (Journal of Electronic Imaging, 2002), color image processing and analysis (Pattern Recognition Letters, 2003), and color for image indexing and retrieval (Computer Vision and Image Understanding, 2004). He was a member of the CIE TC 8/3 and chairman of the 1st Workshop on Image and Video Content-Based Retrieval (1998); the First European Conference on Color in Graphics, Imaging and Vision (2002); the EI Internet Imaging Conferences (2000–2006); the EI Multimedia Content Access: Algorithms and Systems Conference (2007, 2009, 2010); and the Computational Color Imaging Workshop (2007, 2009, 2011).

Francesca Gasparini received her degree (Laurea) in nuclear engineering from the Polytechnic of Milan in 1997 and her PhD in science and technology in nuclear power plants from the Polytechnic of Milan in 2000. Since 2001, she has been a fellow at the ITC Imaging and Vision Laboratory of the Italian National Research Council. Gasparini is currently an assistant professor at the Dipartimento di Informatica, Sistemistica e Comunicazione (DISCo), University of Milano-Bicocca, where her research focuses on image and video enhancement, face detection, and image quality assessment.


Silvia Corchs received her PhD in physics from the University of Rosario, Argentina, in 1994. She has worked for many years in the field of theoretical physics in Argentina and in the field of computational neuroscience in Germany (Research Department of Siemens Munich and University of Ulm). She is now with the Imaging and Vision Laboratory of the University of Milano-Bicocca, where her research focuses on image enhancement, image quality assessment, and visual attention mechanisms applied to image processing.


Fabrizio Marini received his degree in computer science from the University of Milano-Bicocca (Italy) in 2007. He is currently a PhD student in computer science at the Dipartimento di Informatica, Sistemistica e Comunicazione (DISCo) of the University of Milano-Bicocca. The main topics of his current research concern image quality assessment and imaging device characterization.


Alessandro Capra received an electronic engineering degree from the University of Palermo in 1998. Since 1999, he has worked at STMicroelectronics. His main activities have been related to researching innovative architectures and algorithms for digital still camera and mobile imaging applications. He is currently in charge of a team developing next-generation image processing algorithms and systems for mobile cameras. He is the author of many patents and papers in these R&D fields.


Alfio Castorina received his degree in computer science from the University of Catania in 2000. Since 2000, he has been working at STMicroelectronics in the Advanced System Technology (AST) Catania Lab as a system engineer. He is the author of several papers and patents in the image and video processing field.


