Image-Difference Prediction: From Grayscale to Color

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 22, NO. 2, FEBRUARY 2013

Ingmar Lissner, Jens Preiss, Philipp Urban, Matthias Scheller Lichtenauer, and Peter Zolliker

Abstract—Existing image-difference measures show excellent accuracy in predicting distortions, such as lossy compression, noise, and blur. Their performance on certain other distortions could be improved; one example of this is gamut mapping. This is partly because they either do not interpret chromatic information correctly or they ignore it entirely. We present an image-difference framework that comprises image normalization, feature extraction, and feature combination. Based on this framework, we create image-difference measures by selecting specific implementations for each of the steps. Particular emphasis is placed on using color information to improve the assessment of gamut-mapped images. Our best image-difference measure shows significantly higher prediction accuracy on a gamut-mapping dataset than all other evaluated measures.

Index Terms—Color, image difference, image quality.

Manuscript received February 6, 2012; revised July 23, 2012; accepted August 15, 2012. Date of publication September 19, 2012; date of current version January 8, 2013. This work was supported in part by the German Research Foundation and the Swiss National Research Foundation under SNF Project 200021_129964. I. Lissner and J. Preiss contributed equally to this work. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Erhardt Barth. I. Lissner, J. Preiss, and P. Urban are with the Institute of Printing Science and Technology, Technische Universität Darmstadt, Darmstadt 64289, Germany (e-mail: [email protected]; [email protected]; [email protected]). M. Scheller Lichtenauer and P. Zolliker are with the Laboratory for Media Technology, Empa, Swiss Federal Laboratories for Materials Science and Technology, Dübendorf 8600, Switzerland (e-mail: [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIP.2012.2216279

I. INTRODUCTION

With respect to quality assessment, "the full-reference still-image problem is essentially solved" [1]. This recent, somewhat controversial statement by A. C. Bovik, co-creator of the SSIM index [2], sounds surprising at first. It is, however, confirmed by the excellent prediction accuracy of the multiscale SSIM index [3] on the LIVE database [4]: the Spearman correlation between subjective quality assessments and corresponding predictions is greater than 0.95 for all included distortions (lossy compression, noise, blur, and channel fading). Given that the SSIM index operates on grayscale data, color information is obviously not required to predict these distortions. Nevertheless, the above statement about image-quality assessment is only true to a certain extent:
1) Changes of image semantics cannot be detected. If, for instance, a particular distortion affects a human face in a portrait, the subjective image quality is greatly reduced. A similar change to an object in the background may not even be noticed.

2) Changes in the chromatic components (chroma and hue) may not affect the lightness component. This occurs frequently in gamut-mapping [5] and tone-mapping [6] applications. The accuracy of grayscale-based quality measures in predicting such distortions has room for improvement [7]–[9]. Although several extensions of the SSIM index for color images have been proposed [10], [11], we believe that further improvements are possible.

In this paper we address the color-related aspects of image-difference assessment. We focus on full-reference measures, which predict the perceived difference of two input images. Apart from the SSIM index, many such measures have been proposed [3], [12]–[17] (to cite only a few). Ideally, they reflect the actual visual mechanisms responsible for image-difference assessment. These mechanisms, however, are poorly understood, which applies especially to the cortical processing of complex visual stimuli. As a result, assumptions are made about how the human visual system (HVS) extracts and processes image information. Hypotheses on which information is extracted [2], [18], [19] and how it is weighted and combined [7] can be found in the literature.

II. IMAGE-DIFFERENCE FRAMEWORK

The image-difference framework we present in this paper normalizes the input images with an image-appearance model and transforms them into a working color space. An image-difference prediction is then computed using so-called image-difference features (IDFs) that are extracted from the images. An overview of our framework is provided in Fig. 1.

A. Image Normalization

The interpretation of an image by the visual system depends on the viewing conditions, e.g., viewing distance, illuminant, and luminance level. Consequently, the images should be normalized to specific viewing conditions before any information is extracted. So-called image-appearance models [20] have been developed for this purpose. Among the mechanisms that they model are chromatic adaptation, contrast sensitivity, and various appearance phenomena such as the Hunt effect and the Stevens effect [20]. Fig. 2 illustrates the image normalization: a subthreshold distortion may turn into a suprathreshold distortion if the viewing conditions change.
Image-appearance modeling is still in its infancy. For example, the contrast-sensitivity mechanism is often modeled as a convolution in an intensity-linear opponent color space.


Fig. 1. General overview of our image-difference framework: the RGB input images are normalized with an image-appearance model, transformed into the working color space, and compared by feature extraction and feature combination to yield the difference prediction.

Fig. 2. Influence of the viewing conditions. (a) Continuous-tone and (b) halftone image as seen on a display with a white-point luminance of 80 cd/m². Left: the adapting luminance is 1000 cd/m² (e.g., outdoor daylight environment). Middle: the adapting luminance is 16 cd/m² (adaptation takes place on the display; dark surround). The images were rendered pixel-wise using the CIECAM02 color-appearance model assuming an adapting luminance of 20% of the white-point luminance in the scene [20]. The subthreshold distortion (left) turns into a suprathreshold distortion (middle) if the adapting luminance is changed. At a closer viewing distance (right), the perceived image difference increases. The original image is part of the Kodak Lossless True Color Image Suite [21].

Different filters are applied to the achromatic and chromatic channels in the frequency domain. This involves several simplifications: the contrast-sensitivity mechanism is orientation-dependent [22] and age-dependent [23], it depends on the luminance level [24], and it is usually measured for sinusoidal gratings instead of complex stimuli [25]. Similar limitations apply to other components of current image-appearance models. In addition, it is unlikely that the results of various individual studies on certain aspects of the HVS can be seamlessly combined into an overall model of visual processing. In this paper we test, among other things, whether simple image-appearance models can improve the prediction performance of image-difference measures.

B. Transformation into Working Color Space

In the final step of the normalization process, the images are transformed into a working color space. This color space should provide simple access to color attributes (lightness, chroma, and hue) and should be free of cross-contamination between these attributes. One of the most important properties is perceptual uniformity, meaning that Euclidean distances in the space match perceived color differences. This is required for an accurate representation of image features such as edges and gradients. In an RGB color space, such features may be over- or underestimated, i.e., their computed magnitudes exceed their perceived magnitudes or vice versa. Although a perfectly perceptually uniform color space does not exist [26], various approximations have been proposed [27]–[30]. Note that the underlying color-difference data were collected using uniform color patches and may not fully apply to complex visual stimuli.

C. Information Extraction

We extract image-difference features (IDFs) from the normalized input images. These features are mathematical formulations of hypotheses on the visual processing. They are combined into an overall image-difference prediction using a combination model. The parameters of this model are optimized using image-difference datasets.
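As a concrete illustration of the working-space transform, the sketch below converts sRGB images to a Lab-type space with NumPy. This is not the authors' transform: LAB2000HL is defined through the lookup tables published with [30], so plain CIELAB under a D65 white point is used here only as a hedged stand-in that exposes the lightness and chromatic (a, b) channels needed by the comparison terms in Section III.

```python
# Hypothetical sketch: sRGB -> CIELAB as a stand-in for the working space.
# LAB2000HL itself requires the lookup tables published with [30].
import numpy as np

def srgb_to_lab(rgb):
    """rgb: float array in [0, 1] with shape (M, N, 3); returns (L, a, b)."""
    # Undo the sRGB transfer function (IEC 61966-2-1).
    rgb = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB primaries, D65 white point), then normalize.
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = rgb @ m.T / np.array([0.95047, 1.0, 1.08883])
    # CIELAB nonlinearity and channel construction.
    d = 6.0 / 29.0
    f = np.where(xyz > d ** 3, np.cbrt(xyz), xyz / (3 * d ** 2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

# Chroma per pixel, as used by the chroma and hue comparisons in Section III:
# C = np.hypot(lab[..., 1], lab[..., 2])
```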


In this paper we extend our previous work [8], [31] in several key aspects:
1) There are various ways of normalizing the images to specific viewing conditions. We test how a normalization to a specific viewing distance affects the prediction accuracy of our image-difference measures.
2) We derive our lightness-, chroma-, and hue-comparison terms from the SSIM luminance function and adapt it to a perceptually uniform color space.
3) The sensitivity of the HVS to visible distortions (sometimes modeled by a suprathreshold contrast-sensitivity function [32]–[34]) depends on the viewing distance. We investigate if the prediction of gamut-mapping distortions is improved by an existing multiscale approach.
4) We evaluate whether chromatic IDFs adversely affect the prediction of conventional image distortions (e.g., lossy compression, noise, and blur).

III. EXTRACTING IMAGE-DIFFERENCE FEATURES

An image-difference feature (IDF) is a transformation

$$\mathrm{IDF}\colon\ I^{M,N} \times I^{M,N} \times P \to [0, 1] \tag{1}$$

where I^{M,N} is the set of all colorimetrically specified RGB images with M rows and N columns; P is a set of parameter arrays, each of which parametrizes the employed image-appearance model. Depending on the model, P may include the viewing distance, the luminance level, and the adaptation state of the observer. According to the proposed modular framework, an IDF may be expressed as the concatenation of a transformation N that normalizes the images to the viewing conditions and a transformation F that expresses the actual feature extraction, i.e.,

$$\mathrm{IDF} = F \circ N \tag{2}$$

where

$$N\colon\ I^{M,N} \times I^{M,N} \times P \to W^{M,N} \times W^{M,N} \tag{3}$$

$$F\colon\ W^{M,N} \times W^{M,N} \to [0, 1] \tag{4}$$

and W^{M,N} is the set of images in the working color space with M rows and N columns.
The feature-extraction transformation F can be realized in various ways. Each transformation used in this paper is based upon a specific image-comparison transformation

$$t\colon\ W^{k,k} \times W^{k,k} \to [0, 1] \tag{5}$$

which compares pixels within corresponding k × k windows (k ≪ min{M, N}) of the input images. The feature-extraction transformation F is computed by averaging the local differences as follows:

$$F(X_{\mathrm{norm}}, Y_{\mathrm{norm}}) = \frac{1}{K} \sum_{i=1}^{K} t(x_i, y_i) \tag{6}$$

where K is the number of considered windows within the normalized images X_norm, Y_norm ∈ W^{M,N} and x_i and y_i are the corresponding pixel arrays defined by the i-th window.
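The averaging in (6) can be prototyped directly from the definitions. The sketch below is an illustrative and deliberately unoptimized implementation; the dense k × k windows with stride 1 and the window size are assumptions, and the comparison transformation t is passed in as a callable (concrete choices follow in Section III-A).

```python
# Minimal sketch of the feature-extraction transformation F from (5)-(6).
import numpy as np

def extract_feature(x_norm, y_norm, t, k=11):
    """Average a window comparison t over all k-by-k windows (stride 1).

    x_norm, y_norm: single-channel images in the working space, shape (M, N).
    t: callable mapping two (k, k) pixel arrays to a value in [0, 1].
    """
    M, N = x_norm.shape
    values = [t(x_norm[i:i + k, j:j + k], y_norm[i:i + k, j:j + k])
              for i in range(M - k + 1)
              for j in range(N - k + 1)]
    return float(np.mean(values))   # F(X_norm, Y_norm), cf. (6)
```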


Although we compute the mean of the difference maps, more complex pooling methods may be in better agreement with human perception. A comprehensive analysis is provided by Wang and Li [7].
Scale-dependent IDFs include a transformation that extracts a specific image scale:

$$S\colon\ W^{M,N} \times W^{M,N} \to W^{\acute{M},\acute{N}} \times W^{\acute{M},\acute{N}} \tag{7}$$

where Ḿ ≤ M and Ń ≤ N. The IDF that operates on this scale is defined by concatenation:

$$\mathrm{IDF} = F \circ S \circ N \tag{8}$$

where F is adjusted to the scale defined by S.

A. Image-Comparison Transformations

To ensure a high prediction accuracy, we utilize established terms describing image-difference features. We adjust these terms to our framework and extend them to assess chromatic distortions. All terms are either adopted or derived from the SSIM index [2] because of its wide use and good prediction accuracy on various image distortions. In addition, its modular structure (three comparison terms are evaluated separately and then multiplied) is well suited for our image-difference framework.
The terms are computed within sliding windows in the compared images X and Y. The arguments x and y are the pixel arrays within these windows. In the working color space, each pixel x consists of a lightness and two chromatic values: x = (L_x, a_x, b_x). The chroma of the pixel is defined as C_x = sqrt(a_x² + b_x²).
1) Lightness, chroma, and hue comparisons:

$$l_L(x, y) = \frac{1}{c_1 \cdot \overline{\Delta L(x, y)^2} + 1} \tag{9}$$

$$l_C(x, y) = \frac{1}{c_4 \cdot \overline{\Delta C(x, y)^2} + 1} \tag{10}$$

$$l_H(x, y) = \frac{1}{c_5 \cdot \overline{\Delta H(x, y)^2} + 1} \tag{11}$$

where the overbar denotes the Gaussian-weighted mean of the respective term computed for each pixel pair (x, y) in the window. The pixel-wise transformations used above are defined as:

$$\Delta L(x, y) = L_x - L_y \tag{12}$$

$$\Delta C(x, y) = C_x - C_y \tag{13}$$

$$\Delta H(x, y) = \sqrt{(a_x - a_y)^2 + (b_x - b_y)^2 - \Delta C(x, y)^2}. \tag{14}$$

These terms are based upon the hypothesis that the HVS is sensitive to lightness, chroma, and hue differences. Their structure is derived from the luminance function of the SSIM index [2], which is designed for an intensity-linear space. We transformed it into our perceptually uniform working space as shown in the Appendix. We chose the terms ΔL, ΔC, and ΔH such that they return similar results for similar perceived differences in a perceptually uniform color space.

Note that this applies only to small color differences [35]; for gamut-mapped images, the chroma differences to the original are usually quite large. An adjustment to large color differences is possible using the parameters c_i. Please note that ΔH defined in (14) is a Euclidean rather than a hue-angle difference. This is required because the perceived hue difference of colors increases with chroma if their hue-angle difference stays constant [36]. It also serves to adjust the scaling of hue differences to that of lightness and chroma differences (in a perceptually uniform color space).
2) Lightness-contrast comparison according to [2]:

$$c_L(x, y) = \frac{2\sigma_x \sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2} \tag{15}$$

where σ_x and σ_y are the standard deviations of the lightness components in the sliding windows mentioned above. The term reflects the visual system's sensitivity to achromatic contrast differences and its so-called contrast-masking property [37]. The impact of this property is modeled by adjusting the parameter c_2 to the working color space. This is illustrated in Fig. 3: contrast deviations in low-contrast areas (red feathers) are highly disturbing and should be considered accordingly.
3) Lightness-structure comparison according to [2]:

$$s_L(x, y) = \frac{\sigma_{xy} + c_3}{\sigma_x \sigma_y + c_3} \tag{16}$$

where σ_xy corresponds to the cosine of the angle between x − x̄ and y − ȳ [2] in the lightness component. The term incorporates the assumption that the HVS is sensitive to achromatic structural differences.
Computing the terms in (9), (10), (11), (15), and (16) for sliding windows within the images X and Y results in five difference maps (for an example, see Fig. 5).

B. Resulting Image-Difference Features

Each comparison term is incorporated into an individual IDF as shown in (2) and (6). To distinguish between terms and IDFs we use L, C, and S to denote the IDFs based upon the l-, c-, and s-terms.
The visual system is more sensitive to high-frequency distortions (such as noise [38]) in the lightness component than in the chromatic components. Therefore, we create three lightness-based IDFs using the l_L-term shown in (9) and the terms from (15) and (16), c_L and s_L. The lightness-contrast and lightness-structure IDFs C_L and S_L are computed on several scales (see (8)), because the visual system's response to differences in contrast and structure varies between scales [3]. On the first scale, the unaltered input images are used. They are then lowpass-filtered and downsampled by a factor of two to determine the images for the next smaller scale.
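The five comparison maps defined in (9)-(16) can be sketched with Gaussian-weighted local statistics. The snippet below is a hedged illustration, not the authors' implementation: the sliding window is realized as a Gaussian filter whose sigma and boundary handling are assumptions, the default c_i are the simple parameters from Table I, and the inputs are assumed to be images in a Lab-type working space.

```python
# Illustrative sketch of the local comparison maps (9)-(16); assumptions:
# Gaussian window (sigma = 1.5 px, 'reflect' boundaries), Lab-type input
# images of shape (M, N, 3), and the simple parameters from Table I.
import numpy as np
from scipy.ndimage import gaussian_filter

def comparison_maps(lab_x, lab_y, c=(0.002, 0.1, 0.1, 0.002, 0.008), sigma=1.5):
    c1, c2, c3, c4, c5 = c
    w = lambda img: gaussian_filter(img, sigma, mode="reflect")  # weighted mean
    Lx, ax, bx = np.moveaxis(lab_x, -1, 0)
    Ly, ay, by = np.moveaxis(lab_y, -1, 0)
    Cx, Cy = np.hypot(ax, bx), np.hypot(ay, by)

    # Pixel-wise differences (12)-(14); the clip absorbs tiny negative values
    # under the square root caused by floating-point rounding.
    dL = Lx - Ly
    dC = Cx - Cy
    dH = np.sqrt(np.clip((ax - ay) ** 2 + (bx - by) ** 2 - dC ** 2, 0.0, None))

    # Lightness, chroma, and hue comparisons (9)-(11).
    l_L = 1.0 / (c1 * w(dL ** 2) + 1.0)
    l_C = 1.0 / (c4 * w(dC ** 2) + 1.0)
    l_H = 1.0 / (c5 * w(dH ** 2) + 1.0)

    # Gaussian-weighted local statistics of the lightness component.
    mx, my = w(Lx), w(Ly)
    sx = np.sqrt(np.clip(w(Lx ** 2) - mx ** 2, 0.0, None))
    sy = np.sqrt(np.clip(w(Ly ** 2) - my ** 2, 0.0, None))
    sxy = w(Lx * Ly) - mx * my

    # Lightness-contrast (15) and lightness-structure (16) comparisons.
    c_L = (2.0 * sx * sy + c2) / (sx ** 2 + sy ** 2 + c2)
    s_L = (sxy + c3) / (sx * sy + c3)
    return l_L, l_C, l_H, c_L, s_L
```

Averaging each map over the image then yields the corresponding IDF value, cf. (6).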

Fig. 3. Lightness-contrast comparison as in (15) using different parameters c_2 (panels: original image, distorted image, lightness-contrast comparison with c_2 = 58, and with c_2 = 0.5). A small c_2 emphasizes contrast differences in low-contrast regions. The original image is part of the Kodak Lossless True Color Image Suite [21].

TABLE I
PARAMETERS α_i AND c_i OF OUR MODEL IN (17)

Scale weights of the multiscale SSIM index [3]:
  α_1 = 0.0448, α_2 = 0.2856, α_3 = 0.3001, α_4 = 0.2363, α_5 = 0.1333

Proposed parameters c_i (although our results were computed with different parameters, the prediction accuracy is not significantly affected if these simple parameters are used):
  c_1 = 0.002, c_2 = 0.1, c_3 = 0.1, c_4 = 0.002, c_5 = 0.008

C. Image-Difference Measure

In the context of our framework, an image-difference measure (IDM) is a transformation that combines several IDFs to predict image differences. It has the same structure as an IDF (shown in (1)). All IDFs that are combined into an IDM share the same normalization transformation N (see (3)). In the following, the arguments of the IDFs and IDMs are omitted for the sake of brevity. We employ a factorial combination model:

$$\mathrm{IDM} = 1 - \left(L_L^n\right)^{\alpha_n} \cdot \prod_{i=1}^{n} \left(C_L^i\, S_L^i\right)^{\alpha_i} \cdot L_C \cdot L_H \tag{17}$$

where n is the number of scales used by the multiscale model, L_L^n is the lightness-comparison IDF on the n-th (smallest) scale, C_L^i and S_L^i are the lightness-contrast and lightness-structure IDFs on the i-th scale, and α_i is the weight of this scale. The multiscale model in (17) and the α_i (see Table I) are adopted from the multiscale SSIM index [3]. The α_i weight the contribution of each scale to the overall image-difference prediction and were obtained through psychophysical experiments on n = 5 scales [3]. The product over all scales is a weighted geometric mean, i.e., Σ_i α_i = 1. The model can be adjusted to the working color space and the training data with the parameters c_i of the individual IDFs. Additive or hybrid combination models did not yield significantly different prediction accuracies [8].
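A sketch of the full combination follows, under the same assumptions as the previous snippets: per-scale feature values are obtained by averaging the comparison maps, the dyadic pyramid uses a Gaussian lowpass before 2x downsampling (the paper does not specify the filter), and the chromatic IDFs L_C and L_H are assumed to be evaluated on the first, unaltered scale. Strongly anticorrelated windows, which can make s_L negative, are not handled here.

```python
# Illustrative sketch of the factorial combination model (17); relies on the
# comparison_maps() helper from the previous sketch. The Gaussian lowpass and
# the placement of L_C/L_H on the first scale are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

ALPHA = (0.0448, 0.2856, 0.3001, 0.2363, 0.1333)   # scale weights from Table I

def downsample(lab):
    """Lowpass-filter the channels and keep every second pixel in x and y."""
    return gaussian_filter(lab, sigma=(1.0, 1.0, 0.0))[::2, ::2, :]

def idm(lab_x, lab_y, n_scales=5, alpha=ALPHA):
    """Image-difference prediction in [0, 1]; larger means more different."""
    cs_product, L_L_n, L_C, L_H = 1.0, None, None, None
    for i in range(n_scales):
        l_L, l_C, l_H, c_L, s_L = comparison_maps(lab_x, lab_y)
        if i == 0:                      # chromatic IDFs on the unaltered scale
            L_C, L_H = float(l_C.mean()), float(l_H.mean())
        cs_product *= (float(c_L.mean()) * float(s_L.mean())) ** alpha[i]
        if i == n_scales - 1:           # lightness IDF on the smallest scale
            L_L_n = float(l_L.mean())
        else:
            lab_x, lab_y = downsample(lab_x), downsample(lab_y)
    return 1.0 - (L_L_n ** alpha[n_scales - 1]) * cs_product * L_C * L_H
```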


IV. EXPERIMENTS

A. Experimental Data

We train and test our IDMs using two types of image-difference datasets that differ primarily in the distortions they include:
1) The Tampere Image Database 2008 (TID2008) [39], [40] comprises 1,700 distorted images derived from 25 reference images. It is based on more than 256,000 paired comparisons by more than 800 observers. The distortions include, among others, lossy compression, noise, and blur. We will refer to these distortions as conventional distortions in the following.
2) Six gamut-mapping datasets collected in different pair-comparison experiments [11], [41]–[44]. These datasets comprise 326 reference images (some of them show the same scene) and 2,659 distorted images. A total of 29,665 decisions were collected, not counting ties.
In each so-called trial of a pair-comparison experiment, the observers are shown two distorted images and the corresponding reference image side by side. They select the image that is more similar to the reference (left or right); the resulting binary choices for all trials and all observers are the raw experimental data (we do not consider tie decisions here).

B. Hit Rates

A common performance indicator of image-difference measures is the correlation between human judgments and corresponding predictions. The Spearman and Kendall rank-order correlations are widely used [4], [39], [40]. The human judgments are usually expressed as mean opinion scores (MOS) that are derived from the raw data (the observers' choices). There are, however, three main problems with this approach:
1) To convert the raw results of a pair-comparison experiment into MOS, a model of the choice distribution has to be assumed [45], e.g., Thurstone's [46] or Bradley-Terry's [47] model.
2) It is not straightforward to include inter-observer and intra-observer uncertainties into the MOS. Some MOS may be affected by higher uncertainty than others; this information is important for an accurate interpretation of the data.
3) For most image-difference experiments, several distorted images are derived from each reference image [4], [39], [43]. Only images derived from the same reference are compared by the observers; that is, all compared images show the same scene. Consequently, subjective scores of images showing different scenes cannot be compared. This is especially important if the corresponding distortions depend on the image content: if an image with highly chromatic colors is gamut-mapped, the loss of chroma will be much greater than for an image whose colors are close to the gray axis. However, this is not reflected by the MOS; depending on the scene, the same score can be assigned to images of very different deviation from the reference. Distortions like noise and blur depend on image content to a much lesser extent.


For these reasons, we use hit rates to determine the prediction performance of our IDMs. The hit rate p̂ is defined as

$$\hat{p} = \frac{i}{m} \tag{18}$$

where m is the total number of choices in an experiment and i is the number of correctly predicted choices. A choice is correctly predicted if an IDM computes a better score (smaller difference to the reference) for the image selected by the observer. Tie decisions are excluded. Since we operate on the raw visual data, no assumptions about the choice distribution are necessary.
An IDM that returns completely random predictions is expected to achieve a hit rate of p̂ = 0.5. This indicates the lowest possible prediction accuracy; IDMs with lower accuracy become more accurate by inverting their predictions for all image pairs. Note that, if all image pairs are compared exactly once, the hit rate of an IDM is linearly related to the Kendall correlation of the corresponding MOS.
It is particularly interesting to compare a hit rate with the maximum achievable hit rate on the same data, which we call the majority hit rate (p̂_m). Usually, each image pair is compared by several observers whose choices may differ. An IDM reaches the majority hit rate if its predictions agree with the majority of choices for all image pairs. We define the achievable hit-rate range as the interval [0.5, p̂_m], where 0.5 is the hit rate of random predictions. The ratio p̂/p̂_m may be used to compare IDM predictions for different datasets. In addition, it is not affected by inter- and intra-observer uncertainties. All hit rates we provide in this paper are absolute hit rates, i.e., they have not been rescaled to the achievable hit-rate range.
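The hit rate (18) and the majority hit rate can be computed directly from the raw choices. The sketch below assumes a simple, hypothetical record format (IDM scores of the left and right image plus the observer's choice, with ties already removed).

```python
# Minimal sketch of the hit rate (18) and the majority hit rate from raw
# paired-comparison data; the record layout is an assumption.
from collections import defaultdict

def hit_rate(records):
    """records: sequence of (score_left, score_right, chosen) with chosen in {0, 1}."""
    hits = sum(1 for s_l, s_r, chosen in records
               if (s_l < s_r and chosen == 0) or (s_r < s_l and chosen == 1))
    return hits / len(records)                      # p_hat = i / m, cf. (18)

def majority_hit_rate(records, pair_id):
    """Upper bound reached when the majority choice is predicted for every pair.
    pair_id: callable mapping a record to an identifier of its image pair."""
    votes = defaultdict(lambda: [0, 0])
    for rec in records:
        votes[pair_id(rec)][rec[2]] += 1            # count left/right choices
    agreeing = sum(max(v) for v in votes.values())
    return agreeing / sum(sum(v) for v in votes.values())
```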

C. Significance Analysis

Even if an IDM has a higher hit rate than another on the same data, this may have happened by chance. To determine whether the hit rates are significantly different, we assume that the IDMs' predictions of observer choices can be modeled as binomial distributions. The respective success probabilities p_1 and p_2, i.e., the probabilities of a correct prediction, are unknown. We denote m as the total number of choices; i_1 and i_2 are the correctly predicted choices by the first and second IDM. Yule's two-sample binomial confidence interval [48] for p_1 − p_2 with α = 0.05 is then computed as follows:

$$I = \left[\hat{p}_1 - \hat{p}_2 - \psi;\ \hat{p}_1 - \hat{p}_2 + \psi\right] \quad \text{with} \quad \psi = z_{\alpha/2}\sqrt{(2/m)\,\bar{p}\,\bar{q}} \tag{19}$$

where p̂_1 = i_1/m, p̂_2 = i_2/m, p̄ = (i_1 + i_2)/(2m), q̄ = 1 − p̄, and z_{α/2} is the upper α/2 quantile of the standard normal distribution [48]. The hit rates are assumed to be significantly different if 0 ∉ I.
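For reference, (19) in code form. This is a sketch using SciPy's normal quantile; i1, i2, and m are the counts defined above.

```python
# Minimal sketch of the significance test from (19): Yule's two-sample
# binomial confidence interval for the difference of success probabilities.
from math import sqrt
from scipy.stats import norm

def yule_interval(i1, i2, m, alpha=0.05):
    """i1, i2: correctly predicted choices of the two IDMs; m: total choices."""
    p1, p2 = i1 / m, i2 / m
    p_bar = (i1 + i2) / (2 * m)
    q_bar = 1.0 - p_bar
    z = norm.ppf(1.0 - alpha / 2)                   # upper alpha/2 quantile
    psi = z * sqrt((2.0 / m) * p_bar * q_bar)
    return (p1 - p2 - psi, p1 - p2 + psi)

# The hit rates differ significantly (95% level) if 0 lies outside the interval:
# lo, hi = yule_interval(i1, i2, m); significant = not (lo <= 0.0 <= hi)
```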

Fig. 4. Structure of the IDMs proposed in this paper. All IDMs are based on our image-difference framework from Fig. 1.

D. Working Color Space

We chose the LAB2000HL space [30] as our working color space, because it was designed to satisfy the requirements stated in Section II-B. Its perceptual uniformity is based upon the CIEDE2000 color-difference formula [49] and only holds for small color differences. In addition, this color space is hue linear with respect to the Hung and Berns data of constant perceived hue [50]. This means that the perceived hue remains constant on lines of constant predicted hue in the color space. Other possible working spaces are IPT [51] and CIECAM02 [28], [52].

E. Image-Appearance Models

Some important viewing-condition parameters of the visual data were not available, e.g., the luminance level. As a result, our normalization step is limited to contrast-sensitivity filtering of the input images. However, the gamut-mapping experiments were conducted on liquid crystal displays in a typical office environment. Since the working color space LAB2000HL was designed for related viewing conditions, the normalization to an average viewing distance is probably the most important adjustment.
From the variety of existing contrast-sensitivity functions (CSFs) we included two into our evaluation:
1) The chromatic and achromatic CSFs proposed for evaluating image differences with the iCAM framework [53]. As suggested by Johnson and Fairchild [54], the bandpass-shaped achromatic CSF was turned into a lowpass filter and clipped above 1 (dashed line in Fig. 3 of Ref. [54]). These CSFs were applied in two different color spaces: the working color space LAB2000HL [30] and the intensity-linear orthogonal opponent color space YCC [53]. Filtering was performed in the frequency domain. The corresponding IDMs were denoted as IDM-CSF1 (filtering in LAB2000HL) and IDM-CSF2 (filtering in YCC).
2) The chromatic and achromatic CSFs used by the S-CIELAB model applied in the intensity-linear AC1C2 opponent color space as proposed by Zhang and Wandell [55]. The images were transformed into this color space and convolved with the CSFs in the spatial domain. The corresponding IDM was denoted as IDM-CSF3.
Please note that both contrast-sensitivity models apply different filters to the achromatic and chromatic channels. To adjust the CSFs to the viewing distance we assumed a spatial frequency of 20 cycles per degree, which corresponds to a viewing distance of 75 cm for the average pixel pitch of the utilized displays. We also created an IDM without CSF filtering and denoted it as IDM-None.
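A sketch of viewing-distance-dependent CSF filtering in the frequency domain follows. The actual CSFs from [53]-[55] are not reproduced; the lowpass curve below (flat up to 4 cycles/degree, exponential falloff, clipped at 1) is only a placeholder, and the pixels-per-degree value is an assumption standing in for the display geometry described above.

```python
# Illustrative frequency-domain CSF filtering for one opponent channel.
# The CSF curve is a placeholder, not the Johnson/Fairchild or S-CIELAB CSF.
import numpy as np

def csf_filter(channel, pixels_per_degree=40.0):
    """channel: 2-D array of one opponent channel, sampled at the given
    pixels-per-degree (derived from viewing distance and pixel pitch)."""
    M, N = channel.shape
    fy = np.fft.fftfreq(M, d=1.0 / pixels_per_degree)   # cycles per degree
    fx = np.fft.fftfreq(N, d=1.0 / pixels_per_degree)
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    csf = np.minimum(1.0, np.exp(-(f - 4.0) / 8.0))     # placeholder lowpass CSF
    return np.real(np.fft.ifft2(np.fft.fft2(channel) * csf))

# In an IDM, the achromatic and the two chromatic channels would each be
# filtered with their own CSF before the comparison terms are computed.
```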

F. Fitting the Parameters

A part of the gamut-mapping datasets (see Section IV-A) was used to determine the parameters c_1, ..., c_5 of L_L, C_L, S_L, L_C, and L_H. We selected ≈50% of the reference images from each dataset and combined them into a training set. This set contained 162 reference images with 1,320 corresponding distorted images and 14,239 observer choices. The parameters of the IDMs were optimized by maximizing their hit rates on the training set. The remaining images composed the test set, on which we obtained the following results. The majority hit rate on these test data is p̂_m = 0.801. A hit-rate difference of about 0.01 or more indicates a significant difference on a 95% confidence level (see Section IV-C for details).
For every IDM, the corresponding hit rates did not change appreciably if the parameters were varied to some extent. Since a unified parameter set is desirable, we propose a set of simple parameters as listed in Table I. Using these instead of the optimized parameters does not affect the hit rates significantly. Note that the results presented in the following are based on the optimized parameters.
The parameters c_2, c_3 cannot be compared with c_1, c_4, c_5, because they are used differently in their respective IDFs. However, since c_2 = c_3, lightness-contrast and lightness-structure differences are weighted equally. The parameter c_5 is considerably greater than both c_1 and c_4, indicating that deviations in hue have greater influence on the predictions than deviations in lightness and chroma of similar magnitude. This agrees with heuristics commonly employed by gamut-mapping algorithms [5].
The structure of the IDMs we test in this paper is provided in Fig. 4. All difference maps computed by IDM-CSF3 for a test pair are shown in Fig. 5.
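The parameter fitting described above can be sketched as a search that maximizes the training hit rate. The random search below is an assumption (the paper does not state which optimizer was used), and it reuses the hit_rate helper sketched in Section IV-B.

```python
# Hypothetical parameter search maximizing the training hit rate; the
# log-uniform sampling range for c_1..c_5 and the optimizer are assumptions.
import numpy as np

def fit_parameters(training_trials, predict, n_trials=500, seed=0):
    """training_trials: sequence of (pair_left, pair_right, chosen) items, where
    each pair is a (reference, distorted) image tuple and chosen is 0 or 1.
    predict: callable (pair, params) -> IDM score (smaller = more similar)."""
    rng = np.random.default_rng(seed)
    best_params, best_rate = None, -1.0
    for _ in range(n_trials):
        params = 10.0 ** rng.uniform(-4.0, 0.0, size=5)     # candidate c_1..c_5
        records = [(predict(left, params), predict(right, params), chosen)
                   for left, right, chosen in training_trials]
        rate = hit_rate(records)           # helper from the Section IV-B sketch
        if rate > best_rate:
            best_params, best_rate = params, rate
    return best_params, best_rate
```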


Fig. 5. Example of all difference maps computed for the images from Fig. 3 using IDM-CSF3 (S-CIELAB filtering). The L_L^1 map (largest scale) illustrates this concept. However, in accordance with Fig. 4, only the smallest scale L_L^n is used by our IDMs. The original image is part of the Kodak Lossless True Color Image Suite [21].

V. RESULTS AND DISCUSSION

The major aim of the experiments is to determine the impact of each IDF on the hit rate. We are also interested in how different contrast-sensitivity models and the multiscale approach affect the results. The SSIM index serves as a reference in our evaluation; it performs significantly better on the experimental data (see Section IV-A) than all other image-quality measures included in the MeTriX MuX Package [56] and the PSNR-HVS [57] measure.

A. How Do Our IDFs Affect the Prediction Performance?

Hit rates for all combinations of single-scale IDFs are shown in Fig. 6. All IDF combinations that use a particular contrast-sensitivity model share the same parameters c_i. They were optimized for the IDMs with all five IDFs (last column in Fig. 6) on the training data. SSIM's hit rate on the test data (0.650) is marked by a red line. To ensure a fair comparison, the parameters c_i of the SSIM index were also optimized on the training data. However, the SSIM index with default parameters shows almost the same performance (hit rate = 0.649). Fig. 6 allows the following conclusions:
1) Most hit rates of IDMs that use CSF filtering are not significantly different. Based on these results, we cannot recommend a particular contrast-sensitivity model. Neglecting the viewing distance, however, results in an inferior prediction performance in most cases.
2) The combination of the three lightness-based IDFs (L_L, C_L, and S_L) performs better than the SSIM index, but not significantly better. It seems that a perceptually uniform lightness scale in combination with our adjusted IDF L_L (see (9)) has a positive but minor effect on the prediction performance.
3) The lightness-contrast IDF C_L is the most important IDF. Adding C_L to any combination of IDFs significantly improves the hit rate in all cases.
4) Adding one or both chromatic IDFs (L_C and L_H) to the lightness-based IDFs (L_L, C_L, and S_L) results in hit rates that are significantly higher than that of SSIM. The best IDM shows an improvement of ≈10% of the achievable hit-rate range (see Section IV-B) compared to the SSIM index.
5) There is still much room for improvement: the best IDM has a hit rate of 0.681, which is far below the majority hit rate (0.801). However, it is unlikely that this gap can be closed by adding low-level features without considering other factors such as image semantics.

B. Does a Multiscale Approach Improve the Predictions?

Fig. 7 provides hit rates of the multiscale IDMs operating on 1–5 scales. All hit rates were computed on our gamut-mapping test set. The hit rates of the SSIM index (red line) and the multiscale SSIM index (M-SSIM, dashed black line) are included for comparison. Unlike for conventional distortions, the prediction performance of the M-SSIM index on the gamut-mapping distortions is significantly lower (0.632) than that of its single-scale counterpart (0.650). This also applies to our multiscale approach, which uses the same concept and weighting parameters α_i as the M-SSIM index. In most cases, the hit rates do not change significantly if 1–3 scales are employed. They drop if more scales are used.
One possible explanation is the disagreement between the viewing conditions in the gamut-mapping studies (40 pixels per degree) and the experiment to determine the α_i of the M-SSIM index (32 pixels per degree) [3]. To investigate the influence of this disagreement on the hit rates, we adjusted the α_i to the gamut-mapping conditions by interpolating the original parameters. The resulting hit rate of the adjusted M-SSIM index is the same (0.632). It is therefore unlikely that this minor difference in viewing distances has great influence on the hit rates.
This raises the question whether lightness distortions resulting from gamut mapping are fundamentally different from conventional distortions.

Fig. 6. Hit rates for all possible combinations of IDFs on the gamut-mapping test data. A hit-rate difference of about 0.01 (or more) is significant. The contrast-sensitivity models are abbreviated as CSF1 (Johnson/Fairchild in LAB2000HL), CSF2 (Johnson/Fairchild in YCC), and CSF3 (S-CIELAB).

Fig. 7. Relationship between number of scales and hit rate on the gamut-mapping test data. A hit-rate difference of about 0.01 (or more) is significant on a 95% confidence level.

Fig. 8. Pearson correlations between C_L^1 S_L^1 (largest scale) and C_L^i S_L^i, i = 1, ..., 5. The correlations of corresponding scale values used by the M-SSIM index are given for comparison.

To investigate this issue with respect to our lightness-based IDFs, we calculated the Pearson correlations between IDFs extracted from the first scale, C_L^1 S_L^1, and higher scales, C_L^i S_L^i, i = 2, ..., 5. Fig. 8 shows the results. For the gamut-mapping database, the correlations between IDFs across scales are high. This means that the image-difference data extracted from scales 2–5 are very similar to those extracted from scale 1.

In contrast, correlations across scales are much smaller for the TID2008. These findings also apply to the corresponding terms of the M-SSIM index. It appears that gamut-mapping distortions in the lightness component are indeed very different from conventional distortions. Further analysis of such image degradations is required to find multiscale strategies that improve the prediction performance. As we focus on color-related aspects of image-difference prediction, we leave this to future research.

C. Is Color Important for Judging Conventional Distortions?

To investigate this question we tested our IDMs with and without the hue- and chroma-based IDFs on conventional distortions from the TID2008. We compared the results with those of the M-SSIM index and two implementations of the SSIM index, one of which (denoted as SSIM09) takes the viewing distance into account. Both SSIM implementations show similar performance on the gamut-mapping data, but differ considerably on the TID2008. As only the mean opinion scores (MOS) were available, the Spearman correlation was used as a performance indicator. The results are summarized in Table II; the multiscale IDMs use five scales just like the M-SSIM index. Note that the parameters c_i were the same as in the gamut-mapping evaluation.

TABLE II
SPEARMAN CORRELATIONS ON THE TID2008

               Single-scale   Multiscale (5 scales)
SSIM           0.625          0.853
SSIM09         0.775          —
IDM-CSF1       0.643          0.789
IDM-CSF1*      0.621          0.797
IDM-CSF2       0.642          0.793
IDM-CSF2*      0.622          0.790
IDM-CSF3       0.750          0.790
IDM-CSF3*      0.736          0.798
IDM-None       0.579          0.710
IDM-None*      0.585          0.792
* Hue- and chroma-based IDFs omitted.

Our results allow the following conclusions:
1) Hue- and chroma-based IDFs do not considerably affect the prediction performance of IDMs that use CSF filtering. They have a negative influence on the accuracy if the viewing distance is not taken into account.
2) The performance of the single-scale IDM-CSF1 and IDM-CSF2 is comparable to that of the SSIM index. The S-CIELAB-based IDM-CSF3 performs better; it almost matches the SSIM09 index.
3) In contrast to our results on the gamut-mapping data, all IDMs benefited from the multiscale approach. The M-SSIM index performs better than all proposed multiscale IDMs on the TID2008, even though the underlying concepts are similar.
In conclusion, color information is neither essential for judging conventional distortions nor does it adversely affect the predictions of our single-scale IDMs. Although our multiscale IDMs are inferior to the M-SSIM index on the TID2008, we should keep in mind that our parameters were optimized only on gamut-mapping data.

VI. CONCLUSION

We presented a framework for the assessment of perceived image differences. It normalizes the images to specific viewing conditions with an image-appearance model, extracts image-difference features (IDFs) that are based upon hypotheses on perceptually important distortions, and combines them into an overall image-difference prediction. Particular emphasis was placed on color distortions, especially those resulting from gamut-mapping transformations.
We created image-difference measures (IDMs) based on this framework using IDFs adopted from the terms of the SSIM index. They are numerical representations of assumptions about perceptually important achromatic and chromatic distortions.
We tested the framework on gamut-mapping distortions using several datasets. Only the viewing distance was considered in the normalization step, because other viewing-condition parameters were not available. Our main goal was to investigate the impact of chromatic IDFs on the prediction performance. We also tested if viewing-distance normalization as well as multiscale IDFs adopted from the M-SSIM index significantly affect the prediction of gamut-mapping distortions.
On gamut-mapped images, the achromatic IDFs achieve a prediction performance similar to that of the SSIM index. Our most important conclusion is that adding a chroma- or hue-based IDF (or both) significantly improves the predictions on the gamut-mapping data. This illustrates the benefit of including color information into image-difference measures. The most accurate IDM proposed in this paper improves on the SSIM index by about 10% of the achievable hit-rate range on gamut-mapped images. Furthermore, chromatic IDFs do not adversely affect the prediction performance on conventional distortions, such as noise and blur, from the Tampere Image Database 2008 (TID2008). It should be mentioned that our best hit rate is still far below the maximum achievable hit rate; there is room for improvement in predicting gamut-mapping distortions.
Our results show the importance of normalizing the input images to a specific viewing distance. The prediction performance with normalization is generally higher than without normalization. This applies to gamut-mapping distortions as well as conventional distortions.
Finally, using lightness-based multiscale IDFs adopted from the M-SSIM index decreases the prediction performance on gamut-mapped images. This is in contrast to our results on conventional distortions. We performed a multiscale analysis of lightness distortions resulting from gamut mapping. Our results show that lightness-based IDFs extracted from different scales show a much higher inter-scale correlation than for conventional distortions. This suggests that a more suitable multiscale approach could further increase the prediction accuracy.
We believe that our most important contributions are the image-difference framework, the chromatic difference features, and the hit-rate-based significance analysis of the prediction performance. These concepts could aid the creation and testing of image-difference measures.
Future research should focus on the creation of an improved image-difference database of gamut-mapped images. The images used in most gamut-mapping experiments exhibit similar distortions, e.g., reduced chroma and almost no change in hue. IDMs trained on such data may underestimate the importance of chroma changes because all images exhibit reduced chroma. For optimal results, a database with highly uncorrelated distortions is required. To test if further improvements are possible using only low-level image-difference features, both semantic and non-semantic distortions should be included in such a database.
Implementations of our IDMs are provided as MATLAB code on our website [58]. For the sake of simplicity, these IDMs can be seen as different configurations of a single IDM, which we call the CID measure ("color-image-difference measure"). By default, it uses only a single scale, S-CIELAB as an image-appearance model (at 20 cycles per degree), and the proposed parameters from Table I.

APPENDIX
TRANSFORMING THE SSIM LUMINANCE FUNCTION FROM INTENSITY LINEARITY INTO PERCEPTUAL UNIFORMITY

The SSIM luminance function reads as follows [2]:

$$l(x, y) = \frac{2\mu_x \mu_y + c_1}{\mu_x^2 + \mu_y^2 + c_1}. \tag{20}$$

It depends strongly on the intensity level, i.e., for a constant difference between μ_x and μ_y the function value increases with increasing absolute values of μ_x and μ_y. This is illustrated in Fig. 9. We neglect the parameter c_1, which was included to stabilize the term if the denominator is close to zero. Instead of mean values μ_x and μ_y we use I_x and I_y to emphasize that we are in the intensity domain:

$$l(x, y) = \frac{2 I_x I_y}{I_x^2 + I_y^2}. \tag{21}$$

Fig. 9. SSIM l-function from (20) for different values of μ_x and μ_y. The constant c_1 was set to 6.5025 as in [2].

Applying Fechner's law [59] yields a logarithmic relation of intensity I to perceptually uniform lightness L:

$$L_x = L_{\max} \cdot \ln\frac{I_x}{I_{\max}} \qquad L_y = L_{\max} \cdot \ln\frac{I_y}{I_{\max}} \tag{22}$$

and

$$I_x = I_{\max} \cdot e^{L_x / L_{\max}} \qquad I_y = I_{\max} \cdot e^{L_y / L_{\max}} \tag{23}$$

where I_max and L_max represent the maximum intensity I and lightness L, respectively. Stevens showed that Fechner's law is not quite correct [60]. However, because the difference between Fechner's logarithmic function and Stevens' power function is rather small, we use Fechner's law for the sake of simplicity. Substituting (23) into (21) leads to:

$$l_L(x, y) = \frac{2\, e^{L_x/L_{\max}}\, e^{L_y/L_{\max}}}{e^{2 L_x/L_{\max}} + e^{2 L_y/L_{\max}}} = \frac{2\, e^{L_x/L_{\max}}\, e^{-L_y/L_{\max}}}{e^{2 L_x/L_{\max}}\, e^{-2 L_y/L_{\max}} + 1}. \tag{24}$$

With

$$\Delta L_{xy} = \frac{L_x}{L_{\max}} - \frac{L_y}{L_{\max}} \tag{25}$$

(24) reads:

$$l_L(x, y) = \frac{2\, e^{\Delta L_{xy}}}{e^{2 \Delta L_{xy}} + 1} = \frac{1}{\cosh(\Delta L_{xy})}. \tag{26}$$

If ΔL_xy ≪ 1, we can make the following approximation using the first two terms of the corresponding Taylor series:

$$l_L(x, y) = \frac{1}{\cosh(\Delta L_{xy})} \approx \frac{1}{1 + (\Delta L_{xy})^2 / 2}. \tag{27}$$

Thus, in a perceptually uniform color space, the term in (27) corresponds closely to the term in (20) in an intensity-linear color space.

REFERENCES

[1] A. C. Bovik. (2011). New dimensions in visual quality. Presented at the Electronic Imaging Conf. [Online]. Available: http://river-valley.tv/newdimensions-in-visual-quality/
[2] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
[3] Z. Wang, E. P. Simoncelli, and A. C. Bovik, "Multiscale structural similarity for image quality assessment," in Proc. IEEE 37th Asilomar Conf. Signals, Syst. Comput., vol. 2, Pacific Grove, CA, Nov. 2003, pp. 1398–1402.
[4] H. R. Sheikh, M. F. Sabir, and A. C. Bovik, "A statistical evaluation of recent full reference image quality assessment algorithms," IEEE Trans. Image Process., vol. 15, no. 11, pp. 3440–3451, Nov. 2006.
[5] J. Morovič, Color Gamut Mapping. Chichester, U.K.: Wiley, 2008.
[6] E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting, 1st ed. San Francisco, CA: Morgan Kaufmann, 2006.
[7] Z. Wang and Q. Li, "Information content weighting for perceptual image quality assessment," IEEE Trans. Image Process., vol. 20, no. 5, pp. 1185–1198, May 2011.
[8] J. Preiss, I. Lissner, P. Urban, M. Scheller Lichtenauer, and P. Zolliker, "The impact of image-difference features on perceived image differences," in Proc. 6th Eur. Conf. Color Graph., Imag., Vis., Amsterdam, The Netherlands, 2012, pp. 43–48.
[9] M. Scheller Lichtenauer, P. Zolliker, I. Lissner, J. Preiss, and P. Urban, "Learning image similarity measures from choice data," in Proc. 6th Eur. Conf. Color Graph., Imag., Vis., Amsterdam, The Netherlands, 2012, pp. 24–30.
[10] N. Bonnier, F. Schmitt, H. Brettel, and S. Berche, "Evaluation of spatial gamut mapping algorithms," in Proc. 14th Color Imag. Conf., Scottsdale, AZ, 2006, pp. 56–61.
[11] P. Zolliker, Z. Barańczuk, and J. Giesen, "Image fusion for optimizing gamut mapping," in Proc. 19th Color Imag. Conf., San Jose, CA, 2011, pp. 109–114.
[12] N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, "Image quality assessment based on a degradation model," IEEE Trans. Image Process., vol. 9, no. 4, pp. 636–650, Apr. 2000.
[13] Z. Wang and A. C. Bovik, "A universal image quality index," IEEE Signal Process. Lett., vol. 9, no. 3, pp. 81–84, Mar. 2002.
[14] H. R. Sheikh, A. C. Bovik, and G. de Veciana, "An information fidelity criterion for image quality assessment using natural scene statistics," IEEE Trans. Image Process., vol. 14, no. 12, pp. 2117–2128, Dec. 2005.
[15] H. R. Sheikh and A. C. Bovik, "Image information and visual quality," IEEE Trans. Image Process., vol. 15, no. 2, pp. 430–444, Feb. 2006.
[16] D. M. Chandler and S. S. Hemami, "VSNR: A wavelet-based visual signal-to-noise ratio for natural images," IEEE Trans. Image Process., vol. 16, no. 9, pp. 2284–2298, Sep. 2007.
[17] J. Hardeberg, E. Bando, and M. Pedersen, "Evaluating colour image difference metrics for gamut-mapped images," Colorat. Technol., vol. 124, no. 4, pp. 243–253, 2008.
[18] J. Morovič and P.-L. Sun, "Predicting image differences in color reproduction from their colorimetric correlates," J. Imag. Sci. Technol., vol. 47, no. 6, pp. 509–516, 2003.
[19] Z. M. Parvez Sazzad, Y. Kawayoke, and Y. Horita, "No reference image quality assessment for JPEG2000 based on spatial features," Signal Process., Image Commun., vol. 23, no. 4, pp. 257–268, 2008.
[20] M. D. Fairchild, Color Appearance Models, 2nd ed. Chichester, U.K.: Wiley, 2006.
[21] Kodak Lossless True Color Image Suite. (2012) [Online]. Available: http://r0k.us/graphics/kodak/
[22] F. W. Campbell, J. J. Kulikowski, and J. Levinson, "The effect of orientation on the visual resolution of gratings," J. Physiol., vol. 187, no. 2, pp. 427–436, 1966.
[23] C. Owsley, R. Sekuler, and D. Siemsen, "Contrast sensitivity throughout adulthood," Vis. Res., vol. 23, no. 7, pp. 689–699, 1983.
[24] F. L. van Nes and M. A. Bouman, "Spatial modulation transfer in the human eye," J. Opt. Soc. Amer., vol. 57, no. 3, pp. 401–406, 1967.
[25] E. Peli, "Contrast in complex images," J. Opt. Soc. Amer. A, vol. 7, no. 10, pp. 2032–2040, 1990.
[26] D. B. Judd, "Ideal color space: Curvature of color space and its implications for industrial color tolerances," Palette, vol. 29, pp. 25–31, 1968.
[27] G. Cui, M. R. Luo, B. Rigg, G. Roesler, and K. Witt, "Uniform colour spaces based on the DIN99 colour-difference formula," Color Res. Appl., vol. 27, no. 4, pp. 282–290, 2002.
[28] N. Moroney, M. D. Fairchild, R. W. G. Hunt, C. Li, M. R. Luo, and T. Newman, "The CIECAM02 color appearance model," in Proc. 10th Color Imag. Conf., Scottsdale, AZ, 2002, pp. 23–27.
[29] P. Urban, D. Schleicher, M. R. Rosen, and R. S. Berns, "Embedding non-Euclidean color spaces into Euclidean color spaces with minimal isometric disagreement," J. Opt. Soc. Amer. A, vol. 24, no. 6, pp. 1516–1528, 2007.
[30] I. Lissner and P. Urban, "Toward a unified color space for perception-based image processing," IEEE Trans. Image Process., vol. 21, no. 3, pp. 1153–1168, Mar. 2012.
[31] I. Lissner, J. Preiss, and P. Urban, "Predicting image differences based on image-difference features," in Proc. 19th Color Imag. Conf., San Jose, CA, 2011, pp. 23–28.
[32] P. J. Bex and K. Langley, "The perception of suprathreshold contrast and fast adaptive filtering," J. Vis., vol. 7, no. 12, pp. 1–23, 2007.
[33] G. M. Johnson, X. Song, E. D. Montag, and M. D. Fairchild, "Derivation of a color space for image color difference measurement," Color Res. Appl., vol. 35, no. 6, pp. 387–400, 2010.
[34] F. Zhang, L. Ma, S. Li, and K. N. Ngan, "Practical image quality metric applied to image coding," IEEE Trans. Multimedia, vol. 13, no. 4, pp. 615–624, Aug. 2011.
[35] R. S. Berns, F. W. Billmeyer, K. Ikeda, A. R. Robertson, and K. Witt, "Parametric effects in colour-difference evaluation," Central Bureau of the CIE, Vienna, Austria, Tech. Rep. 101, 1993.
[36] R. G. Kuehni, Color Space and Its Divisions, 1st ed. Hoboken, NJ: Wiley, 2003.
[37] G. E. Legge and J. M. Foley, "Contrast masking in human vision," J. Opt. Soc. Amer., vol. 70, no. 12, pp. 1458–1471, 1980.
[38] X. Song, G. M. Johnson, and M. D. Fairchild, "Minimizing the perception of chromatic noise in digital images," in Proc. 12th Color Imag. Conf., Scottsdale, AZ, 2004, pp. 340–346.
[39] N. Ponomarenko, V. Lukin, A. Zelensky, K. Egiazarian, M. Carli, and F. Battisti, "TID2008—A database for evaluation of full-reference visual quality assessment metrics," Adv. Modern Radioelectron., vol. 10, pp. 30–45, 2009.
[40] N. Ponomarenko, F. Battisti, K. Egiazarian, J. Astola, and V. Lukin, "Metrics performance comparison for color image database," in Proc. 4th Int. Workshop Video Process. Qual. Metrics Consumer Electron., Scottsdale, AZ, 2009.
[41] P. Zolliker and K. Simon, "Retaining local image information in gamut mapping algorithms," IEEE Trans. Image Process., vol. 16, no. 3, pp. 664–672, Mar. 2007.
[42] J. Giesen, E. Schuberth, K. Simon, P. Zolliker, and O. Zweifel, "Image-dependent gamut mapping as optimization problem," IEEE Trans. Image Process., vol. 16, no. 10, pp. 2401–2410, Oct. 2007.
[43] F. Dugay, I. Farup, and J. Y. Hardeberg, "Perceptual evaluation of color gamut mapping algorithms," Color Res. Appl., vol. 33, no. 6, pp. 470–476, 2008.
[44] Z. Barańczuk, P. Zolliker, and J. Giesen, "Image-individualized gamut mapping algorithms," J. Imag. Sci. Technol., vol. 54, no. 3, pp. 030201-1–030201-7, 2010.
[45] P. Zolliker, Z. Barańczuk, I. Sprow, and J. Giesen, "Conjoint analysis for evaluating parameterized gamut mapping algorithms," IEEE Trans. Image Process., vol. 19, no. 3, pp. 758–769, Mar. 2010.
[46] L. L. Thurstone, "A law of comparative judgment," Psychol. Rev., vol. 34, no. 4, pp. 273–286, 1927.
[47] R. A. Bradley and M. E. Terry, "Rank analysis of incomplete block designs: I. The method of paired comparisons," Biometrika, vol. 39, nos. 3–4, pp. 324–345, 1952.
[48] L. Brown and X. Li, "Confidence intervals for two sample binomial distribution," J. Stat. Plan. Inference, vol. 130, nos. 1–2, pp. 359–375, 2005.
[49] D. H. Alman, R. S. Berns, H. Komatsubara, W. Li, M. R. Luo, M. Melgosa, J. H. Nobbs, B. Rigg, A. R. Robertson, and K. Witt, "Improvement to industrial colour-difference evaluation," Central Bureau of the CIE, Vienna, Austria, Tech. Rep. 142, 2001.
[50] P.-C. Hung and R. S. Berns, "Determination of constant Hue Loci for a CRT gamut and their predictions using color appearance spaces," Color Res. Appl., vol. 20, no. 5, pp. 285–295, 1995.
[51] F. Ebner and M. D. Fairchild, "Development and testing of a color space (IPT) with improved hue uniformity," in Proc. 6th Color Imag. Conf., Scottsdale, AZ, 1998, pp. 8–13.
[52] "A colour appearance model for colour management systems: CIECAM02," Central Bureau of the CIE, Vienna, Austria, Tech. Rep. 159, 2004.
[53] E. Reinhard, E. A. Khan, A. O. Akyüz, and G. M. Johnson, Color Imaging: Fundamentals and Applications. Wellesley, MA: A K Peters, 2008.
[54] G. M. Johnson and M. D. Fairchild, "Darwinism of color image difference models," in Proc. 9th Color Imag. Conf., Scottsdale, AZ, 2001, pp. 108–112.
[55] X. Zhang and B. A. Wandell, "A spatial extension of CIELAB for digital color image reproduction," in Soc. Inf. Display Symp. Tech. Dig., vol. 27, 1996, pp. 731–734.
[56] MeTriX MuX Visual Quality Assessment Package. (2012) [Online]. Available: http://foulard.ece.cornell.edu/gaubatz/metrix_mux/
[57] K. Egiazarian, J. Astola, N. Ponomarenko, V. Lukin, F. Battisti, and M. Carli, "Two new full-reference quality metrics based on HVS," in Proc. 2nd Int. Workshop Video Process. Qual. Metrics Consumer Electron., Scottsdale, AZ, 2006.
[58] MATLAB Implementation of the Color-Image-Difference (CID) Measure. (2012) [Online]. Available: http://www.idd.tu-darmstadt.de/color/papers
[59] G. T. Fechner, Elemente der Psychophysik, Erster Theil. Leipzig, Germany: Breitkopf und Härtel, 1860.
[60] S. S. Stevens, "To honor Fechner and repeal his law," Science, vol. 133, no. 3446, pp. 80–86, 1961.


Ingmar Lissner received the Engineering degree in computer science and engineering from the Hamburg University of Technology, Hamburg, Germany, in 2009. He is currently pursuing the Ph.D. degree with the Institute of Printing Science and Technology, Technische Universität Darmstadt, Darmstadt, Germany, where he is also a Research Assistant. His current research interests include color perception, uniform color spaces, and image-difference measures for color images.

Jens Preiss received the Diploma degree in physics (equivalent to the M.S. degree) from the University of Freiburg, Freiburg, Germany, in 2010, and is currently pursuing the Doctoral degree in color and imaging science with the Institute of Printing Science and Technology, Technische Universität Darmstadt, Darmstadt, Germany. He is currently a Research Assistant with the Institute of Printing Science and Technology.

Philipp Urban received the M.S. degree in mathematics from the University of Hamburg, Hamburg, Germany, and the Ph.D. degree from the Hamburg University of Technology, Hamburg. He was a Visiting Scientist with the Munsell Color Science Laboratory, Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, from 2006 to 2008. Since 2009, he has been the Head of the Color Research Group, Institute of Printing Science and Technology, Technische Universität Darmstadt, Darmstadt, Germany. His current research interests include spectral-based acquisition, processing, and reproduction of color images, considering the limited metameric and spectral gamut as well as the low dynamic range of output devices.

Matthias Scheller Lichtenauer received the Master of Science degree in computer science from ETH, Zurich, Switzerland, in 2008, and is currently pursuing the Ph.D. degree with the Group of Joachim Giesen, Friedrich-Schiller-University, Jena, Germany. He is currently with the Laboratory of Media Technology, Empa, Dübendorf, Switzerland, where he is researching the design and analysis of psychometric measurements.

Peter Zolliker received the Diploma degree in physics from ETH, Zurich, Switzerland, and the Ph.D. degree in crystallography from the University of Geneva, Geneva, Switzerland, in 1987. He was a Post-Doctoral Fellow with the Brookhaven National Laboratory, Upton, NY. He joined Gretag Imaging, Inc., Regensdorf, Switzerland, in 1988. Since 2003, he has been with Empa, Dübendorf, Switzerland, where he has been engaged in research on image quality, psychometrics, color management, and statistical analysis.
