Dynamic Colour Adaptation for Colour Object Tracking

Guy K. Kloss, Heesang Shin and Napoleon H. Reyes
Computer Science – Institute of Information & Mathematical Sciences
Massey University at Albany, Auckland, New Zealand
Email: {G.Kloss | H.Shin | N.H.Reyes}@massey.ac.nz
Abstract—This manuscript describes the implementation of a system that adaptively copes with the task of robust colour identification in the domain of robot soccer. The main challenge is to perform this in an environment of changing light conditions and/or with different cameras. The camera setup is first characterised using common techniques from colour management with ICC profiles. In a second step, current light conditions are compensated for by means of an affine colour transformation determined repeatedly at runtime. Lastly, a self-learning hybrid fuzzy colour contrast fusion algorithm is used to discriminate the detected colours optimally against each other. This enables the system to robustly and efficiently identify individual players and objects on the playing field.
I. INTRODUCTION

In colour vision based systems one often faces the challenge of having to distinguish a known set of colours robustly. This challenge becomes more difficult if the system has to operate under different light conditions, with different cameras (including different models), or even under changing light conditions. The human eye together with the brain sees not just single objects, but objects in context. It uses the context to normalise its perception, and it learns how to discriminate among the expected set of present colours. This paper describes our efforts to reach the same goal, employing technical approaches that are well known and proven to work, in an innovative combination.

Coloured objects appear quite different under varying illumination conditions. Beyond a change of light intensity, colour shifts occur quite obviously when the illumination composition is altered, e.g. from daylight to fluorescent light. Most widely available cameras feature automatic white balancing to compensate for some of these effects. Additionally, these cameras employ an internal "scene rendering" step that is not transparent to the user. These built-in means usually aim to achieve an acceptable and pleasant appearance of the scene for the end user with the least amount of hassle. However, they are often by no means suitable for robust quantitative reasoning on colour. In fact, these steps are mostly "harmful" to the intended task, as they alter the colour appearance in unpredictable ways.

A fuzzy colour contrast fusion algorithm (FCCF) [1] is employed to perform colour segmentation via contrast enhance or degrade operations. This is handled by a Heuristic-Assisted Genetic Algorithm (HAGA), which extracts near-optimum colour classifiers automatically by learning the characteristics of target colours [2].

This paper first describes the process of colour adaptation in Sect. II. The subsequent Sect. III discusses the identification of colours through trained classifiers. In Sect. IV we present the results of the individual process steps along with an overall evaluation.

II. COLOUR ADAPTATION

The goal of this undertaking is to achieve a system that can compensate for the encountered colour differences beyond a simple white point compensation (white balancing). Additionally, we require features to adapt dynamically up to a certain degree of variation in the illumination conditions. An analysis of traditional video processes has been conducted by Fairchild [3]. It notes with astonishment that colour encoding and processing still work reasonably well up to a certain point, even though standards are only followed partially (if not ignored completely).

Fortunately, a well working and proven industry standard already exists for this problem. ICC profiles – a defined format for device characterisation under specific conditions – have been standardised by the International Color Consortium (ICC) [4], and are also a confirmed international standard, ISO 15076-1:2005. The process employed with this is called "colour management." ICC white papers [5], [6] describe the process as it needs to be applied to digital photography, which can be used in this case as well. The automatic corrective processing of colour image information is quite good: a study compares manual colour corrections in image processing with automated ICC profiled work flows [7]. In most comparisons the automated work flow produces significantly better results, whereas in some no distinct superiority can be statistically derived.

A. Initial Camera Characterisation/ICC Profiling

The colour perception of an object through a camera in principle depends on three factors: the light composition of the scene, the object's colour, and finally the colour sensing characteristics of the camera device. These all have to be considered as spectral power distributions (over the visible range of the spectrum). If we consider the object to be of static colour, and assume the light conditions to be constant,
we can then determine the distinct capturing characteristics of the camera in the form of an ICC profile. To do this we need to capture an image of a so-called "test target" containing a series of precisely known colours. By computing a transformation from the a priori known canonical colours to the perceived colours of the camera, we can derive an ICC profile, encoding this information in the standard conforming format.

Fig. 1. The reference frame (exposure of 20 ms) after ICC based correction. The ICC profile was computed through characterisation using the 24 patch GretagMacbeth ColorChecker present in the image. The five different colour samples occurring repeatedly are of the same coloured card stock as used for manufacturing the colour markers of our soccer robots. (left) Indication of all seven colour areas used to compute the corrective affine transformation. (right) Only the areas of the blue colours for averaging to determine an affine transformation.

Fig. 1 illustrates the scene under analysis. The 24 patch GretagMacbeth ColorChecker has been placed in the scene (yellow frame in the left image). Its colour values are used for computing the initial characterisation to an ICC profile. This profile is computed using the open source profiler Argyll CMS [8]. We have chosen one set of conditions and settings as a reference, for which the profile was derived. This input profile is applied to every frame captured by the camera, using the open source colour management system Little CMS [9], [10], to yield a normalised representation.

Beyond the fact that the colour obtained after applying the ICC input profile is in a normalised – and therefore device independent – colour representation, it is also encoded in the so-called "profile connection space" (PCS). This can be chosen to be CIE XYZ or L*a*b*, as standardised by the International Commission on Illumination (Commission Internationale de l'Éclairage, CIE). The latter represents a colour space that is visually linearised, whereas other colour spaces like RGB are highly non-linear and often encoded with an unknown gamma coefficient. Therefore, we obtain this particular colour space – which due to its linearity is very suitable for colour analyses – "free of charge." Its lightness is solely encoded in the L* component, and the chromaticity is contained in the orthogonal chromaticity plane a*b*. For some analyses it can be converted into the equivalent L*C*h_ab polar notation, containing the chroma C*_ab (saturation) and hue h_ab.
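The per-frame normalisation step can be sketched in a few lines of Python via Pillow's ImageCms module, which wraps the same Little CMS engine. This is only an illustrative sketch, not our implementation; the profile and image file names are placeholders.

    from PIL import Image, ImageCms

    # Camera input profile from the Argyll CMS characterisation
    # (placeholder file name); the PCS target here is CIE L*a*b*.
    camera_profile = ImageCms.getOpenProfile("camera_reference.icc")
    lab_profile = ImageCms.createProfile("LAB")

    # Build the transform once, then apply it to every captured frame.
    to_lab = ImageCms.buildTransform(camera_profile, lab_profile,
                                     inMode="RGB", outMode="LAB")

    frame = Image.open("frame_20ms.png").convert("RGB")
    frame_lab = ImageCms.applyTransform(frame, to_lab)  # device independent L*a*b*

Building the transform once and reusing it per frame keeps the profile parsing overhead out of the capture loop.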
B. Dynamic Colour Adaptation

The initial colour characterisation in Sect. II-A describes a static colour correction process. Over the course of a running session, however, the illumination conditions may change: bystanders may shadow secondary light sources to a varying degree, indirect daylight through open windows may change its quality due to clouds, etc. Methodologies for obtaining colour stability in these environments by means of ICC profile based colour correction and de-coupled profile adaptation have been described previously [11]. This paper focuses on the actual algorithm to obtain an updated ICC profile.

As mentioned previously in this Sect. II, white balancing is the simplest form of a chromatic adaptation transformation. It is based on a (linear) von Kries type transformation (mostly a Bradford transformation) in the cone response domain. For higher accuracy we extend this type of approach to an affine transformation. For a white point adaptation we just need one known point, the white point, to determine the transformation. For an affine transformation we must have knowledge of at least four affinely independent colour points in R^3. This transformation can compensate for translation, rotation and shearing, as well as independent scaling per dimension.

It is fairly simple to use many of the approaches from colour constancy [12], [13] to estimate a white point. It is trickier to determine further independent points in colour space. Fortunately, robot soccer is a domain in which we can employ some a priori knowledge, as the playing field, the markings, the ball, and the colour markers of the individual players do not change during the course of an event. After the initial static characterisation (see Sect. II-A) a set of colour perceptions for these objects can be determined as a reference, and they can be identified again during the course of the game (e.g. through tracking). Therefore, it is fairly easy to establish a set of n ≥ 4 colour points that can be used to determine an affine transformation. So even if one (or a few) colour samples cannot be identified during the game, it should still be possible to obtain an overdetermined linear system to solve. We have been using an n-dimensional regression type of solution to this problem [14]. Single measurements with a high error may have a strong negative influence on the overall result; stating and solving the problem in a sufficiently overdetermined way avoids that issue. Other authors have used non-linear polynomial approaches for similar types of colour space matching, but applied to the non-linear RGB colour space [15].
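A regression of this kind can be written compactly with NumPy, in the spirit of [14]. The following is a minimal sketch under our own naming, with made-up example values; the measured points are augmented with a constant one so that the translation component is fitted alongside the linear part.

    import numpy as np

    def fit_affine(measured, reference):
        """Least-squares affine map in R^3: reference ~= measured @ M + t.

        measured, reference: (n, 3) arrays of L*a*b* values, n >= 4.
        Returns a (4, 3) matrix; the first three rows are the linear
        part, the last row the translation.
        """
        n = measured.shape[0]
        # Augment with a column of ones so the translation is fitted too.
        A = np.hstack([measured, np.ones((n, 1))])
        coeffs, *_ = np.linalg.lstsq(A, reference, rcond=None)
        return coeffs

    def apply_affine(coeffs, colours):
        """Apply the fitted transformation to an (n, 3) array of colours."""
        return colours @ coeffs[:3] + coeffs[3]

    # Example with seven patch colours (placeholder L*a*b* values):
    measured = np.array([[52.0, 60.1, 41.2],   # red
                         [65.3, 30.4, 60.8],   # orange
                         [85.1, -3.2, 80.5],   # yellow
                         [55.2, -40.3, 30.1],  # green
                         [45.8, 5.1, -45.7],   # blue
                         [95.0, 0.4, 1.2],     # white
                         [12.1, 0.2, -0.8]])   # black
    reference = measured + np.random.normal(0, 1, measured.shape)  # stand-in
    M = fit_affine(measured, reference)
    corrected = apply_affine(M, measured)

With seven patches there are 21 equations for 12 unknowns, so the system is overdetermined and single noisy measurements are averaged out rather than dominating the fit.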
For our experiments we have used the same card stock as used for marking our robots. The samples in Fig. 1 are arranged repeatedly in nine zones of the playing field. Colour information of these samples is averaged in the indicated areas (red frames). For clarity in this particular case, the white and black information has simply been extracted from the white and black patches of the ColorChecker. The redundant arrangement has been chosen because of the slightly inhomogeneous illumination of the scene, as can be seen from just the blue stock samples illustrated in the right image of the figure (red frames).

In the example data set used for this publication, we have used the existing diffuse fluorescent office illumination without any "daylight influences." For an easily reproducible comparison of different illumination qualities we have altered the camera's exposure setting to emulate varying light intensities. In the domain of robot soccer, varying light intensities are much more commonly encountered than varying illumination qualities (which we have also analysed, outside the scope of the robot soccer domain). One "good" exposure setting was determined and used as a reference. From that reference the exposure was increased and decreased in steps towards over and under exposed images, respectively.

If image pixels reach either end of the dynamic range of the camera at its current settings, they are "clipped" and assume a value of either 255 or zero. If pixel values for channels are clipped, their chromaticity is beyond the detectable range of the camera, and the use of such pixels leads to distorted results. The reference setting therefore needs to be chosen to contain only few clipped pixels in order to be of adequate use. In this case an exposure time of 20 ms gave such good intermediate results. Further images were acquired with settings of 3, 5, and 10 ms for under exposed images, and 30, 40, 50, 60, 70 and 80 ms for over exposed images.

All images are first statically colour corrected against the reference profile obtained for the 20 ms exposure time to obtain an L*a*b* image. Then the affine colour transformation is computed by using the seven identified colour values (red, orange, yellow, blue and green of the card stock as well as white and black on the test chart of Fig. 1) against their reference values. This affine colour correction is applied to the image. Colour difference metrics are extracted (see Sect. IV-A), and a corrected colour image as well as the image's colour data in L*C*h_ab colour space is passed on to the colour segmentation algorithm (see Sect. III). For comparison, an equivalent process has been conducted without applying the affine transformation. Yellow exhibits a very high lightness, and therefore tends to be subject to pixel clipping in its channels earlier than other colours. For that reason we have also conducted an analogous test run including the affine transformation above, but using all colours except for yellow.

III. COLOUR SEGMENTATION

The colour segmentation is performed on an object's chromaticity. That means that the full colour space in R^3 (L*a*b* or RGB) is reduced by lightness to contain chromaticity information only, consisting (in a polar description) of hue and saturation in R^2. For L*a*b* this can be obtained by dropping the L* component, yielding a*b* (or the mathematically equivalent polar description C*h_ab) for the chromaticity plane. For RGB, the R and G components are divided by the sum R + G + B to form the rg chromaticity plane.
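Both reductions are straightforward; a small NumPy sketch (the helper names are ours):

    import numpy as np

    def rg_chromaticity(rgb):
        """Reduce (n, 3) RGB values to the (n, 2) rg chromaticity plane."""
        s = rgb.sum(axis=1, keepdims=True)
        s[s == 0] = 1.0                   # guard against black pixels
        return rgb[:, :2] / s             # r = R/(R+G+B), g = G/(R+G+B)

    def lab_to_chroma_hue(lab):
        """Drop L* and convert a*b* to polar chroma C*_ab and hue h_ab."""
        a, b = lab[:, 1], lab[:, 2]
        chroma = np.hypot(a, b)
        hue = np.degrees(np.arctan2(b, a)) % 360.0
        return chroma, hue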
For accurately classifying the colours comprising the target colour objects, we employ Fuzzy Colour Contrast Fusion (FCCF) with a Heuristic-Assisted Genetic Algorithm (HAGA) [1], [2], [16]. FCCF-HAGA is a colour correction algorithm that effectively compensates for the effects of spatially varying illumination intensities in the scene. Given a set of target colour objects exposed under varying illumination conditions, the algorithm is able to automatically extract colour descriptors for each target object, including the fuzzy contrast rules that are needed for performing the compensation. One of the strengths of FCCF-HAGA is that the system calibration needs to be performed only once for any particular exploratory environment. In addition, for real-time execution, the colour descriptors for all target objects are stored in a Variable Colour Depth Look-Up Table (VLUT) [16].

For automatic training using HAGA, an empirically determined fitness function (1) is used. This fitness function is a combination of sigmoid functions that increasingly starts rewarding below a certain threshold in the false positive ratio (f_f approx. 0.1), provided that the true positive ratio (f_t) is increasing at the same time. The input values are relative to the total number of pixels processed, n_total:

$$f(f_t, f_f) = \frac{1}{2}\left[\frac{1 - \frac{1}{1 + e^{-75\,(f_t - 0.05)}}}{e^{-10\,(f_f - 0.5)}} + \frac{1}{1 + e^{-10\,(f_f - 0.4)}}\right],
\qquad f_t = \frac{n_t}{n_{\mathrm{total}}}, \quad f_f = \frac{n_f}{n_{\mathrm{total}}} \qquad (1)$$
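A direct transcription of Eq. (1) into code, with variable names of our choosing, would read:

    import math

    def fitness(n_t, n_f, n_total):
        """Fitness per Eq. (1): n_t true positive pixels, n_f false
        positive pixels, n_total pixels processed in total."""
        f_t = n_t / n_total
        f_f = n_f / n_total
        gate = 1.0 - 1.0 / (1.0 + math.exp(-75.0 * (f_t - 0.05)))
        return 0.5 * (gate / math.exp(-10.0 * (f_f - 0.5))
                      + 1.0 / (1.0 + math.exp(-10.0 * (f_f - 0.4))))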
We have trained and tested our FCCF-HAGA colour classifiers for each type of scene: colour adapted images and original images. The focus of the investigation is on the efficacy of the algorithms in terms of classifying colour objects.

IV. RESULTS

We have collected results for evaluation of both the colour adaptation and the subsequent colour segmentation. Each is compared against a process that circumvents the corrective measures, as a reference.

A. Colour Adaptation

First of all we perform a visual inspection of one of the samples through the various steps of colour correction (Fig. 2). Quantitative colour information is very important, but it can at times be deceptive about the true accuracy of the correction. As one can see, the dark base board of the field has a slightly different colour tint, but the colour patches fit quite well visually. The mean ΔE*_ab value relative to the reference over the seven colour patches has dropped from 24 (second image) to 6.5 (third image) after the corrective transformation. The L*a*b* colour space has been set up so that a ΔE*_ab value (Euclidean distance) of one is visually just distinguishable if the coloured areas are directly adjacent.

To get a feeling for the effectiveness of the different corrective transformations, we have plotted the obtained ΔE*_ab values for the different colour corrections against the camera's exposure setting, as illustrated in Fig. 3. As the exposure of 20 ms was used as the reference, its colour difference is by definition equal to zero. The solid lines illustrate the change in the mean colour difference for all seven colour patches under scrutiny, whereas the dotted lines indicate the respective maximum and minimum values of the set.
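The ΔE*_ab figures quoted here are, as stated above, plain Euclidean distances in L*a*b*; for reference, a minimal sketch (the function name is ours):

    import numpy as np

    def delta_e_ab(lab1, lab2):
        """Colour difference as Euclidean distance between (n, 3)
        arrays of L*a*b* values; returns (n,) distances."""
        return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=1)

    # Mean difference of the seven patches against their references:
    # mean_de = delta_e_ab(corrected_patches, reference_patches).mean()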
Fig. 2. The individual steps of the colour correction. (top left) Frame as captured from the camera at 10 ms exposure time. (top right) After application of the reference ICC profile. (bottom left) After application of the corrective affine transformation. (bottom right) The reference image (also colour corrected).
Fig. 4. Fitness scores of the trained classifiers plotted against the camera's exposure settings. The solid curves show the mean value over the five colour samples placed within the image, the vertical bars indicate the range from minimal to maximal score for the samples. (top row) Results using colour corrected input images. (bottom row) For comparison, results using the raw camera images. (left column) Training performed on rg chromaticity values. (right column) Training performed on a*b* chromaticity values.
Fig. 3. Plot of the linear colour difference measure ΔE*_ab of the colour patches used for defining the transformation. The mean (solid lines) is plotted against the camera's exposure setting. Minimum and maximum colour differences (dotted lines) are shown to indicate the range of spread. The three sets of curves are determined by using different numbers of colour patches for obtaining the affine transformation: all seven colours, all colours except for yellow, and no corrective transformation.
Some observations become obvious when studying the diagram:
• If no colour correction is applied, the mean and maximum errors are lower for over exposed images. On the contrary, for any of the applied colour corrections the error becomes lower (i.e. the correction becomes more effective) for under exposed images.
• Starting from shutter speeds of around 40–50 ms, significant pixel clipping starts to happen within the areas of our colour samples. The chromaticity/saturation of the yellow patches starts to converge towards the white point, which limits the maximum error, and the dotted curves take a bend and "flatten out" towards an upper bound.
• The influence on the average of using the patch set with or without yellow is minute. Of course the accuracy of yellow colours will be lower when omitting that patch, but the other colours with low ΔE*_ab values will gain between 2 and 5 in accuracy.

If an application has a certain demand on colour detection quality, the workable range of camera exposures and tolerable light conditions can be significantly extended by using the adaptive colour correction presented in this paper. If the demand for robust colour separation is, for example, estimated at a threshold of ΔE*_ab < 10, we can estimate the acceptable exposure range. Using no dynamic adaptive corrections, the range (intersection with the ΔE*_ab = 10 line) is approximately 16–28 ms (a span of 12 ms). Using dynamic correction, that range increases to approximately 7–32 ms (25 ms). Usable exposure tolerances could therefore be more than doubled.

Finally, we might gain slight further improvements in colour correction quality by discarding colour patches as soon as they start to show a certain level of pixel clipping. This would result in hybrid results, choosing the better approach for the selection of patches used to compute the affine transformation.

B. Colour Segmentation

The experiments reflect the results of applying FCCF-HAGA to two types of scenes, namely the colour adapted ones and the original scenes, using the modified rg chromaticity colour space [1] and the a*b* chromaticity colour space separately. The performance of the algorithm is measured in terms of the fitness score. Table I shows the colour classification results on each target environment, for each target colour. The same results of the classification training are plotted in Fig. 4.
TABLE I
COLOUR CLASSIFICATION RESULTS ON TEST ENVIRONMENT

Shutter     Colour       Colour | Fitness Score for Colour Classification
Speed [ms]  Correction   Space  | Green     Red       Orange    Yellow    Light Blue
3           corrected    rg     | 0.980504  0.971354  0.976543  0.980676  0.977102
3           corrected    a*b*   | 0.980784  0.976634  0.978615  0.975172  0.901885
3           uncorrected  rg     | 0.978379  0.976149  0.977636  0.979257  0.977971
3           uncorrected  a*b*   | 0.979497  0.977762  0.976266  0.979323  0.959642
5           corrected    rg     | 0.981204  0.977919  0.978938  0.980770  0.977888
5           corrected    a*b*   | 0.981296  0.978480  0.980227  0.970982  0.909464
5           uncorrected  rg     | 0.979022  0.978196  0.979420  0.978540  0.977149
5           uncorrected  a*b*   | 0.980399  0.978610  0.980491  0.979804  0.972442
10          corrected    rg     | 0.981475  0.979044  0.979831  0.981368  0.978987
10          corrected    a*b*   | 0.981561  0.979830  0.979003  0.980502  0.975024
10          uncorrected  rg     | 0.980410  0.979758  0.981133  0.981284  0.978126
10          uncorrected  a*b*   | 0.980614  0.979961  0.981459  0.981323  0.971244
20          corrected    rg     | 0.980991  0.979264  0.979655  0.981119  0.980004
20          corrected    a*b*   | 0.981071  0.980787  0.981820  0.981301  0.905682
20          uncorrected  rg     | 0.981323  0.980761  0.980785  0.981631  0.979951
20          uncorrected  a*b*   | 0.980965  0.980438  0.982056  0.980597  0.249719
30          corrected    rg     | 0.980837  0.977757  0.978150  0.981351  0.975671
30          corrected    a*b*   | 0.981595  0.979981  0.981517  0.981107  0.978458
30          uncorrected  rg     | 0.981293  0.979915  0.980091  0.981603  0.979250
30          uncorrected  a*b*   | 0.980705  0.980173  0.980540  0.041886  0.012436
40          corrected    rg     | 0.980751  0.976846  0.978387  0.979957  0.978732
40          corrected    a*b*   | 0.981362  0.979207  0.977904  0.956908  0.978791
40          uncorrected  rg     | 0.981225  0.972791  0.978890  0.981372  0.979721
40          uncorrected  a*b*   | 0.981029  0.979920  0.978557  0.014014  0.012133
50          corrected    rg     | 0.971255  0.971992  0.973259  0.897306  0.977647
50          corrected    a*b*   | 0.978410  0.979128  0.969662  0.642684  0.977465
50          uncorrected  rg     | 0.980677  0.973589  0.973183  0.919584  0.979860
50          uncorrected  a*b*   | 0.975971  0.979673  0.977524  0.507861  0.012133
60          corrected    rg     | 0.955302  0.960196  0.955428  0.600204  0.962518
60          corrected    a*b*   | 0.949111  0.969078  0.968642  0.611677  0.975434
60          uncorrected  rg     | 0.964354  0.973571  0.975368  0.688870  0.979532
60          uncorrected  a*b*   | 0.896187  0.972811  0.915938  0.526336  0.499930
70          corrected    rg     | 0.811107  0.912728  0.935779  0.577732  0.955144
70          corrected    a*b*   | 0.836162  0.940325  0.945457  0.577554  0.965260
70          uncorrected  rg     | 0.833282  0.960505  0.977304  0.602570  0.969755
70          uncorrected  a*b*   | 0.496227  0.968111  0.707525  0.509335  0.496965
80          corrected    rg     | 0.496654  0.916925  0.899200  0.517188  0.915754
80          corrected    a*b*   | 0.496959  0.924651  0.937963  0.511223  0.913104
80          uncorrected  rg     | 0.496654  0.955742  0.975758  0.524232  0.930855
80          uncorrected  a*b*   | 0.496213  0.956361  0.390703  0.497741  0.012133
Generally, the results of colour classification using the modified rg chromaticity colour space are satisfactory for all experiments performed under 60 ms exposure time (shutter speed). This is true for both colour corrected and original target images. In the a*b* chromaticity colour space, however, the classification results on the colour corrected target images have clearly improved. We found that FCCF-HAGA works particularly well on colour corrected images, which results in two effects on the clustering: the clusters appear in the right place, and the pixel colours are more compact than in the uncorrected (original) image. This can be seen in Fig. 5. The two clusters of the light blue sample (left) are rotated to the left to fall into the right places, and the cluster of the yellow sample is rotated slightly to the right and "drawn" towards the origin in the plot. Additionally, many "outliers" were confined to the expected regions after the correction (not visible in the figure).

The fitness scores in Fig. 4 show the range for the colour classifier on each different test scene. The scores generally show that the brighter the image, the harder it is to segment the colours accurately, due to the colour clipping effect. Interestingly, it can be observed that there is a big improvement from the colour correction in the a*b* chromaticity space, between the original and the colour corrected images. The varying fitness scores observed in the experiments performed using the a*b* chromaticity space, as shown in Table I, are due to the inaccurate yellow and light blue colour segmentation results. Nevertheless, the performance clearly improved after the application of the colour adaptation technique.
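A pie-slice decision region of the kind used by the classifier confines a colour class between two hue angles and two chroma radii in the chromaticity plane. A hedged sketch of such a membership test (the function name and the example bounds are invented):

    import numpy as np

    def in_pie_slice(chroma, hue, hue_min, hue_max, c_min, c_max):
        """Membership test for a pie-slice region; hue in degrees,
        the hue interval may wrap around 360."""
        if hue_min <= hue_max:
            in_hue = (hue >= hue_min) & (hue <= hue_max)
        else:  # wrap-around interval, e.g. 350..10 degrees
            in_hue = (hue >= hue_min) | (hue <= hue_max)
        return in_hue & (chroma >= c_min) & (chroma <= c_max)

    # Hypothetical "yellow" region in a*b* chromaticity:
    # mask = in_pie_slice(chroma, hue, 80.0, 110.0, 20.0, 90.0)

Compact, well-placed clusters need a narrower slice, which is why the corrected images score higher under this test.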
Fig. 5. Effects of FCCF-HAGA on the cluster formations of pixels depicting the "yellow" and "light blue" target objects in a*b* chromaticity. The shutter speed was set to 20 ms and 30 ms in these experiments.
Fig. 5 illustrates the effects of the algorithms on colour pixel clustering. The original colour pixel values of the target colours yellow and light blue are relatively scattered in the a*b* chromaticity space. This makes it difficult to confine them within the pie-slice decision region. However, upon the application of colour adaptation, the cluster formation becomes more compact, particularly due to the confinement of some extremely scattered outliers.

In the colour segmentation process, the colour correction clearly improves colour segmentation accuracy by influencing the formation of the colour clusters. The clusters are located more stably in a pie-slice decision region of the a*b* chromaticity colour space. This can be observed by examining the curve of the average score in Fig. 6. The wider shape of the curve for the corrected image (top row) indicates a wider band in which the colour segmentation works well. Furthermore, it is visible that the colour segmentation works better on corrected images using the a*b* chromaticity (right column) than using the rg chromaticity colour space. Also, the tolerances of the colour segmentation scores stay smaller over a wider band when using a*b* chromaticities.

Fig. 6. Fitness scores of the matching performance using the reference classifier for an exposure time of 20 ms. Computation cases for the four panels as in Fig. 4.

V. CONCLUSION

The goal of this research was to improve colour discrimination for the identification of known coloured objects in the domain of robot soccer. The process employed can be split into two significant stages: adaptive colour correction and colour segmentation.

The colour correction uses the a priori knowledge of a set of re-occurring colours in conjunction with an initial
static characterisation of the system. This processing step serves to "feed" a colour normalised image representation to the subsequent segmentation process. As shown in this paper, the robustness of colour appearance under changing light conditions could be greatly enhanced. Future work in this field will need to examine conditions and cases in which the necessity of a priori knowledge can be reduced. Furthermore, the corrective computations can be sped up by merging the resulting adaptive colour correction transformation into the initially used ICC profile. This will result in a very efficient, one-step colour correction, which merely requires receiving updated ICC profiles as needed.

The colour correction clearly improves the colour segmentation accuracy by influencing the formation of the colour clusters, locating them more stably in a pie-slice decision region of the a*b* chromaticity colour space. The combined colour adaptation steps demonstrate a doubling of the acceptable exposure range for the examined setup, from under- to over-exposed images.
REFERENCES

[1] N. H. Reyes and P. E. Dadios, "Dynamic color object recognition using fuzzy logic," Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 8, pp. 29–38, 2004.
[2] H. Shin, "Finding near optimum colour classifiers: Genetic algorithm-assisted fuzzy colour contrast fusion using variable colour depth," Master's thesis, Massey University, 2009.
[3] M. D. Fairchild, "A Color Scientist Looks at Video," in Proceedings of the 3rd International Workshop on Video Processing and Quality Metrics (VPQM), Scottsdale, AZ, 2007.
[4] Specification ICC.1:2004-10 (Profile version 4.2.0.0) [ISO 15076-1:2005], International Color Consortium Std., Rev. 1:2003-09, with errata incorporated, 5/22/2006. [Online]. Available: http://color.org/icc_specs2.xalter
[5] P. Green, "White Paper #17: Using ICC profiles with digital camera images," International Color Consortium, Tech. Rep., April 2005. [Online]. Available: http://www.color.org/whitepapers.html
[6] ——, "White Paper #20: Digital photography color management basics," International Color Consortium, Tech. Rep., April 2005. [Online]. Available: http://www.color.org/whitepapers.html
[7] B. Chung and D. Sa-areddee, "CMS for Digital Photography, A Case Study," in Proceedings of the 9th Congress of the International Colour Association, Rochester, NY, USA, 2001.
[8] G. W. Gill, "ArgyllCMS project," http://argyllcms.com/, last accessed January 2009.
[9] M. Maria, "LittleCMS: A free color management engine in 100K," Large format printer division of Hewlett-Packard, Barcelona, Spain, Tech. Rep., December 2006, last accessed June 2009. [Online]. Available: http://www.littlecms.com/about.html
[10] G. K. Kloss, "Automatic C Library Wrapping – Ctypes from the Trenches," The Python Papers, vol. 3, no. 3, December 2008. [Online]. Available: http://ojs.pythonpapers.org/index.php/tpp/issue/view/10
[11] G. K. Kloss, N. H. Reyes, M. J. Johnson, and K. A. Hawick, "Gaining Colour Stability in Live Image Capturing," in Proceedings of the 10th International Conference on Control, Automation, Robotics and Vision (ICARCV 2008), Hanoi, Vietnam, December 2008.
[12] K. Barnard, "Practical Colour Constancy," Ph.D. dissertation, Simon Fraser University, School of Computing, 1999. [Online]. Available: http://kobus.ca/research/publications/PHD-99/
[13] M. Ebner, Color Constancy, ser. Imaging Science and Technology, M. A. Kriss, Ed. West Sussex, England: Wiley-IS&T, 2007.
[14] G. K. Kloss and T. F. Kloss, "n-Dimensional Linear Vector Field Regression with NumPy," The Python Papers Source Codes, vol. 1, August 2009, accepted for publishing.
[15] A. Ilie and G. Welch, "Ensuring color consistency across multiple cameras," in Proc. Tenth IEEE International Conference on Computer Vision (ICCV 2005), vol. 2, Oct. 2005, pp. 1268–1275.
[16] H. Shin and N. Reyes, "Variable colour depth look-up table based on fuzzy colour processing," in ICONIP 2008, Lecture Notes in Computer Science, 2009.