A Database of High Dynamic Range Visible and Near-infrared Multispectral Images

Manu Parmar,a Francisco Imai,b Sung Ho Park,b and Joyce Farrellc

a Electrical Engineering Department, Stanford University, CA 94305, USA; b Samsung Information Systems America, San Jose, CA 95134, USA; c Stanford Center for Image Systems Engineering, Stanford University, CA 94305, USA.

ABSTRACT

Simulation of the imaging pipeline is an important tool for the design and evaluation of imaging systems. One of the most important requirements for an accurate simulation tool is the availability of high quality source scenes. The dynamic range of images depends on multiple elements in the imaging pipeline including the sensor, digital signal processor, display device, etc. High dynamic range (HDR) scene spectral information is critical for an accurate analysis of the effect of these elements on the dynamic range of the displayed image. Also, typical digital imaging sensors are sensitive well beyond the visible range of wavelengths. Spectral information with support across the sensitivity range of the imaging sensor is required for the analysis and design of imaging pipeline elements that are affected by IR energy. Although HDR scene data with visible and infrared content is available from remote sensing resources, there is a scarcity of such imagery representing more conventional everyday scenes. In this paper, we address both these issues and present a method to generate a database of HDR images that represent radiance fields in the visible and near-IR range of the spectrum. The proposed method uses only conventional consumer-grade equipment and is very cost-effective.

1. INTRODUCTION

Software tools that allow simulation of the digital imaging pipeline are invaluable in the design and analysis of imaging systems and the evaluation of image quality. Some tools are available that permit simulation of all the elements of a typical imaging chain including the optics, sensor, image processing, color transforms, and display (e.g., the image systems evaluation toolkit (ISET)1). The dynamic range of an image depends on the dynamic ranges of multiple components of the imaging pipeline including the sensor, signal processor, ADC, display device, etc. For simulations that are consistent with real world values and that will allow for accurate quantitative comparisons of simulation results against real world data, it is critical to have scene data that is accurate and of a sufficiently high dynamic range. High dynamic range (HDR) scene data also enables the simulation and design of HDR rendering and tone-mapping algorithms.2

Another important feature of scene data that is typically neglected in the context of image quality evaluation is the presence of energy in the IR regions of the spectrum. Typical photodetectors are Si-based semiconductor devices, and the nature of the technology leads to sensors that are sensitive in both the visible and infrared regions of the spectrum. Figure 1 shows the spectral response of a typical photodetector. Note that the photodetector has significant response up to about 950 nm. This infrared energy is not visible to the human visual system (HVS) and causes problems with accurate color reproduction. Digital imaging systems designed for the consumer market typically use an IR-blocking filter or "hot mirror" to protect against the deleterious effects of IR energy on color accuracy. Scene spectral information with support across the sensitivity range of the imaging sensor is required for the analysis and design of imaging pipeline elements that are affected by IR energy.

Figure 1. Normalized response of a typical photodetector shown as a function of wavelength.

Send correspondence to Manu Parmar; E-mail: [email protected]
Digital Photography IV, edited by Jeffrey M. DiCarlo, Brian G. Rodricks, Proc. of SPIE-IS&T Electronic Imaging, SPIE Vol. 6817, 68170N © 2008 SPIE-IS&T · 0277-786X/08/$18

In this paper, we address both these issues (HDR and IR-range data) and present a method to generate a database of HDR images that represent radiance fields in the visible and near-IR

range of the spectrum. Many methods to generate multispectral scenes have been proposed in the literature.3–6 These methods can be broadly classified into two main categories. The first category includes methods that use tunable liquid crystal filters to capture narrow-band ranges of the spectrum. Tominaga and Okajima3 describe such a system that uses a birefringent filter that is tuned to pass 21 different narrow bands in the range 400–700 nm. These 21 images are then used to estimate both the reflectance and illuminant fields. The primary drawback of such methods is the high cost of equipment. The second category includes methods that use a number of optical filters in conjunction with a monochromatic camera to capture several spectral bands across the spectral range. These methods rely on linear models of reflectances to reconstruct spectra from a limited number of samples acquired with a monochromatic sensor and broadband optical filters. Imai et al.6 compare the spectral reconstruction performance of a trichromatic camera used in conjunction with several optical filters when the reconstruction is carried out in different color spaces. This second approach is attractive since it uses conventional imaging equipment and is very cost-effective. We propose a method to acquire multispectral images in the visible and infrared ranges using a consumer-grade digital SLR (Nikon D200) in conjunction with a few commonly available optical filters.

2. SPECTRAL RECONSTRUCTION AND LINEAR MODELS

Spectral radiances are energy distributions expressed as a function of wavelength. A radiance x(λ) is typically considered to have support in the range 400 ≤ λ ≤ 700 nm, which corresponds approximately to the visible range of wavelengths. Radiance functions may be discretized as finite-dimensional vectors, and an N may be found such that a vector x ∈ R^N models a radiance well.7 It is typical to sample the visible range of wavelengths every 10 nm such that N = 31. Further, it has been shown that most natural radiances are fundamentally lower-dimensional signals,8,9 in that they can be approximated in a k-dimensional subspace such that k ≪ N. A radiance vector may then be found as a linear combination of the k basis vectors of this subspace.10 In our application, we also include the near-infrared region and consider reflectances with support in the range 380 ≤ λ ≤ 1070 nm. We have collected a set of more than 1200 spectral measurements using a spectroradiometer (Photo Research∗ PR-715) that measures radiance spectra in the interval [380, 1068] nm at every 4 nm. This instrument gives us discretized measurements of scene radiances in the form of length-173 vectors. Spectral reflectance measurements are found by dividing the radiance measurements by the illuminant SPD (measured with the same instrument from a pressed Halon target). Let X be this set of reflectance measurements; X has measurements of various types of objects including standard color test charts (the GretagMacbeth ColorChecker rendition chart and the Colour Test Chart for Scanners (S:IEC 61966-9), model TE221, manufactured by Image Engineering†), plastic objects, fabrics, paint swatches, vegetation, fruits, common illuminants, skin, etc. We assume that X is representative of all reflectances that we are interested in measuring. We have elected to use a three-color digital camera in conjunction with a set of optical filters to measure scene reflectances.
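The reflectance computation described above (dividing each radiance measurement by the illuminant SPD measured from the Halon target) can be sketched as follows; the function name and the zero-guard are illustrative, while the 173-sample grid follows the PR-715 interval quoted in the text.

```python
import numpy as np

# PR-715 sampling grid: 380-1068 nm in 4 nm steps -> 173 samples.
WAVELENGTHS = np.arange(380, 1069, 4)
assert WAVELENGTHS.size == 173

def reflectance_from_radiance(radiance, illuminant_spd, eps=1e-12):
    """Estimate spectral reflectance by dividing a measured radiance
    spectrum by the illuminant SPD (itself measured with the same
    instrument from a pressed Halon target)."""
    radiance = np.asarray(radiance, dtype=float)
    illuminant_spd = np.asarray(illuminant_spd, dtype=float)
    # Guard against division by zero at wavelengths with no illuminant power.
    return radiance / np.maximum(illuminant_spd, eps)
```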
The digital camera used for our experiments is a Nikon D200 digital SLR camera that has been modified by removing its IR-blocking filter. The image sensor used in this camera is a 10.2-megapixel CCD overlaid with a color filter array (CFA) arranged in the Bayer pattern in an RGGB configuration. The experimental camera measures three distinct wavelength ranges. The sensor array is overlaid with color filters that are sensitive in the red, blue, and green ranges as well as in the IR band. The signal acquired at the three channels at a particular pixel, c_i, i = R, G, B, may be modeled as

c_i = ∫_{λ1}^{λ2} s_i(λ) x(λ) dλ + η_i,  i = R, G, B,  (1)

where s_i(λ) is the sensitivity of the ith color channel as a function of wavelength; s_i(λ) takes into account both the sensitivity of the detector and the efficiency of the color filter. x(λ) is the irradiance at the pixel, and η_i is the measurement noise. The camera is sensitive in the wavelength range (λ1, λ2). We have found experimentally that λ1 ≈ 380 nm and λ2 ≈ 1100 nm. The intensity measured at each color channel is then the inner product

Photo Research Inc., 9731 Topanga Canyon Place, Chatsworth, CA-91311, USA Image Engineering Dietmar W¨ uller, Augustinusstrasse 9d, 50226 Frechen, Germany


s_i(λ), x(λ), and the signal acquired by the camera for a particular pixel with reflectance x(λ) is the projection of x(λ) onto the space spanned by s_i(λ), i = R, G, B. In discrete form, the signal acquired at each pixel is described by the matrix-vector equation

c = S^T x,  (2)

where {·}^T denotes matrix transpose. The spectrum is sampled N times; x ∈ R^N is the sampled radiance spectrum of the measured pixel, the columns of S ∈ R^{N×3} are the vectors s_i ∈ R^N that contain samples of the sensitivity functions, and c ∈ R^3 has the measured camera-RGB tristimulus values. Camera-RGB space is then the subspace of R^N spanned by the columns of S. Since we have assumed that X contains all possible scene radiances of interest, radiances may only be recovered perfectly from camera-RGB data if span(X) ⊆ span(S). It is of course unreasonable to assume that all radiances in X reside in the subspace span(S). We aim to improve the capabilities of the camera by augmenting camera-RGB space using optical filters. Instead of one acquisition that yields 3 color measurements, we make multiple acquisitions with k different optical filters attached to the camera to yield a total of 3k color measurements at each pixel. Let f_i ∈ R^N, i = 1, ..., k, be the transmittance functions of the k different optical filters. The augmented camera space is then given by span(S_F), where

S_F = [F_1 S  F_2 S  ···  F_k S],  (3)

where S_F ∈ R^{N×3k} and F_i = diag(f_i), i = 1, ..., k. The signal acquired at each pixel due to a radiance x is now the augmented color vector

c_a = S_F^T x.  (4)

The condition for accurate estimation of spectra from color measurements c_a ∈ R^{3k} is that span(X) ⊆ span(S_F).
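The construction of the augmented matrix S_F in (3) and the augmented color vector in (4) is a simple block operation. A sketch follows, with random curves standing in for the measured sensitivities and transmittances (which are not reproduced here):

```python
import numpy as np

def augmented_sensitivity(S, filter_transmittances):
    """Build S_F = [F_1 S  F_2 S  ...  F_k S], where F_i = diag(f_i).
    S is the N x 3 camera sensitivity matrix and each f_i is a length-N
    filter transmittance vector."""
    blocks = [np.diag(np.asarray(f, dtype=float)) @ S
              for f in filter_transmittances]
    return np.hstack(blocks)  # shape N x 3k

# Illustrative only: random stand-ins for the measured curves.
rng = np.random.default_rng(0)
N = 173
S = rng.uniform(0.0, 1.0, size=(N, 3))
filters = [rng.uniform(0.0, 1.0, size=N) for _ in range(3)]
SF = augmented_sensitivity(S, filters)
# The augmented color vector for a spectrum x is then c_a = SF.T @ x.
```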

It is helpful to observe the effective dimensionality of X. For the elements of X, the eigenvectors of the correlation matrix R_xx corresponding to the largest eigenvalues are the principal components that may be used to approximate any element of X. Eigenvectors associated with the trailing eigenvalues correspond to the directions along which the elements of X exhibit the least variance; this information is redundant and may be discarded. Let P be the matrix whose columns are the eigenvectors of R_xx ordered by decreasing eigenvalue. The first four principal components of X found using this procedure are shown in Fig. 2(a). For the elements of X, the energy associated with each direction is related to the corresponding eigenvalue. Figure 2(b) shows the cumulative energy as a function of the number of principal components. It is clear that most of the energy of X is concentrated along a limited number of directions (> 99% in 6 components and > 99.9% in 8 components). We choose to retain only the first K columns of P; let the matrix thus formed be denoted by P_K. An approximation of an element of X may be obtained as x̂ = P_K y. Figure 2(c) shows three reflectances from X and the corresponding estimates of reflectances obtained by using K = 8. A very close correspondence between original and estimated reflectance is seen in the plots. In fact, for a normalized error measure defined as

Δ_X = 10 log10( E{||x − x̂||²} / E{||x||²} ),  (5)

where x is an element of our data set and x̂ is its approximation obtained with P_K, the estimation errors with K = 6 and K = 8 are -38.4934 dB and -60.7310 dB respectively. These experiments establish that the space spanned by typical reflectances may be very well approximated by a lower-dimensional subspace. The feasibility of camera-RGB measurements for accurately estimating reflectances may thus be reduced to the condition that span(P_K) ⊆ span(S), and we may proceed with the task of deriving the K directions that yield the best approximation to the elements of X.
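This eigen-analysis and the error measure of (5) can be sketched as follows, using synthetic spectra that live mostly in an 8-dimensional subspace as a stand-in for the measured set X; all names are illustrative.

```python
import numpy as np

def pca_basis(X):
    """Eigenvectors of the correlation matrix R_xx of a data set X
    (rows are spectra), ordered by decreasing eigenvalue."""
    Rxx = X.T @ X / X.shape[0]
    eigvals, eigvecs = np.linalg.eigh(Rxx)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order], eigvals[order]

def reconstruction_error_db(X, P, K):
    """Normalized error of Eq. (5): 10 log10(E||x - x_hat||^2 / E||x||^2),
    with x_hat the projection onto the first K principal components."""
    P_K = P[:, :K]
    X_hat = X @ P_K @ P_K.T
    num = np.mean(np.sum((X - X_hat) ** 2, axis=1))
    den = np.mean(np.sum(X ** 2, axis=1))
    return 10.0 * np.log10(num / den)

# Synthetic stand-in for X: 1200 spectra in an (almost) 8-dim subspace.
rng = np.random.default_rng(1)
basis = rng.normal(size=(173, 8))
X = rng.normal(size=(1200, 8)) @ basis.T + 1e-4 * rng.normal(size=(1200, 173))
P, eigvals = pca_basis(X)
```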

3. SPECTRAL MEASUREMENTS WITH A TRICHROMATIC CAMERA

To be able to accurately measure reflectances from X, we need to find the set of k optical filters that yields an S_F which is, in some sense, at a minimum distance from X. In this section we define a measure of goodness for such a set of optical filters and find the best set from the optical filters available to us.


Figure 2. (a) The first four principal vectors found using a set of spectral data obtained by measuring common objects with an IR-enabled spectroradiometer. (b) Cumulative energy (sum of eigenvalues of R_xx, as a percentage) as a function of the number of principal vectors. (c) Measured (solid lines) and estimated (dashed lines) reflectance factors using the first 8 principal components.

For a large number of camera-RGB measurements, the expression in (2) allows the formulation of three overdetermined systems of equations of the form

c_i = M s_i,  i = R, G, B,  (6)

that independently relate the three color measurements of m stimuli at each channel to the corresponding reflectance spectra. M ∈ R^{m×173} is formed by stacking x_k^T, k = 1, 2, ..., m, the reflectance spectra corresponding to the single-color measurements in c_i ∈ R^m. Estimates of s_i can be obtained as

ŝ_i = arg min_{s_i} ||c_i − M s_i||.  (7)

The systems of equations in (7) are ill-posed and cannot be reliably solved as least-squares problems. A number of methods have been proposed in the literature to recover device sensitivities.11–13 We use the standard Tikhonov regularization14 solution to (7) by introducing a constraint due to prior information, viz., the smoothness of the quantum efficiency functions. The three efficiency functions are obtained as

ŝ_i = arg min_{s_i} ||c_i − M s_i||_2^2 + α_i ||L s_i||_2^2,  (8)

Figure 3. (a) Spectra obtained with an Oriel monochromator that are used as stimuli to characterize the experimental camera. The ordinate shows radiance in Watts Sr^-1 m^-2. (b) The color filter sensitivity functions of the experimental camera.

where L is the Laplacian operator that provides a penalty on the roughness of s_i, and the α_i are regularization parameters chosen using generalized cross-validation (GCV).14 Figure 3(a) shows the stimuli that compose M. These particular spectra are obtained by using an Oriel‡ monochromator in conjunction with an enhanced-range Tungsten lamp that effectively covers the wavelength interval of interest. Also, stimuli from a monochromator are useful since the columns of M formed with monochromator spectra will be linearly independent and a solution to (8) will be stable. Sensitivity functions for the camera RGB channels obtained by solving (8) are shown in Fig. 3(b).
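The Tikhonov step in (8) can be sketched as below, assuming a second-difference matrix for the Laplacian operator L and a fixed regularization weight in place of the GCV selection described in the text:

```python
import numpy as np

def estimate_sensitivity(M, c, alpha):
    """Regularized channel-sensitivity estimate of Eq. (8):
    s_hat = argmin ||c - M s||^2 + alpha ||L s||^2,
    solved via the normal equations (M^T M + alpha L^T L) s = M^T c."""
    n = M.shape[1]
    # Second-difference (Laplacian) operator: rows of the form [1, -2, 1].
    L = np.diff(np.eye(n), n=2, axis=0)
    A = M.T @ M + alpha * (L.T @ L)
    return np.linalg.solve(A, M.T @ c)
```

In the paper the α_i are chosen by GCV; a simple alternative is a small grid search over α that minimizes the residual on held-out stimuli.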

LOT-Oriel GmbH & Co. KG, Im Tiefen See 58, D-64293 Darmstadt, Germany


3.1 Optimal optical filters

We propose to use multiple exposures with k different optical filters to get a set of color measurements that may be used to reconstruct an incident radiance spectrum. Let c_a ∈ R^{3k} be a vector of color signals obtained at the color channels due to a spectrum x and let S_F be the augmented color filter matrix. The color signal is then given by

c_a = S_F^T Δ x ≈ S_F^T Δ P a,  (9)

where P ∈ R^{173×8} has the first 8 principal vectors of X and a contains the coefficients that describe the spectrum x in terms of the bases in P. The diagonal matrix Δ has the sampled spectral distribution of the illuminant along its diagonal. A closed-form solution for the coefficients of the basis functions can be found as

â = arg min_a ||c_a − S_F^T Δ P a||
  = (P^T Δ^T S_F S_F^T Δ P)^{-1} P^T Δ^T S_F c_a
  = Q c_a,  (10)

where Q = (P^T Δ^T S_F S_F^T Δ P)^{-1} P^T Δ^T S_F. Let x be a spectrum in X and let x̂ be its estimate obtained as a linear combination of the basis functions in P. An error measure may be formed as

e = ||x − x̂|| = ||x − P Q S_F^T Δ x||.  (11)

An objective criterion is formed as the expected value of the squared error measure in (11), and is given by

Φ = E{||x − P Q S_F^T Δ x||²}
  = E{x^T (I − P Q S_F^T Δ)^T (I − P Q S_F^T Δ) x}
  = trace( R_xx (I − P Q S_F^T Δ)^T (I − P Q S_F^T Δ) ).  (12)

We have available a set of optical filters with differing transmittance functions (Fig. 4(a)). Optimal filters from this set that are best able to recover typical reflectances are found as

Ŝ_F = arg min_{S_F} Φ,  (13)

by a direct search of the solution space. We used a Tungsten illuminant for our experiments and will acquire multispectral scenes under the same illuminant. The optical filters are thus optimized for this particular illuminant. Optimal filters found using (13) are shown in Fig. 4(b). These filters are used to acquire three exposures of a scene. A fourth image of the scene is acquired with no optical filters. These four exposures yield 12 intensities corresponding to the 12 color channels associated with the optical filter/camera color filter combinations at each spatial location. These data correspond to the vector c_a in (9) and may be used in (10) to estimate the coefficients â that approximate spectral reflectance. Figure 5 shows measured and estimated reflectances of four different objects.
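The direct search of (13) can be sketched as an exhaustive enumeration of filter triples. All quantities below are random stand-ins for the measured curves (the helper names are illustrative), and a no-filter exposure can be modeled as an all-ones transmittance:

```python
import numpy as np
from itertools import combinations

def phi(SF, P, Rxx, D):
    """Expected squared reconstruction error of Eq. (12) for an augmented
    sensitivity matrix SF, reflectance basis P, correlation matrix Rxx,
    and diagonal illuminant matrix D."""
    A = SF.T @ D @ P                   # maps basis coefficients to c_a
    Q = np.linalg.solve(A.T @ A, A.T)  # least-squares step of Eq. (10)
    E = np.eye(P.shape[0]) - P @ Q @ SF.T @ D
    return np.trace(Rxx @ E.T @ E)

def best_filter_triple(S, filters, P, Rxx, D):
    """Direct (exhaustive) search over all 3-filter subsets, as in Eq. (13)."""
    best_idx, best_val = None, np.inf
    for idx in combinations(range(len(filters)), 3):
        SF = np.hstack([np.diag(filters[i]) @ S for i in idx])
        val = phi(SF, P, Rxx, D)
        if val < best_val:
            best_idx, best_val = idx, val
    return best_idx, best_val

# Illustrative random stand-ins for the measured quantities.
rng = np.random.default_rng(3)
N, K = 173, 8
S = rng.uniform(0.0, 1.0, size=(N, 3))          # camera sensitivities
filters = [np.ones(N)] + [rng.uniform(0.0, 1.0, size=N) for _ in range(6)]
P = np.linalg.qr(rng.normal(size=(N, K)))[0]    # orthonormal basis
samples = rng.normal(size=(500, K)) @ P.T + 0.05 * rng.normal(size=(500, N))
Rxx = samples.T @ samples / 500                 # correlation of "reflectances"
D = np.diag(rng.uniform(0.5, 1.0, size=N))      # illuminant stand-in
idx, val = best_filter_triple(S, filters, P, Rxx, D)
```

With a handful of candidate filters the search space is tiny (C(7, 3) = 35 triples here), so exhaustive enumeration is entirely practical.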


Figure 4. (a) Transmittances of all available filters, including Tiffen, Hoya, Nikon, and X-Nite optical filters. (b) The set of filters that allows for optimal recovery of reflectances. Note that the ordinates here are transmittance factors.

Figure 5. (a)-(d) Measured reflectance factors shown with the corresponding reflectance factors estimated using x̂ = Pâ. Solid lines represent original reflectance factors and dashed lines represent estimated values. Note that the ordinates are shown in different scales.

4. EXPERIMENTS

For our experiments, we have used a Nikon D200 digital SLR modified by removing its IR filter. The camera was characterized as described in Section 3. Using the procedure detailed in Section 3.1, it was determined that 3 optical filters (Tiffen Red 29, Tiffen 80A, and X-Nite 830) and an exposure without any optical filters yield color bands that can be used to reconstruct reflectances. Care was taken to use only digital values that were unsaturated and not affected by noise. The Nikon D200 camera makes available camera RAW images that have not been processed for color balancing, gamma, etc. Fig. 6(b) shows the response of the camera to a constant illumination at different exposure times. The linear range of the camera sensor was determined experimentally; only pixel values in the interval [200, 3900] were used. The dynamic range of most natural scenes is of the order of 5000:1 when specularities are excluded.15 It has been shown that a capture system based on a 12-bit ADC that uses multiple exposure durations to bracket the range can acquire almost all of the information in most natural scenes.16,17 To estimate the intensity recorded by each pixel, we used the last sample before saturation (LSBS) algorithm.15 A single high dynamic range image was created from this estimate.

Another important issue that was revealed in our experiments was the variability in the transmittance of common lenses with change in aperture size. We found for the Nikon 50mm f/1.8D AF Nikkor Lens and the Tamron SP Autofocus 90mm f/2.8 DI 1:1 Macro Lens that there is significant variability in the transmittance between the visible and IR regions of the spectrum for larger f-stops. Figure 6(a) shows the transmittances at various f/#s for the Nikkor 50mm lens. For f-stops greater than f/8, there is a significant drop in transmittance in the interval [675, 725] nm. This is an important interval for organic objects that have a reflectance heavily influenced by


Figure 6. (a) The transmittance factors of the Nikkor 50 mm f/1.8 prime lens at various f/#s. (b) The response of the sensor to a constant illumination at various exposure durations.

the absorption of water. Although lower f-stops offer fairly flat transmittances, the larger apertures lead to a very small depth of field. For our experiments we used a Nikkor 50mm prime lens set at an f-stop of f/8, which offers a compromise between good transmittance characteristics and depth of field.

In previous sections we have established that spectral reflectances can be well approximated using linear reconstruction from color measurements obtained with optical filters that are optimal in the MMSE sense. The linear system can be modeled as

x = T c,  (14)

where c ∈ R^12 are the color measurements and T is a matrix that describes the cumulative effect of the color filters, the optical filters, the lens transmittance, and the illuminant; x ∈ R^173 is the associated reflectance spectrum. We used a calibration target with known reflectances and obtained the corresponding color measurements. The calibration target shown contained the Esser test chart (S: IEC 61966-9, model TE221) and a number of other objects of interest including plastic blocks, crayons, etc. The system matrix T was derived as

T̂ = arg min_T ||X_c − T C_c||,  (15)

where the columns of C_c were color measurements obtained for the reflectance spectra in the corresponding columns of X_c. Only reflectance measurements of the color patches in the Esser color chart were used for deriving T. The reflectance of each pixel in a multispectral scene was then determined from its color measurements c as

x̂ = T̂ c.  (16)
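The calibration in (15) and the per-pixel reconstruction in (16) amount to one least-squares fit followed by a matrix multiply. A minimal numpy sketch with synthetic data (the function names are illustrative):

```python
import numpy as np

def fit_system_matrix(Xc, Cc):
    """Solve T_hat = argmin_T ||Xc - T Cc|| in the least-squares sense.
    Xc: (173, m) known reflectance spectra of the calibration patches;
    Cc: (12, m) corresponding 12-channel color measurements."""
    # T Cc ~ Xc  <=>  Cc^T T^T ~ Xc^T, a standard lstsq problem.
    Tt, *_ = np.linalg.lstsq(Cc.T, Xc.T, rcond=None)
    return Tt.T  # shape (173, 12)

def reconstruct_spectrum(T, c):
    """Per-pixel reflectance estimate x_hat = T c, as in Eq. (16)."""
    return T @ c
```

Because T is fit once and then applied to every pixel, reconstruction of a full multispectral frame is a single matrix product over the stacked per-pixel measurements.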

Figure 7. sRGB representation of the calibration target.

Figure 7 shows the sRGB representation of the calibration target derived from the spectral image reconstructed by using T. Figure 8 shows the results obtained for a scene with an MCC test chart and a few vegetables. Spectral image bands corresponding to a number of wavelengths in the visible and infrared ranges are shown with an sRGB


Figure 8. Scaled images showing image bands for a scene containing a Macbeth color test chart and a few vegetables. From top left, the bands are shown at every 40 nm, from 380 nm to 1020 nm. The final image (bottom right) is the sRGB representation of the scene.

representation obtained for the multispectral scene. Figure 9 shows the measured and estimated reflectance functions for the color patches of the MCC seen in the scene from Fig. 8. The mean percentage RMS error for the MCC patches in this scene was found to be 6.77%; the maximum was 23.48% for patch number 24 and the minimum was 2.44% for patch number 16. Figure 10 shows sRGB representations of two other scenes obtained using the proposed method.
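The LSBS exposure-stack assembly used in these experiments can be sketched as below; the linear-range bounds [200, 3900] come from the text, while the function name and stack layout are illustrative assumptions:

```python
import numpy as np

def lsbs_hdr(exposures, durations, low=200, high=3900):
    """Last-sample-before-saturation (LSBS) assembly of a single HDR
    frame from an exposure stack. For each pixel, keep the longest
    exposure whose RAW value stays inside the linear range [low, high]
    and normalize it by the exposure duration. Pixels with no valid
    sample in any exposure are left at zero."""
    stack = [np.asarray(e, dtype=float) for e in exposures]
    radiance = np.zeros(stack[0].shape)
    filled = np.zeros(stack[0].shape, dtype=bool)
    # Walk from longest to shortest exposure; the longest unsaturated
    # (and above-noise-floor) sample wins at each pixel.
    for frame, t in sorted(zip(stack, durations), key=lambda p: -p[1]):
        ok = (frame >= low) & (frame <= high) & ~filled
        radiance[ok] = frame[ok] / t
        filled[ok] = True
    return radiance
```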

5. CONCLUSIONS

In this paper we present a method to acquire multispectral images with two features: the multispectral images have information about both the visible and near-infrared regions of the spectrum, and they are obtained as high dynamic range scene data. The proposed method uses only conventional consumer-grade equipment and is very cost-effective.

ACKNOWLEDGMENTS

This research was funded by the Samsung Advanced Institute of Technology (SAIT). The authors would like to thank Ms. Wonhee Choe, Dr. SeongDeok Lee, and Dr. ChangYeong Kim for their advice and support for this research.

REFERENCES

1. J. E. Farrell, F. Xiao, P. B. Catrysse, and B. A. Wandell, "A simulation tool for evaluating digital camera image quality," in Proceedings of IS&T/SPIE's Electronic Imaging 2004: Image Quality and System Performance, 5294, pp. 124–131, Jan. 2004.
2. E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging: Acquisition, Display and Image-Based Lighting, Morgan Kaufmann Publishers, Dec. 2005.
3. S. Tominaga, "Multichannel vision system for estimating surface and illumination functions," J. Opt. Soc. Am. A 13, pp. 2163–2173, 1996.


Figure 9. Measured (solid) and estimated (dashed) reflectances for the MCC patches from the scene in Fig. 8. MCC patches are numbered in order from left to right and top to bottom. Ordinates are reflectance factors, while abscissae show wavelength in nm.

Figure 10. sRGB representations of two multispectral scenes acquired using the proposed method: (a) vegetables; (b) fruits.

4. S. Tominaga and R. Okajima, "A spectral-imaging system and algorithms for recovering spectral functions," in 4th IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 278–282, 2000.
5. P. J. Miller and C. C. Hoyt, "Multispectral imaging with a liquid crystal tunable filter," in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference 2345, pp. 354–365, Jan. 1995.
6. F. Imai, R. Berns, and D. Tzeng, "A comparative analysis of spectral reflectance estimated in various spaces using a trichromatic camera system," J. Imaging Sci. Tech. 44, pp. 280–287, 2000.
7. B. A. Wandell, Foundations of Vision, Sinauer Associates, Inc., 1995.
8. D. H. Marimont and B. A. Wandell, "Linear models of surface and illuminant spectra," J. Opt. Soc. Am. A 9(11), pp. 1905–1913, 1992.
9. C. Chiao, T. Cronin, and D. Osorio, "Color signals in natural scenes: characteristics of reflectance spectra and effects of natural illuminants," J. Opt. Soc. Am. A 17, pp. 218–224, 2000.
10. M. J. Vrhel, R. Gershon, and L. S. Iwan, "Measurement and analysis of object reflectance spectra," Color Research and Application 19, pp. 4–9, Feb. 1994.
11. G. Sharma and H. J. Trussell, "Characterization of scanner sensitivity," in IS&T and SID's Color Imaging Conference: Transforms and Transportability of Color, pp. 103–107, 1993.
12. G. Sharma and H. J. Trussell, "Set theoretic estimation in color scanner characterization," Journal of Electronic Imaging 5(4), pp. 479–489, 1996.
13. G. Finlayson, S. Hordley, and P. M. Hubel, "Recovering device sensitivities with quadratic programming," in IS&T/SID Sixth Color Imaging Conference: Color Science, Systems and Applications, pp. 90–95, 1998.
14. P. C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1998.
15. F. Xiao, A System Study of High Dynamic Range Imaging, PhD thesis, Stanford University, Stanford, California, 2006.
16. P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," Computer Graphics 31 (Annual Conference Series), pp. 369–378, 1997.
17. D. X. Yang, A. El Gamal, B. Fowler, and H. Tan, "A 640 × 512 CMOS image sensor with ultra wide dynamic range floating-point pixel-level ADC," IEEE Journal of Solid-State Circuits (12), pp. 1821–1834, 1999.
