A New Convolution Kernel for Atmospheric Point Spread Function Applied to Computer Vision

S. Metari, Université de Sherbrooke, Sherbrooke, Québec, Canada ([email protected])
F. Deschênes, Université du Québec en Outaouais, Gatineau, Québec, Canada ([email protected])
Abstract
In this paper we introduce a new filter to approximate the multiple scattering of light rays within a participating medium. This filter is derived from the generalized Gaussian distribution (GGD). It characterizes the Atmospheric Point Spread Function (APSF) and thus makes it possible to introduce three new approaches. First, it allows us to accurately simulate various weather conditions that induce multiple scattering, including fog, haze, rain, etc. Second, it allows us to propose a new method for a cooperative and simultaneous estimation of visual cues, i.e., the identification of weather degradations and the estimation of the optical thickness between two images of the same scene acquired under unknown weather conditions. Third, by combining this filter with two new sets of invariant features we recently developed, we obtain invariant features that can be used for the matching of atmospherically degraded images. The first set leads to atmospheric invariant features while the second one simultaneously provides atmospheric and geometric invariance.

1. Introduction

The medium in which light rays travel influences both the level of contrast and the colors, because light rays that reach the imaging system have been scattered by particles suspended in the diffusing medium [13]. The majority of existing computer vision systems neglect light scattering effects on images. They assume that luminous rays reflected by an object of the scene travel through the medium without any attenuation or change of direction. Obviously, this may lead to erroneous results in practice. Only a few works are devoted to modeling vision through participating media and to inherent techniques such as 3D structure estimation from weather degraded images [3, 14, 9], contrast restoration of atmospherically degraded images [15], removal of degradation effects in underwater vision [17], etc. Such techniques could be used in several domains such as transportation, surveillance, teleoperation, etc.

Existing works can be classified into two broad families: those based on single scattering of light [3, 9, 10, 14, 15] and those based on multiple light scattering [16]. The concept of single scattering means that scattered light cannot be dispersed again towards the imaging system, so only the directly transmitted light is taken into account. This assumption is valid in the cases of light fog and rain, for instance. In the case of dense fog, the single light scattering model is not adequate [11]. For this reason, it is necessary to take into account the multiple light scattering phenomenon, which takes place when incident light rays are scattered several times and reach the imaging system from several directions [13]. In this paper, we present a new way of modeling the APSF in the case of multiple scattering of luminous rays. Our approach introduces a new filter derived from the generalized Gaussian distribution and is inspired by the experiments reported in [16]. The proposed kernel allows us to introduce three new techniques. The first one allows a realistic simulation of different weather conditions on images. The second one is a cooperative and simultaneous estimation of atmospheric parameters. Finally, this filter, when used in conjunction with invariant features we recently developed [12], allows the matching of atmospherically degraded images. Experimental results confirm both the accuracy of our multiple scattering filter and its usefulness for computer vision. Moreover, experiments using the invariant features show their efficiency in the matching of atmospherically degraded images.
In the next section, we briefly review the multiple light scattering phenomenon and give a synopsis of the literature. In Section 3, we introduce a new way of modeling the APSF in the case of multiple light scattering and present three approaches derived from it. Finally, experimental results are shown in Section 4.
978-1-4244-1631-8/07/$25.00 ©2007 IEEE
2. Multiple light scattering

Contrary to single light scattering, multiple scattering takes into account both the portion of light that is directly transmitted towards the imaging system and the portion of scattered light that can be dispersed again towards the observer by other particles suspended in the medium (cf. Figure 1). Multiple scattering of light is governed by three parameters: the forward scattering parameter q, the phase function of the particles, and the optical thickness T of the atmosphere.
Forward scattering parameter: The forward scattering parameter q is inversely proportional to the density of the medium. As will be shown in what follows, it varies between 0 and 1, generating phase functions for the majority of weather conditions [13]. It hence allows the identification of most atmospheric conditions, as shown in Table 1.

q:         0.0-0.2 | 0.2-0.7  | 0.7-0.8 | 0.8-0.85 | 0.85-0.9 | 0.9-1.0
Condition: air     | aerosols | haze    | mist     | fog      | rain

Table 1. Correspondence between the forward scattering parameter q and atmospheric condition types.

Phase function: The phase function characterizes the angular distribution of the light scattered by the particles suspended in the medium. There are several forms of the phase function, according to both the shape and the size of the particles [7]. The most common form is the Henyey-Greenstein phase function, which is given by the following formula [8]:

P(cos α) = (1 − q²) / (1 − 2q cos α + q²)^(3/2),   (1)

where α is the angle between a given pair of incident and scattered light rays. It has been shown that the Henyey-Greenstein phase function is valid for various particle types [8] and so for different media.

Optical thickness: The optical thickness T is a dimensionless quantity which indicates the amount of depletion that a beam of radiation undergoes as it passes through a layer of the atmosphere. It is given by [13]:

T = ηz,   (2)

where z is the distance from the viewer and η is the global extinction coefficient of the atmosphere, which is related to the atmospheric visibility V by [13]:

η = 3.912 / V.   (3)

Figure 1. Multiple light scattering phenomenon. Light rays may be scattered several times by several particles suspended in the air. A portion of those scattered light rays reaches the imaging system.

The diffusion of light in participating media is often described by radiative transfer theory [2]. In the case of a spherically symmetric atmosphere, the Radiative Transfer Equation (RTE), which describes the change in flux through an infinitesimal volume, is given by [2]:

µ ∂I(T, µ)/∂T + ((1 − µ²)/T) ∂I(T, µ)/∂µ = −I(T, µ) + (1/4π) ∫₀^{2π} ∫₋₁^{+1} P(cos α) I(T, µ′) dµ′ dφ′,   (4)
where P(cos α) is the phase function of the particles suspended in the atmosphere, cos α is the cosine of the angle between the incident light in the direction (θ′, φ′) and the scattered light in the direction (θ, φ), µ = cos θ and µ′ = cos θ′. As far as we know, the work of Narasimhan and Nayar [16] is the first and the only one to exploit the multiple light scattering phenomenon in the field of computer vision. Specifically, they suggest approximating the solution of equation (4) by:

I(T, µ) = Σ_{m=0}^{∞} (g_m(T) + g_{m+1}(T)) L_m(µ),   (5)

where I(T, µ) is the intensity of an isotropic point source for a given radial direction θ, L_m(µ) is the Legendre polynomial of order m, g_m(T) = I₀ e^{−β_m T − α_m log T}, β_m = ((2m + 1)/m)(1 − q^{m−1}), α_m = m + 1 and g₀(T) = 0. Using equation (5) for different values of the angle θ, the resulting I(T, µ) corresponds to the APSF. It can thus be used to establish a relationship between the ideal image I₀ and the observed image I_R [16]:

I_R(x, y) = (I₀ ∗ APSF)(x, y),   (6)

where ∗ is the convolution operator.

Figure 2. APSF cross-sections normalized to [0, 1] using equation (5). (a) In the case of a mild atmosphere (T = 1.2), different values of the forward parameter q generate various forms of the APSF which model different atmospheric conditions. (b) In the case of a strongly dense atmosphere (T = 4), the different weather conditions generate similar (wide) APSF shapes. (Image taken from [16].)

Figure 2 shows various cross-section forms of the APSF obtained using equation (5) with various parameter settings. These forms change according to two parameters: the optical thickness T and the forward scattering parameter q. The authors mention that the suggested solution (equation (5)) does not, however, converge for all values of the optical thickness T [16]. Specifically, T must be greater than 1. Some real situations are thus not taken into account. In what follows, we propose a new APSF model which is valid for any value of T. As will be shown, this new model allows us to introduce new ways of dealing with atmospherically degraded images.
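For concreteness, the truncated series of equation (5) can be evaluated numerically. The sketch below assumes the definitions above (g_m, β_m, α_m, with g₀(T) = 0) and a hypothetical truncation level n_terms; it is an illustration of the series, not the authors' implementation.

```python
import numpy as np

def apsf_series(T, q, mu, I0=1.0, n_terms=40):
    """Truncated series approximation of the APSF of equation (5):
    I(T, mu) = sum_m (g_m(T) + g_{m+1}(T)) L_m(mu), valid for T > 1."""
    mu = np.asarray(mu, dtype=float)

    def g(m):
        if m == 0:
            return 0.0  # g_0(T) = 0 by convention
        beta = (2 * m + 1) / m * (1 - q ** (m - 1))
        alpha = m + 1
        return I0 * np.exp(-beta * T - alpha * np.log(T))

    out = np.zeros_like(mu)
    for m in range(n_terms):
        # Legendre polynomial L_m(mu): coefficient vector with a 1 at index m
        coeffs = np.zeros(m + 1)
        coeffs[m] = 1.0
        out += (g(m) + g(m + 1)) * np.polynomial.legendre.legval(mu, coeffs)
    return out
```

Since all series coefficients are positive and L_m(1) = 1 is the maximum of each Legendre polynomial on [−1, 1], the profile peaks in the forward direction (µ = 1), as expected of a glow.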
3. A new APSF kernel for multiple scattering

As shown in the previous section, the APSF may take various shapes (cf. Figure 2). A careful look at those shapes suggested to us that the APSF could be approximated by the generalized Gaussian distribution (GGD) [4]:

GGD(x; µ, σ, p) = e^{−|(x − µ)/A(p, σ)|^p} / (2Γ(1 + 1/p) A(p, σ)), x ∈ ℝ,   (7)

where Γ(·) is the Gamma function, i.e., Γ(z) = ∫₀^∞ e^{−t} t^{z−1} dt, z > 0. The parameter µ is the mean and p ∈ ℝ₊* is the shape parameter, which is inversely proportional to the decreasing rate of the peak. The scale parameter A(p, σ) is equal to [σ² Γ(1/p)/Γ(3/p)]^{1/2}. Figure 3 shows various shapes that can be obtained by varying the parameters p and σ of the GGD. By comparing this figure to Figure 2, it is clear that the GGD may produce shapes that are similar to those of the APSF. In two dimensions, the generalized Gaussian distribution is given by:

GGD(x, y; σ, p) = e^{−(|x − µx|^p + |y − µy|^p) / |A(p, σ)|^p} / (4Γ²(1 + 1/p) A(p, σ)²), x, y ∈ ℝ.   (8)

Figure 3. Sample plots of the generalized Gaussian distribution. Its form changes according to two parameters: σ and p.

Let us now establish a relationship between the GGD parameters, σ and p, and the atmospheric parameters, i.e., the optical thickness T and the forward scattering parameter q. According to Figure 2, we can notice that the optical thickness T determines the peak shape of the APSF. For instance, if T = 1.2 the peak of the APSF is narrow, while it is broad for T = 4. In a similar manner, the parameter p determines the peak shape of the GGD. We thus propose to assume that p is proportional to T, that is:

p = kT, k ∈ ℝ₊*.   (9)

As for σ, by comparing Figure 2.a to Figure 3.a, we can note that the APSF obtained when q ∼ 1 (respectively q ∼ 0) is similar to the GGD produced when σ ∼ 0 (respectively σ sufficiently large). Based on those observations, we propose to link σ to the forward scattering parameter q using the following relation:

σ = (1 − q)/q.   (10)

The above relation fulfills the previously mentioned observations, that is,

lim_{q→1} σ = lim_{q→1} (1 − q)/q = 0, lim_{q→0} σ = lim_{q→0} (1 − q)/q = +∞.   (11)

From equations (8), (9) and (10), and by assuming that µx and µy are equal to zero, we suggest modeling the multiple scattering APSF using:

h(x, y; q, T) = e^{−(|x|^{kT} + |y|^{kT}) / |A(kT, (1−q)/q)|^{kT}} / (4Γ²(1 + 1/(kT)) A(kT, (1−q)/q)²), k ∈ ℝ₊*.   (12)

However, this function is not invariant to rotation. In order to circumvent this problem, we suggest slightly modifying the exponent as follows:

APSF(x, y; q, T) = e^{−(x² + y²)^{kT/2} / |A(kT, (1−q)/q)|^{kT}} / (4Γ²(1 + 1/(kT)) A(kT, (1−q)/q)²).   (13)

Equation (13) constitutes a new way of modeling the APSF, which represents the convolution kernel of the atmospheric veil on images. Note that, in addition, the suggested solution converges for any real value of T.
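Equation (13) translates directly into a discrete convolution kernel. The sketch below samples the APSF on an integer pixel grid and normalizes it to unit sum; the grid size and sampling step are illustrative assumptions (the paper later sets k = 1/2).

```python
import numpy as np
from math import gamma

def A(p, sigma):
    # GGD scale parameter: A(p, sigma) = [sigma^2 Gamma(1/p) / Gamma(3/p)]^(1/2)
    return (sigma**2 * gamma(1.0 / p) / gamma(3.0 / p)) ** 0.5

def apsf_kernel(q, T, k=0.5, size=31):
    """Rotation-invariant APSF of equation (13), sampled on a size x size
    pixel grid and normalized to unit sum for use as a convolution filter."""
    p = k * T
    sigma = (1.0 - q) / q          # equation (10)
    a = A(p, sigma)
    half = size // 2
    x, y = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    r2 = x.astype(float) ** 2 + y ** 2
    kern = np.exp(-(r2 ** (p / 2.0)) / (abs(a) ** p)) / (4 * gamma(1 + 1/p)**2 * a**2)
    return kern / kern.sum()
```

The normalization removes the constant prefactor, so only the exponent shapes the kernel; applying it to an image via equation (6) is then an ordinary 2-D convolution.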
3.1. Simulation of weather conditions

The simulation of various atmospheric conditions is straightforward once the APSF has been generated. For a given point (x, y), the resulting intensity is given by equation (6). Figure 4 shows different samples of the APSF for different weather conditions and atmosphere types. They are obtained by varying the parameter q for T = 0.7, 1.2 and 4, respectively. Recall that T = {0.7, 1.2} corresponds to a mild atmosphere while T = 4 corresponds to a highly dense atmosphere. For any value of T, we can notice that the APSF which models haze, that is q = 0.75 (images b, e, h), is broader than the one which models rain, q = 0.95 (images a, d, g), and narrower than the one which models small aerosol effects, q = 0.2 (images c, f, i). Correspondences between q and atmospheric conditions are given in Table 1. If we refer to the work of Narasimhan and Nayar [16], one can note that the observed forms of the APSF are in harmony with those experimentally estimated by those authors.

Figure 4. APSF cross-sections. (a,d,g)- rain, (b,e,h)- haze, (c,f,i)- small aerosols. The smaller the parameter q, the broader the APSF graph, and inversely.

3.2. Cooperative and simultaneous estimation of visual cues

In this subsection, we propose a new approach for a cooperative and simultaneous estimation of weather condition types and optical thickness between two images of the same scene acquired under poor and unknown weather conditions. First, let us consider the following system of equations:

I_i = I₀ ∗ APSF_{q_i, T_i}, i = 1, 2.   (14)

From the above system, we will now establish a relationship between the two degraded images. To this end, let us first apply the Fourier transform F(·) to the above system of equations; taking the ratio, we obtain:

F(I₂) = F(I₁) F(APSF_{q₂, T₂}) / F(APSF_{q₁, T₁}).   (15)

We now need to compute the ratio of the APSFs. For the sake of simplicity, let us approximate the Fourier transform of the APSF by:

F(APSF_{σ,T})(u, v) ≈ e^{−(A(kT, σ)^{kT} / 2^{kT}) (u² + v²)^{kT/2}}.   (16)

Recall that σ = (1 − q)/q. This approximation of the Fourier transform of the APSF is inspired by three particular Point Spread Functions (PSFs) and their Fourier transforms. The first one is the Fourier transform of a PSF of atmospheric turbulence, which is used in remote sensing and astronomy and is expressed as [6]:

F(PSF1)(u, v) = e^{−c(u² + v²)^{5/6}},   (17)

where c is a parameter which depends on the type of turbulence and is usually found experimentally. The power 5/6 models a specific type of atmosphere (i.e., mild or dense). The second one is the Fourier transform of a PSF of particular atmospheric degradations, which is given by [5]:

F(PSF2)(u, v) = e^{−3.44(λ(u + v)/r₀)^α},   (18)

where λ is the mean wavelength of observation, r₀ is the Fried parameter¹ and α is the power index which identifies the type of the observation. The third one is the Fourier transform of the standard Gaussian distribution (SGD), a well-known PSF for blurring effects:

F(PSF3)(u, v) = e^{−(σ²/2)(u² + v²)}.   (19)

It can be shown that equation (16) is equivalent to equation (19) if kT = 2. For all three of those PSFs, both the shape parameter and the scale parameter can be mapped to the shape and scale parameters of our approximation (equation (16)). Our approximation can thus be seen as a generalization of those existing PSFs. In other words, the proposed APSF allows the representation of a wider range of visual degradations.

¹ The Fried parameter describes the quality of a wave that is propagated through atmospheric turbulence.
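The claimed equivalence for kT = 2 is easy to verify numerically: A(2, σ)² = σ²Γ(1/2)/Γ(3/2) = 2σ², so the exponent of equation (16) reduces to (σ²/2)(u² + v²), i.e., equation (19). A minimal check:

```python
import numpy as np
from math import gamma

def A(p, sigma):
    # GGD scale parameter: A(p, sigma) = [sigma^2 Gamma(1/p) / Gamma(3/p)]^(1/2)
    return (sigma**2 * gamma(1.0 / p) / gamma(3.0 / p)) ** 0.5

def F_apsf(u, v, sigma, kT):
    # approximate Fourier transform of the APSF, equation (16)
    return np.exp(-(A(kT, sigma)**kT / 2.0**kT) * (u**2 + v**2) ** (kT / 2.0))

def F_gauss(u, v, sigma):
    # Fourier transform of the standard Gaussian PSF, equation (19)
    return np.exp(-(sigma**2 / 2.0) * (u**2 + v**2))
```

Evaluating both on the same frequency grid with kT = 2 yields identical values, confirming that the Gaussian OTF is the kT = 2 special case.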
Using equation (16), we can now estimate the ratio in equation (15) as follows:

F(I₂)/F(I₁) ≈ e^{−(A(kT₂, σ₂)^{kT₂} / 2^{kT₂}) (u² + v²)^{kT₂/2}} / e^{−(A(kT₁, σ₁)^{kT₁} / 2^{kT₁}) (u² + v²)^{kT₁/2}}.   (20)
Let us now assume that T₁ = T₂ = T. This assumption is based on the fact that, for any given optical thickness, the forward scattering parameter value, which is related to the weather conditions, determines the size of the support of the filter but is not a function of the optical thickness T. Thus, various values of q produce different filter sizes which can be associated with atmospheric conditions. Experiments have confirmed that this assumption does not have a negative impact on the identification of weather conditions between degraded images. Additionally, referring to the Weather and Illumination Database (WILD)² and the associated ground truth, one can note that different images of the same scene acquired under different weather conditions may reveal the same optical thickness. We hence obtain:
F(I₂) ≈ F(I₁) e^{−(A(kT, (σ₂^{kT} − σ₁^{kT})^{1/kT})^{kT} / 2^{kT}) (u² + v²)^{kT/2}} ⇒ I₂(x, y) ≈ (I₁ ∗ APSF_{σ_β, T})(x, y).   (21)

Thus, for a given density of particles, the two degraded images I₁ and I₂ are related by the APSF_{σ_β, T} with:

σ_β^{kT} = σ₂^{kT} − σ₁^{kT}.   (22)
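As a quick numerical check of equation (22): with k = 1/2 and T = 1.2 (so kT = 0.6), the rain-to-haze example used later in Section 4.2.1 (σ₁ = 0.05, σ₂ = 0.33) yields σ_β ≈ 0.17:

```python
def sigma_beta(sigma1, sigma2, k=0.5, T=1.2):
    """Equation (22): sigma_beta^(kT) = sigma2^(kT) - sigma1^(kT)."""
    kT = k * T
    return (sigma2**kT - sigma1**kT) ** (1.0 / kT)

# rain (q = 0.95, sigma ~ 0.05) to haze (q = 0.75, sigma ~ 0.33)
print(round(sigma_beta(0.05, 0.33), 2))  # 0.17
```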
The particular case kT = 2 corresponds to the difference between two blurred images under the assumption of a Gaussian PSF. Based on the relationship between I₁, I₂ and the APSF_{σ_β, T}, we can now propose a new method for the estimation of the parameter σ_β using the Mellin transform of the APSF_{σ_β, T} as well as the Mellin transforms of the two degraded images I₁ and I₂. The Mellin transform of a function f(x, y) is defined as follows:

M_{sv}(f(x, y)) = ∫₀^{+∞} ∫₀^{+∞} x^{s−1} y^{v−1} f(x, y) dx dy, s, v ∈ ℂ.   (23)

Similar to the convolution property of the Laplace and Fourier transforms, the Mellin transform of the convolution of two functions f and h can be obtained as follows:

M_{sv}((f ∗_{Mel} h)(x, y)) = M_{sv}(f(x, y)) M_{sv}(h(x, y)),   (24)

where ∗_{Mel} is the Mellin convolution operator. Note that it has been proven that Mellin convolution and ordinary convolution are equivalent [1]. For the proposed APSF, the Mellin transform can be written as:

M_{sv}(APSF_{σ_β, T}) = Γ((s + v)/(kT)) A(kT, σ_β)^{s+v−2} Γ(s/2) Γ(v/2) / (8kT Γ²(1 + 1/(kT)) Γ((s + v)/2)).   (25)

Using equations (21) and (24), we can show that:

M_{sv}(I₂(x′, y′)) ≈ M_{sv}(I₁(x′, y′)) M_{sv}(APSF(x′, y′)),

where x′ = x − m/2, y′ = y − m/2 and m is the filter size. Note that, since the proposed APSF is centered at the origin (0, 0), we simply apply a spatial shift equivalent to half of the filter size to both the image and the filter. In order to simplify notation, let us omit (x, y) in what follows. By taking arbitrary values of s and v, we can now determine the value of σ_β. For example, if we set s = 3 and v = 1, we obtain:

M₃₁(I₂)/M₃₁(I₁) ≈ (Γ(4/(kT)) Γ(1/(kT)) π / (16kT Γ(3/(kT)) Γ²(1 + 1/(kT)))) σ_β²
⇒ σ_β ≈ (M₃₁(I₂) 16kT Γ(3/(kT)) Γ²(1 + 1/(kT)) / (M₃₁(I₁) Γ(4/(kT)) Γ(1/(kT)) π))^{1/2},   (26)

where M₃₁(I_i) can be computed using equation (23). Once σ_β is calculated, we can easily identify the atmospheric condition types of images I₁ and I₂. From equations (14), (22) and (26), we have:

σ_β^{kT} = (M₃₁(I₂) 16kT Γ(3/(kT)) Γ²(1 + 1/(kT)) / (M₃₁(I₀) Γ(4/(kT)) Γ(1/(kT)) π))^{kT/2} − (M₃₁(I₁) 16kT Γ(3/(kT)) Γ²(1 + 1/(kT)) / (M₃₁(I₀) Γ(4/(kT)) Γ(1/(kT)) π))^{kT/2}.   (27)

Let k = 1/2. This value allows a perfect mapping between the peak shape of our APSF model and that of Narasimhan and Nayar [16]. The optical thickness T can take any value belonging to the interval [0, 4]. Recall that the value of T does not influence the estimate of q. Hence, M₃₁(I₀) is the only unknown in equation (27). We may thus compute the values σ_i, i ∈ {1, 2}, using:

σ_i ≈ (M₃₁(I_i) 16kT Γ(3/(kT)) Γ²(1 + 1/(kT)) / (M₃₁(I₀) Γ(4/(kT)) Γ(1/(kT)) π))^{1/2}.   (28)

This then allows us to identify the atmospheric conditions of the images I_i using equation (10) and Table 1. Once the parameters σ₁ and σ₂ are calculated and the weather condition types are identified, the next step consists of retrieving the optical thickness value of each image. For this purpose, let us consider equation (14). By applying the Mellin transform to this equation, we obtain:

M_{sv}(I_i) = M_{sv}(I₀) M_{sv}(APSF_{σ_i, T_i}), i ∈ {1, 2}.   (29)

In equation (29), for a given pair (s, v), the only unknown parameter is the optical thickness T_i, since σ_i and M_{sv}(I₀) can be calculated using the above technique.

² WILD is a database of high quality images of outdoor scenes elaborated by the Narasimhan and Nayar research group.
Thus, the proposed method allows us both to identify the weather conditions and to approximate the optical thickness values between two images of the same scene imaged under unknown atmospheric conditions.
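The σ_β estimation step can be sketched as follows. This is only a structural illustration of equations (23) and (26): the discrete Mellin approximation, the handling of the coordinate shift, and the image sizes are simplifying assumptions, and the accuracy of the estimate depends on them.

```python
import numpy as np
from math import gamma, pi

def mellin_M31(img):
    """Discrete approximation of M_sv (equation (23)) for s = 3, v = 1,
    after shifting coordinates by half the image size (the paper's x', y').
    The weight x^(s-1) y^(v-1) reduces to x^2, integrated over the
    positive quadrant where the transform is defined."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    x -= w / 2.0
    y -= h / 2.0
    mask = (x > 0) & (y > 0)
    return float(np.sum((x**2 * img)[mask]))

def estimate_sigma_beta(I1, I2, k=0.5, T=1.2):
    """Estimate sigma_beta from equation (26), under the T1 = T2 = T assumption."""
    kT = k * T
    ratio = mellin_M31(I2) / mellin_M31(I1)
    c = (gamma(4 / kT) * gamma(1 / kT) * pi) / (16 * kT * gamma(3 / kT) * gamma(1 + 1 / kT) ** 2)
    return (ratio / c) ** 0.5
```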
3.3. New invariants to weather degraded images

In [12] we introduced two new classes of invariants. The first class includes radiometric descriptors and the second one contains combined radiometric-geometric descriptors. By combining those invariants with the proposed APSF filter, it can be shown that we obtain descriptors that are invariant to atmospheric degradations. The resulting invariant features are:

Invariant to weather degradations and contrast changes:

k^f(s, v) = M(f(x, y))(s, v) / M(f(x, y))(v, s) = k^{φ(f ∗ APSF)}(s, v),   (30)

φ ∈ ℝ₊*. Consequently, k^{I₂}(s, v) = k^{I₁}(s, v) = k^{I₀}(s, v).

Invariant to weather degradations, contrast changes and translations:

P^{f(x,y)}(s, v) = µ_{sv}^{f(x,y)} / µ_{vs}^{f(x,y)} = P^{φ(f ∗ APSF)(ι(x,y))}(s, v),   (31)

where x, y ∈ ℝ⁺ and µ_{sv}^{f(x,y)} is the (s + v)th order central moment of the function f(x, y). Thus, P^{I₂}(s, v) = P^{I₁}(s, v) = P^{I₀}(s, v). Note that ι denotes horizontal and vertical translations.

Invariant to weather degradations, contrast changes and geometric transformations:

B^{f(x,y)}(s, v) = (µ_{sv}^{f(x,y)} / µ_{vs}^{f(x,y)}) × (µ_{v+n,s+n}^{f(x,y)} / µ_{s+n,v+n}^{f(x,y)}) = B^{φ(f ∗ APSF)(τ(x,y))}(s, v),   (32)

n ∈ ℕ*. It follows that B^{I₂}(s, v) = B^{I₁}(s, v) = B^{I₀}(s, v). Note that τ denotes geometric transformations (translations, uniform scaling and stretching). Further details about the derivations of the invariant features can be found in [12].

4. Experimental results

In this section, we present results related to all of the proposed techniques: 1- simulation of weather condition effects on images, 2- cooperative and simultaneous estimation of atmospheric parameters between two weather degraded images, 3- matching of atmospherically degraded images based on invariant features.

4.1. Weather conditions rendering

Figure 5 shows several weather condition simulations using our model and the one of Narasimhan and Nayar [16]. In both sets of simulations, we set the optical thickness T to 1.2 and varied the forward scattering parameter q from 0.2 to 0.95 to generate several atmospheric conditions.

Figure 5. First row: simulation of different weather conditions using our model. a- original image, b- rain (q = 0.98), c- haze (q = 0.75), d- small aerosols (q = 0.2). Second row: the same simulations using the Narasimhan and Nayar model [16]: e- rain (q = 0.98), f- haze (q = 0.75), g- small aerosols (q = 0.2).

In Figure 5, we notice that the glow effect in image (c) is more significant than in image (b) and less important than in image (d). These observations correspond to reality. The obtained results allow us to conclude that our approach, which is based on a probability distribution, and the one of Narasimhan and Nayar, which is based on radiative transport theory, provide similar results for T > 1. Figure 6 shows simulation results obtained using our APSF kernel and the one of Narasimhan and Nayar [16]. In both sets of simulations, we set the scattering parameter q to 0.75 and varied T from 0.8 to 2 to generate the same atmospheric condition (haze) under different types of atmosphere. The obtained results confirm that our APSF model allows a realistic simulation of different weather conditions on images. In addition, it allows the simulation of a wider range of degradations (i.e., for T ≤ 1).

Figure 6. Simulation of the glow around a spiral lamp in hazy conditions. First row (using our APSF kernel): a- original image, b- T = 0.8, c- T = 1.2, d- T = 2. Second row (using the Narasimhan and Nayar model [16]): e- T = 0.8, f- T = 1.2, g- T = 2.
4.2. Cooperative and simultaneous estimation of visual cues

Our technique for a cooperative and simultaneous estimation of atmospheric parameters provides a significant intermediate result, namely the relationship between weather degraded images. Let us thus first present results in that sense.
4.2.1. Relationship between weather degraded images
Let us show that the simulation of given weather conditions may be obtained not only from a clear image but also from another degraded image using the proposed APSF. For instance, a hazy image can be generated from a clear image but also from a rainy image. Figure 7 shows an example.
Figure 7. Relationship between atmospheric degraded images. a- clear image of the scene, b- rainy image, c- first hazy image, d- second hazy image.
Figure 7.a shows a clear image and Figure 7.b a lamp under rainy conditions, obtained by convolving the clear image with APSF_{T=1.2, σ₁=0.05}. Figure 7.c shows a lamp under hazy conditions, obtained as the convolution product of the clear image with APSF_{T=1.2, σ₂=0.33}. Figure 7.d presents a hazy image obtained by applying a convolution kernel (APSF_{T=1.2, σ_β=0.17}) to the lamp under rainy conditions (Figure 7.b). The parameter σ_β is obtained using equation (22), knowing that the forward parameter values which model the effects of rain and haze on images are respectively 0.95 and 0.75. From these experimental results, we can notice that both hazy lamps (Figures 7.c and 7.d) are almost identical. The obtained results validate the established relationship between degraded images.
4.2.2. Cooperative and simultaneous estimation of atmospheric parameters
To validate the technique for a cooperative and simultaneous estimation of atmospheric parameters, we apply it to several pairs of real images (a-e) of the same scene extracted from the WILD database, and we then estimate q and T. The ground truth of the real images is given in Table 2.

Image:              (a)  (b)  (c)  (d)  (e)
Visibility (miles): 10   6    5    4    4
Weather condition:  rain mist mist mist haze

Table 2. Ground truth of real images extracted from the WILD database.
In Table 3, we notice that the estimated values of the forward parameter q belong to their membership intervals (cf. Table 1), i.e., the identification of weather condition types between real images was made successfully. In Table 3, line 3, the estimated values of the optical thickness T are obtained as described in Section 3.2. In order to show the accuracy of the estimated values, we calculate the relative values of T (Table 3, line 4). The latter are obtained using equations (2) and (3) and the estimated ground truth from WILD, more specifically the relative depth of the scene z. In WILD the depth of scene points can reach up to 3.10 miles. For instance, if we set the relative distance z equal to 3 miles, we obtain the results shown in Table 3, line 4. A comparison between the relative and estimated values of the optical thickness reveals a high degree of accuracy.

Image:       (a)  (b)  (c)  (d)  (e)
Estimated q: 0.96 0.82 0.82 0.80 0.73
Weather:     rain mist mist mist haze
Estimated T: 1.08 1.71 2.09 2.76 2.62
Relative T:  1.17 1.95 2.34 2.93 2.93

Table 3. Results of the cooperative and simultaneous estimation of visual cues.
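The figures in Table 3 can be cross-checked against Table 1 and equations (2) and (3). The small sketch below maps an estimated q to its condition type and reproduces the relative T of image (a) for z = 3 miles (a small tolerance is used since some reported values appear truncated):

```python
def weather_condition(q):
    """Table 1: map the forward scattering parameter q to a condition type."""
    for upper, name in [(0.2, "air"), (0.7, "aerosols"), (0.8, "haze"),
                        (0.85, "mist"), (0.9, "fog"), (1.0, "rain")]:
        if q <= upper:
            return name
    raise ValueError("q must lie in [0, 1]")

def optical_thickness(V, z):
    """Equations (2) and (3): T = eta * z with eta = 3.912 / V (V = visibility)."""
    return 3.912 / V * z

print(weather_condition(0.96))                 # rain, image (a) of Table 3
print(round(optical_thickness(10, 3), 2))      # 1.17, relative T of image (a)
```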
4.3. Matching of images based on invariant features

In this subsection, we test the efficiency of the proposed invariant features. To this end, we apply each feature to the appropriate subset of images in Figure 8. The feature K(s, v) is applied to images (a,b,d), P(s, v) is applied to images (a,b,c,d) and B(s, v) is applied to images (a,b,c,d,e). The results in Table 4 show that, for an arbitrary order (s, v), the numerical values obtained by applying each invariant feature to the original image (a) and to its weather degraded and/or geometrically transformed versions (images (c,d,e)) are almost identical, while they differ from the numerical value of the foreign image (b). Note that for every invariant feature, exhaustive experiments have been done using several images and different orders (s, v). The symbol "-" means that the corresponding feature is not invariant to the degradation types contained in the corresponding image (cf. Section 3.3).
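The translation and contrast invariance of P(s, v) (equation (31)) can be illustrated with raw central moments. This is a minimal numeric sketch; the normalization details in [12] may differ, so it is not the exact feature used in Table 4.

```python
import numpy as np

def central_moment(img, s, v):
    """mu_sv: (s+v)-th order central moment of the 2-D intensity function img."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return (((x - xc) ** s) * ((y - yc) ** v) * img).sum()

def P_feature(img, s, v):
    """Equation (31): ratio of central moments mu_sv / mu_vs."""
    return central_moment(img, s, v) / central_moment(img, v, s)

rng = np.random.default_rng(0)
patch = rng.random((32, 32)) + 0.1
# embed the same patch at two different offsets: a pure integer translation
f1 = np.zeros((64, 64)); f1[10:42, 10:42] = patch
f2 = np.zeros((64, 64)); f2[20:52, 15:47] = patch
```

Because central moments are taken about the intensity centroid, an integer translation leaves them unchanged, and a global contrast scaling cancels in the ratio, so P_feature returns the same value for f1, f2 and 2.5·f1.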
Figure 8. a- original image, b- foreign image, c- translated and weather degraded image, d- weather degraded image, e- geometrically transformed and weather degraded image.
Image:   (a)     (b)      (c)     (d)     (e)
K(6, 3): 1.3062  3.0147   -       1.2361  -
P(9, 3): 7.6571  -138.04  8.1086  8.1086  -
B(5, 2): 1.5790  -0.1690  1.5779  1.5779  1.5763

Table 4. Results of the application of the invariant features to images in Figure 8.
All of those experiments confirm that the proposed features have a high discriminative power in the matching of geometrically transformed and/or atmospherically degraded images.
5. Conclusion

Particles are often suspended in the medium in which light rays travel (e.g., atmosphere, water, etc.). However, the effect of those particles on the image formation process is usually neglected. In this paper, we introduced a new multiple light scattering model inspired by the generalized Gaussian distribution. From this model we derived an analytical expression of the Atmospheric Point Spread Function and proposed a new convolution kernel. Based on this APSF we proposed three new techniques. The first one simulates different weather conditions on images. The second one is a cooperative and simultaneous estimation of visual cues. Finally, we introduced a new technique for matching weather degraded images based on invariant features. All of the experiments confirm both the accuracy of the proposed APSF kernel and its usefulness.
References

[1] J. Belzer, A. G. Holzman, and A. Kent. Encyclopedia of Computer Science and Technology, volume 9. Marcel Dekker, Inc., 1978.
[2] S. Chandrasekhar. Radiative Transfer. Dover Publications, Inc., 1960.
[3] F. Cozman and E. Krotkov. Depth from scattering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Puerto Rico, pages 801-806, 1997.
[4] J. A. Domínguez-Molina, G. González-Farías, and R. M. Rodríguez-Dagnino. A practical procedure to estimate the shape parameter in the generalized Gaussian distribution. Technical Report I-01-18, Center for Research in Mathematics (CIMAT), Guanajuato, Mexico, 2001.
[5] D. L. Fried. Optical resolution through a randomly inhomogeneous medium for very long and very short exposures. Journal of the Optical Society of America, 56:1372-1379, 1966.
[6] R. E. Hufnagel and N. R. Stanley. Modulation transfer function associated with image transmission through turbulent media. Journal of the Optical Society of America, 54(1):52-61, 1964.
[7] H. C. van de Hulst. Light Scattering by Small Particles. John Wiley and Sons, 1957.
[8] A. Ishimaru. Wave Propagation and Scattering in Random Media. IEEE Press, 1997.
[9] D. Lévesque and F. Deschênes. Sparse scene structure recovery from atmospheric degradation. In Proceedings of the 17th International Conference on Pattern Recognition, volume 1, pages 84-87, 2004.
[10] D. Lévesque and F. Deschênes. Detection of occlusion edges from the derivatives of weather degraded images. In Proceedings of the Second Canadian Conference on Computer and Robot Vision, pages 114-120, 2005.
[11] E. J. McCartney. Optics of the Atmosphere: Scattering by Molecules and Particles. John Wiley and Sons, 1975.
[12] S. Metari and F. Deschênes. New classes of radiometric and combined invariants inspired by the Mellin transform. Technical Report no. 20, Département d'Informatique, Université de Sherbrooke, 2007.
[13] W. E. K. Middleton. Vision through the Atmosphere. University of Toronto Press, 1952.
[14] S. G. Narasimhan and S. K. Nayar. Vision and the atmosphere. International Journal of Computer Vision, 48(3):233-254, 2002.
[15] S. G. Narasimhan and S. K. Nayar. Contrast restoration of weather degraded images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(6):713-724, 2003.
[16] S. G. Narasimhan and S. K. Nayar. Shedding light on the weather. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 665-672, 2003.
[17] Y. Y. Schechner and N. Karpel. Recovery of underwater visibility and structure by polarization analysis. IEEE Journal of Oceanic Engineering, 30(3):570-587, 2005.