
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 23, NO. 4, APRIL 2001

Probability Models for Clutter in Natural Images

Ulf Grenander and Anuj Srivastava, Member, IEEE

Abstract: We propose a framework for modeling clutter in natural images. Assuming that 1) images are made up of 2D (projected) views of 3D (real) objects and 2) certain simplifying conditions hold, we derive an analytical density for natural images. This expression is shown to match well with the observed densities (histograms). In addition to deriving multidimensional densities, several extensions are also proposed.

Index Terms: Image models, object recognition, clutter, transported model.

1 INTRODUCTION

MODEL-BASED statistical image analysis requires probabilistic descriptions of the underlying image variability. Even though accurate probability models governing pixel values are highly desirable, such models are prohibitively difficult to obtain for a broad range of visual images. For some specific families it is possible to derive specific probability models, but for general scenarios the models tend to be rather basic. In the case of modeling textures, there has been reasonable success using Markov random field (MRF) models (see [11] for a treatment). Image analysis tasks, such as segmentation, are often devised assuming random field models. Perhaps suitable for low-level image processing, these models are inadequate for high-level tasks such as object recognition, tracking, and image understanding. High-level image understanding requires detailed and structured probability models: models that are physics-based and retain significant contextual information. For example, the task of detecting and recognizing an object (e.g., a tank or a human face) from its remotely sensed image is performed based on its representations (3D templates, features, landmarks, etc.) extracted from a training set of its collected images.

There are two approaches to reconciling low- and high-level vision. One is to start at the pixel level and build upwards by finding edges, regions, etc., and studying their compositions into complex shapes. The other is to start from the known objects (3D shapes) and match their images (projections) to the observed pixels, allowing for the incidental variability in pose, location, etc. The second approach has been followed to develop Bayesian procedures for object tracking, pose estimation [3], recognition [10], and analytical performance analysis [5]. These models are built on physical principles, as they rely on prior knowledge of the physical characteristics (such as shapes, textures, and motion patterns) of the targets.
Their application is restricted to a limited number of targets, called the targets of interest (TOIs). An important issue in image understanding is that TOIs seldom exist alone in a scene; there are other objects present, in the background or foreground, that can pose a significant challenge. These other objects are labeled clutter objects, and the image pixels falling on them are called clutter pixels. To facilitate statistical

U. Grenander is with the Division of Applied Mathematics, Brown University, 182 George Street, Box F, Providence, RI 02912. E-mail: [email protected].
A. Srivastava is with the Department of Statistics, Florida State University, Building OSB, Room 210B, Tallahassee, FL 32306. E-mail: [email protected].
Manuscript received 17 July 2000; revised 14 Nov. 2000; accepted 24 Jan. 2001. Recommended for acceptance by P. Meer. For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference IEEECS Log Number 112527. 0162-8828/01/$10.00 © 2001 IEEE


approaches for automated vision, one needs reasonable probability models for both the TOIs and the clutter objects. We propose a mathematical tool for developing clutter models in the context of automated target recognition (ATR). ATR seeks systems for the automated recognition of TOIs from their remotely sensed images. To derive a principled approach, we need tractable probability models for the image variability. An important question in clutter modeling is: Should the clutter be analyzed at a high level, by physically modeling sources of clutter (through 3D representations, as is done for TOIs), or at a low level, by studying the patterns of pixels? The first approach is certainly fundamental, but the variability associated with clutter objects is far too large to result in tractable models. The second approach seems more tractable, but it works with reduced knowledge, as all the physical considerations are lost. Since our goal is object recognition and not clutter recognition, this reduced representation may well be sufficient. Our objective behind deriving these clutter models is to recognize a TOI embedded in a cluttered environment. These models, in the simple forms stated in this paper, may not be detailed enough for the synthesis of clutter images.

Deriving pixel-based models for clutter is not straightforward. For an observed image of size $l \times l$ with real-valued pixels, the underlying image space is $\mathbb{R}^{l^2}$, even though only a very small subset of $\mathbb{R}^{l^2}$ corresponds to natural images. Not all patterns or combinations of pixel values result in natural images, and the actual variation associated with natural images is bound to be of much lower dimension. The question is: How can this variation be captured through probability models?
A common starting point has been to choose an orthonormal basis (such as orthogonal wavelets, Fourier decompositions, or covariance components) and project the image space down to a low-dimensional subspace containing only the significant components. Many studies have shown that sample statistics, under any such representation, do not support Gaussian models: the tails are heavier and the peaks often sharper than a normal curve, irrespective of the choice of basis. Additionally, sample statistics of the projected images (or the image coefficients) are found to exhibit certain patterns that are invariant to image scaling, shifting, and rotation. For instance, the histograms of these coefficients often display sharp peaks at a mean value and decrease steadily for higher magnitudes. Shown in Figs. 1, 2, and 3 are some natural images (top panels) and the relative frequencies (on a logarithmic scale) of the differences between vertical (nearest) neighbors (bottom panels, in broken lines). Any realistic image model should support these patterns; comparisons between the predicted and the observed image statistics can become an important tool for model diagnostics.

One way to model these patterns is to fit known parametric forms to the observed statistics; see, for example, [9]. These models have the advantage of being simple, even though the resulting densities may not be consistent; that is, a joint density on two pixels obtained via curve fitting may not reduce to the marginal density on one of the pixels obtained similarly. In a recent approach, Zhu and Mumford [12] have constructed probability models using statistics of the filtered images as sufficient statistics. These models successfully capture the texture variation in natural images and even lend themselves to synthesis. Our model relates to this idea in that analytical forms for any such filtered image can be derived.
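The neighbor-difference statistic described above is straightforward to compute. The following sketch builds the log-scale relative-frequency plot of differences between vertical nearest neighbors, as in the bottom panels of Figs. 1, 2, and 3; a synthetic random array stands in for a natural image, which in practice would be loaded as a 2D float array.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((256, 256))  # synthetic stand-in for a natural grayscale image

# Differences between vertical (nearest) neighbors.
diffs = (image[1:, :] - image[:-1, :]).ravel()

# Relative frequencies on a logarithmic scale.
counts, edges = np.histogram(diffs, bins=101, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
log_density = np.log(np.maximum(counts, 1e-12))  # guard against empty bins
```

Plotting `log_density` against `centers` for a real natural image reproduces the heavy-tailed, sharply peaked curves discussed above.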
This is consistent with Field [2], where the role of higher-order statistics (beyond the first and second orders) is highlighted.

2 TRANSPORTED GENERATOR MODEL FOR CLUTTER

We propose a physically motivated model for image formation and use it to derive probability models for visual images. In


Fig. 1. Subexponential clutter: 1) top panels: original images and 2) bottom panels: estimated and observed densities on the log scale. Estimated parameters (p, c): (0.376, 20.48), (0.249, 42.24), and (0.431, 31.51).

this paper, certain additional assumptions have been made to keep the treatment simple, with some suggestions for relaxing them. The starting point for this development is a transported generator model (introduced in [4]). This model formalizes the concept that any image is made up of projected (2D) profiles (views, signatures) of real 3D objects; these profiles interact with each other nonlinearly, through occlusion, scaling, and superposition, to form a 2D image. There is an emerging body of work that follows this concept and studies the models derived from such considerations [1], [6], [7], [8]. We simplify the analysis by assuming that: 1) the image is made up of a random number of the profiles of the same object and 2) the image pixels are obtained

as a linear combination of these profiles, weighted randomly. Mathematically, an image pixel is modeled as

$$ I(z) = \sum_i a_i\, g(z - z_i), \qquad z = [x\ y]^T,\quad z_i = [x_i\ y_i]^T \in \mathbb{R}^2. \qquad (1) $$

Here, I is the image, z is the variable for pixel location, g is the (deterministic) profile of an object, and the a_i are random weights associated with the different profiles. The a_i are modeled as i.i.d. standard normal, and the locations z_i are modeled as samples from a 2D Poisson process with uniform intensity λ (independent of the a_i). These assumptions render the proposed model rather simplistic, and later we suggest extensions to make it more realistic.
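The model of (1) can be simulated directly. In this minimal sketch, the number of profiles is Poisson, locations are uniform over the image, the weights a_i are i.i.d. standard normal, and the profile g is an assumed isotropic Gaussian bump; the particular g, λ, and image size are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
L, lam, sigma = 64, 0.05, 2.0  # image size, Poisson intensity per pixel, profile width (assumed)

n = rng.poisson(lam * L * L)              # random number of generators in the L x L domain
centers = rng.uniform(0, L, size=(n, 2))  # given n, Poisson points are uniformly placed
a = rng.standard_normal(n)                # i.i.d. N(0, 1) weights a_i

x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
I = np.zeros((L, L))
for (xi, yi), ai in zip(centers, a):
    # hypothetical profile g: isotropic Gaussian bump centered at z_i
    I += ai * np.exp(-((x - xi) ** 2 + (y - yi) ** 2) / (2 * sigma**2))
```

Histogramming neighbor differences of such a synthetic `I` gives curves of the same qualitative shape as in Figs. 1, 2, and 3.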

Fig. 2. Exponential clutter: 1) top panels: images and 2) bottom panels: log densities. Estimated parameters: (1.086, 19.31), (1.277, 21.81), and (1.097, 39.10).


Fig. 3. Superexponential clutter: 1) top panels: images and 2) bottom panels: log densities. Estimated parameters: (2.370, 3.74), (1.826, 36.44), and (4.286, 9.70).

Consider a pixel in the gradient image I_x:

$$ I_x(z) = \frac{\partial I}{\partial x}(z) = \sum_i a_i\, g_x(z - z_i), \quad \text{where}\quad g_x(z) = \frac{\partial g}{\partial x}(z). \qquad (2) $$

As a first step, our goal is to derive an analytical form for the marginal density of I_x(z) and compare it to the empirical results. The conditional density of I_x(z), given the Poisson points $\{z_i\}$, is $N\big(0, \sum_i g_x^2(z - z_i)\big)$. The (conditional) characteristic function of this normal random variable is

$$ \phi(\omega \mid \{z_i\}) = \exp\Big(-\frac{1}{2}\,\omega^2 \sum_i g_x^2(z - z_i)\Big) = \exp\Big(-\frac{1}{2}\,\omega^2 u\Big), $$

where u is defined as $u = \sum_i g_x^2(z - z_i)$. Before we characterize u, we first establish a useful relationship between the statistics of the image and u.

Lemma 1. For an image modeled by (1) and the variable $u = \sum_i g_x^2(z - z_i)$, we have

$$ E[I_x(z)] = 0, \quad E[I_x(z)^2] = E[u], \quad E[I_x(z)^4] = 3\,E[u^2], \quad \text{and} \quad \mathrm{Kurt}(I_x(z)) = \frac{3\,\mathrm{Var}(u)}{(E[u])^2}. \qquad (3) $$

Proof. Since $E[a_i^2] = 1$ for all i, and the a_i's are independent of the z_i's, the first two equations follow from (1). The fourth moment of I_x(z) is given by

$$ E[I_x(z)^4] = E\Big[\Big(\sum_i a_i\, g_x(z - z_i)\Big)^4\Big] = 3\,E\Big[\sum_i g_x^4(z - z_i)\Big] + 6\,E\Big[\sum_{i < k} g_x^2(z - z_i)\, g_x^2(z - z_k)\Big], $$

since $E[a_i^4] = 3$ for all i and the terms with odd powers of the a_i have zero means. Also,

$$ E[u^2] = E\Big[\sum_i g_x^4(z - z_i)\Big] + 2\,E\Big[\sum_{i < k} g_x^2(z - z_i)\, g_x^2(z - z_k)\Big]. \qquad (4) $$

Hence, $E[I_x(z)^4] = 3E[u^2]$ and

$$ E[I_x(z)^4] - 3\,\big(E[I_x(z)^2]\big)^2 = 3\,\big(E[u^2] - (E[u])^2\big) = 3\,\mathrm{Var}(u). \qquad \square $$

This lemma is significant in that it relates certain statistics of u, namely the mean and the variance, directly to the statistics of the observed image. Consequently, if u is selected from a parametric family, its parameters can be estimated using the image statistics. Also, this relation proves that I_x(z), modeled using the transported generator model, will be leptokurtic; that is, its kurtosis will always be positive. In other words, the tails are heavier than those of a normal curve with the same variance. Next, we seek a probability model for the random variable u. There are at least two ways to model u:

1. u has the characteristic function

$$ \phi_u(\omega) = \exp\Big(\lambda \int_{\mathbb{R}^2} \big[\exp\big(j\omega\, g_x^2(z - z_1)\big) - 1\big]\, dz_1\Big), $$

by an argument from renewal theory. If the object profile g(·), and hence g_x(·), is known, then one can calculate φ_u(·), which leads to φ(·) and the probability density of I_x(z). This approach relies on incorporating the knowledge of g_x(·) and is therefore limited to prestored objects.

2. For more general cases, with completely unknown objects in the image, a broad family of distributions not relying on prior knowledge of g_x(·) is required. u has some distribution on the positive real line and, motivated by empirical studies, we propose to model it by a scaled Γ-density:

$$ f_u(u) = \frac{1}{c\,\Gamma(p)}\, (u/c)^{p-1} \exp(-u/c), \qquad 0 < p < \infty, $$
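The moment identities of Lemma 1 can be checked by Monte Carlo. The sketch below uses a one-dimensional analog of the model (a Gaussian-derivative stand-in for g_x over an interval, with assumed intensity and profile width); this suffices for the lemma, since its proof only uses the normality of the a_i and their independence from the Poisson points.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, T, sigma, trials = 0.2, 40.0, 1.5, 50000  # assumed intensity, window length, profile width

def gx(z):
    # Derivative of a Gaussian bump: a hypothetical stand-in for the profile derivative g_x.
    return -z / sigma**2 * np.exp(-z**2 / (2 * sigma**2))

Ix = np.empty(trials)
u = np.empty(trials)
for t in range(trials):
    n = rng.poisson(lam * T)
    zi = rng.uniform(-T / 2, T / 2, size=n)  # Poisson points; the pixel sits at z = 0
    ai = rng.standard_normal(n)
    g = gx(0.0 - zi)                         # g_x(z - z_i) evaluated at z = 0
    Ix[t] = np.sum(ai * g)
    u[t] = np.sum(g**2)

# Lemma 1, up to Monte Carlo error:
#   E[Ix] = 0,  E[Ix^2] = E[u],  E[Ix^4] = 3 E[u^2],  and Kurt(Ix) > 0 (leptokurtic).
```

The last comparison also makes the leptokurtic claim concrete: the sample fourth moment of Ix exceeds three times the squared second moment.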

where p is called the shape parameter and c the scale parameter of u. Any other choice of f_u that supports the observed patterns and leads to analytical solutions can also be adopted. Elementary calculations give the first two cumulants: $E[u] = pc$ and $\mathrm{Var}(u) = pc^2$. This model will not accurately represent all possible object profiles, but it has the advantage of being simple and of not depending upon prestored g_x's.

Taking the second approach, we model u as a scaled Γ-distributed random variable. Now we can integrate over the Poisson variation and derive the marginal density of I_x(z). The characteristic function of I_x(z) becomes

$$ \phi(\omega) = \int_{\mathbb{R}^+} \phi(\omega \mid u)\, f_u(u)\, du = \int_0^\infty \exp\Big(-\frac{\omega^2}{2}\, u\Big) f_u(u)\, du. $$

Since the Laplace transform of f_u(·), denoted by $L_u(\omega)$, is $(1 + c\omega)^{-p}$, we get $\phi(\omega) = L_u(\omega^2/2) = (1 + c\omega^2/2)^{-p}$. The inverse Fourier transform (IFT) of φ(ω) gives the density function of I_x(z). Denote by f(t) the density whose Fourier transform is φ(ω), so that

$$ f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \exp(-jt\omega)\, \frac{1}{(1 + c\omega^2/2)^p}\, d\omega = \frac{1}{\pi} \int_0^\infty \cos(t\omega)\, \frac{1}{(1 + c\omega^2/2)^p}\, d\omega. \qquad (5) $$

Remark. If p > 1/2, the right-hand side is (absolutely) integrable and this formula makes sense. Otherwise, it has to be replaced by $\frac{1}{2\pi}\, V.P.\big[\int_{-\infty}^{\infty} \exp(-jt\omega)\, (1 + c\omega^2/2)^{-p}\, d\omega\big]$, with the classical definition of the "valeur principale," $V.P.\big[\int_{-\infty}^{\infty} g(t)\, dt\big] = \lim_{T \to \infty} \int_{-T}^{T} g(t)\, dt$.

Theorem 1. Under the transported generator model, the density function of I_x(z) is

$$ f(t) = \frac{1}{\sqrt{\pi}\,\Gamma(p)}\, \Big(\frac{c}{2}\Big)^{-\frac{p}{2}-\frac{1}{4}}\, 2^{-p+\frac{1}{2}}\, |t|^{p-\frac{1}{2}}\, K_{p-\frac{1}{2}}\Big(\sqrt{\tfrac{2}{c}}\, |t|\Big), \quad \text{for } p > 0, \qquad (6) $$

where $K_\nu$ is the modified Bessel function given by

$$ K_\nu(xy) = \frac{\Gamma(\nu + \frac{1}{2})\, (2y)^\nu}{\pi^{1/2}\, x^{-\nu}} \int_0^\infty \frac{\cos(xz)\, dz}{(z^2 + y^2)^{\nu + \frac{1}{2}}}, $$

for $\nu > -\frac{1}{2}$, $x > 0$, and $|\arg(y)| < \frac{\pi}{2}$.

Proof. The result follows from (5) and the definition of the modified Bessel function. □

The value of the shape parameter p provides some idea of the nature of the clutter. For p = 1, φ(ω) is the Fourier transform of the double exponential distribution; we will call this exponential clutter. In general, f(t) is the pth convolution power of the double exponential density. If p > 1, call it superexponential clutter, we get closer to the Gaussian, especially if p >> 1. On the other hand, if p < 1, call it subexponential clutter, the cusp of the density at zero becomes more pronounced.

A natural question is: For what kind of clutter objects should we expect a subexponential clutter model? To answer it, consider (3) and a result (taken from [4]) that the kurtosis of I_x(z), modeled as a homogeneous, filtered Poisson process, is given by

$$ \mathrm{Kurt}(I_x(z)) = \frac{3 \int_{\mathbb{R}^2} g_x^4(z)\, dz}{\lambda\, \big(\int_{\mathbb{R}^2} g_x^2(z)\, dz\big)^2}. $$

Equating this with the expression for the kurtosis in (3), we obtain

$$ \frac{3 \int_{\mathbb{R}^2} g_x^4(z)\, dz}{\lambda\, \big(\int_{\mathbb{R}^2} g_x^2(z)\, dz\big)^2} = \frac{3\,\mathrm{Var}(u)}{(E[u])^2} = \frac{3}{p}. $$

This gives

$$ p = \frac{\lambda\, \big(\int_{\mathbb{R}^2} g_x^2(z)\, dz\big)^2}{\int_{\mathbb{R}^2} g_x^4(z)\, dz}, $$

and subexponential clutter occurs for an object when

$$ \lambda < \frac{\int_{\mathbb{R}^2} g_x^4(z)\, dz}{\big(\int_{\mathbb{R}^2} g_x^2(z)\, dz\big)^2}. $$

The right-hand side of this inequality expresses how peaked g_x^2(·) is. When the shapes are well-defined, with clear delimitation, the right-hand side is large and we can expect subexponential clutter unless the Poisson intensity λ is also large. For profiles with blurred edges, the right-hand side is low and we expect superexponential clutter.

3 DENSITY ESTIMATION FOR NATURAL IMAGES

In this section, we illustrate the estimation of the marginal densities for natural image pixels. Since f takes a parametric form with parameters p and c, the task reduces to estimating them under an appropriate criterion. For a given n × n image I, let I_x be the image of differences between the (nearest) vertical (or horizontal) neighbors. Using (3) and the parameters of f_u,

$$ k_2 \equiv E[I_x(z)^4] - 3\,\big(E[I_x(z)^2]\big)^2 = 3\,\mathrm{Var}(u) = 3pc^2, \qquad k_1 \equiv E[I_x(z)^2] = pc \;\;\Rightarrow\;\; c = \frac{k_2}{3k_1}, \quad p = \frac{3k_1^2}{k_2}. \qquad (7) $$

To compute the estimates, calculate the sample averages

$$ E[I_x(z)^4] \approx \frac{1}{n^2} \sum_{i,j=1}^{n} I_x(i,j)^4, \qquad E[I_x(z)^2] \approx \frac{1}{n^2} \sum_{i,j=1}^{n} I_x(i,j)^2, $$

and substitute them in (7). The observed density function (the normalized histogram) and the estimated density function f are then plotted together. Shown in Figs. 1, 2, and 3 are examples of the three classes of clutter. Fig. 1 displays examples of subexponential clutter, where p is significantly less than 1.0. The top row shows three natural images, while the bottom row shows the corresponding densities on a logarithmic scale. The estimated densities are drawn in solid lines and the observed histograms in broken lines. Estimated values of p and c are indicated on the plots. The results in Fig. 2 depict exponential clutter (p ≈ 1.0) and those in Fig. 3 superexponential clutter (p > 1.0). These results demonstrate that this simple model does capture some of the variation in the observed pixels.
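The estimation procedure of this section can be sketched end to end: compute the sample moments, invert (7) for the estimates of p and c, and evaluate the density of Theorem 1 via the modified Bessel function (`scipy.special.kv`). A synthetic Laplacian array stands in for the difference image I_x; for such data the fit should return p close to 1, i.e., exponential clutter.

```python
import numpy as np
from scipy.special import gamma, kv

rng = np.random.default_rng(3)
# Synthetic stand-in for Ix (in practice: vertical differences of a natural image).
Ix = rng.laplace(scale=5.0, size=(256, 256))

# Sample cumulants as in (7): k1 = E[Ix^2], k2 = E[Ix^4] - 3 (E[Ix^2])^2.
k1 = np.mean(Ix**2)
k2 = np.mean(Ix**4) - 3 * k1**2
c = k2 / (3 * k1)
p = 3 * k1**2 / k2

def f(t, p, c):
    """Bessel-K density of Theorem 1, eq. (6)."""
    t = np.abs(t)
    const = (c / 2) ** (-p / 2 - 0.25) * 2 ** (-p + 0.5) / (np.sqrt(np.pi) * gamma(p))
    return const * t ** (p - 0.5) * kv(p - 0.5, np.sqrt(2.0 / c) * t)

# Evaluate on a positive grid (the density is symmetric; K_nu diverges at t = 0 for p <= 1/2).
t = np.linspace(0.05, 60.0, 1200)
density = f(t, p, c)
```

Plotting `np.log(density)` over `t` against the log-histogram of `Ix` reproduces the solid-versus-broken-line comparison of Figs. 1, 2, and 3.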

4 MULTIDIMENSIONAL DENSITIES

So far, we have derived a marginal density function for the image pixels, but extending this derivation to higher-order densities is straightforward. A brief sketch of the extension follows.


Fig. 4. Estimated joint densities (with m = 2 and z_1 = [1, 1]) for the corresponding images on the left.
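The two-dimensional estimation behind Fig. 4 can be sketched numerically: estimate p(b) and c(b) on a grid of projection vectors b, assemble the characteristic function φ_Y(1; b) = (1 + c(b)/2)^(-p(b)) derived in this section, and take a 2D inverse FFT. The image, grid size, and range below are arbitrary illustrative choices, and the FFT output is only proportional to the density (discretization constants are ignored).

```python
import numpy as np

rng = np.random.default_rng(4)
Ix = rng.laplace(scale=1.0, size=(96, 96))  # synthetic stand-in for a difference image
A = Ix[:-1, :-1].ravel()                    # Ix(z)
B = Ix[1:, 1:].ravel()                      # Ix(z + z1) with z1 = [1, 1], as in Fig. 4

n_grid, b_max = 33, 8.0
bs = np.linspace(-b_max, b_max, n_grid)
Phi = np.ones((n_grid, n_grid))
for i, b1 in enumerate(bs):
    for j, b2 in enumerate(bs):
        if b1 == 0.0 and b2 == 0.0:
            continue                                  # Phi_Y(1; 0) = 1
        Y = b1 * A + b2 * B                           # Cramer-Wold projection
        k1 = np.mean(Y**2)
        k2 = max(np.mean(Y**4) - 3 * k1**2, 1e-12)    # guard: sample kurtosis can dip to <= 0
        c, p = k2 / (3 * k1), 3 * k1**2 / k2
        Phi[i, j] = (1 + 0.5 * c) ** (-p)

# Joint density (up to the grid's scaling constants) via 2D inverse FFT.
f_joint = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.ifftshift(Phi))))
```

Contour plots of `f_joint` correspond to the right-hand panels of Fig. 4; the peak sits at the grid center, i.e., at (I_x(z), I_x(z + z_1)) = (0, 0).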

Let $z + z_1, z + z_2, \ldots, z + z_m$ be m pixel locations in the image, with $I_x(z + z_1), \ldots, I_x(z + z_m)$ the associated pixel values. Our approach is to specify the joint density of these m variables through the marginal densities associated with all of their possible linear combinations. More precisely, the Cramer-Wold device states that if the densities of $\sum_{k=1}^{m} b_k I_x(z + z_k)$ are known for all m-tuples $b = [b_1, b_2, \ldots, b_m] \in \mathbb{R}^m$, then the joint probability density of $I_x(z + z_1), \ldots, I_x(z + z_m)$ is completely specified. Consider the random variable

$$ Y(z) = \sum_{k=1}^{m} b_k\, I_x(z + z_k) = \sum_i a_i \Big( \sum_{k=1}^{m} b_k\, g_x(z + z_k - z_i) \Big). $$

Define a function $h: \mathbb{R}^2 \to \mathbb{R}$ according to

$$ h(z) = \sum_{k=1}^{m} b_k\, g_x(z + z_k), $$

so that Y(z) can be rewritten as $Y(z) = \sum_i a_i\, h(z - z_i)$. The random variable Y(z), for a fixed value of b, has the same form as I_x(z) in (2), the only difference being that the function g_x is replaced by the function h. Therefore, the density of Y(z) can be derived similarly. The conditional characteristic function of Y(z), given the Poisson placements z_i, is $\exp(-\frac{1}{2}\omega^2 u)$, where this time $u = \sum_i h^2(z - z_i)$. As in Section 2, we model u as a scaled-Γ random variable with parameters p and c. Note that both c and p depend upon the vector b through the definition of h. Integrating out u, the unconditional characteristic function of Y(z) is given by

$$ \phi_Y(\omega; b) = \frac{1}{\big(1 + \frac{1}{2}\, c(b)\, \omega^2\big)^{p(b)}}. $$

Since $\phi_Y(\omega; b)$ is the characteristic function of Y(z), the IFT of $\phi_Y(\omega; b)$ gives the density function of Y(z) for a fixed value of b. Interestingly, for ω = 1, $\phi_Y(1; b)$ is the m-dimensional Fourier transform of the joint density function, call it f, with b now being the frequency vector. Hence, the IFT gives

$$ f(t_1, t_2, \ldots, t_m) = \frac{1}{(2\pi)^m} \int \cdots \int \phi_Y(1; b)\, \exp\Big(-j \sum_{k=1}^{m} b_k t_k\Big)\, db_1\, db_2 \cdots db_m. $$

To actually compute this joint density, the first step is to estimate the characteristic function $\phi_Y(1; b)$ for all values of b and then take the IFT. We illustrate these steps for m = 2. For each value of the pair $b = [b_1\ b_2]$, p and c can be estimated according to $\hat{c} = \hat{k}_2 / (3\hat{k}_1)$ and $\hat{p} = 3\hat{k}_1^2 / \hat{k}_2$, where

$$ \hat{k}_1 = \frac{1}{n^2} \sum_z \Big( b_1^2\, I_x(z)^2 + b_2^2\, I_x(z + z_1)^2 + 2 b_1 b_2\, I_x(z)\, I_x(z + z_1) \Big), $$

and

$$ \hat{k}_2 = \frac{1}{n^2} \sum_z \Big( b_1^4\, I_x(z)^4 + b_2^4\, I_x(z + z_1)^4 + 6 b_1^2 b_2^2\, I_x(z)^2 I_x(z + z_1)^2 + 4 b_1^3 b_2\, I_x(z)^3 I_x(z + z_1) + 4 b_1 b_2^3\, I_x(z)\, I_x(z + z_1)^3 \Big) - 3\hat{k}_1^2. $$

For each pair $(b_1, b_2)$, one can thus estimate the p and c that specify the characteristic function $\phi_Y(1; b)$. Taking the IFT of $\phi_Y(1; b)$ gives the joint density function f. Shown in Fig. 4 are two examples of estimating the density $f\big(I_x(z), I_x(z + z_1)\big)$ for $z_1 = [1, 1]$. The images are shown on the left and the contour plots of the estimated 2D densities on the right.

5 CONCLUSION

We have presented a mathematical framework to model the clutter in natural images. Using this framework, we have derived a parametric form for the probability densities associated with natural images. Experiments demonstrate that the estimated densities match well with the observed histograms. We have also sketched an extension to derive m-dimensional densities, with an illustration of the m = 2 case. While this simple model captures some essential variability in natural images, it can be extended by increasing the model complexity in several ways; intuitively, the model performance should improve as the model complexity increases. The proposed extensions are:

1. In (1), the profiles are placed according to a homogeneous Poisson process. Relax this assumption by assuming a Poisson cluster process (e.g., of Neyman-Scott type) or some other appropriate nonhomogeneous process.
2. The profiles can be allowed to be the translates of randomly chosen objects.
3. The Gaussian assumption on the amplitudes can be relaxed by adopting variables that are often more suitable. For example, in the case of range images, a Rayleigh model may be appropriate.
4. The linear superposition of the profiles can be replaced by a nonlinear operation. For example, using a "min" operation, an image pixel I(z) can be attributed solely to the object nearest to the camera. Different pixel locations will then carry different profiles, but a region of neighboring pixels can come from the same profile (e.g., regions of trees). The whole image can be partitioned into disjoint regions, each region corresponding to a specific model.
5. Instead of the derivative image I_x, any other filtered form of the image can be substituted.

ACKNOWLEDGMENTS

This work was supported in part by ARO-MURI DAAH04-96-10445, ARO DAAG55-98-1-0102, NSF-9871196, and ARO DAAD19-99-1-0267.

REFERENCES

[1] Z. Chi, "Probability Models for Complex Systems," PhD thesis, Division of Applied Math., Brown Univ., 1998.
[2] D.J. Field, "Relations between the Statistics of Natural Images and the Response Properties of Cortical Cells," J. Optical Soc. Am., vol. 4, no. 12, pp. 2379-2394, 1987.
[3] U. Grenander, M.I. Miller, and A. Srivastava, "Hilbert-Schmidt Lower Bounds for Estimators on Matrix Lie Groups for ATR," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 790-802, Aug. 1998.
[4] U. Grenander, M.I. Miller, and P. Tyagi, "Transported Generator Clutter Models," Monograph of the Center for Imaging Sciences, Johns Hopkins Univ., 1999.
[5] U. Grenander, A. Srivastava, and M.I. Miller, "Asymptotic Performance Analysis of Bayesian Object Recognition," IEEE Trans. Information Theory, vol. 46, no. 4, pp. 1658-1666, 2000.
[6] J. Huang and D. Mumford, "Statistics of Natural Images and Models," Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 541-547, 1999.
[7] A.B. Lee, D. Mumford, and J. Huang, "Occlusion Models for Natural Images: A Statistical Study of Scale-Invariant Dead Leaves Model," Int'l J. Computer Vision, 2000.
[8] D. Mumford, "Empirical Investigations into the Statistics of Clutter and the Mathematical Models it Leads to," Lecture for the Review of ARO Metric Pattern Theory Collaborative, 2000.
[9] E.P. Simoncelli, "Higher-Order Statistical Models for Visual Images," Proc. IEEE Signal Processing Workshop on Higher Order Statistics, pp. 54-57, June 1999.
[10] A. Srivastava, M.I. Miller, and U. Grenander, "Bayesian Automated Target Recognition," Handbook of Image and Video Processing, pp. 869-881, Academic Press, 2000.
[11] G. Winkler, Image Analysis, Random Fields, and Dynamic Monte Carlo Methods. Springer, 1995.
[12] S.C. Zhu and D. Mumford, "Prior Learning and Gibbs Reaction-Diffusion," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 11, Nov. 1997.
