The Iterated Minimum Distance Estimator and the Quasi-Maximum Likelihood Estimator
Author: P. C. B. Phillips
Source: Econometrica, Vol. 44, No. 3 (May, 1976), pp. 449-460
Published by: The Econometric Society
Stable URL: http://www.jstor.org/stable/1913973
Accessed: 31-03-2015



THE ITERATED MINIMUM DISTANCE ESTIMATOR AND THE QUASI-MAXIMUM LIKELIHOOD ESTIMATOR

BY P. C. B. PHILLIPS¹

A multiple equation nonlinear regression model with serially independent disturbances is considered. The estimation of the parameters in this model by maximum likelihood and minimum distance methods is discussed, and our main subject is the relationship between these procedures. We establish that if the number of observations in a sample is sufficiently large, the iterated minimum distance procedure converges almost surely and the limit of this sequence of iterations is the quasi-maximum likelihood estimator.

1. INTRODUCTION

IN THIS PAPER we consider the model

(1)    $y_t = g_t(\alpha_0) + u_t$    $(t = 1, 2, \ldots)$

where $y_t$ is an $m \times 1$ vector of observable random functions of discrete time (t) and $g_t$ is a vector of known functions which depend on a $p \times 1$ vector of unknown parameters whose true value is denoted by $\alpha_0$. In general, $g_t$ is a function of a number of exogenous variables as well as $\alpha_0$, so that the function is time dependent. The last component of the model is the vector $u_t$ of additive stochastic disturbances. The nonlinear regression model (1) has recently been the subject of discussion by various authors [2, 3, 4, 5, 6, and 7]. The purpose of these investigations has, in the main, been to suggest procedures for estimating the unknown parameters in (1) and to derive the asymptotic properties of these estimators under varying assumptions about the stochastic properties of $u_t$. The aim of the present paper is to consider, in particular, the iterated minimum distance estimator (MDE) proposed by Malinvaud [5] and to discuss the relationship between this estimator and the quasi-maximum likelihood (QML) estimator when the disturbances in (1) are serially independent. We start with the following assumption:

ASSUMPTION 1: (i) The parameter vector $\alpha_0$ lies in a compact set $\Theta$ in p-dimensional Euclidean space $R^p$. (ii) The disturbance vectors $\{u_t : t = 1, 2, \ldots\}$ are stochastically independent and identically distributed with zero mean and positive definite covariance matrix $\Omega$. (iii) The elements of $g_t$ are continuous functions on $\Theta$.

Any vector $\hat\alpha_T(S)$ in $\Theta$ which minimizes the quadratic form

    $R_T(\alpha) = T^{-1}\sum_{t=1}^{T}(y_t - g_t(\alpha))'\,S\,(y_t - g_t(\alpha))$,

given the observations $\{y_t : t = 1, \ldots, T\}$ and some positive definite matrix S, is called a MDE of $\alpha_0$. The fact that $\hat\alpha_T(S)$ exists under Assumption 1 as a measurable

¹ I wish to thank a referee for his helpful comments and suggestions on earlier drafts of this paper.
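As a concrete illustration of the definition, the following numerical sketch (not from the paper: the exponential model, sample size, noise scale, and grid are all illustrative assumptions) computes an MDE by direct search over a discretized compact parameter set, mirroring the existence argument under Assumption 1(i):

```python
import numpy as np

# Toy instance of model (1) with m = 1, p = 1: y_t = g_t(alpha0) + u_t,
# where g_t(alpha) = exp(alpha * x_t) for an exogenous sequence x_t.
rng = np.random.default_rng(0)
T = 400
x = rng.uniform(0.0, 1.0, size=T)
alpha0 = 0.5
y = np.exp(alpha0 * x) + 0.1 * rng.standard_normal(T)

def R_T(alpha, S=1.0):
    # R_T(alpha) = T^{-1} sum_t (y_t - g_t(alpha))' S (y_t - g_t(alpha));
    # S is a scalar here since m = 1.
    resid = y - np.exp(alpha * x)
    return S * np.mean(resid ** 2)

# Assumption 1(i) places alpha0 in a compact set; a direct search over a
# grid on that set always returns a minimizer.
grid = np.linspace(0.0, 1.0, 2001)
alpha_hat = grid[int(np.argmin([R_T(a) for a in grid]))]
print(alpha_hat)
```

With this sample size the grid minimizer lands close to the true value, as Theorem 1 below leads one to expect.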


function of $y_1, \ldots, y_T$ follows directly from Lemma 2 of Jennrich [3]. On the other hand, the QML estimator $\tilde\alpha_T$ of $\alpha_0$ is obtained by maximizing with respect to $\alpha$ in $\Theta$ what would be the likelihood function if the $u_t$ were normally distributed. Concentrating the likelihood function with respect to $\Omega$, we find that $\tilde\alpha_T$ minimizes

    $\log\det\Bigl\{T^{-1}\sum_{t=1}^{T}(y_t - g_t(\alpha))(y_t - g_t(\alpha))'\Bigr\}$.
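The concentration step is the standard one for the Gaussian likelihood; in outline (a sketch in the paper's notation):

```latex
% Gaussian log likelihood for model (1):
\log L(\alpha,\Omega) = -\frac{Tm}{2}\log 2\pi - \frac{T}{2}\log\det\Omega
    - \frac{1}{2}\sum_{t=1}^{T}(y_t-g_t(\alpha))'\Omega^{-1}(y_t-g_t(\alpha)).
% For fixed alpha, the maximizing Omega is the residual moment matrix:
\hat{\Omega}(\alpha) = T^{-1}\sum_{t=1}^{T}(y_t-g_t(\alpha))(y_t-g_t(\alpha))'.
% Substituting back, the quadratic term reduces to the constant Tm, leaving
\log L(\alpha,\hat{\Omega}(\alpha)) = -\frac{Tm}{2}(\log 2\pi + 1)
    - \frac{T}{2}\log\det\Bigl\{T^{-1}\sum_{t=1}^{T}(y_t-g_t(\alpha))(y_t-g_t(\alpha))'\Bigr\},
```

so maximizing the quasi-likelihood over $\alpha$ is equivalent to minimizing the log determinant criterion above.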

It has been suggested [5 and 7] that if we iterate the MDE $\hat\alpha_T(S)$ by replacing S at each iteration with the inverse of the moment matrix of residuals $\{u_t^* = y_t - g_t(\hat\alpha_T(S)) : t = 1, \ldots, T\}$ from the previous iteration, we will arrive eventually at the QML estimator $\tilde\alpha_T$. This conjecture raises two questions: (i) whether the iteration converges; (ii) whether the outcome of the iteration (given that it does converge) depends on the choice of S used at the start of the iteration. We will study both questions and show that, at least for large enough T, the iteration does converge and the limit point, which is independent of S, is $\tilde\alpha_T$.

2. STRONG CONVERGENCE OF $\hat\alpha_T(S)$ AND $\tilde\alpha_T$

Our notation is based on Jennrich [3]. Thus, if $\{x_t, y_t : t = 1, \ldots, T\}$ are sequences of real vectors, then we represent $T^{-1}\sum_{t=1}^{T} x_t y_t'$ by $(x, y)_T$ and, if the elements of $(x, y)_T$ converge to finite limits as $T \to \infty$, the limit matrix is denoted by $(x, y)$ and is called the matrix tail product. Moreover, if $\{f_t(\alpha) : t = 1, \ldots, T\}$ is a sequence of real valued vector functions on $\Theta$ and $(f(\alpha), f(\beta))_T \to (f(\alpha), f(\beta))$ uniformly for all $\alpha$ and $\beta$ in $\Theta$, then we say that the matrix tail product of f exists in $\Theta$. The tail product in this case is a matrix function on $\Theta \times \Theta$ and its elements are continuous if the functions $f_t$ are continuous for all t. We now add the following assumption:

ASSUMPTION 2: (i) The matrix tail product $(g(\alpha), g(\beta))$ exists for all $(\alpha, \beta)$ in $\Theta \times \Theta$. (ii) The matrix tail product $(g(\alpha) - g(\alpha_0), g(\alpha) - g(\alpha_0))$ is positive definite for all $\alpha \neq \alpha_0$ in $\Theta$.

Part (ii) of Assumption 2 requires that the vector $\alpha_0$ be identifiable in $\{g_t(\alpha_0) : t = 1, 2, \ldots\}$, and this implicitly imposes conditions on the components of the model which make up the systematic component $g_t(\alpha_0)$. For instance, in the constrained linear model $y_t = A(\alpha_0)z_t + u_t$, where the elements of the matrix A are subject to a number of restrictions and can therefore be regarded as functions of a smaller set of parameters comprising the vector $\alpha$, and $z_t$ is a vector of exogenous variables, Assumption 2(ii) implies that if the matrix tail product of z is nonsingular then $\alpha_0$ is identifiable in $A(\alpha_0)$. Similar assumptions have been made by other writers [3, 4, and 6]. For $\alpha$ in $\Theta$ and some positive definite matrix $S_T$ we define

    $Q_T(\alpha) = \operatorname{tr}\{S_T\,(g(\alpha) - g(\alpha_0), g(\alpha) - g(\alpha_0))_T\}$.


The asymptotic behavior of $Q_T(\alpha)$ is given by the following result whose proof is straightforward and is omitted.

LEMMA 1: If Assumption 2 is satisfied and $S_T$ converges almost surely to a positive definite matrix S, then $Q_T(\alpha)$ converges almost surely to

    $Q(\alpha) = \operatorname{tr}\{S\,(g(\alpha) - g(\alpha_0), g(\alpha) - g(\alpha_0))\}$

and Q is continuous on $\Theta$. Moreover, $Q(\alpha)$ is positive for all $\alpha$ in $\Theta$ not equal to $\alpha_0$.

We now have the following theorem:

THEOREM 1: If Assumptions 1 and 2 are satisfied and if S is an arbitrary positive definite matrix, then the MDE $\hat\alpha_T(S)$ converges almost surely to $\alpha_0$ and $(y - g(\hat\alpha_T(S)), y - g(\hat\alpha_T(S)))_T$ converges almost surely to $\Omega$.

This theorem is a simple extension of Jennrich's Theorem 6 [3] and can be established in essentially the same way. Using the strong law of large numbers² we know that $R_T(\alpha) \to Q(\alpha) + \operatorname{tr}(S\Omega)$ almost surely and uniformly for $\alpha$ in $\Theta$. Hence, if $\alpha^*$ is a limit point³ of the set of points in the sequence $\{\hat\alpha_T(S)\}$, it follows from Lemma 1 and the inequality $R_T(\hat\alpha_T(S)) \le R_T(\alpha_0)$ that $Q(\alpha^*) + \operatorname{tr}(S\Omega) \le \operatorname{tr}(S\Omega)$. Since Q attains its minimum only at $\alpha_0$, we have $\alpha^* = \alpha_0$ and $\hat\alpha_T(S) \to \alpha_0$ almost surely. The fact that $(y - g(\hat\alpha_T(S)), y - g(\hat\alpha_T(S)))_T \to \Omega$ almost surely now follows because $Q(\alpha_0) = 0$.

THEOREM 2: If Assumptions 1 and 2 are satisfied then the QML estimator $\tilde\alpha_T \to \alpha_0$ almost surely.

PROOF: Writing

(2)    $D_T(\alpha) = \log\det\,(y - g(\alpha), y - g(\alpha))_T$,

we know that $D_T(\tilde\alpha_T) = \inf_{\alpha \in \Theta} D_T(\alpha)$, and the existence of $\tilde\alpha_T$ is assured by the continuity of the elements of g and the compactness of $\Theta$. By the strong law of large numbers, $(y - g(\alpha), y - g(\alpha))_T$ converges almost surely and uniformly on $\Theta$ to $\Omega + (g(\alpha) - g(\alpha_0), g(\alpha) - g(\alpha_0))$, so that $D_T(\alpha)$ converges almost surely and uniformly to $\log\det\{\Omega + (g(\alpha) - g(\alpha_0), g(\alpha) - g(\alpha_0))\}$, which, by Assumption 2(ii), attains its minimum on $\Theta$ only at $\alpha_0$. The argument used in the proof of Theorem 1 then gives $\tilde\alpha_T \to \alpha_0$ almost surely.

It follows that the estimators in the sequence defined by

(3)    $\alpha^{(n)} = \hat\alpha_T(\{S^{(n-1)}\}^{-1})$

and

(4)    $S^{(n)} = (y - g(\alpha^{(n)}), y - g(\alpha^{(n)}))_T$

are all strongly consistent, given $S^{(0)} = S$, an arbitrary positive definite matrix. The fact that the estimators $\alpha^{(n)}$ converge almost surely to the same value $\alpha_0$ as $T \to \infty$ suggests that the iterative procedure involved in (3) and (4) should be numerically stable for large enough T. We consider this question in the next section.

3. NUMERICAL STABILITY OF THE ITERATED MDE

First of all, we introduce a number of additional assumptions that will be needed in this section.

ASSUMPTION 3: The derivatives of $g_t(\alpha)$ up to the second order exist and are continuous on $\Theta$. The matrix tail products involving $g_{it}(\alpha)$, $\partial g_{it}(\alpha)/\partial\alpha$, and $\partial^2 g_{it}(\alpha)/\partial\alpha\,\partial\alpha'$ exist for all $i = 1, \ldots, m$.

We denote by $W_t(\alpha)$ the matrix whose (i, j)th element is $w_{ijt}(\alpha) = \partial g_{it}(\alpha)/\partial\alpha_j$ and define for every positive definite matrix S of order m the matrix

    $M_T(S, \alpha) = T^{-1}\sum_{t=1}^{T} W_t(\alpha)'\,S\,W_t(\alpha)$.

ASSUMPTION 4: For any positive definite matrix S of order m, $M_T(S, \alpha_0)$ has a positive definite limit as $T \to \infty$.

Note that Assumption 4 implies that $M_T(S, \alpha_0)$ is positive definite for large enough T.

ASSUMPTION 5: The vector $\alpha_0$ lies in the interior of $\Theta$.

In view of Assumption 3, each estimator $\alpha^{(n)}$ in the sequence defined by (3) and (4) satisfies the necessary conditions

(5)    $T^{-1}\sum_{t=1}^{T} W_t(\alpha^{(n)})'\{S^{(n-1)}\}^{-1}(y_t - g_t(\alpha^{(n)})) = 0$

where $S^{(n-1)} = (y - g(\alpha^{(n-1)}), y - g(\alpha^{(n-1)}))_T$. For (5) to be well defined we need $S^{(n-1)}$ to be nonsingular. Later in the paper (in the proof of Theorem 3) we will show that this is so for large enough T. Since $S^{(n-1)}$ depends on the previous


member of the sequence $\{\alpha^{(n)}\}$, we can write (5) as the implicit function iteration

(6)    $H_T(\alpha^{(n)}, \alpha^{(n-1)}) = 0$    $(n = 2, 3, \ldots)$.

To ensure that at each stage $\alpha^{(n)}$ is a minimum distance estimator it is usual to require that $\alpha^{(n)}$ minimizes

    $q^{(n)}(\alpha) = \operatorname{tr}[\{S^{(n-1)}\}^{-1}(y - g(\alpha), y - g(\alpha))_T]$

in $\Theta$. This brings us back to the iteration defined by (3) and (4). An alternative, which we adopt in the following, is to combine (6) with the requirement that both $\alpha^{(n)}$ and $\alpha^{(n-1)}$ are restricted to an open sphere in $\Theta$ that contains $\alpha_0$, has center $\tilde\alpha_T$ where $H_T(\tilde\alpha_T, \tilde\alpha_T) = 0$, and fixed radius independent of T. Such a sphere is constructed below and it is shown that, for large enough T, $\alpha^{(n)}$ is then uniquely defined by (6) in this region and is, moreover, the MDE for which $q^{(n)}(\alpha)$ attains its global minimum in $\Theta$. The initial condition on $\alpha^{(1)}$ can be met by requiring $\alpha^{(1)}$ to be a MDE (with, for example, $S^{(0)} = I$) so that, for large enough T, $\alpha^{(1)}$ lies in a suitable neighborhood of $\alpha_0$. For large T, there will, therefore, be no real difference between this formulation of the iteration and that based on (3) and (4).

THEOREM 3: If Assumptions 1 through 5 are satisfied then there exists a neighborhood N of $\alpha_0$ in $\Theta$ such that, for almost every $y = \{y_t : t = 1, 2, \ldots\}$, there is an integer T(y) for which the iteration (6) will converge from any starting value $\alpha^{(1)}$ in N for all T > T(y). The point of attraction of this iteration is $\tilde\alpha_T$, the QML estimator of $\alpha_0$.

STRUCTURE OF THE PROOF: Since the proof of the theorem is lengthy it may be useful to comment on the main steps in the development of the argument: (i) We first note that the QML estimator $\tilde\alpha_T$ is one point of attraction of the iteration (6). Later on in the proof we verify that this point of attraction is unique in a fixed neighborhood of $\alpha_0$ for large enough T. (ii) We then investigate the properties of the derivatives of the implicit function $H_T$ and use these to establish that (a) for starting values of the iteration in a suitable fixed neighborhood of $\alpha_0$ (which we construct in the proof) $\alpha^{(n)}$ can be expressed uniquely in the explicit form $\alpha^{(n)} = f_T(\alpha^{(n-1)})$, and that (b) this explicit iteration is numerically stable and converges to the same point of attraction for all starting values in this neighborhood. The verification of (a), which occupies the main body of the proof, requires more than local uniqueness in a neighborhood of $\tilde\alpha_T$ where $H_T(\tilde\alpha_T, \tilde\alpha_T) = 0$. Instead, we need to establish global uniqueness for all points in a neighborhood of $\alpha_0$ which remains fixed for all values of T considered. This is done in the theorem by appealing to one of the Gale-Nikaido univalence theorems [1].⁴
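The contraction mechanism behind step (ii)(b) can be summarized schematically (a sketch in the paper's notation, writing $\tilde\alpha_T$ for the point of attraction; the bound 3/4 is the one produced by the choice of $\varepsilon$ later in the proof):

```latex
% Implicit differentiation of H_T(alpha^(n), alpha^(n-1)) = 0 gives the
% derivative of the explicit iteration map f_T:
f_T'(\gamma) = -\Bigl\{\frac{\partial H_T}{\partial\alpha^{(n)}}\Bigr\}^{-1}
               \frac{\partial H_T}{\partial\alpha^{(n-1)}},
\qquad \sup\|f_T'\| < \tfrac{3}{4} < 1,
% so, by the mean value theorem, the iteration contracts toward the fixed point:
\|\alpha^{(n)} - \tilde{\alpha}_T\| \le \tfrac{3}{4}\,
\|\alpha^{(n-1)} - \tilde{\alpha}_T\| \longrightarrow 0 .
```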

PROOF: By the statement "almost every y" we mean almost every y with respect to the probability measure induced by the stochastic properties of the $u_t$ on the space of all realizations of the $y_t$ process.

⁴ Use of the implicit function theorem alone ensures only a unique solution in a neighborhood of $\tilde\alpha_T$. This neighborhood cannot be guaranteed to include a fixed neighborhood of $\alpha_0$ for all large T, as is required by the theorem.


To prove the numerical stability of (6) we must first verify that there is a point of attraction $\tilde\alpha_T$ in $\Theta$ for which

(7)    $H_T(\tilde\alpha_T, \tilde\alpha_T) = 0$.

The equation system (7) provides the necessary conditions for $D_T(\alpha)$ to have a stationary point at $\tilde\alpha_T$ in the interior of $\Theta$. According to Theorem 2 and Assumption 5, such a point is given by the QML estimator when T is sufficiently large. More precisely, for almost every y, there is a $T_1(y)$ such that $\tilde\alpha_T$ lies in the interior of $\Theta$ for all $T > T_1(y)$. Hence, the existence of at least one point of attraction is assured, and in the argument that follows we treat $\tilde\alpha_T$ as this point. We will show later in the proof that this point of attraction is unique in a fixed neighborhood of $\alpha_0$ for large enough T. But, first, we turn to investigate the derivatives of $H_T$. From (5) and (6) we obtain⁵

(8)    $\partial H_T(\alpha^{(n)}, \alpha^{(n-1)})/\partial\alpha^{(n)} = A_T(\alpha^{(n)}, \alpha^{(n-1)}) + U_T(\alpha^{(n)}, \alpha^{(n-1)})$

where

    $A_T(\alpha^{(n)}, \alpha^{(n-1)}) = -T^{-1}\sum_{t=1}^{T} W_t(\alpha^{(n)})'\{S^{(n-1)}\}^{-1} W_t(\alpha^{(n)})$

and $U_T$ is the matrix whose (i, j)th element is

    $u_{ijT}(\alpha^{(n)}, \alpha^{(n-1)}) = T^{-1}\sum_{t=1}^{T} \{\partial^2 g_t(\alpha^{(n)})/\partial\alpha_i\,\partial\alpha_j\}'\{S^{(n-1)}\}^{-1}(y_t - g_t(\alpha^{(n)}))$.

We let $\beta$ and $\gamma$ be two vectors in $\Theta$ and consider $\partial H_T(\beta, \gamma)/\partial\alpha^{(n)}$. The (i, j)th element of this matrix is

(9)    $-T^{-1}\sum_t \sum_{k,l} w_{ikt}(\beta)\,s_T^{kl}(\gamma)\,w_{jlt}(\beta) + T^{-1}\sum_t \{\partial^2 g_t(\beta)/\partial\alpha_i\,\partial\alpha_j\}'\,S_T(\gamma)^{-1}(y_t - g_t(\beta))$

where $S_T(\gamma) = (y - g(\gamma), y - g(\gamma))_T$ and $S_T(\gamma)^{-1} = (s_T^{ij}(\gamma))$. Writing $y - g(\gamma) = y - g(\alpha_0) + g(\alpha_0) - g(\gamma)$, it is clear from the strong law of large numbers and Jennrich's Theorem 4 [3] that $S_T(\gamma)$ converges almost surely and uniformly in $\gamma$ to

    $S(\gamma) = \Omega + (g(\alpha_0) - g(\gamma), g(\alpha_0) - g(\gamma))$

as $T \to \infty$. $S(\gamma)$ is positive definite for all $\gamma$ and it follows from the uniform convergence of $S_T(\gamma)$ that $S_T(\gamma)$ is nonsingular for T sufficiently large. Hence, (9) is well defined for large enough T. By Assumption 3, (9) converges almost surely as $T \to \infty$ to

(10)    $-\operatorname{tr}\{S(\gamma)^{-1}(\partial g(\beta)/\partial\alpha_i, \partial g(\beta)/\partial\alpha_j)\} + \operatorname{tr}\{S(\gamma)^{-1}(g(\alpha_0) - g(\beta), \partial^2 g(\beta)/\partial\alpha_i\,\partial\alpha_j)\}$

⁵ In much of what follows we omit the subscript T on $\alpha^{(n)}$ to simplify the notation. This is unlikely to cause confusion.


uniformly for $(\beta, \gamma)$ in $\Theta \times \Theta$. Hence, $\partial H_T(\beta, \gamma)/\partial\alpha^{(n)}$ converges uniformly in $(\beta, \gamma)$ as $T \to \infty$ to $H^1(\beta, \gamma) = A(\beta, \gamma) + U(\beta, \gamma)$, where the (i, j)th elements of A and U are the first and second terms, respectively, of (10). The elements of $U(\alpha_0, \alpha_0)$ are clearly zero and, since $S(\alpha_0) = \Omega$, it follows from Assumption 4 that $H^1$ is nonsingular at $(\beta, \gamma) = (\alpha_0, \alpha_0)$. But the elements of $H^1$ are continuous functions of $\beta$ and $\gamma$, so there exists a closed sphere $S^*$ with center $\alpha_0$ in $\Theta$ such that $H^1$ is nonsingular for all $(\beta, \gamma)$ in $S^* \times S^*$. Moreover, $\partial H_T(\beta, \gamma)/\partial\alpha^{(n)} \to H^1(\beta, \gamma)$ uniformly in $(\beta, \gamma)$ and, therefore, there exists an integer $T_2(y)$ such that $\partial H_T(\beta, \gamma)/\partial\alpha^{(n)}$ is nonsingular in $S^* \times S^*$ for all $T > T_2(y)$. We select the radius of $S^*$ in such a way that the boundary of $S^*$ lies in the interior of $\Theta$. Then, from the strong convergence of $\tilde\alpha_T$ to $\alpha_0$, we know that there exists an integer $T_3(y) > T_1(y)$ for which $\tilde\alpha_T$ is an interior point of $S^*$ when $T > T_3(y)$. Hence, $\partial H_T(\tilde\alpha_T, \tilde\alpha_T)/\partial\alpha^{(n)}$ is nonsingular for all $T \ge T_4(y) = \max\{T_2(y), T_3(y)\}$.

We now consider $\partial H_T(\beta, \gamma)/\partial\alpha^{(n-1)}$. The ith row (which, for convenience, we write as a column vector) of this matrix is

(11)    $-T^{-1}\sum_t W_t(\beta)'\,S_T(\gamma)^{-1}\{\partial S_T(\gamma)/\partial\alpha_i\}\,S_T(\gamma)^{-1}(y_t - g_t(\beta))$

for T large enough to ensure that $S_T(\gamma)$ is nonsingular. Since

    $\partial S_T(\gamma)/\partial\alpha_i = -(\partial g(\gamma)/\partial\alpha_i, y - g(\gamma))_T - (y - g(\gamma), \partial g(\gamma)/\partial\alpha_i)_T$,

it follows from Assumption 3 that $\partial S_T(\gamma)/\partial\alpha_i$ converges almost surely and uniformly in $\gamma$ to

(12)    $S_i(\gamma) = -(\partial g(\gamma)/\partial\alpha_i, y - g(\gamma)) - (y - g(\gamma), \partial g(\gamma)/\partial\alpha_i)$.

Writing $y_t - g_t(\beta) = y_t - g_t(\alpha_0) + g_t(\alpha_0) - g_t(\beta)$ in (11), we see that the (i, j)th element of $\partial H_T(\beta, \gamma)/\partial\alpha^{(n-1)}$ converges almost surely and uniformly in $(\beta, \gamma)$ to

(13)    $-\operatorname{tr}\{S(\gamma)^{-1} S_i(\gamma) S(\gamma)^{-1}(y - g(\beta), \partial g(\beta)/\partial\alpha_j)\}$.

The square matrix of order p whose (i, j)th element is given by (13) we denote by $H^2(\beta, \gamma)$. It then follows from (12) and (13) that the matrix $H^2$ is zero when $\beta = \gamma = \alpha_0$. From the continuity of (13) and the uniform convergence of $\partial H_T(\beta, \gamma)/\partial\alpha^{(n-1)}$, we can find for any $\varepsilon > 0$ an integer $T_5(y)$ and an open sphere $R^*$ with center $\alpha_0$ and radius r such that $\|\partial H_T(\beta, \gamma)/\partial\alpha^{(n-1)}\| < \varepsilon$ for all $T > T_5(y)$ and for all $\beta$ and $\gamma$ in $R^*$. The double vertical bars are used to denote the Euclidean norm. We select

    $\varepsilon < \tfrac{3}{4}\Bigl[\sup_{T > T_2(y)}\ \sup_{\beta, \gamma \in S^*} \|\{\partial H_T(\beta, \gamma)/\partial\alpha^{(n)}\}^{-1}\|\Bigr]^{-1}$
This content downloaded from 130.132.173.53 on Tue, 31 Mar 2015 03:15:43 UTC All use subject to JSTOR Terms and Conditions

456

P. C. B. PHILLIPS

so that, taking R* to be within S*, we have {aH(13 11

(14)

)/c()}

- 1)} )1,7)/c(n

- 1 {HT(3

< 1{aHT(f, 7)/a0,n}-|1

I

aHT(13,7)/a0,(n-1)
T2(y) for which the eigenvalues of AHT(/X, y)/a/3 are positive whenever T > T8(y)and (/B,y)E S* x S*.Thus, AHT(Xy)/a/3and, therefore,aFT(/3,y)/a(f, y) are P matrices on S* x S* when T > T8(y). By the Gale-Nikaido univalence theorem [1, Theorem 4], the sequence of functions {FT:T > T8(y)} are 1:1 mappings of the corresponding sequence of closed rectangles {ET x ET: T > T8(y)} onto {FT(ET x ET) T > T8(Y)} We can write the sequence of inverse functions as

{#

=

GT(Z,W),y = w; (z, w) E FT(ET x

ET)},

and the elements of GT are continuously differentiable by the inverse function theorem of advanced calculus. Since ST c ET c R* when T ) T7(y),we have (19)

{/I = GT(Z,W), y

=

w; (z, w) E FT(ST x ST)}

for T ) T*(y) = max {T7(y), T8(y)}.ST contains -T and o5Tsatisfies HT(OT, oT) = 0, so that (0, w) E FT(ST X ST) for all w E ST. Setting z = 0 in (19) we have, for all. T ) T*(y), AT = GT(O,AT) and HT(GT(O, Y), y) = 0 for all y E ST. Thus, conditions (ii) and (iii) given earlier in the proof are satisfied. The above argument is valid for almost all y and the theorem is proved with the spherical neighborhood N of ocoand the integer T(y) = max {T6(y),T*(y)}. Q.E.D. If the starting point of the iteration (6) is o(l)T= OT(S)for some positive definite matrix S, then Theorems 1 and 3 have the following corollary: () for some positive COROLLARY:If Assumptions 1-5 are satisfied and a( ) definite matrix S, then for almost all y there is an integer T(y, S) such that the iteration (6) converges to aT whenever T ) T(y, S).

4. CONCLUSION

Theorem 3 tells us that, for any starting point $\alpha^{(1)}$ in a sufficiently small neighborhood of the true value $\alpha_0$, the iterated MDE converges to the QML estimator with probability approaching one as $T \to \infty$. Given that $\alpha^{(1)}$ in the implicit function iteration (6) is itself a MDE, the condition that $\alpha^{(1)}$ be sufficiently close to $\alpha_0$ can be translated into a further restriction on the sample size, as indicated by the Corollary of Theorem 3. Intuition suggests that a poor choice of the arbitrary matrix S in $\alpha^{(1)} = \hat\alpha_T(S)$ may necessitate a large sample size before convergence to the QML estimator is assured. Nevertheless, given two different starting points $\hat\alpha_T(S_1)$ and $\hat\alpha_T(S_2)$ to the iteration, where $S_1 \neq S_2$ are two arbitrary positive definite matrices, we can be sure that the iterations converge on the same point ($\tilde\alpha_T$) provided $T \ge T(y) = \max\{T(y, S_1), T(y, S_2)\}$. In this sense, the iterated MDE is independent of the choice of the positive definite matrix S used at the start of the iteration.
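This S-independence is easy to see numerically. The following sketch (not from the paper: the two-equation model, sample size, grid, and starting weight matrices are all illustrative assumptions) runs the iteration (3)-(4) from two different positive definite matrices S and compares the limits with a direct grid minimization of the log determinant criterion:

```python
import numpy as np

# Two-equation toy model (m = 2, p = 1): y_t = g_t(alpha0) + u_t with
# g_t(alpha) = (alpha * x_t, exp(alpha * x_t))' and correlated errors.
rng = np.random.default_rng(1)
T = 500
x = rng.uniform(0.0, 1.0, size=T)
alpha0 = 0.7
Omega = np.array([[1.0, 0.6], [0.6, 1.0]])          # illustrative error covariance
u = rng.multivariate_normal(np.zeros(2), 0.09 * Omega, size=T)

def g(alpha):
    return np.column_stack([alpha * x, np.exp(alpha * x)])

y = g(alpha0) + u
grid = np.linspace(0.0, 1.5, 1501)                   # discretized compact parameter set

def resid(a):
    return y - g(a)

def mde(S):
    # alpha_hat_T(S): minimizer of T^{-1} sum_t (y_t - g_t(a))' S (y_t - g_t(a))
    vals = [np.einsum('ti,ij,tj->', resid(a), S, resid(a)) for a in grid]
    return grid[int(np.argmin(vals))]

def iterated_mde(S, n_iter=10):
    # Iteration (3)-(4): re-estimate, then reweight by the inverse residual moment matrix
    for _ in range(n_iter):
        a = mde(S)
        r = resid(a)
        S = np.linalg.inv(r.T @ r / T)               # S^(n) = [(y - g, y - g)_T]^{-1}
    return a

def qml():
    # Grid minimizer of log det (y - g(a), y - g(a))_T
    vals = [np.linalg.slogdet(resid(a).T @ resid(a) / T)[1] for a in grid]
    return grid[int(np.argmin(vals))]

a1 = iterated_mde(np.eye(2))                             # start from S = I
a2 = iterated_mde(np.array([[4.0, 0.0], [0.0, 0.25]]))   # a deliberately poor S
aq = qml()
print(a1, a2, aq)
```

On a finite grid the three minimizers agree up to grid resolution: in the continuum, a fixed point of the iteration satisfies the first-order condition of the log determinant criterion exactly, which is the mechanism behind Theorem 3.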


Another implication of the proof of Theorem 3 is that, if we start with an MDE $\alpha^{(1)} = \hat\alpha_T(S)$, the unique solution $\alpha^{(2)}$ of (6) is indeed the MDE which minimizes $q^{(2)}(\alpha)$, provided T is sufficiently large. To show this we consider $\alpha^{(2)}$, which satisfies $H_T(\alpha^{(2)}, \alpha^{(1)}) = 0$ and thus the necessary conditions for $q^{(2)}(\alpha)$ to have a local minimum at $\alpha^{(2)}$. But $S^{(1)} \to \Omega$ almost surely and

    $\partial^2 q^{(2)}(\alpha)/\partial\alpha\,\partial\alpha' \to \partial^2 q(\alpha)/\partial\alpha\,\partial\alpha'$

almost surely and uniformly in $\alpha$, where

    $q(\alpha) = \operatorname{tr}\{\Omega^{-1}(g(\alpha_0) - g(\alpha), g(\alpha_0) - g(\alpha))\} + m$.

Moreover, from the proof of Theorem 3 we know that

    $\partial^2 q(\alpha)/\partial\alpha\,\partial\alpha' = -2\{A(\alpha, \alpha_0) + U(\alpha, \alpha_0)\}$

and this matrix is positive definite for all $\alpha \in S^*$. Hence, $\partial^2 q^{(2)}(\alpha)/\partial\alpha\,\partial\alpha'$ is positive definite for $\alpha \in S^*$ and large T. In particular, it is positive definite for $\alpha = \alpha^{(2)}$, which lies in $S_T \subset S^*$ whenever $\alpha^{(1)}$ lies in $S_T$. Thus, $\alpha^{(2)}$ is a local minimum of $q^{(2)}(\alpha)$. Since $\alpha^{(2)}$ is the only turning point of $q^{(2)}(\alpha)$ in $S_T$, and since the global minimum of $q^{(2)}(\alpha)$ converges almost surely to $\alpha_0$, it follows that $\alpha^{(2)}$ is the global minimum of $q^{(2)}(\alpha)$ for large enough T. The proof for n > 2 follows in the same way.

Before concluding this paper, two comments are in order. First, our discussion has been rather abstract, so we have not yet mentioned that $\alpha^{(1)}$ itself must, in general, be found by an iterative procedure, and a starting value for this primary iteration will be needed. However, provided $\alpha^{(1)} = \hat\alpha_T(S)$ does minimize $\operatorname{tr}\{S^{-1}(y - g(\alpha), y - g(\alpha))_T\}$, this fact does not affect our results. Finally, it may be worth emphasizing that the equivalence of the iterated MDE and the QML estimator for sufficiently large T has been established under conditions which, in themselves, ensure that the QML estimator is strongly consistent. This fact is of some importance (cf. [5, pp. 338-340]). For, although the estimator $\tilde\alpha_T$ can be formally regarded as the MDE $\hat\alpha_T(\tilde S_T^{-1})$, where $\tilde S_T = (y - g(\tilde\alpha_T), y - g(\tilde\alpha_T))_T$, the fact that $\tilde\alpha_T \to \alpha_0$ almost surely does not follow from Theorem 1 because, in the absence of Theorem 2, we cannot assert that $\tilde S_T$ converges almost surely as $T \to \infty$ to a positive definite matrix.

University of Essex

Manuscript received August, 1974.

REFERENCES

[1] GALE, D., AND H. NIKAIDO: "The Jacobian Matrix and Global Univalence of Mappings," Mathematische Annalen, 159 (1965), 81-93. (Reprinted in Readings in Mathematical Economics, Vol. I, ed. by P. Newman. Baltimore: Johns Hopkins Press, 1968.)
[2] HANNAN, E. J.: "Non-Linear Time Series Regression," Journal of Applied Probability, 8 (1971), 767-780.
[3] JENNRICH, R. I.: "Asymptotic Properties of Non-Linear Least Squares Estimators," Annals of Mathematical Statistics, 40 (1969), 633-643.


[4] MALINVAUD, E.: "The Consistency of Nonlinear Regressions," Annals of Mathematical Statistics, 41 (1970), 956-969.
[5] ——: Statistical Methods of Econometrics, second edition. Amsterdam: North-Holland, 1970.
[6] ROBINSON, P. M.: "Non-Linear Regression for Multiple Time-Series," Journal of Applied Probability, 9 (1972), 758-768.
[7] SARGAN, J.: "The Identification and Estimation of Sets of Simultaneous Stochastic Equations," mimeograph. London: London School of Economics, 1972.
