Technical Appendix for "Sequentially Optimal Mechanisms"

Vasiliki Skreta, UCLA

October 2005

Abstract. This document contains a proof of Proposition 2 and a more "formal" proof of the main theorem of the paper "Sequentially Optimal Mechanisms."

1. Omitted Proof

Proof of Proposition 2

Step 1. We start by proving existence of the solution of the seller's problem when $T = 1$. The seller seeks to solve
$$\max_{p \in \Im} \int_0^1 p(v)\, v\, dF(v) - \int_0^1 p(v)\,[1 - F(v)]\, dv,$$
where $\Im = \{p : [0,1] \to [0,1],\ \text{increasing}\}$.

Step 1a. (Sequential Compactness) In order to show sequential compactness of $\Im$ we will refer to the following results.

Theorem (pointwise convergence). A sequence $p_n$ of functions from $X$ to $W$ converges to a function $p$ in the topology of pointwise convergence¹ if and only if, for each $s \in X$ ($= [0,1]$ in our problem),

¹Definition 1 (Topology of pointwise convergence). Given a point $x$ of $[0,1]$ and an open set $U$ of the space $[0,1]$, let
$$S(x, U) = \left\{\, p \mid p \in [0,1]^{[0,1]} \text{ and } p(x) \in U \,\right\}.$$
The sets $S(x, U)$ are a subbasis for a topology on $[0,1]^{[0,1]}$, which is called the topology of pointwise convergence. The typical basis element about a function $p$ consists of all functions $g$ that are close to $p$ at finitely many points.


the sequence $p_n(s)$ of points of $W$ ($= [0,1]$ in our problem) converges to the point $p(s)$. (For a proof see Munkres, "Topology: A First Course," page 281.)

Let $\{p_n\}$ be a sequence of elements of $\Im$. Then, from Helly's Selection Principle (see Kolmogorov and Fomin, p. 372), it follows that there exist $p \in \Im$ and a subsequence of $\{p_n\}$ that converges pointwise to $p$. From the previous Theorem it then follows that this subsequence converges to $p$ in the topology of pointwise convergence. Hence every sequence in $\Im$ has a convergent subsequence, and it follows that $\Im$ is sequentially compact.²

Step 1b. (Continuity) We want to show that the objective function is continuous on $\Im$ in the topology of pointwise convergence. In order to accomplish this we will use Lebesgue's Dominated Convergence Theorem.

Theorem (Lebesgue's Dominated Convergence Theorem). Let $g$ be a measurable function over a measurable set $E$, and suppose that $\{h_n\}$ is a sequence of measurable functions on $E$ such that $|h_n(s)| \le g(s)$ and for almost all $s \in E$ we have $h_n(s) \to h(s)$. Then
$$\int_E h = \lim \int_E h_n.$$
(For a proof see Royden (1962), p. 76.) Take $E = [0,1]$, which is a measurable set, and let
$$g = \sup_{s \in [0,1]} |p(s)s|, \qquad \hat g = \sup_{s \in [0,1]} |p(s)[1 - F(s)]|.$$
Note that $g$ and $\hat g$ are measurable, since they are constant functions, and $g$ (respectively $\hat g$) is an upper bound for every function $h(s) = p(s)s$ (respectively $\hat h(s) = p(s)[1 - F(s)]$).



Definition 2 (Sequential Compactness). A topological space X is said to be sequentially compact if every infinite sequence from X has a convergent subsequence.


Observe that $h$ and $\hat h$ are measurable functions. Take $h_n(s) = p_n(s)s$ and $\hat h_n(s) = p_n(s)(1 - F(s))$ and apply Lebesgue's Dominated Convergence Theorem with $g$ and $\hat g$ defined as above:
$$\lim \int_0^1 p_n(s)s\, dF(s) = \int_0^1 p(s)s\, dF(s)$$
and
$$\lim \int_0^1 p_n(s)[1 - F(s)]\, ds = \int_0^1 p(s)[1 - F(s)]\, ds,$$
hence
$$R(p_n) = \int_0^1 p_n(s)s\, dF(s) - \int_0^1 p_n(s)[1 - F(s)]\, ds$$
is continuous in $p_n$.

Step 1c. We now demonstrate that a bounded and continuous function over a sequentially compact set has a maximum. First note that $R(p)$ is bounded by 1. Let $\bar R = \sup_{p \in \Im} R(p)$ and let $p_n$ be a sequence in $\Im$ such that
$$R(p_n) \ge \bar R - \frac{1}{n}, \quad n \in \mathbb{N}.$$
Since $\Im$ is sequentially compact, every sequence has a convergent subsequence; therefore $\{p_n\}_{n \in \mathbb{N}}$ has a convergent subsequence $\{p_{n(1)}\}_{n(1) \in \mathbb{N}}$ that converges to $\bar p$. Since $R$ is continuous at $\bar p$, we have that $R(\bar p) = \lim_{n(1) \to \infty} R(p_{n(1)}) = \bar R$. Hence the maximum exists.

Step 2. We now proceed to show that the maximizer is of the form
$$p_z(s) = \begin{cases} 1 & \text{if } s \ge z \\ 0 & \text{if } s < z. \end{cases} \tag{1}$$
The objective function is linear in the choice variable, so the maximizer will be an extreme point of the set $\Im$. The set of extreme points of $\Im$ is $K = \cup_{z \in [0,1]}\, p_z$, where $p_z$ is defined in (1). Every increasing, non-negative function $p$ with $p(1) = 1$ can be written as a convex combination of functions as defined in (1):
$$p(v) = \int_0^1 p_z(v)\, dp(z).$$
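As a quick numerical illustration of Steps 1c and 2 (a sketch, not part of the proof), one can evaluate $R$ over the cutoff rules $p_z$ of (1) for an assumed distribution. With $F$ uniform on $[0,1]$, $R(p_z) = z - z^2$, so a grid search should peak at the monopoly price $z = 1/2$:

```python
# Revenue of a cutoff rule p_z under an assumed uniform F on [0, 1]:
# R(p_z) = ∫_z^1 v dF(v) − ∫_z^1 [1 − F(v)] dv = ∫_z^1 (2v − 1) dv = z − z².
N = 2000                       # grid resolution for the Riemann sum
dv = 1.0 / N

def revenue(z):
    total = 0.0
    for i in range(N):
        v = (i + 0.5) * dv     # midpoint rule
        if v >= z:             # p_z(v) = 1 iff v >= z
            total += (v - (1.0 - v)) * dv
    return total

zs = [k / 400 for k in range(401)]
revs = [revenue(z) for z in zs]
z_star = zs[revs.index(max(revs))]
# closed form: argmax of z − z² is z = 1/2, with revenue 1/4
```

The uniform distribution is only an illustrative choice; any $F$ with an increasing virtual value would do, with the peak moving to the corresponding monopoly price.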


Let $p^*$ be a maximizer of the problem defined in (4) in the main text, and let $R^*$ denote the maximum value of the objective function. Then, using the above representation and Fubini's theorem, we have
$$\begin{aligned}
&\int_0^1 p^*(v)v\, dF(v) - \int_0^1 p^*(v)[1 - F(v)]\, dv \\
&= \int_0^1 \left\{\int_0^1 p_z(v)\, dp(z)\right\} v\, dF(v) - \int_0^1 [1 - F(v)] \left\{\int_0^1 p_z(v)\, dp(z)\right\} dv \\
&= \int_0^1 \left[\int_0^1 p_z(v)v\, dF(v)\right] dp(z) - \int_0^1 \left[\int_0^1 [1 - F(v)]p_z(v)\, dv\right] dp(z) \\
&= \int_0^1 \left[\int_0^1 p_z(v)v\, dF(v) - \int_0^1 [1 - F(v)]p_z(v)\, dv\right] dp(z) = R^*.
\end{aligned}$$
This is a convex combination of functions of the form given in (1). Hence one of these functions is a maximizer.

Step 3. Now we turn to show that the optimal cutoff is given by
$$v^* \equiv \inf\left\{\, v \in [0,1] \text{ such that } \int_v^{\tilde v} s\, dF(s) - \int_v^{\tilde v} [1 - F(s)]\, ds \ge 0, \text{ for all } \tilde v \in [v,1] \,\right\}. \tag{2}$$

First note that $v^*$ is well-defined because the set
$$\left\{\, v \in [0,1] \text{ such that } \int_v^{\tilde v} s\, dF(s) - \int_v^{\tilde v} [1 - F(s)]\, ds \ge 0, \text{ for all } \tilde v \in [v,1] \,\right\}$$
is non-empty, since it contains 1. Suppose that $v^*$ does not characterize the optimal mechanism when $T = 1$. With some abuse of notation, let $R(v^*)$ denote the seller's expected revenue given an allocation rule described by (1) with cutoff $v^*$. If $v^*$ is not optimal, then there exists another cutoff $\tilde v$ such that $R(\tilde v) > R(v^*)$.

First, suppose that $\tilde v < v^*$. Then by the definition of $v^*$, there exists a $v' \in [\tilde v, v^*]$,³ such that
$$\int_{\tilde v}^{v'} s\, dF(s) - \int_{\tilde v}^{v'} [1 - F(s)]\, ds < 0. \tag{3}$$
In this case expected revenue is given by
$$R(\tilde v) = \int_0^{\tilde v} 0 \cdot s\, dF(s) - \int_0^{\tilde v} 0 \cdot [1 - F(s)]\, ds + \int_{\tilde v}^{v'} s\, dF(s) - \int_{\tilde v}^{v'} [1 - F(s)]\, ds + \int_{v'}^1 s\, dF(s) - \int_{v'}^1 [1 - F(s)]\, ds.$$

³Actually, from the definition of $v^*$ it follows that there exists $\hat v \in [v', b]$ such that $\int_{v'}^{\hat v} \varphi(t)\, dt < 0$. A moment's thought will reveal that we can take $\hat v \le v^*$ without any loss.

From (3) it follows that $R(\tilde v) < R(v^*)$.

Now suppose that $\tilde v > v^*$ and $R(\tilde v) > R(v^*)$. Then
$$R(\tilde v) = \int_0^{v^*} 0 \cdot s\, dF(s) - \int_0^{v^*} 0 \cdot [1 - F(s)]\, ds + \int_{\tilde v}^1 s\, dF(s) - \int_{\tilde v}^1 [1 - F(s)]\, ds.$$
From the definition of $v^*$ it follows that
$$\int_{v^*}^{\tilde v} s\, dF(s) - \int_{v^*}^{\tilde v} [1 - F(s)]\, ds \ge 0,$$
hence
$$R(\tilde v) \le \int_0^{v^*} 0 \cdot s\, dF(s) - \int_0^{v^*} 0 \cdot [1 - F(s)]\, ds + \int_{v^*}^{\tilde v} s\, dF(s) - \int_{v^*}^{\tilde v} [1 - F(s)]\, ds + \int_{\tilde v}^1 s\, dF(s) - \int_{\tilde v}^1 [1 - F(s)]\, ds = R(v^*),$$
a contradiction.
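As a sanity check on the characterization (2), $v^*$ can be located by brute force for an assumed distribution. With $F$ uniform on $[0,1]$ the integrand $s\, dF(s) - [1 - F(s)]\, ds$ reduces to $(2s - 1)\, ds$, so the infimum should sit at $1/2$:

```python
# Brute-force check of definition (2) under an assumed uniform F on [0, 1].
M = 1000
ds = 1.0 / M

def qualifies(v):
    """True iff the running integral of (2s − 1) from v stays >= 0 for every
    upper limit in [v, 1] (midpoint Riemann sums)."""
    partial = 0.0
    s = v + 0.5 * ds
    while s < 1.0:
        partial += (2.0 * s - 1.0) * ds
        if partial < -1e-12:
            return False
        s += ds
    return True

candidates = [k / M for k in range(M + 1) if qualifies(k / M)]
v_star = min(candidates)   # the infimum in (2); equals 1/2 for uniform F
```

The tolerance `-1e-12` only absorbs floating-point round-off; any genuinely negative running integral rejects the candidate.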

2. An Alternative Proof of the Main Result

From Propositions 4 and 8 in the main text, we have that if $(p, x)$ are implemented by an assessment that is a PBE, they satisfy (i) Lemma 1 and (ii)
$$p(v) = r \ \text{ for } v \in [0, \hat z_2), \qquad p(\hat z_2) \in [r,\, r + (1-r)\delta], \qquad p(v) = r + (1-r)\delta \ \text{ for } v \in (\hat z_2, \bar v), \qquad p(\bar v) \in [r + (1-r)\delta,\, 1] \tag{4}$$
for some $\bar v \in [0,1]$, $r \in [0,1]$, and $\hat z_2 \le z_2(\bar v)$, where $z_2(\bar v)$ is the optimal price given
$$F_2(v) = \begin{cases} \dfrac{F(v)}{F(\bar v)}, & \text{for } v \in [0, \bar v] \\ 0, & \text{otherwise,} \end{cases} \qquad \text{for some } \bar v \in [0,1].$$
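For concreteness, an allocation rule of the form (4) can be written out directly. The sketch below uses illustrative parameter values of my own choosing; note that (4) pins down $p$ only on $[0, \bar v]$, so the constant extension above $\bar v$ is an assumption:

```python
def make_allocation(r, delta, z2_hat, v_bar, p_at_z2=None, p_at_vbar=None):
    """Allocation rule of the form (4): r below z2_hat, r + (1-r)*delta
    between z2_hat and v_bar, with the boundary values constrained as in (4)."""
    mid = r + (1 - r) * delta
    p_at_z2 = mid if p_at_z2 is None else p_at_z2        # must lie in [r, mid]
    p_at_vbar = 1.0 if p_at_vbar is None else p_at_vbar  # must lie in [mid, 1]
    assert r <= p_at_z2 <= mid <= p_at_vbar <= 1.0

    def p(v):
        if v < z2_hat:
            return r
        if v == z2_hat:
            return p_at_z2
        if v < v_bar:
            return mid
        return p_at_vbar     # constant extension above v_bar (an assumption)

    return p

# illustrative parameters, not taken from the paper
p = make_allocation(r=0.3, delta=0.5, z2_hat=0.4, v_bar=0.8)
```

Any such rule is increasing by construction, which is what membership in $P$ requires besides (4) and Lemma 1.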

Here, with a slight abuse of notation, we call $P$ the set of allocation rules that satisfy Lemma 1 and (4). The set $P^*$ is defined in Definition 2 in "Sequentially Optimal Mechanisms." We start by verifying that the maximum of $R$ over $P$ and $P^*$ indeed exists. In order to prove existence of a maximum we must establish that the spaces of functions $P$ and $P^*$ are sequentially compact and that the objective function is continuous in the same topology.

Lemma A 1. The maximum of $R$ over $P$ and over $P^*$ exists.

Proof. We will prove that the maximum of $R$ over $P$ exists; using an identical procedure one can show that the maximum of $R$ over $P^*$ exists. The proof will be done for the case that the game lasts for two periods. All the arguments are valid, with more complicated notation, for the case that $T > 2$.

Continuity. Continuity of $R$ follows from an argument identical to the one used in Step 1b in the proof of Proposition 2. In order to prove that the maximum exists it remains to demonstrate that $P$ is sequentially compact in the topology of pointwise convergence.

Sequential Compactness. We will first show that every sequence $p_n \in P$, $n \in \mathbb{N}$, has a subsequence that converges pointwise to $p \in P$. Recall that $p_n : [0,1] \to [0,1]$ is increasing and of the form
$$p_n(v) = r \ \text{ for } v \in [0, z_2^n), \qquad p_n(v) = r + (1-r)\delta \ \text{ for } v \in [z_2^n, \bar v_n), \qquad p_n(\bar v_n) \in [r + (1-r)\delta,\, 1],$$
with $z_2^n \le z_2(\bar v_n)$, where $z_2(\bar v_n)$ is the optimal price at $t = 2$ given beliefs
$$F_2(v) = \begin{cases} \dfrac{F(v)}{F(\bar v_n)}, & \text{for } v \in [0, \bar v_n] \\ 0, & \text{otherwise.} \end{cases}$$
Suppose that the limit of $\bar v_n$ is $\bar v$ (this limit exists since every bounded sequence has a convergent subsequence). From Lemma 2 we know that $z_2(\bar v_n)$ is increasing in $\bar v_n$, and hence it is continuous in $\bar v_n$ from the right, which guarantees that
$$\lim_{n \to \infty} z_2(\bar v_n) \le z_2(\bar v).$$
Now for all $n \in \mathbb{N}$ we have that $z_2^n \le z_2(\bar v_n)$; let $\hat z_2 = \lim_{n \to \infty} z_2^n$ (this limit exists because $z_2^n$ is a bounded sequence). Taking the limit as $n \to \infty$ we get that
$$\hat z_2 \le \lim_{n \to \infty} z_2(\bar v_n) \le z_2(\bar v).$$
Let $w_1, w_2, \dots$ denote the rational points of $[0,1]$. Since $p_n$ is bounded, the sequence $\{p_n\}$ has a subsequence $\{p_n^{(1)}\}$ that converges at the point $w_1$. Since $\{p_n^{(1)}\}$ is also bounded, it has a subsequence $\{p_n^{(2)}\}$ converging at

the point $w_2$ as well as the point $w_1$; $\{p_n^{(2)}\}$ contains a subsequence $\{p_n^{(3)}\}$ that converges at the point $w_3$ as well as at the points $w_1$ and $w_2$, and so on. The "diagonal sequence" $\{h_n\} = \{p_n^{(n)}\}$ will then converge at every rational point of $[0,1]$. The limit of this subsequence, $p$, is an increasing function from $[0,1]$ to $[0,1]$. Moreover $p(s) = r$ for all the rationals in $[0, \hat z_2)$, and $p(s) = r + (1-r)\delta$ for all the rationals in $[\hat z_2, \bar v)$. We complete the definition of $p$ at the remaining points of $[0,1]$ by setting $p(v) = \lim_{w \to v^-} p(w)$ if $v$ is irrational.⁴

The resulting function $p$ is then the limit of $\{h_n\}$ at every continuity point of $p$ (see Kolmogorov and Fomin, page 373). Since $p$ is increasing, it has at most countably many discontinuity points. Using the diagonal process we can find a subsequence of $\{h_n\}$ that converges at all the discontinuity points of $p$, which implies that it converges pointwise everywhere to $p$ on $[0,1]$. From the above arguments it follows that $\{p_n\}_{n \in \mathbb{N}}$ has a subsequence that converges pointwise to $p$, which is an increasing function such that at $\hat z_2$ its value jumps from $r$ to $r + (1-r)\delta$ and at $\bar v$ its value is at least $r + (1-r)\delta$; in other words, $p : [0,1] \to [0,1]$ is increasing and such that
$$p(v) = r \ \text{ for } v \in [0, \hat z_2), \qquad p(v) = r + (1-r)\delta \ \text{ for } v \in [\hat z_2, \bar v), \qquad p(\bar v) \in [r + (1-r)\delta,\, 1],$$
with $\hat z_2 \le z_2(\bar v)$. Therefore $p \in P$. From the Theorem on pointwise convergence (stated in the proof of Proposition 2), it follows that $\{p_n\}_{n \in \mathbb{N}}$ has a subsequence that converges to $p$. Hence every infinite sequence in $P$ has a convergent subsequence; therefore $P$ is sequentially compact. As seen in the proof of Proposition 2, Step 1c, a bounded continuous function on a sequentially compact set has a maximum.

The following Lemma establishes that the allocation rules in $P^*$ are extreme points of the set $P$.

Lemma A 2. Every allocation rule in $P$ can be approximated arbitrarily closely, in the usual metric, by a convex combination of elements of $P^*$.

Proof. We use $p$ and $q$ to denote generic elements of $P$ and $P^*$ respectively. Every measurable function can be approximated by a step function in the usual metric generated by the norm (see for instance Royden 1962). An element $p$ of $P$ can therefore be approximated by a step function $g$. We now show that every step function that is arbitrarily close to an element of $P$ can be written as a convex combination of elements of $P^*$. Take a $p \in P$ and a step function $g$ such that $|p - g| < \varepsilon$, $\varepsilon > 0$ arbitrarily small.

⁴The notation $w \to v^-$ means that $w$ approaches $v$ from below.


Since the restriction of $p$ on $[0, \bar v)$ is a step function, we can take $p(t) = g(t)$ for $t \in [0, \bar v)$. Suppose that on the interval $[\bar v, 1]$, $g$ has $K$ steps. Then we can consider the division of $[\bar v, 1]$ into $K$ subintervals $I_j$, $j = 1, \dots, K$. In each of these subintervals $g$ takes a potentially different value $g_j$, where $r + (1-r)\delta \le g_j \le 1$. We now show that we can write $g$ as a linear combination of $L$ functions $q_1, \dots, q_L \in P^*$, that is to say,
$$g = \sum_{i=1}^L \alpha_i q_i, \qquad \sum_{i=1}^L \alpha_i = 1.$$
All $q_i$'s have the following characteristics:
$$q_i(v) = r \ \text{ for } v \in [0, \hat z_2), \qquad q_i(v) = r + (1-r)\delta \ \text{ for } v \in [\hat z_2, \hat v_i), \qquad q_i(v) = 1 \ \text{ for } v \in [\hat v_i, 1],$$
where $1 \ge \hat v_i \ge \bar v$ and $z_2(\hat v_i) \ge z_2(\bar v) \ge \hat z_2$, and where $\hat z_2$ and $z_2(\bar v)$ are the same as in the definition of $p$.

The coefficients $\alpha_i$ are determined as follows. Suppose that for $v \in I_1$, $g_1 = g(v) = r + (1-r)\delta + \eta_1$. Then for $v \in I_1$ we have $g(v) = \sum_{i=2}^L \alpha_i q_i + \alpha_1 q_1$, where $q_i = r + (1-r)\delta$ for all $i \ne 1$ and $q_1 = 1$, with $\alpha_1 = \frac{\eta_1}{1 - r - (1-r)\delta}$ and, of course, $\sum_{i=1}^L \alpha_i = 1$. (Observe that since $q_1 = 1$ on $I_1$, it must be that $q_1 = 1$ for $v \in I_j$, $j = 2, \dots, K$.) Obviously, $\sum_{i=2}^L \alpha_i = 1 - \alpha_1 = \frac{1 - r - (1-r)\delta - \eta_1}{1 - r - (1-r)\delta}$. So for $v \in I_1$ we can write
$$g(v) = \sum_{i=2}^L \alpha_i q_i + \alpha_1 q_1 = (r + (1-r)\delta) \cdot \frac{1 - r - (1-r)\delta - \eta_1}{1 - r - (1-r)\delta} + 1 \cdot \frac{\eta_1}{1 - r - (1-r)\delta} = r + (1-r)\delta + \eta_1.$$
Now suppose that for $v \in I_2$ we have $g_2 = r + (1-r)\delta + \eta_1 + \eta_2$. Then for $v \in I_2$ we can write $g(v) = \sum_{i=3}^L \alpha_i q_i + \alpha_1 q_1 + \alpha_2 q_2$, where $q_i = r + (1-r)\delta$ for all $i \ne 1, 2$ and $q_1 = q_2 = 1$, with $\alpha_1 = \frac{\eta_1}{1 - r - (1-r)\delta}$, $\alpha_2 = \frac{\eta_2}{1 - r - (1-r)\delta}$, and $\sum_{i=1}^L \alpha_i = 1$.

To verify this, note that $\sum_{i=3}^L \alpha_i = 1 - \alpha_1 - \alpha_2 = \frac{1 - r - (1-r)\delta - \eta_1 - \eta_2}{1 - r - (1-r)\delta}$. We therefore obtain that, for $v \in I_2$,
$$g(v) = \sum_{i=3}^L \alpha_i q_i + \alpha_1 q_1 + \alpha_2 q_2 = (r + (1-r)\delta) \cdot \frac{1 - r - (1-r)\delta - \eta_1 - \eta_2}{1 - r - (1-r)\delta} + 1 \cdot \frac{\eta_1}{1 - r - (1-r)\delta} + 1 \cdot \frac{\eta_2}{1 - r - (1-r)\delta} = r + (1-r)\delta + \eta_1 + \eta_2.$$
Continuing in a similar manner we can determine all the $\alpha_i$'s. It follows that any step function that is arbitrarily close to an element of $P$ can be written as a convex combination of elements of $P^*$. Therefore for each $p \in P$ there exists a $g$, where $g$ is a convex combination of elements of $P^*$, such that $|p - g| < \varepsilon$.

Proposition A 1. Consider a linear function $R : P \to \mathbb{R}$. Suppose that there exists a set $P^* \subset P$ such that every element of $P$ can be approximated by a convex combination of elements of $P^*$. Furthermore, suppose that the maximum value of $R$ over $P$ and over $P^*$ exists. Then

$$\max_{p \in P} R(p) = \max_{p \in P^*} R(p).$$


Proof. First note that since $P^* \subset P$,
$$\max_P R \ge \max_{P^*} R.$$
It is given that every element of $P$ can be arbitrarily closely approximated by a convex combination of elements of $P^*$. We will use $p$ and $q$ to denote generic elements of $P$ and $P^*$ respectively, and $g$ to denote convex combinations of elements of $P^*$. Suppose that $\bar p \in P$ is a maximizer of $R$, and consider a sequence $\{g_n\}_{n \in \mathbb{N}}$ such that $g_n \to \bar p$. This implies that for all $\varepsilon > 0$, there exists $g_n$ such that $|R(g_n) - R(\bar p)| < \varepsilon$ for $n$ large enough. From this we get that, for $n$ large enough, either $R(g_n) > R(\bar p) - \varepsilon$ or $R(\bar p) \ge R(g_n) - \varepsilon$.

Fix an $n$ large enough. Since $g_n$ is a convex combination of elements of $P^*$, we can rewrite each element of this sequence as $g_n = \sum_{i=1}^L \alpha_i^n q_i^n$, where $q_i^n \in P^*$ and $\sum_{i=1}^L \alpha_i^n = 1$. Then, because $R$ is linear, we can write
$$R(g_n) = \sum_{i=1}^L \alpha_i^n R(q_i^n).$$
Now suppose that $R(q_i^n) < R(g_n)$ for all $i = 1, \dots, L$. Then we would have $R(g_n) = \sum_{i=1}^L \alpha_i^n R(q_i^n) < R(g_n)$, but this is impossible. Hence there must exist $i$ such that $R(q_i^n) \ge R(g_n)$. Now
$$\max_{P^*} R(p) \ge R(q_i^n) \ge R(g_n),$$
where the first inequality follows from the fact that $q_i^n \in P^*$. If $R(\bar p) > R(g_n)$, then
$$\max_{P^*} R(p) \ge R(q_i^n) \ge R(g_n) > R(\bar p) - \varepsilon, \quad \text{for all } \varepsilon > 0.$$
Taking the limit as $\varepsilon \to 0$, we get that
$$\max_{P^*} R(p) = R(\bar p) = \max_P R(p).$$
If $R(g_n) \ge R(\bar p)$, then from the fact that $q_i^n \in P^*$ and $P^* \subset P$, we have
$$R(\bar p) \ge \max_{P^*} R(p) \ge R(q_i^n) \ge R(g_n) \ge R(\bar p),$$
which again implies that all inequalities hold with equality. We therefore get
$$\max_{P^*} R(p) = R(\bar p) = \max_P R(p).$$
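The coefficient scheme from the proof of Lemma A 2 can be verified numerically. The sketch below uses assumed values of $r$, $\delta$, and the step heights $g_j$; it builds $\alpha_j = \eta_j / (1 - r - (1-r)\delta)$, places the residual weight on rules that stay at $r + (1-r)\delta$, and checks that the convex combination reproduces $g$ on every subinterval $I_j$:

```python
r, delta = 0.2, 0.5                 # illustrative values
mid = r + (1 - r) * delta           # r + (1−r)δ
D = 1.0 - mid                       # 1 − r − (1−r)δ

g = [0.70, 0.90, 0.95]              # assumed step values g_j of g on I_1..I_K
eta = [g[0] - mid] + [g[j] - g[j - 1] for j in range(1, len(g))]
alphas = [e / D for e in eta]       # α_j = η_j / (1 − r − (1−r)δ)
alpha_rest = 1.0 - sum(alphas)      # weight on rules equal to mid on [v̄, 1]

# q_j equals mid on I_1..I_{j-1} and 1 on I_j..I_K (q_j jumps to 1 at v̂_j);
# the residual rule stays at mid on all of [v̄, 1].
K = len(g)
recon = [alpha_rest * mid
         + sum(alphas[j] * (1.0 if j <= m else mid) for j in range(K))
         for m in range(K)]
# recon should equal g interval by interval
```

The check confirms the telescoping in the proof: on $I_m$ the combination equals $r + (1-r)\delta + \eta_1 + \dots + \eta_m = g_m$.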

Theorem A 1. Under non-commitment the seller maximizes expected revenue by posting a price in each period.

Proof. In Lemma A 1 we verified that the seller's maximization problem is well defined. From Lemma A 2 we know that every element of $P$ can be approximated arbitrarily closely by a convex combination of elements of $P^*$. The result follows from Proposition A 1 and Proposition 5, page 17, in "Sequentially Optimal Mechanisms."

References

[1] Kolmogorov, A. N. and S. V. Fomin (1970): Introductory Real Analysis. Dover Publications.

[2] Munkres, J. (1975): Topology: A First Course. Prentice Hall.

[3] Royden, H. (1962): Real Analysis, 2nd Edition. New York: Macmillan.
