Boundary value problem and the Ehrhard inequality

0 downloads 0 Views 455KB Size Report
Jun 20, 2017 - Key words and phrases. Gaussian measure, essential supremum, Prekopa–Leindler, Ehrhard. 1. arXiv:1605.04840v2 [math.AP] 20 Jun 2017 ...
BOUNDARY VALUE PROBLEM AND THE EHRHARD INEQUALITY

arXiv:1605.04840v1 [math.AP] 16 May 2016

PAATA IVANISVILI A BSTRACT. In this paper we study a natural extension of the Prékopa–Leindler inequality involving the essential supremum, almost arbitrary probability measures, and arbitrary function H(x, y) of two variables instead of the classical one x1−λ yλ . We obtain the necessary conditions on H in terms of partial differential inequalities, and in case of Gaussian measures we show that these conditions are also sufficient. We completely describe convex, concave and 1-homogeneous functions H which satisfy our necessary conditions, moreover, we illustrate some other nontrivial examples. As an immediate application we obtain the new proof of the Ehrhard inequality. In particular, we show that in the class of even probability measures the Gaussian measure is the only one which satisfies the functional form of the Ehrhard inequality on the real line with their own distribution functions. Our approach is based on two arguments: one of Brascamp–Lieb rewriting the essential supremum as a limit of L p norms, and the other one is a subtle inequality which is an extension of the classical Brascamp–Lieb inequality recently developed for the Gaussian measures.

1. I NTRODUCTION Let H(x, y) be a real-valued function on a closed bounded rectangular domain Ω ⊂ R2 . Fix some n ≥ 1. Let dµ be a probability measure on Rn and absolutely continuous with respect to the Lebesgue measure. For R simplicity we will always assume that H ∈ C3 (Ω) and Rn kxk5 dµ < ∞. In this paper we address the following question: what is the necessary and sufficient condition on H, real numbers a, b and a measure dµ such that the following inequality holds   Z     Z Z x−y y ess sup H f dµ(x) ≥ H f dµ, (1) ,g gdµ a b Rn y∈Rn Rn Rn for all Borel measurable ( f , g) : Rn × Rn → Ω. We assume that a and b are strictly positive numbers. Essential supremum in (1) is taken with respect to the Lebesgue measure. Our main result is the following theorem. Theorem 1. Suppose that Hx and Hy never vanish in Ω. For inequality (1) to hold it is necessary that Z 2 2 Hxy 2 2 2 Hyy 2 Hxx (2) a 2 + (1 − a − b ) + b 2 ≥ 0; |1 − a − b | ≤ 2ab; xdµ = 0 if a + b > 1. Hx Hx Hy Hy Rn Moreover, if dµ(x) is a Gaussian measure then the above conditions are also sufficient. By Gaussian measure we mean a probability measure of the form (3)

exp(−xAxT + bxT + c) dx

for some n × n matrix

A > 0, b ∈ Rn

and

c ∈ R.

The symbols , Hxy and Hyy denote partial derivatives. xT denotes transpose of the row vector x ∈ Rn . y , Hxx Hx , H Constraint 1 − a2 − b2 ≤ 2ab on numbers a, b in (2) can be rewritten as a + b ≥ 1 and |a − b| ≤ 1. Moreover, R if a + b > 1 then it is necessary that Rn xdµ is the zero vector. In the applications usually a + b = 1. Therefore the most important condition the reader needs to keep in mind is the partial differential inequality (PDI) in (2). We should also notice an independence from the dimension, i.e., the necessity conditions follow from the one dimensional case n = 1, and for the Gaussian measures (2) is sufficient for (1) to hold for all n ≥ 1. In Section 2 we present the proof of Theorem 1. We should mention that (2) already appeared in [16] as a sufficient condition for inequality (1) to hold in case of the Gaussian measure with supremum in (1) and 2010 Mathematics Subject Classification. 42B35, 47A30. Key words and phrases. Gaussian measure, essential supremum, Prekopa–Leindler, Ehrhard. 1

2

PAATA IVANISVILI

smooth compactly supported functions f , g. Namely, it was proved in [16] that if Hx , Hy are nonvanishing and H satisfies (2) then the following inequality holds Z

(4)

sup

Rn ax+by=t

H ( f (x), g(y)) dµ(t) ≥ H

Z f dµ, Rn



Z

gdµ Rn

for all smooth compactly supported functions f , g and the Gaussian measure dµ. In fact, in [16] inequality (4) was obtained in more general setting where one can allow H to have arbitrary number of variables and supremum in (4) is taken over an affine subspace of Rn . By slightly extending the class of test functions in (4) (which can be done by using the methods in [16]) one can derive (1) from (4) by using some subtle approximation arguments, the fact that H is monotone in each variable and elaborating an argument used in Appendix of [10]. In this paper we still prefer to directly work with (1). Our first immediate extension is that we have (1) with essential supremum and Borel measurable functions. Our second extension is that we obtain if and only if characterization, moreover we obtain the necessity part for almost arbitrary measures. Our approach to (1) sheds light to a question about optimizers, and it provides us with some quantitative version of (1) (see Lemma 5 and Lemma 7 which are interesting in itself), and it shows a link between two different type of PDEs considered in [16] (see PDE (1.3) and (1.5) in [16]). Our proof is completely different from the methods developed in [16]. In particular, we obtain the new proof of the Ehrhard inequality. The main idea of the current paper is based on two arguments: one of Brascamp–Lieb [9] rewriting essential supremum as the limit of L p norms, and the other one is a subtle inequality which was studied and developed only recently in [24, 21, 16, 17]. In Section 3 we give answers to the questions: what happens in Theorem 1 when Hx and Hy vanish in Ω. It turns out that in this degenerate case the problem depends very much on the class of test functions ( f , g), namely, it matters whether ( f , g) are continuous or discontinuous functions. In Section 4 we illustrate various applications of the theorem. The first immediate application is the Ehrhard inequality. This corresponds to the function H(x, y) = Φ(aΦ−1 (x) + bΦ−1 (y)) which does satisfy (2) where Φ is the Gaussian distribution function. In Theorem 4 and Corollary 2 we find necessary conditions on the probability measures dµ which satisfy the functional version of the Ehrhard inequality in dimension one where the Gaussian distribution Φ is replaced by the distribution of dµ, i.e., Φ(s) = µ((−∞, s]). As a corollary (see Corollary 1) we show that in the class of even probability measures the Gaussian measure is the only one which satisfies the functional form of the Ehrhard inequality on the real line. Another immediate application is the Prékopa–Leindler inequality. This corresponds to the function H(x, y) = xa yb with a + b = 1 which also does satisfy (2). In Subsection 4.2 we describe 1-homogeneous functions H, i.e., H(λ x) = λ H(x) for all λ ≥ 0, and measures dµ which satisfy (1). It turns out that in this particular class of functions the description is simple: H is a convex function, or H is one of the counterparts of the Prékopa–Leindler functions: H = xa yb with a + b = 1, H = xa y−b with a − b = 1, and H = x−a yb with b − a = 1. The last two functions provide us with a natural extension of the Prékopa–Leindler inequality for any real power a 6= 0, 1 (see Corollary 4). In the case H = xa yb as it is well known the only possible measures which satisfy (1) must be log-concave measures. 
On the other hand appearance of the convex functions H makes it clear that some of the inequalities of the form (1) are trivial consequences of the Jensen’s inequality (see Proposition 2 and Remark 2). In Subsection 4.4 we characterize all smooth concave functions H which satisfy (1) for a probability measure dµ. It turns out that H must satisfy homogeneous Monge–Ampère equation, and as a consequence of this, it generically must be the Prékopa–Leindler function H = xa yb with a + b = 1, or one of its counterpart (see Proposition 4). Thus we observe some universality of the Prékopa–Leindler inequality, as a unique reverse to Jensen’s inequality. Section 4.5 illustrates other examples of H such that (1) holds for the Gaussian measure (see Corollary 5, 6 and 7). In Section 5 we explain how Theorem 1 allows to restore the Ehrhard inequality by solving PDE problem over an obstacle.

BOUNDARY VALUE PROBLEM AND THE EHRHARD INEQUALITY

3

Acknowledgements. I am grateful to Christos Saroglou who initiated this project and with whom I had many discussions. He should be considered as co-author (despite his insistence to the contrary). I am extremely thankful to the Kent State Analysis Group especially Fedor Nazarov who gave me some valuable suggestions in obtaining the necessity part, and Artem Zvavitch for providing C. Borell’s lecture notes. The talk given by Grigoris Paouris on the Informal Analysis Seminar at Kent State University served as a guide and inspiration for the present article. 2. P ROOF OF THE THEOREM 2.1. The necessity condition. First we notice that if (1) holds for some n ≥ 1 then it holds for n = 1. Indeed we can test (1) on the functions f (x1 , x2 , . . . .xn ) = f˜(x1 ), g(x1 , x2 , . . . , xn ) = g(x ˜ 1 ) for some measurable ( f˜, g) ˜ : R × R → Ω. In what follows we will assume that n = 1. Finiteness of the fifth moment together with the Lebesgue dominated convergence theorem implies that (5)

R5

Z ∞ R

dµ → 0

and R5

Z −R −∞

dµ → 0

as R → ∞.

We need several technical lemmas. We fix a number α ∈ (0, 1/3) close to

1 3

which will be determined later.

Lemma 1.

as ε → 0.

Z ±∞ 4α ±ε −α |t|dµ ≤ o(ε ) and

Z ±∞ 2 3α ±ε −α t dµ ≤ o(ε ),

Proof. We have Z ±∞ Z ±∞ 4α 5 4α ±ε −α |t|dµ ≤ ε ±ε −α |t| dµ = o(ε ).

Similarly for the second integral.



Let (u, v) be the point in the interior of Ω. Let Hu = Hu (u, v) and Hv = Hv (u, v). Lemma 2. If H satisfies (1) then 2

(6)

uv p HHvv2 + pq + q HHuu2 + HuuHH2vvH−H 2 v

Huu Hu2

u

+ p − 2 HHuuvHv

u

+

Hvv Hv2

≥ pa2 + qb2

v

+q

for all real numbers p and q such that p + q + HHuu2 − 2 HHuuvHv + HHvv2 < 0. u

v

Proof. Let δ > 1 be a number which will be determined later. We consider the following test functions ( f , g): (7) (8)

(9)

2 (a x) p ϕε,δ ϕε,δ (a x) 2 +ε ; f (x) = u + ε Hu Hu 2 (b y) q ϕε,δ ϕε,δ (b y) 2 g(y) = u + ε +ε ; Hv Hv  −α  −δ ε −α ≥ t ; −δ ε where ϕε,δ (t) = t −δ ε −α ≤ t ≤ ε −α ;   −α ε ε −α ≤ t .

We notice that |ϕε,δ (t)| ≤ Cε −α for some fixed constant C > 0. Since α < 1 it is clear that if ε is sufficiently small then ( f (x), g(y)) : R × R → Ω for all 0 ≤ ε ≤ ε0 where ε0 = ε0 (p, q) is a small fixed number. Ideally we want to choose ϕε,δ (t) = t for all t ∈ R but then the image of ( f , g) will escape from the rectangle Ω. On the other hand, unfortunately, we cannot just make it compactly supported (i.e., multiply by some cutoff function) as we will see from the computations given below.

4

PAATA IVANISVILI

Let R tdµ = τ and R t 2 dµ(t) = β . Choose α ∈ (0, 31 ) so that 4α > 1. Notice that for each fixed δ > 1 by (5) and Lemma 1 we have R

R

aτ pa2 bτ qb2 + ε2 β + o(ε 2 ) and gdµ = v + ε + ε2 β + o(ε 2 ) as ε → 0. Hu Hu Hv Hv R R By Taylor’s formula we obtain Z  Z f dµ, gdµ = H(u, v) + ετ(a + b) + ε 2 β (pa2 + qb2 ) + o(ε 2 ) as ε → 0. H Z

Z

f dµ = u + ε

R

R

On the other hand we have H( f (x), g(y)) = H(u, v) + ε(ϕε,δ (a x) + ϕε,δ (b y))+   Huv Huu 2 Hvv 2 2 2 2 ε pϕε,δ (a x) + qϕε,δ (b y) + 2 ϕε,δ (a x) + 2 ϕ (a x)ϕε,δ (b y) + 2 ϕε,δ (b y) + O(ε 3(1−α) ). Hu Hu Hv ε,δ Hv Since α < 1/3 we have O(ε 3(1−α) ) = o(ε 2 ). First we should compare small order terms in order to get a restriction on τ. Since f , g are continuous clearly the essential supremum of the integrand in (1) becomes supax+by=z H( f (x), g(y)). Thus, introducing new variables x˜ = a x, y˜ = b y and using the fact that supremum of the sum is at most the sum of the supremums, we obtain that (1) implies the inequality Z

(10)

sup (ϕε,δ (x) + ϕε,δ (y))dµ(t) ≥ ετ(a + b) + o(ε).

ε

R x+y=t

  Notice that supx+y=t (ϕε,δ (x) + ϕε,δ (y)) = t for |t| ≤ ε −α δ −1 and it is bounded as Cε −α otherwise. 2 Therefore (10) implies that τ ≥ (a + b)τ. On the other hand we can considered the new test functions ϕ˜ ε,δ = −ϕε,δ and we can obtain the opposite inequality −τ ≥ −(a + b)τ. This implies that if a + b > 1 then τ = 0. R Thus we have obtained the constraint on R tdµ. Further we should assume that τ = 0 because in the remaining case a + b = 1 the first order terms still cancel and the proof proceeds in the same way. Next we should compare the higher order terms. (1) implies the inequality Z

sup (ϕε,δ (x) + ϕε,δ (y))dµ(t)+      Z Huv Hvv Huu 2 2 + p ϕ (x) + 2 ϕ (x)ϕ (y) + + q ϕ (y) dµ(t) ≥ ε2 sup ε,δ ε,δ ε,δ Hu2 Hu Hv ε,δ Hv2 R x+y=t

ε

R x+y=t

ε 2 β (pa2 + qb2 ) + o(ε 2 )

(11) Since

R

R tdµ

= 0, (5) and Lemma 1 we obtain Z

sup (ϕε,δ (x) + ϕε,δ (y))dµ(t) = o(ε 2 ) as ε → 0.

R x+y=t

Set 

def

ψε,δ (t) = sup

x+y=t

    Hvv Huu Huv 2 2 ϕ (x)ϕε,δ (y) + + p ϕε,δ (x) + 2 + q ϕε,δ (y) . Hu2 Hu Hv ε,δ Hv2

We remind that p and q are chosen in such a way that p + HHuu2 + q + HHvv2 − 2 HHuuvHv < 0. u v We need the following lemma. Lemma 3. Let δ > 1 be such that for all s, δ1 ≤ s ≤ δ we have   Huu 2 Huv Hvv (12) p+ 2 s −2 s + q + 2 < 0, Hu Hu Hv Hv

BOUNDARY VALUE PROBLEM AND THE EHRHARD INEQUALITY

−δ ε−α

C

5

B

x+y =t

x

t − ε−α

ε−α

−δ ε−α

ε−α

D

A

y F IGURE 1. Domain of the function w(x, y) then there exist sufficiently small positive constants c and ε0 > 0 such that 2

ψε,δ (t) =

(13)

for all real |t| ≤ cε −α and all ε ≤ ε0 .

uv p HHvv2 + pq + q HHuu2 + HuuHH2vvH−H 2 v

Huu Hu2

u

u

v

+ p − 2 HHuuvHv + HHvv2 + q

· t 2,

v

Before we proceed to the proof of the lemma, let us mention that Lemma 2 follows from Lemma 3. Indeed, first we choose δ > 1 such that (12) holds. Such choice is possible because of the continuity and the assumption on the numbers p and q. Lemma 3, (11), (5), Lemma 1 and the fact that |ψε,δ (t)| ≤ Cε −2α on the complement of the interval [−cε −α , cε −α ] imply (6). Thus it remains to prove Lemm 3. Proof. Set  w(x, y) =

   Huv Hvv Huu 2 2 + p ϕε,δ (x) + 2 ϕ (x)ϕε,δ (y) + + q ϕε,δ (y). Hu2 Hu Hv ε,δ Hv2

We should describe behavior of w(x, y) on the red line x + y = t (see Figure 1). If 2ε −α ≥ t ≥ (1 − δ )ε −α then the line x + y = t will cross the sides DA and DC of the rectangle ABCD as it is shown on Figure 1. We have  h   i Huu 2 − 2 Huv δ + Hvv + q −2α  ε + p δ x ≤ −δ ε −α ;  2 2 Hu Hv   H Hv    u   Huv Hvv Huu 2 −2α −α  t − ε −α ≥ x ≥ −δ ε −α ;   Hu2 + p x + 2 Hu Hv xε + Hv2 + q ε  Huv Hvv Huu 2 w(x,t − x) = + q (t − x)2 t − ε −α ≤ x ≤ ε −α ; 2 + p x + 2 H H x(t − x) + H Hv2  u v u       Huu  + p ε −2α + 2 HHuuvHv ε −α (t − x) + HHvv2 + q (t − x)2 ε −α ≤ x ≤ t + δ ε −α ;   Hu2 h     v i   ε −2α Huu2 + p − 2 Huv δ + Hvv2 + q δ 2 x ≥ t + δ ε −α . Hu Hv H H u

v

6

PAATA IVANISVILI

Notice that because of the assumption (12) the values of w(x,t − x) approach to negative infinity as −cε −2α if x ≤ −δ ε −α or x ≥ t + δ ε −α where c > 0 is some constant. If t − ε −α ≥ x ≥ −δ ε −α then let us reparametrize the function w(x,t − x) as follows x = −ε −α s where 1 − tε α ≤ s ≤ δ . Then     Hvv Huv Huu 2 −α −α −2α + p s −2 s+ +q . w(−ε s,t + ε s) = ε Hu2 Hu Hv Hv2 Clearly if t ≤ (1 − δ1 )ε −α then 1 − tε α ≥ δ1 and by (12) the maximal value of w(x,t − x) behaves as −cε −2α for some c > 0 on the interval t − ε −α ≥ x ≥ −δ ε −α . Behavior of w(x,t − x) on the interval ε −α ≤ x ≤ t + δ ε −α is completely symmetric to the previous case. One can check that the maximum of the function w(x,t − x) on the remaining interval [t − ε −α , ε −α ] is attained at the point   Huv Hvv + q t − Hu Hv Hv2 x0 = − Huu . + p − 2 HHuuvHv + HHvv2 + q H2 u

It is clear that if

|t| ≤ cε −α

v

for some sufficiently small number c then x0 ∈ [t − ε −α , ε −α ]. Finally we have 2

(14)

sup w(x,t − x) = x∈R

for all |t| ≤

cε −α

max

x∈[t−ε −α ,ε −α ]

w(x,t − x) =

uv p HHvv2 + pq + q HHuu2 + HuuHH2vvH−H 2 v

Huu Hu2

u

u

v

+ p − 2 HHuuvHv + HHvv2 + q

· t 2,

v

and all ε ≤ ε0 where ε0 and c are some sufficiently small numbers. Thus we obtain (13).

 

Lemma 4. Inequality (6) holds for all real p and q with p + q + HHuu2 − 2 HHuuvHv + HHvv2 < 0 if and only if u



1 − a2 − b2

 ≤1

2ab

and

a2

v

Huu Huv Hvv + (1 − a2 − b2 ) + b2 2 ≥ 0. 2 Hu Hu Hv Hv

Proof. Let us rewrite (6) as follows     Hvv Huv 2 2 Huu −2 + (a − 1) 2 + M(p, q) = (p, q) C (p, q) + p a Hu2 Hu Hv Hv     2 Hvv Huv Huu Huu Hvv − Huv q b2 + (b2 − 1) 2 − −2 ≥0 2 Hv Hu Hv Hu Hu2 Hv2 T

on the half plane where p + q + HHuu2 − 2 HHuuvHv + HHvv2 < 0 and u

v

C=

a2 a2 +b2 −1 2

a2 +b2 −1 2 b2

! .

In order for the quadric form M(p, q) to be nonnegative on the half plane it is necessary that C ≥ 0 and this gives the constraints on the numbers a and b. Further, without loss of generality we can assume that C is  2 2 2 invertible, i.e., 1−a2ab−b < 1. Indeed, otherwise we can slightly perturb the numbers a and b. Notice that M(p, q) is nonnegative on the boundary of the half plane, i.e.,   Huu Huv Hvv (Hvv Hu − Huv Hv + Hu Hv2 q)2 M −q − 2 + 2 − 2 ,q = ≥ 0. Hu Hu Hv Hv Hu2 Hv4

BOUNDARY VALUE PROBLEM AND THE EHRHARD INEQUALITY

7

Let (p0 , q0 ) be the vertex of the paraboloid M, i.e., ∇M(p0 , q0 ) = 0. It is enough to make sure that if M(p0 , q0 ) < 0 then the point p0 , q0 lies outside of the half plane. Indeed, the direct computations show that  2 a2 HHuu2 + (1 − a2 − b2 ) HHuuvHv + b2 HHvv2 u   v M(p0 , q0 ) = < 0. 2 2 2 1−a −b 2 2 4a b −1 2ab Therefore p0 + q0 +

2 Huu 2 Hvv 2 2 Huv Huv Hvv a Hu2 + (1 − a − b ) Hu Hv + b Hv2 Huu   − 2 + = ≥ 0.  2 Hu2 Hu Hv Hv2 1−a2 −b2 2 2 2a b 1 − 2ab

This finishes the proof of the lemma and thereby the necessity conditions are obtained.



2.2. The sufficiency for the Gaussian measure. Our main ingredient will be a subtle Theorem 3 from [16]. Let us precisely formulate it in the way we will use it. Let k, k1 , k2 and k3 be some positive integers. Let A j be k × k j size matrices of full rank for j = 1, 2 and 3. Set A = (A1 , A2 , A3 ) to be k × (k1 + k2 + k3 ) size. Let B : Ω ⊂ Rk → R be in C2 (Ω) where Ω is a closed bounded rectangular domain, i.e., parallelepiped with sides parallel to the standard orthonormal system in Rk . Let C be a positive definite k × k matrix. Set |C−1/2 x|2 1 dγC (x) = p e− 2 dx. (2π)k det(C) By A∗ we denote the transpose of the matrix A. Let x ∈ Rk be a row vector, i.e., x = (x1 , . . . , xk ). Let denotes (∑ k j ) × (∑ k j ) matrix {A∗i CA j ∂i j B}ni, j=1 , i.e., A∗CA • Hess B is constructed by the blocks A∗i CA j ∂i j B.

A∗CA • Hess B

Theorem A. A∗CA • Hess B ≥ 0 on Ω if and only if Z

(15)

B(u1 (xA1 ), u2 (xA2 ), u3 (xA3 ))dγC (x) ≥ Z Z Z p p B u1 (y A∗1CA1 )dγk1 (y), u2 (y A∗2CA2 )dγk2 (y), Rk

Rk1

Rk3

Rk2

 p ∗ u3 (y A3CA3 )dγk3 (y)

for all Borel measurable (u1 , u2 , u3 ) : Rk1 × Rk2 × Rk3 → Ω. The theorem was formulated for smooth B and compactly supported functions u j , j = 1, 2, 3. We should mention that the theorem still remains true for B ∈ C2 (Ω) and for smooth bounded u j . The proof proceeds absolutely in the same way as in [16]. The only property we need is that the heat extension of u j and B(u1 , u2 , u3 ) must be well defined, which is definitely true under these assumptions as well. In order to obtain (15) for all Borel measurable functions (u1 , u2 , u3 ) : Rk1 × Rk2 × Rk3 → Ω we approximate pointwise almost everywhere by smooth bounded functions (un1 , un2 , un3 ) such that Im(un1 , un2 , un3 ) ⊆ Ω. Finally, the Lebesgue dominated convergence theorem justifies the result (notice that all functions u1 , u2 , u3 and B are uniformly bounded). We should also mention that inequality (15) for the function B(x, y, z) = x p yq zr recovers the reverse Young’s inequality for convolutions with sharp constants, and the latter was used in [9] in obtaining the Prékopa– Leindler inequality. In our case the situation is slightly different. We will be using (15) for some sequence of functions B, matrices C , A, and Gaussian measures. Further in obtaining the sufficiency condition, without loss of generality we can assume that dµ = dγn −|x|2 /2

e is the standard n-dimensional Gaussian measure (2π) n/2 dx. The case of the arbitrary Gaussian measure (3) follows by testing (1) on the shifts and dilates of f , g and change of variables in (1). Next we consider two different cases, when |1 − a2 − b2 | = 2ab and when |1 − a2 − b2 | < 2ab. To the first case we refer as parabolic case and the second case we call elliptic case. These names originate from studying the solutions of the partial differential inequality in (2).

8

PAATA IVANISVILI

2.2.1. Parabolic case. In this subsection we consider the case when 2  1 − a2 − b2 (16) = 1. 2ab (16) holds if and only if a = |1 − b| or a = 1 + b. Without loss of generality we will assume that H > δ for some δ > 0. Moreover, we can choose δ > 0 so that |Hx |, |Hy | > δ > 0. Set  p n/2 |py+qx|2 dγ p,q,x (y) = (17) e− 2p dy for x, y ∈ Rn . 2π The choice of the numbers p, q will be specified later. So far we assume that p > 0. Notice that dγ p,q,x (y) is a probability measure on Rn . We need the following lemma. Lemma 5. For any 1 > α > β > 0 with α + β > 1 there exists R0 = R0 (α, H) such that for all R > R0 we have       1α Z Z R−R x−y y R H f ,g dγ p,q,x (y) dγn (x) ≥ a b Rn Rn ! ! ! r r Z Z R 1 1 (18) f x 1 + 2 β dγn (x), g x 1 + 2 β dγn (x) , H R−Rα a R b R Rn Rn where p = Rβ , q = −bRβ if a = |1 − b| and q = bRβ if a = b + 1. Before we proceed to the proof of the lemma let us explain thatR the lemma implies the desired result (1). R It is clear that if R →R0 then the right hand side of (18) tends to H( f dγn , gγn ). We claim that the left hand side of (18) tends to Rn ess supy H( f ((x − y)/a), g(y/a))dγn (x). Indeed, let  1α      R−R y x−y dγ p,q,x (y) ,g . H f a b Rn

Z ϕR (x) =

R

We claim that ϕR (x) → ess supy H( f ((x − y)/a), g(y/a)) a.e. as R → ∞. Notice that   Rα R−R −−−→ ess sup H( f ((x − y)/a), g(y/a)). ϕR (x) ≤ ess sup H( f ((x − y)/a), g(y/a)) R→∞

y

y

On the other hand let ε > 0. Consider Aε = {y : H( f ((x − y)/a), g(y/a)) > ess supy H( f ((x − y)/a), g(y/a)) − ε}. Let N be a sufficiently large number such that |Aε ∩ B(0, N)| > 0. Here B(0, N) denotes the ball centered at the origin with radius N. Then 1

R

ϕR (x) ≥ (γ p,q,x (Aε ∩ B(0, N))) R−Rα (ess sup H( f ((x − y)/a), g(y/a)) − ε) R−Rα y

−−−→ ess sup H( f ((x − y)/a), g(y/a)) − ε. R→∞

y

The last passage follows from the fact that the power in the exponent (17) is of order Rβ where β < 1. Since ε is arbitrary we obtain the pointwise convergence for ϕR (x). Finally, since ϕR (x) are uniformly bounded the Lebesgue dominated convergence theorem implies      Z Z y x−y ,g ϕR (x)dγn (x) −−−→ ess sup H f dγn (x). n n R→∞ a b R R y It remains to prove the lemma. Proof. Take an arbitrary Borel measurable ϕ such that δ 0 > ϕ > 1/δ 0 > 0 for some δ 0 > 0. Let a(R) = R−Rα . First, we show that

BOUNDARY VALUE PROBLEM AND THE EHRHARD INEQUALITY

Z Rn

(19)

HR

9

     y x−y ,g ϕ 1−a(R) (x)dγ p,q,x (y)dγn (x) ≥ H f a b Rn ! ! ! Z r r 1−a(R) Z Z 1 1 ϕdγn . f x 1 + 2 β dγn , g x 1 + 2 β dγn · a R b R Rn Rn Rn

Z

R

We should apply Theorem A. In order to do that notice that −1

e−hC ~x,~xi/2 dγ p,q,x (y)dγ1 (x) = p det(C−1/2 )dxdy := γC (~x)dxdy, 2n (2π)

(20) where ~x = (x, y) and

2

(21)

C

−1

=

1 + qp q

! q ⊗ In×n = C˜ −1 ⊗ In×n , p

where In×n is n × n identity matrix. Clearly C > 0 if and only if p > 0. Set  1    0 A1 = a1 ⊗ In×n = a1 ⊗ In×n , A2 = ⊗ In×n = a2 ⊗ In×n , − b1 −a

  1 A3 = ⊗ In×n = a3 ⊗ In×n . 0

Let A = (A1 , A2 .A3 ) be 2n × 3n matrix. Notice that A = A˜ ⊗ In×n where A˜ = (a1 , a2 , a3 ) is 2 × 3 matrix. By Theorem A the desired inequality (19) holds, namely, Z

H R ( f (~x A1 ), g(~x A2 ))ϕ 1−a(R) (~x A3 )dγC (~x) ≥ Z  Z Z p ∗ p ∗ R H f (x A1CA1 )dγn , g(x A2CA2 )dγn · R2n

(22)

Rn

Rn

Rn

p ϕ(x A∗3CA3 )dγn

1−a(R)

if and only if A∗CA • Hess B = {A∗i CA j ∂i j B}ni, j=1 ≥ 0 where B(x, y, z) = H R (x, y)z1−a(R) is given on Ω × (0, ∞). Notice that in this case ! 1 − qp 2 C= ⊗ In×n = C˜ ⊗ In×n . − qp 1p + qp2 Therefore ˜ 3 , a3 i ⊗ In×n = In×n ; A∗3CA3 = hCa   1 1 q2 ∗ ˜ A2CA2 = hCa2 , a2 i ⊗ In×n = 2 + ⊗ In×n ; b p p2   1 1 q q2 ∗ ˜ A1CA1 = hCa1 , a1 i = 2 1 + + 2 · + 2 ⊗ In×n . a p p p ˜ j , a j i = 1 for all j = 1, 2 and 3. Unfortunately, it is not possible, i.e., there are Ideally we want to have hCa no solutions p and q which satisfy (16). We choose p = ε −1 , and q = −bε −1 if a = |1 − b| and q = bε −1 if ˜ 2 , a2 i = 1+ ε2 a = 1+b. We will set ε = R−β but so far lets think of it as a sufficiently small number. Then hCa b ˜ 1 , a1 i = 1 + ε2 . and hCa a First we consider the case when a + b = 1. The remaining cases are similar. We obtain   ε   1 + aε2 1 − ab 1 1 b ε 1 + bε2 1 ⊗ In×n = A˜ ∗C˜ A˜ ⊗ In×n . (23) C= ⊗ In×n and A∗CA = 1 − ab 2 b b +ε 1 1 1

10

PAATA IVANISVILI

For B(x, y, z) = H R (x, y)z1−a(R) we have   Hxx H + (R − 1)Hx2 Hxy H + (R − 1)Hx Hy (1 − a(R))Hx Hz−1 Hess B = RH R−2 z1−a(R) × Hxy H + (R − 1)Hx Hy Hyy H + (R − 1)Hy2 (1 − a(R))Hy Hz−1  . a(R)(a(R)−1) 2 −2 (1 − a(R))Hx Hz−1 (1 − a(R))Hy Hz−1 H z R We have Hess B = RH R−2 z1−a(R) · ST S∗ where S is a diagonal matrix with entries 1, 1 and (1 − a(R))Hz−1 on the diagonal, and   Hx Hxx H + (R − 1)Hx2 Hxy H + (R − 1)Hx Hy 2  Hy  T = Hxy H + (R − 1)Hx Hy Hyy H + (R − 1)Hy . a(R) Hx Hy R(a(R)−1) a(R) Thus A∗CA • Hess B ≥ 0 if and only if A˜ ∗C˜ A˜ • T ≥ 0. If we set R − 1 = N and R(a(R)−1) = M1 then we have     ε (Hxx H + N · Hx2 ) 1 + aε2  (Hxy H + N · Hx Hy ) 1 − ab Hx  ε (Hyy H + N · Hy2 ) 1 + bε2 Hy  . A˜ ∗C˜ A˜ • T = (Hxy H + N · Hx Hy ) 1 − ab 1 Hx Hy M

By choosing N>−

min{Hxx H, Hyy H} = C = C(kHk∞ , k(Hx−1 , Hy−1 )k∞ , kD2 Hk∞ ) < ∞ min{Hx2 , Hy2 }

we will make diagonal entries positive. Notice that such choice is possible because H ∈ C2 (Ω) and Hx , Hy 6= 0. Let us investigate the sign of 2 × 2 leading minor. We have    2  ε (Hxx H + N · Hx2 ) 1 + aε2  (Hxy H + N · Hx Hy ) 1 − ab 2 2 2 a+b  det (24) + O(N), = N Hx Hy ε (Hyy H + N · Hy2 ) 1 + bε2 (Hxy H + N · Hx Hy ) 1 − ab ab where O(N) ≤ N ·C1 where C1 = C1 (H) is some absolute constant which depends only on kHk∞ , k(Hx−1 , Hy−1 )k∞ and kD2 Hk∞ . Thus we see that choosing Nε > C2 for some absolute C2 = C2 (H) the determinant of the minor will be positive. For the next 2 × 2 minor we have      Hyy H Nε (Hyy H + N · Hy2 ) 1 + bε2 Hy 2 N + (25) det −1+ . = Hy 1 2 Hy M Mb M M  Hyy H R Nε N Notice that N − M = a(R) − 1 + Mb ≥ M1 (Hy2 Nε + Hyy H) and − 1 and since a(R) < R we have Hy2 M 2 + M the last expression is nonnegative if εN > C3 for some absolute C3 = C3 (H). In the similar way we obtain that all 2 × 2 minors are nonnegative provided that Nε is sufficiently large. So it remains to check the sign of det(A˜ ∗C˜ A˜ • T ). We have (26)

a2 b2 M det(A˜ ∗C˜ A˜ • T ) = NεH(a2 Hxx Hy2 + b2 Hyy Hx2 + (1 − a2 − b2 )Hxy Hx Hy )+  Nε 2 2 ε 2N (N − M) Hx Hy (a + b)2 + H(a2 b2 + )(Hxx Hy2 + Hyy Hx2 − 2Hxy Hx Hy ) + 2 N −M  Nε εH(b2 Hxx Hy2 + a2 Hyy Hx2 + 2Hxy Hx Hy ab) + (N − M) Hx2 Hy2 (a + b)2 + 2 2 2 H 2 (a2 b2 + ε 2 )(Hxx Hyy − Hxy ) + εH 2 (Hxx Hyy (a2 + b2 ) + 2abHxy ).

Notice that the first term, i.e., NεH(a2 Hxx Hy2 + b2 Hyy Hx2 + (1 − a2 − b2 )Hxy Hx Hy ) is nonnegative by the condition of the theorem. For the rest of the terms we notice that if we set a(R) = R − Rα and ε = R−β for any 1 > α > β > 0, α + β > 1, then we obtain that for sufficiently large R we have N − M ≈ Rα−1 , ε Nε ≈ R1−β → ∞, ≈ R1−α−β → 0 and (N − M)Nε ≈ Rα−β → ∞. N −M

BOUNDARY VALUE PROBLEM AND THE EHRHARD INEQUALITY

11

2 2 Therefore the second term Nε 2 Hx Hy will dominate the rest of the terms, and in the last terms we notice that 2 2 (N − M) Nε 2 Hx Hy will dominate all the bounded terms. For the remaining cases when a = b + 1 and a = b − 1 we have A˜ ∗C˜ A˜ =     ε ε −1 1 + aε2 −1 − ab 1 1 + aε2 −1 − ab ε −1 − ε 1 + bε2 1  if a = b − 1; −1 − ab 1 + bε2 −1 if a = b + 1. ab −1 1 1 1 −1 1

In both of the cases there is a diagonal matrix D having entries ±1 on the diagonal such that   ε 1 + aε2 1 + ab 1 ˜ = 1 + ε 1 + ε2 1 . (27) DA˜ ∗C˜ AD ab b 1 1 1

Notice that (27) is the same as A˜ ∗C˜ A˜ in (23) except b has switched the sign. Formulas (24), (25) and (26) are still valid if we switch the sign of b to −b. The rest of the discussions proceed without any changes. R Finally in order to obtain (18) we take infimum of the left hand side of (19) over all positive ϕ such that ϕdγn = 1. Indeed, for the convenience of the reader let us mention the following classical result. Lemma 6. Let s > 1 and t < 0 be such that s + t = 1 then for any positive bounded F, G we have Z s Z t Z s t (28) F G dµ ≥ Fdµ Gdµ . The equality holds if F = λ G for a constant λ > 0. Proof. Indeed, notice that B(x, y) = xs yt is a 1-homogeneous convex function for x, y > 0. Therefore (28) follows from the Jensen’s inequality.  R In case of (19) we take t = 1 − a(R) and s = a(R). Taking infimum over all positive and bounded ϕ with

ϕdγn = 1 and finally rising the obtained inequality to the power 1/a(R) we obtain (18). In fact the infimum is attained on the following function 1/a(R) Z      x−y y R (29) dγ p,q,x (y) ϕ(x) = m · H f ,g a b Rn

where the constant m is chosen so that ϕdγn = 1. Clearly such optimizer satisfies δ 0 < ϕ ≤ 1/δ 0 for some nonzero δ 0 > 0 because H is bounded and H > δ . This finishes the proof in the case |1 − a2 − b2 | = 2ab.  R

Remark 1. Inequality (15) was obtained in [16] as a result of an identity, involving Ornstein–Uhlenbeck semigroups and quadratic form A∗CA • Hess B. Since in (29) we have explicit optimizer we conclude that (18) and thereby inequality (1) can be rewritten as an equality in terms of limits, Ornstein–Uhlenbeck semigroups and quadratic form A∗CA • Hess B. One can use this information to obtain some structure of the optimizers ( f , g) in (1). 2.2.2. Elliptic case. In this subsection we consider the following case  2 1 − a2 − b2 < 1. 2ab Lemma 7. There exist positive constants c = c(H) > 0 and R0 = R0 (H) such that for any R > R0 we have 1       R−c Z  Z Z Z R x−y y R R−c H f ,g dγ p,q,x (y) dγn (x) ≥ H f (x) dγn (x), g (x) dγn (x) , a b Rn Rn R R where dγ p,q,x (y) is defined as in (17) and (30)

p=

4 2 (1 − (a − b) )((a + b)2 − 1)

and

q=

2(a2 − b2 − 1) . (1 − (a − b)2 )((a + b)2 − 1)

12

PAATA IVANISVILI

As before using Lemma 6 it is enough to prove (22) for all bounded, positive and uniformly separated from zero ϕ where a(R) = R − c, c will be determined later. Notice that (30) implies that     2 2 1 q 1 1 q q 1 + = 1; hC˜ a˜1 , a˜1 i = 2 1 + + 2 · + 2 = 1. hC˜ a˜3 , a˜3 i = 1; hC˜ a˜2 , a˜2 i = 2 b p p2 a p p p We have C˜ =

1 − qp

1 p

− qp

! 2

+ qp2

=

1 1−a2 +b2 2

1−a2 +b2 2 b2

! .

Notice also that C = C˜ ⊗ In×n is positive-definite if and only if |a − b| < 1 and a + b > 1. Let A = (A1 , A2 , A3 ) be the same matrix as before. By Theorem A, the inequality Z R2n

R

H ( f (~x A1 ), g(~x A2 ))ϕ

1−a(R)

(~x A3 )dγC (~x) ≥ H

R

Z Rn

Z

f (x)dγn ,

Rn

 Z g(x)dγn ·

Rn

1−a(R) ϕ(x)dγn

holds if and only if A∗CA • Hess B ≥ 0 where again B(x, y, z) = H R (x, y)z1−a(R) . We have   1−a2 −b2 1+a2 −b2 1 2ab 2a  2 2 1−a2 +b2  ⊗ I A∗CA =  1−a2ab−b 1  n×n . 2b 1−a2 +b2 1+a2 −b2 1 2a 2b Notice that as before it is enough to check positive definiteness of the following matrix   2 2   2 2 Hx 1+a2a−b Hxx H + N · Hx2 (Hxy H + N · Hx Hy ) 1−a2ab−b   2 2  2 2    Hyy H + N · Hy2 Hy 1−a2b+b  , A˜ ∗C˜ A˜ • T = (Hxy H + N · Hx Hy ) 1−a2ab−b  2 2  2 2   1 Hx 1+a2a−b Hy 1−a2b+b M a(R) where R − 1 = N and R(a(R)−1) = M1 . If R is sufficiently large then all diagonal entries are positive. One can notice that all principal 2 × 2 minors have positive determinant provided that R is sufficiently large and R > a(R). This follows from the fact that  2 ((a + b)2 − 1)(1 − (a − b)2 ) 1 − a2 − b2 1− = > 0, 2ab 4a2 b2  2 1 − a2 + b2 ((a + b)2 − 1)(1 − (a − b)2 ) 1− = > 0, 2b 4b2 R and N − M = a(R) − 1 > 0. So it remains to check the sign of det(A˜ ∗C˜ A˜ • T ). We have

4a2 b2 M det(A˜ ∗C˜ A˜ • T ) = MH(1 − (a − b)2 )((a + b)2 − 1)[a2 Hxx Hy2 + (1 − a2 − b2 )Hxy Hx Hy + b2 Hyy Hx2 ]+   1 2 2 2 2 2 2 2 2 2 2 2 (N − M) NHx Hy (1 − (a − b) )((a + b) − 1) + H(4a b (Hxx Hy + Hyy Hx ) − 2Hxy Hx Hy (1 − a − b ) ) 2   ! 2 − b2 2 1 1 − a 2 (N − M)NHx2 Hy2 (1 − (a − b)2 )((a + b)2 − 1) + 4a2 b2 H 2 Hxx Hyy − Hxy = 2 2ab = MI1 + (N − M)I2 + I3 + I4 .

Notice that the first term I1 ≥ 0 by (2). The second term I2 contains a factor of the form 12 NHx2 Hy2 (1 − (a − b)2 )((a − b)2 − 1) which will dominate the remaining subterms as N → ∞. Finally the sum of the last two

BOUNDARY VALUE PROBLEM AND THE EHRHARD INEQUALITY

13

terms I3 + I4 will be positive provided that 8a2 b2 H 2 (N − M)N ≥ −

(31)

  2 2 2  2 Hxx Hyy − Hxy 1−a2ab−b

Hx2 Hy2 (1 − (a − b)2 )((a + b)2 − 1)

.

The last inequality holds provided that c is sufficiently large number. Indeed, notice that if R > R0 for some large R0 > 0 then (N − M)N = R−1 a(R) (R − a(R)) > c. On the other hand the right hand side of (31) is bounded. This finishes the proof of the lemma. 3. D EGENERATE CASES In this section we give the answers to the question what happens when Hx and Hy vanish in Ω. 3.0.3. The sufficiency condition for the Gaussian measure in the general case. Unfortunately our technique developed in Section 2.2 directly does not apply to the cases when partial derivatives of H vanish. This obstacle could be overcome by using Theorem 2 in [16]. Let us illustrate its application. Let H ∈ C2 (Ω) for some closed bounded rectangular domain Ω ⊂ R2 . Let a and b be strictly positive numbers. Theorem 2. If a + b ≥ 1, |a − b| ≤ 1 and the following conditions hold

a2 Hxx Hy2 + (1 − a2 − b2 )Hxy Hx Hy + b2 Hyy Hx2 ≥ 0 if ! 1−a2 −b2 H a2 Hxx xy 2 ≥ 0 if ∇H = 0, 1−a2 −b2 2H H b xy yy 2

(32) (33)

∇H 6= 0

then Z

(34)

sup H( f (x), g(y))dµ(t) ≥ H

Rn ax+by=t

Z



Z

f dµ, Rn

gdµ Rn

holds for all smooth Rcompactly supported ( f , g) : Rn × Rn → Ω, and any Gaussian measure dµ. If a + b > 1 then we require that xdµ = 0. The theorem was proved in [16] Corollary 5.2 under the extra assumptions that Hx and Hy never vanish in Ω. Let us explain that the general case follows from Theorem 2 given in [16]. We should mention that the assumption of smoothness (actually continuity) of ( f , g) in (34) is crucial, and this makes the statement (34) completely different than what we have in (1). We will see that there are examples of discontinuous functions f , g when (34) does not hold at all, although H satisfies (32) and (33) (see the discussions after the proof of Theorem 3). Proof. As before it is enough to obtain (34) for the standard Gaussian measure dµ = dγn . First let us mention that if (34) holds for n = 1 then it holds for all n ≥ 1. Indeed, assume (1) is true for n = 1. Let h(s) = supax+by=s H( f (x), g(y)). Then h(ax + by) ≥ H( f (x), g(y)) for all x, y ∈ Rn . Let x = (x1 , . . . , xn ), y = (y1 , . . . , yn ) and t = (t1 , . . . ,tn ). We apply one dimensional (34) to the following functions h(1) (s) = h(s, ax2 + by2 , . . . , axn + byn ),

f (1) (s) = f (s, x1 , . . . , xn ) and

g(1) (s) = g(s, y1 , . . . , yn ).

We obtain Z R

h(s, ax2 + by2 , . . . , axn + byn )dγ1 (s) ≥ H

Z

Z

f (s, x1 , . . . , xn )dγ1 (s), R

 g(s, y1 , . . . , yn )dγ1 (s) .

R

Then we consider new functions h(2) (t) =

Z

h(s,t, ax3 + by3 , . . . , axn + byn )dγ1 (s), R

g(2) (t) =

Z

f (s,t, x3 , . . . , xn )dγ1 (s) R

Z

g(s,t, y3 , . . . , yn )dγ1 (s), R

f (2) (t) =

14

PAATA IVANISVILI

and we again apply (1). After iterating this process we will obtain that Z  Z Z hdγn ≥ H f dγn , (35) gdγn . Rn

Rn

Rn

1

1−a2 −b2 2ab

Thus we obtain the desired result. Let B(x, y, z) = z − H(x, y). Set   1 0 a , A = (a1 , a2 , a3 ) := 0 1 b

C=

1−a2 −b2 2ab

! and

1

  Bx 0 0 D =  0 By 0  . 0 0 Bz

By Theorem 2 in [16] the inequality B( f (~u · a1 ), g(~u · a2 ), h(~u · a3 )) ≥ 0 for all ~u = (u, v) ∈ R2 implies the inequality Z  Z Z p p p f (y hCa1 , a1 i)dγ1 (y), g(y hCa2 , a2 i)dγ1 (y), h(y hCa3 , a3 i)dγ1 (y) ≥ 0 B (36) R

R

R

A∗CA • Hess B ≤ 0

if on the subspace ker AD. Notice that if we take smooth bounded h such that h(ax + by) ≥ H( f (x), g(y)) for all x, y ∈ R then (36) is the same as (35) for n = 1 because hCa j , a j i = 1 for all j = 1, 2 and 3. On the other hand the matrix A∗CA • Hess B is negative semidefinite on the subspace ker AD where   2 2   −Hxx −Hxy 1−a2ab−b 0 −Hx 0 a 2 −b2 ∗ 1−a   A CA • Hess B = −Hxy and AD = −Hyy 0 2ab 0 −Hy b 0 0 0 if and only if 1 − a2 − b2 Hxy uv + Hyy v2 ≥ 0 ab These are precisely conditions (32) and (33). Hxx u2 +

for all u, v

such that

Hy Hx u= v. a b 

3.1. The necessity condition for a probability measure: space of test functions. Let dµ be defined as in the introduction. Let us consider the following degenerate case H(x, y) = B(x) where B ∈ C2 (R). Let I be a closed bounded set I ⊂ R. Notice that (34) for all Borel measurable f : Rn → I takes the following form sup B(x) ≥

(37)

x∈Im( f )

sup

B(v)

v∈conv(Im( f ))

where conv(J) denotes the convex hull of the set J. Notice that if f is continuous then Im( f ) = conv(Im( f )), therefore (37) becomes equality and it is always true without any restrictions on B. However, the sufficiency condition, namely (33), requires that if B0 (x0 ) = 0 then B00 (x0 ) ≥ 0. Thus sufficiency condition (33) is unnecessary if we consider inequality (34) on the space of smooth compactly supported functions. However, if we slightly enlarge the class of test functions, for example, if we allow to include discontinuous functions in (34) then the sufficiency conditions (32) and (33) become necessary. In the next theorem we assume that there exists a point (u, v) ∈ int(Ω) such that Hx (u, v) · Hy (u, v) 6= 0. Theorem 3. Let H ∈ C3 (Ω) for some closed bounded rectangular domain Ω ⊂ R2 . Let dµ be defined as in Introduction. Then for the inequality Z  Z Z (38) sup H( f (x), g(y))dµ(t) ≥ H f dµ, gdµ Rn ax+by=t

to hold for all continuous functions ( f , g) : Rn × Rn → Ω having at most one point of discontinuity it is R 2 2 necessary that |1 − a − b | ≤ 2ab, (32) and (33) holds. If a + b > 1 then it is necessary that xdµ = 0.

BOUNDARY VALUE PROBLEM AND THE EHRHARD INEQUALITY

15

Proof. Again we restrict the measure dµ to the case n = 1, i.e., dµ is a probability measure on the real line with finite absolute fifth moment. If Hx · Hy 6= 0 then condition (32) together with constraints on numbers a, b, i.e., |1 − a2 − b2 | ≤ 2ab was already obtained in Section 2.1. If Hu (u, v) = 0 but Hv (u, v) 6= 0 then we consider the following test functions f (x) = u + εϕ(x) and

g(x) = v,

where ϕ is a bounded function. 2 Z   Z Z 2 ϕ(x)dµ(x) + O(ε 3 ). H u + ε ϕ(x)dµ(x), gdµ = H(u, v) + ε Huu R

R

R

On the other hand Z

sup H ( f (x), g(y)) dµ(t) ≤ H(u, v) + ε 2 sup Huu ϕ 2 (x) + O(ε 3 ). x

R ax+by=t

Therefore (38) implies that 2

sup Huu ϕ (x) ≥ Huu x

Z

2 ϕ(x)dµ(x) .

R

Assume the contrary that Huu < 0. Then we obtain that infx ϕ 2 (x) ≤ ( R ϕ(x)dµ(x))2 . There exists a point R τ > 0 such that µ(−∞, τ) = µ(τ, ∞) = 12 . Take ϕ = −1 if x ≤ τ and ϕ = 1 otherwise. Since ϕdµ = 0 we obtain that 1 ≤ 0. Thus Huu ≥ 0. The similar reasoning works in the case when Hx 6= 0 but Hy = 0. Finally, if ∇H = 0 at point (u, v), then in the similar way we can show that Hxx ≥ 0 and Hyy ≥ 0. By considering the following test functions f (x) = u + εϕ(x) and g(y) = v + εψ(y) where ϕ and ψ are bounded continuous functions, and using Taylor’s formula we obtain Z  sup Huu ϕ 2 (x) + 2Huv ϕ(x)ψ(y) + Hvv ψ 2 (y) dµ(t) ≥ R

R ax+by=t

2

Z Huu

ϕdµ

Z + 2Huv

 Z ϕdµ

 Z 2 ψdµ + Hvv ψdµ

for all bounded continuous ϕ and ψ. If Huv = 0 then there is nothing to prove because we have that Huu and Hvv are nonnegative. Further assume that Huv 6= 0. Consider the following function M(x, y) = Huu x2 + 2Huv xy + Hvv y2

in a rectangular domain Λ such that Mx and My never vanish. M does satisfy (1) with bounded continuous test functions f , g. Notice that the test functions (7), (8) and (9) which we used in derivation of necessity xx conditions for (1) are continuous functions. Therefore we can apply Theorem 1, and we have a2 M + (1 − M2 M

x

M

xy + b2 Myy2 ≥ 0. This implies that a2 − b2 ) Mx M y y

(39)

a2

2Huu 2Huv 2Hvv + (1 − a2 − b2 ) + b2 ≥0 2 (2Huu x + 2Huv y) (2Huu x + 2Huv y)(2Hvv y + 2Huv x) (2Hvv y + 2Huv x)2

2 6= 0 otherwise for all x, y such that Huu x + Huv y 6= 0 and Hvv y + Huv x 6= 0. We should assume that Huu Hvv − Huv there is nothing to prove. Therefore (39) can be rewritten as follows a2 Huu x2 +(1−a2 −b2 )Huv xy+b2 Hvv y2 ≥ 0 for all nonzero x and y. The last inequality implies (33), and this finishes the proof of the theorem. 

Let us make the following remark. Notice that for inequality (38) to hold for the standard Gaussian measure and the class of functions f , g having at most one point of discontinuity it is necessary that conditions (32) and (33) must be satisfied. These conditions are also sufficient for inequality (38) to hold for a class of smooth functions but not discontinuous. For example, consider H(x, y) = −x4 . This function does satisfy (32), (33). On the other hand, inequality (38) fails for the function f (x) = −1 for x ≤ 0 and f = 1 otherwise. Thus we see that in the degenerate case, i.e., when we allow Hx and Hy to switch the signs in Ω, it is crucial whether the space of test functions consists by continuous or discontinuous functions.

16

PAATA IVANISVILI

4. A PPLICATIONS In the applications H will not belong to the class C3 (Ω). Instead we will only have H ∈ C3 (int(Ω)) and H is lower-semicontinuous on Ω. Thus we cannot directly apply Theorem 1. In order to avoid this obstacle we will slightly modify the functions H and then pass to the limit in (1). For example, if Ω = [0, 1]2 we will consider auxiliary functions Hε1 ,ε2 ,δ1 ,δ2 (x, y) = H(ε1 + xδ1 , ε2 + yδ2 ) for 0 < ε1 , ε2 , δ1 , δ2 < 1, and we apply (1) to these functions. Finally we just send ε1 , ε2 → 0 and δ1 , δ2 → 1 in the appropriate order. 4.1. Ehrhard inequality and the Gaussian measure. Let Ψ(s) = that if a + b ≥ 1 and |a − b| ≤ 1 then

Rs

−∞ dγ1 .

The Ehrhard inequality states

 γn (aA + bB) ≥ Ψ aΨ−1 (γn (A)) + bΨ−1 (γn (B))

(40)

for all Borel measurable A, B ⊂ Rn such that the Minkowski sum aA + bB is measurable. The equality is attained in (40) for the half spaces with one containing the other one. Inequality (40) was proved by Ehrhard [14] when A and B are convex sets under the assumptions that a + b = 1. Ehrhard, by developing Gaussian symmetrization method showed that (40) is enough to prove in the case n = 1. It was an open problem whether (40) holds for the Borel measurable sets A and B (see [22]). Latala [19] showed that the inequality is true if at least one of the sets is convex (again under the constraints a + b = 1). It was also noticed that the inequality is equivalent to its functional version  Z  Z  Z  −1 −1 −1 −1 (41) sup Ψ aΨ ( f (x)) + bΨ (g(y)) dγn ≥ Ψ aΨ f dγn + bΨ gdγn Rn ax+by=t

for all smooth ( f , g) : Rn × Rn ∈ [δ , 1 − δ ]2 for some 0 < δ < 1/2. Finally, Borell in his series of papers [7, 8] using a maximum principle obtained (41). We refer the reader to [3, 27] for further developments of the subject. Among many applications of the Ehrhard ineuqlaiy, it also allows to find isoperimetric profile for the Gaussian measure. Let dµ be a probability measure on Rn . Let Aε be an epsilon neighborhood of the set A. Set µ(Aε ) − µ(A) and Iµ (p) = inf µ + (A). µ + (A) = lim inf ε→0 ε µ(A)=p The function Iµ (p) is called isoperimetric profile of the measure µ. Iµ (p) measures minimal perimeter of the set A under the constraint that µ(A) = p is fixed. One can obtain from (40) that Iγn (p) ≥ Ψ0 (Ψ−1 (p)) which is regarded as an infinitesimal version of |Aε |γn ≥ Ψ(Ψ−1 (p) + ε). A subtle result of Bobkov [4] asserts that Rx for any even, log-concave measure dµ on the real line we have Iµ (p) = Φ0 (Φ−1 (p)) where Φ(x) = −∞ dµ. We should also mention that if dµ is a probability measure with positive distribution function Φ on the real line then there is a trivial upper bound inf A,B⊂R: µ(A)=x,µ(B)=y

µ(aA + bB) ≤ Φ(aΦ−1 (x) + bΦ−1 (y)) for all x, y > 0.

The inequality is exhausted by half-lines. If dµ is a log-concave measure then we also have a trivial lower bound µ(aA + bB) ≥ µ(A)a µ(B)b for a + b = 1 via the Prékopa–Leindler inequality. These considerations motivate to the following question: which measures dµ satisfy (40) or (41) with Ψ replaced by Φ(x), i.e., with a distribution function of dµ. Further by dµn we denote product measure, i.e., µn = µ × µ × . . . × µ.

Theorem 4. Let dµ be aRprobability measure with density function ϕ = e−V ∈ C2 (R) and finite absolute s fifth moment. Let Φ(s) = −∞ dµ. Let a and b be some fixed positive numbers. Let H(x, y) = Φ(aΦ−1 (x) + bΦ−1 (y)) on [0, 1]2 \ {(0, 1) ∪ (1, 0)} and set H(0, 1) = H(1, 0) = 0. For the inequality Z       Z Z x−y y (42) ess sup H f ,g dµn (x) ≥ H f dµn , gdµn a b Rn y∈Rn

BOUNDARY VALUE PROBLEM AND THE EHRHARD INEQUALITY

17

to hold for n = 1 and all Borel measurable ( f , g) : Rn × Rn ∈ [0, 1]2 it is necessary that  2 Z 1 − a2 − b2 0 0 0 V (ax + by) ≤ aV (x) + bV (y), ≤ 1, (43) xdµ = 0 if a + b > 1. 2ab Rn Moreover, if dµn = dγn is the Gaussian measure then the above conditions are also sufficient for the inequality to hold for any n ≥ 1. Further inequality (42) we call the Ehrhard inequality. Proof. First we prove the necessity part. We consider H(x, y) on the domain Ωδ = [δ , 1 − δ ]2 for some δ ∈ (0, 1/2). Clearly H ∈ C3 (Ωδ ) and in particular (42) holds for Borel measurable ( f , g) : Rn × Rn → Ωδ . Notice that Hx and Hy never vanish in Ωδ . It remains to use the Theorem 1. Direct computations show that Hxy Hyy Hxx + (1 − a2 − b2 ) + b2 2 = 2 Hx Hx Hy Hy   1 ϕ 0 (Φ−1 (x)) ϕ 0 (Φ−1 (y)) ϕ 0 (aΦ−1 (x) + bΦ−1 (y)) −a − b + . ϕ(aΦ−1 (x) + bΦ−1 (y)) ϕ(Φ−1 (x)) ϕ(Φ−1 (y)) ϕ(aΦ−1 (x) + bΦ−1 (y))

a2

Therefore if we introduce new variables x˜ = Φ−1 (x) and y˜ = Φ−1 (y) we see that by Theorem 1 the necessary condition for (42) is (43). For the sufficiency condition we should introduce an auxiliary function (44)

Hε,δ (x, y) = H(ε + xδ , ε + yδ ) for

0 < ε, δ < 1,

ε + δ < 1.

Notice that Hε,δ ∈ C3 ([0, 1]2 ) and it satisfies (2). By Theorem 1 we have (1) for Hε,δ . We consider      x−y y hε,δ (x) = ess sup Hε,δ f . ,g n a b y∈R Since H and hence Hε,δ is increasing in each variable we have hε,δ (x) ≥ h0,δ (x) almost everywhere. As ε → 0 the sequence of functions hε,δ is decreasing in ε. These two conditions imply that limε→0 hε,δ = h˜ δ almost everywhere for some Borel measurable h˜ δ . Clearly h˜ δ ≥ h0,δ (x). We claim that h˜ δ = h0,δ a.e.. Indeed, otherwise h˜ δ > h0,δ + τ for some τ > 0 on some set of positive measure. Since continuous function on a compact set is uniformly continuous we can choose ε0 so that |Hε,δ − H0,δ |C([0,1]2 ) ≤ τ/2 for all ε < ε0 . This would contradict to the fact that h˜ δ > h0,δ + τ. Thus limε→0 hε,δ = h0,δ almost everywhere. Since |Hε,δ | ≤ 1 the Lebesgue dominated convergence theorem implies that hε,δ → h0,δ in L1 (dµn ). Using the fact that H is increasing in each variable R we R obtain h0,δ ≤ h0,1 . This gives the left hand side of (42). For the right hand side notice that if the point ( f , Rg) coincides with (0, 1) or (1, 0) then there is nothing to prove because H ≥ 0. R In the remaining case when ( f , g) is the point of continuity of H in [0, 1]2 we obtain the right hand side of (42) by taking the limit.  The next corollary says that in the class of even probability measures on the real line, the only measures which satisfy the Ehrhard inequality are the Gaussian measures. Corollary 1. An even probability measure dµ with finite absolute fifth moment and the density function e−V ∈ C2 (R) satisfies Ehrhad inequality (42) with n = 1 and some a, b > 0 if and only if it is the Gaussian measure. Proof. By (43) and the fact that V 0 is an odd function we obtain −V 0 (ax + by) = V 0 (−ax − by) ≤ aV 0 (−x) + bV 0 (−y) = −aV 0 (x) − bV 0 (y).

Therefore V 0 (ax + by) = aV 0 (x) + bV 0 (y) for all x, y ∈ R. If we take derivative with respect to x we obtain aV 00 (ax + by) = aV 00 (x). Choose y so that ax + by = 0 then we obtain that V 00 (x) = V 00 (0) for all x ∈ R. Thus V = cx2 + d if a + b > 1 and V = cx2 + kx + d if a + b = 1. Further we will just write V = cx2 + k(a + b − R (x) −V 1)x + d instead of considering previous cases separately. In both cases c > 0 because e < ∞.

18

PAATA IVANISVILI

y P

V0 L

y = x · c+ x y = x · c− F IGURE 2. Graph of V 0 , parallelogram P and the line L On the other hand testing the Ehrhard inequality (42) with dγ1 and test functions f˜(x) = f (px + q), g(y) ˜ = g(py + q) we see that after the change of variables the inequality holds for the probability measures dµ = 2 e−V dx with V (x) = ecx +k(a+b−1)x+d .  It turns out that the measures e−V which satisfy V 0 (ax + by) ≤ aV 0 (x) + bV 0 (y) for all x, y ∈ R and all a, b > 0 with a + b ≥ 1 and |a − b| ≤ 1 have a simple geometrical description.

Corollary 2. Let dµ be a probability measure with the density function e−V ∈ C2 (R) and finite absolute fifth moment. Assume that the Ehrhard inequality (42) holds for all a, b > 0 with a + b ≥ 1 and |a − b| ≤ 1. Then R −V (x) dµ = 0 and V 0 is a convex function. Moreover, there exist constants c > 0 with c ≤ c such that ± − + R xe limx→±∞ |V 0 (x) − xc± | = 0. Proof. By Theorem 4 we have V 0 (ax + by) ≤ aV 0 (x) + bV 0 (y)

(45)

for all real x, y and for all positive numbers a, b with a + b ≥ 1 and |a − b| ≤ 1. Inequality (45) has the following geometrical meaning. Let epi V 0 = {(x, y) ∈ R2 : y ≥ V 0 (x)} be the epigraph of V 0 . Condition (45) means that a(x,V 0 (x)) + b(y,V 0 (y)) ∈ epi V 0 for all a + b ≥ 1 and |a − b| ≤ 1. It follows that the infinity parallelogram P (see Figure 2) with sides s · (x,V 0 (x)) + (1 − s) · (y,V 0 (y)), 0

0

0

0

0

0

x ∈ [0, 1];

(x,V (x)) + s · ((x,V (x)) + (y,V (y))),

(y,V (y)) + s · ((x,V (x)) + (y,V (y))),

s ∈ [0, ∞)

s ∈ [0, ∞)

belongs to epiV 0 . Since this is true for all x, y ∈ R it follows that this can happen if and only if V 0 is convex and epiV 0 contains all lines L of the form s · (x,V 0 (x)), s ≥ 1 for all Rx ∈ R. Then it follows that there exist real numbers c± , c− ≤ c+ such that limx→±∞ |V (x) − xc± | = 0. Since e−V dx < ∞ it follows that there exists sufficiently large p, q > 0 such that V 0 (p) > 0 and V 0 (−q) < 0. This implies that c± > 0.  One can observe that c+ = c− if and only if dµ is the Gaussian measure. It would be interesting to see whether the converse of Corollary 2 is also true at least for a + b = 1, i.e., a probability measure with density

BOUNDARY VALUE PROBLEM AND THE EHRHARD INEQUALITY

19

e−V and the function V described in Corollary 2 satisfies the Ehrhard (42) with n = 1. If this is the   q inequality c− −1 case then for such measures we obtain |Aε |µ ≥ Φ Φ (|A|µ ) + ε c+ . Next we investigate 1-homogeneous functions which satisfy (1). The class of 1-homogeneous functions was studied in a remarkable paper of Borell [6]. One should compare our results of Subsection 4.2 with the results of Borell. For the convenience of the reader we have included Borell’s theorem in Appendix (see Theorem B). 4.2. Lebesgue measure and 1-homogeneous functions. In this section we describe all 1-homogeneous functions H which satisfy (1). It turns out that they are either convex functions, or the Prékopa–Leindler type functions (47), (48) and (49). Further we will always assume that the numbers a, b > 0 satisfy the constraint in (2), i.e., a + b ≥ 1 and |a − b| ≤ 1.

Corollary 3. Let H ∈ C3 (int(R2+ )) be 1-homogeneous function with Hx , Hy 6= 0. Partial differential inequality (2) holds on int(R2+ ) if and only if one of the following holds: (46)

H

is a convex function ;

(47)

H(x, y) = xa yb ,

(49)

b = 1 − a,

with

H(x, y) = −x−a yb

b = a − 1,

with

b = a + 1,

H(x, y) = −x y

(48)

with

a −b

a ∈ (0, 1);

a ∈ (1, ∞);

a ∈ (−∞, 0).

Proof. Since H is 1-homogeneous we have H(x, y) = xh( xy ) for some h. Conditions Hx 6= 0 and Hy 6= 0 imply that h0 6= 0 and h(t) − th0 (t) 6= 0. Notice that (2) takes the following form (50)

a2

h00 ((b|h| − |h0 |t)2 + |h||h0 |t(2b + sign(hh0 )(a2 − b2 − 1))) Hxx 2 2 Hxy 2 Hyy + (1 − a − b ) = + b Hx2 Hx Hy Hy2 x(h − th0 )2 (h0 )2

where h = h(t) and t = y/x. Notice that (2b ± (a2 − b2 − 1)) ≥ 0. We have (51)

(b|h| − |h0 |t)2 + |h||h0 |t(2b + sign(hh0 )(a2 − b2 − 1)) ≥ 0.

The last expression becomes zero if and only if b|h| = |h0 |t and |h||h0 |t(2b + sign(hh0 )(a2 − b2 − 1)) = 0. But if b|h| = |h0 |t then h 6= 0 (since h 6= 0). We consider several cases. Suppose h0t = bh. Then h = Ct b for some nonzero C. Then sign(hh0 ) = 1 and we obtain that 2b + (a2 − 2 b − 1) = 0. The last equality implies that either a + b = 1 or b − a = 1. Thus we obtain that h(t) = Ct b for some nonzero C with a + b = 1 or b − a = 1. Suppose h0t = −bh. Then h = Ct −b for some nonzero C. Then sign(hh0 ) = −1 and we obtain that 2b − (a2 − b2 − 1) = 0. The last equality can happen if and only if a − b = 1. Thus h(t) = Ct −b with some nonzero C and a − b = 1. Thus if (51) becomes zero on some interval then the several cases might happen: 1) a + b = 1 and h(t) = Ct b ; 2) b − a = 1 and h(t) = Ct b ; 3) a − b = 1 and h = Ct −b . The obtained functions are either convex functions or concave functions. Notice that non of these concave functions can be glued smoothly with a convex function h00 ≥ 0. It follows that (50) is nonnegative if and only if either h is a convex function, or h is a concave function of the form (52)

h(t) = Ct b ,

C > 0,

a+b = 1

and H(x, y) = Cxa yb ;

(53)

h(t) = Ct b ,

C < 0,

and H(x, y) = Cx−a yb ;

(54)

h(t) = Ct −b ,

b−a = 1

C < 0,

a−b = 1

and

H(x, y) = Cxa y−b .

Since H(1, y) = h(y) and H is 1-homogeneous, then convexity/concavity of h is equivalent to convexity/concavity of H. In the first case when h00 ≥ 0, h0 6= 0 and h −th0 6= 0 then H(x, y) becomes 1-homogeneous convex function such that Hx and Hy never vanish. In the remaining cases the descriptions are given in (52), (53) and (54).

20

PAATA IVANISVILI

 Thus in case of smooth 1-homogeneous functions there are two instances: H is convex, or H coincides with one of the functions (47), (48) and (49). Next we describe measures dµ which satisfy (1) for 1-homogeneous functions H. First we consider the case when H is a function of the form (52), (53) and (54). The case of convex functions we consider separately in the next subsection. 4.2.1. Case of the Prékopa-Leindler functions. Functions found in (62), (63) and (64) provide us with the following inequalities. Corollary 4. Let dµ be the Gaussian measure (or the Lebesgue measure). We have     Z a Z 1−a Z y a x−y 1−a (55) ess sup f g dµ(x) ≥ f dµ gdµ , a a−1 Rn y∈Rn for all nonnegative Borel measurable f , g ∈ L1 (dµ). Moreover, if dµ is even then    Z a Z 1−a  Z y 1−a a x−y (56) g dµ(x) ≤ f dµ gdµ , ess inf f a a−1 Rn y∈Rn

a ∈ (0, 1)

a ∈ (−∞, 0) ∪ (1, ∞) R

for all bounded compactly supported nonnegative Borel measurable f , g with positive gdµ and

R

f dµ.

Proof. The inequalities in the corollary follow from the application of Theorem 1 to the functions (47), (48) and (49). The only obstacle to applying Theorem 1 directly is that H ∉ C³(R²₊). To avoid this obstacle one needs to consider the auxiliary function Hε(x, y) = H(x + ε, y + δ) for ε, δ > 0 and then send ε, δ → 0 (see the similar discussion around (44)). The case of the Lebesgue measure follows from the Gaussian measure and the fact that H is 1-homogeneous. Indeed, we can test the inequalities in the corollary on the functions fλ(x) = f(λx) and gλ(x) = g(λx). By making a change of variables and using 1-homogeneity of H we can send λ → ∞ and obtain the desired result. □

Inequality (55) is the classical Prékopa–Leindler inequality [23, 26]. Among its many applications we should mention the remarkable paper [5]. Stability of (55) was studied in [2]. The inequality implies that the marginals of log-concave measures are log-concave. For a local version of the latter fact we refer the reader to [1] (see also [11] for the complex setting). An extension of the inequality was obtained in [13]. Inequality (56) can be understood as an extension of the classical Prékopa–Leindler inequality for a ∉ [0, 1]. In fact, one can show that (55) and (56) are equivalent if instead of the essential infimum in (56) we had only the infimum. Since the measure in (56) is even, it is easy to show that (56) for a ∈ (1, ∞) is the same as (56) for a ∈ (−∞, 0). Notice that if we take f(x) = 1_C(x) and g(y) = 1_B(y) in (56) for compact Borel measurable C, B ⊂ R^n with |C|, |B| > 0, then we obtain

(57)  ess inf_y 1_C((x − y)/a) · 1_B(y/(a − 1)) = 1_A(x);   |A| = |{x ∈ R^n : |(a − 1)B \ (x − aC)| = 0}| ≤ |C|^a |B|^{1−a}

for any a > 1. Since the measure is even, if we interchange C → −C, A → −A and denote λ = 1/a, then we obtain from (57) that

(58)  |C| ≥ |{x ∈ R^n : |[(1 − λ)B + λx] \ C| = 0}|^λ · |B|^{1−λ}.

Inequality (58) does not follow immediately from the general Brunn–Minkowski inequality |ess{λA + (1 − λ)B}| ≥ |A|^λ |B|^{1−λ}. However, it is easy to see that the weaker version of (58), i.e.,

|C| ≥ |{x ∈ R^n : [(1 − λ)B + λx] \ C = ∅}|^λ · |B|^{1−λ},

is equivalent to the Brunn–Minkowski inequality.
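As an illustration, here is a rough numerical sanity check of (55) on the real line for the standard Gaussian measure; the test functions, grids and the value of a below are ad hoc choices, and the grid maximum is only a crude stand-in for the essential supremum (a sketch, not a verification of the theorem).

```python
# Sketch: numerically test (55) in dimension one for the standard Gaussian measure.
import numpy as np

a = 0.4                                   # a in (0, 1); the second dilation parameter is 1 - a
xs = np.linspace(-8.0, 8.0, 801)          # grid in x for the outer integral
ys = np.linspace(-8.0, 8.0, 801)          # grid in y approximating the essential supremum
dx = xs[1] - xs[0]
gauss = np.exp(-xs**2/2)/np.sqrt(2*np.pi) # standard Gaussian density on the x-grid

f = lambda t: np.exp(-t**2)               # arbitrary nonnegative test functions
g = lambda t: 1.0/(1.0 + t**2)

X, Y = np.meshgrid(xs, ys, indexing='ij')
integrand = np.max(f((X - Y)/a)**a * g(Y/(1 - a))**(1 - a), axis=1)

lhs = np.sum(integrand * gauss) * dx                              # left-hand side of (55)
rhs = (np.sum(f(xs)*gauss)*dx)**a * (np.sum(g(xs)*gauss)*dx)**(1 - a)
print(lhs, rhs, lhs >= rhs)               # the inequality should hold with visible margin
```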


It is a remarkable result of Borell [6] that (55) holds if and only if dµ is a log-concave measure. One can show that the weaker version of (56), i.e., when the essential infimum is replaced by the infimum, also holds for even log-concave measures. Finally, we would like to mention that even though among C³ smooth 1-homogeneous functions H with nonvanishing Hx and Hy there are only two instances, either H is convex or H is of the form (47), (48) and (49), this is not the case in general if we drop the assumption of smoothness. We can always take the maximum of any two functions which satisfy (1). Indeed, the next proposition says that (1) is closed under taking the maximum.

Proposition 1. If H1 and H2 satisfy (1) then H = max{H1, H2} also satisfies (1).


Proof. Indeed, suppose H(∫ f dµ, ∫ g dµ) = H1(∫ f dµ, ∫ g dµ). Then, since H ≥ H1, we have

∫_{R^n} ess sup_y H(f((x − y)/a), g(y/b)) dµ(x) ≥ ∫_{R^n} ess sup_y H1(f((x − y)/a), g(y/b)) dµ(x) ≥ H1(∫ f dµ, ∫ g dµ) = H(∫ f dµ, ∫ g dµ). □

Next we describe measures dµ and convex functions H which satisfy (1).

4.3. Convex functions.

4.3.1. Case of convex functions and a + b = 1. If H is a convex function (not necessarily 1-homogeneous) one should be careful in this case, especially when a + b = 1. This case is a trivial application of the fact that ess sup_y H(f((x − y)/a), g(y/b)) ≥ H(f(x), g(x)) and of Jensen's inequality, i.e., (1) holds for any probability measure dµ. If in addition H is 1-homogeneous, it easily follows that (1) holds for any σ-finite measure dν. Let us formulate the last statement precisely.

Proposition 2. Let H ∈ C(R²₊) be a nonnegative 1-homogeneous convex function. Then for any σ-finite measure dν absolutely continuous with respect to the Lebesgue measure we have

(59)  ∫_{R^n} ess sup_y H(f((x − y)/a), g(y/b)) dν(x) ≥ H(∫ f dν, ∫ g dν)   for all f, g ∈ L¹(dν).

Proof. There exist measurable sets A1 ⊂ A2 ⊂ … such that ∪_j A_j = R^n and 0 < ν(A_j) < ∞ for all j. We apply Jensen's inequality to the probability measures ν_j = (1/ν(A_j)) 1_{A_j} ν. By using the fact that H is 1-homogeneous we obtain

∫_{R^n} ess sup_y H(f((x − y)/a), g(y/b)) dν(x) ≥ ∫_{R^n} H(f, g) dν ≥ ∫_{A_j} H(f, g) dν ≥ H(∫_{A_j} f dν, ∫_{A_j} g dν).

Finally we just send j → ∞ and use the continuity of H.



Remark 2. Fix some positive a, b such that a + b = 1. Let M_p(x, y) = (ax^p + by^p)^{1/p} for x, y ≥ 0. Then M_p(x, y) is a 1-homogeneous convex function on R²₊ for p ≥ 1. Thus (59) holds for H = M_p with p ≥ 1. One may think that Remark 2 improves the Borell–Brascamp–Lieb inequality for p ≥ 1 because M_p ≥ M_{p/(np+1)} pointwise. We caution the reader that this is not the case, because in the Borell–Brascamp–Lieb inequality M_p(x, y) is set to be zero if xy = 0 (see [10]). Under this restriction the new function M̃_p fails to be convex and the inequality becomes nontrivial (see also [12] for a Riemannian version of the Borell–Brascamp–Lieb inequality).

Remark 3. There are nontrivial functions f, g for which inequality (59) with H = M_p becomes an equality. Indeed, set f(x) = F(x) and g(y) = F(y) for some positive function F such that F^p is concave. Then F(ax + by) ≥ (aF(x)^p + bF(y)^p)^{1/p}. Thus the integrand in the left hand side of (59) becomes F(x), and since M_p(x, x) = x we obtain equality in (59). This is an example of a function H for which the partial differential inequality (PDI) in (2) holds without being identically an equality, and yet (1) is sharp. The last remark also sheds some light on the connection between the existence of nontrivial optimizers and the equality cases of the PDI in (2).
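A small numerical spot check (a sketch, not part of the text) supporting Remark 2: the Hessian of M_p at sampled points of the open positive quadrant should be positive semidefinite for p ≥ 1. The sampled exponents and points below are arbitrary choices.

```python
# Sketch: sample Hessians of M_p(x, y) = (a x^p + b y^p)^(1/p) and check convexity for p >= 1.
import numpy as np
import sympy as sp

x, y = sp.symbols('x y', positive=True)
a_val, b_val = 0.3, 0.7                        # a + b = 1 as in Remark 2
rng = np.random.default_rng(0)

for p in [1.0, 1.5, 2.0, 4.0]:
    Mp = (a_val*x**p + b_val*y**p)**(1.0/p)
    hess = sp.lambdify((x, y), sp.hessian(Mp, (x, y)), 'numpy')
    for _ in range(200):
        u, v = rng.uniform(0.2, 5.0, size=2)
        eigs = np.linalg.eigvalsh(np.array(hess(u, v), dtype=float))
        tol = 1e-8*max(1.0, float(np.abs(eigs).max()))
        assert eigs.min() >= -tol, (p, u, v, eigs)
print("sampled Hessians of M_p are positive semidefinite for p >= 1")
```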


4.3.2. Case of convex functions and a + b > 1. In this case inequality (1) for the Gaussian measure seems to be a nontrivial statement, i.e., it does not follow from an application of Jensen's inequality as before.

Proposition 3. Let H ∈ C(Ω) ∩ C³(int(Ω)) for some closed bounded rectangular domain Ω ⊂ R². Assume that H is a convex function and Hx, Hy never vanish in int(Ω). Let a, b be some positive numbers such that a + b > 1. Then (1) holds for the Gaussian measure dµ if and only if |a − b| ≤ 1 and ∫ x dµ = 0.

Proof. The necessity follows from Theorem 1. For the sufficiency it is enough to check that convex functions H satisfy (2). Indeed,

(60)  a² Hxx/Hx² + b² Hyy/Hy² + (1 − a² − b²) Hxy/(Hx Hy) ≥ 2ab √(Hxx Hyy)/|Hx Hy| − |1 − a² − b²| |Hxy|/|Hx Hy| ≥ (|Hxy|/|Hx Hy|) (2ab − |1 − a² − b²|) ≥ 0.

Finally we need to invoke the auxiliary function (44) to relax the requirement H ∈ C³(Ω) to H ∈ C(Ω) ∩ C³(int(Ω)). □

Thus we have finished the characterization of (1) in the case of 1-homogeneous functions and in the case of convex functions. In the next section we describe all concave functions H which satisfy (1).

4.4. Concave functions and universality of the Prékopa–Leindler functions. We noticed in the previous section that if H is a convex function then (2) holds. Actually, if we assume even less than convexity of H, namely that the following matrix is positive semidefinite,

(61)   ( Hxx                           (|1 − a² − b²|/(2ab)) Hxy )
       ( (|1 − a² − b²|/(2ab)) Hxy     Hyy                        )  ≥ 0,

then H still satisfies (2). Indeed, the proof proceeds in the same way as in (60). Notice that since |1 − a² − b²|/(2ab) ≤ 1, condition (61) is weaker than convexity of H: any convex function H satisfies (61), but not every function H which satisfies (61) is convex. There are many functions which are not convex and satisfy (61). Such functions appeared in the proof of hypercontractivity of the Ornstein–Uhlenbeck semigroup and in obtaining Gaussian noise stability. For example, let X and Y be two standard real Gaussian random variables which are not independent, EXY = ρ, 0 < ρ < 1. Consider

H(u, v) = −P(X < Φ⁻¹(u), Y < Φ⁻¹(v)),

where Φ(s) = ∫_{−∞}^{s} dγ₁. Then H is not convex but it satisfies (61) with ρ = |1 − a² − b²|/(2ab). We refer the reader to [16] for more details. On the other hand, it is an interesting question whether there are concave functions H which satisfy (2). For example, we have already found some of them: (47), (48) and (49). These functions correspond to the Prékopa–Leindler inequality, which is sometimes regarded as a reverse Hölder inequality, because (1) can be treated as a reverse Jensen inequality. Are there other nontrivial concave functions H which satisfy (2)? One trivial example is H(x, y) = αx + βy. It turns out that all nontrivial concave functions which satisfy (1) are of the form (47), (48) and (49).
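A random spot check (a sketch, not part of the text) of the claim made above about (61): whenever the matrix in (61) is positive semidefinite and Hx, Hy are nonzero, the left-hand side of (2) is nonnegative. The sampling ranges below are arbitrary.

```python
# Sketch: sample second-derivative data satisfying (61) and check the PDI in (2).
import numpy as np

rng = np.random.default_rng(1)
checked = 0
for _ in range(50000):
    a, b = rng.uniform(0.05, 2.0, size=2)
    if a + b < 1 or abs(a - b) > 1:                  # keep the admissible range for (a, b)
        continue
    rho = abs(1 - a**2 - b**2)/(2*a*b)               # off-diagonal weight in (61); rho <= 1 here
    Hxx, Hyy, Hxy = rng.normal(size=3)
    Hx, Hy = rng.uniform(0.1, 3.0, size=2)*rng.choice([-1.0, 1.0], size=2)
    if np.linalg.eigvalsh(np.array([[Hxx, rho*Hxy], [rho*Hxy, Hyy]])).min() < 0:
        continue                                     # (61) fails, nothing to check
    pdi = a**2*Hxx/Hx**2 + (1 - a**2 - b**2)*Hxy/(Hx*Hy) + b**2*Hyy/Hy**2
    assert pdi >= -1e-9, (a, b, Hxx, Hxy, Hyy, Hx, Hy)
    checked += 1
print("checked", checked, "admissible samples; no counterexample found")
```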

Proposition 4. Let H ∈ C^∞(Ω) be a concave function such that Hx, Hy never vanish. Assume that H is nowhere linear. Then H satisfies (2) if and only if H is generically equivalent to one of the following functions:

(62)  H(x, y) = x^a y^b   with b = 1 − a,   a ∈ (0, 1);

(63)  H(x, y) = −x^a y^{−b}   with b = a − 1,   a ∈ (1, ∞);

(64)  H(x, y) = −x^{−a} y^b   with b = a + 1,   a ∈ (0, ∞).

By generically equivalent we mean up to shifts, translations and dilations. For example, H(x, y) = C(c1 x + d1)^a (c2 y + d2)^b + d3 with C > 0 and some constants c1, c2, d1, d2, d3 is generically equivalent to H = x^a y^b.
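Before turning to the proof, a quick numerical spot check (a sketch) that the three families (62)–(64) are indeed concave on the open positive quadrant; the sampled exponents and points below are arbitrary, and the PDI identity for these functions was already checked symbolically after (52)–(54).

```python
# Sketch: sampled Hessians of (62)-(64) should be negative semidefinite on the positive quadrant.
import numpy as np
import sympy as sp

x, y = sp.symbols('x y', positive=True)
rng = np.random.default_rng(2)

def check_concave(H):
    hess = sp.lambdify((x, y), sp.hessian(H, (x, y)), 'numpy')
    for _ in range(200):
        u, v = rng.uniform(0.2, 5.0, size=2)
        eigs = np.linalg.eigvalsh(np.array(hess(u, v), dtype=float))
        tol = 1e-8*max(1.0, float(np.abs(eigs).max()))
        assert eigs.max() <= tol, (H, u, v, eigs)

aa = sp.Rational(1, 3)                          # sample a in (0, 1) for (62)
check_concave(x**aa * y**(1 - aa))              # (62): b = 1 - a
aa = sp.Rational(5, 2)                          # sample a in (1, oo) for (63)
check_concave(-x**aa * y**(-(aa - 1)))          # (63): b = a - 1
aa = sp.Rational(3, 4)                          # sample a in (0, oo) for (64)
check_concave(-x**(-aa) * y**(aa + 1))          # (64): b = a + 1
print("all sampled Hessians are negative semidefinite")
```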


Proof. It is clear that the functions of the form (62), (63) and (64) are concave and satisfy (2). Let us show that there are no other concave functions which satisfy (2). Without loss of generality we can assume that Hx, Hy > 0. Indeed, otherwise, if Hx < 0 but Hy > 0, we can consider H̃(x, y) = H(−x, y) on the reflected domain Ω̃. Notice that (2) does not change under this transformation.

Since H is concave and Hx, Hy > 0, we have Hxx, Hyy ≤ 0 and Hxx Hyy ≥ Hxy², hence

0 ≤ a² Hxx/Hx² + b² Hyy/Hy² + (1 − a² − b²) Hxy/(Hx Hy) ≤ −2ab √(Hxx Hyy)/(Hx Hy) + |1 − a² − b²| |Hxy|/(Hx Hy) ≤ (|Hxy|/(Hx Hy)) (|1 − a² − b²| − 2ab) ≤ 0.

Therefore we must have

(65)  2ab = |1 − a² − b²|,   Hxx Hyy = Hxy²;

(66)  a² Hxx/Hx² = b² Hyy/Hy²,   sign((1 − a² − b²) Hxy) > 0.

It follows that if one of the second partial derivatives vanishes at a point then all of them vanish there. Consider a point (u, v) in the interior of Ω where Hxx ≠ 0. Then Hyy, Hxy ≠ 0. Since Hxx Hyy − Hxy² = 0, the function H satisfies the homogeneous Monge–Ampère equation. Locally at the point (u, v) there exists a foliation, i.e., a family of lines such that ∇H is constant on each line. This is a result of Pogorelov [25], and we also refer the reader to [18], Lemma 3, for the terminology. The lines provide us with directions (cos(θ(s)), sin(θ(s))) for some C¹ smooth θ(s) and s ∈ J for some interval J. Each line can be identified with some point s ∈ J, therefore we can think of ∇H as depending on only one parameter s ∈ J. We have ∇H = (t1(s), t2(s)) for all s ∈ J, where t1, t2 are smooth functions of s. The last equality corresponds to the fact that H has constant gradient on each line. We can treat s = s(x, y) as a C¹ function in a neighborhood of (u, v) such that s′x, s′y ≠ 0. Since Hxy = Hyx we obtain t1′(s) s′y = t2′(s) s′x, i.e., s′x/s′y = t1′/t2′. Equation (66) implies

a² t1′ s′x / t1² = b² t2′ s′y / t2².

Thus we obtain that a² (t1′)² t2² = b² (t2′)² t1². The last equality implies that (a ln t1 ± b ln t2)′ = 0. Thus we obtain that t1^a = C t2^{±b} with some C > 0. The equality 2ab = |1 − a² − b²| gives two cases: a + b = 1 or |a − b| = 1. We consider these cases separately.

Case a + b = 1. In this case, since 1 − a² − b² = 2ab, we obtain from (66) that Hxy ≥ 0. Then t1^a = C t2^{±b} is the same as Hx^a = C Hy^{±b}. The last equality implies a Hx^{a−1} Hxx = C(±b) Hy^{±b−1} Hxy. Since Hxy ≥ 0 we see that the right choice of the sign is the negative one. Thus Hx^a Hy^b = C. Using the method of characteristics we obtain

H((a/p) s + d1, (b/q) s + d2) = s + d3,

where p, q > 0, p^a q^b = C, s belongs to some open interval and d1, d2 and d3 are some constants. If we set (a/p) s = x and (b/q) s = y then we obtain H(x + d1, y + d2) = (C/(a^a b^b)) x^a y^b + d3, and the result follows.

Case |a − b| = 1. In this case we have 1 − a² − b² = −2ab and therefore Hxy ≤ 0. Then Hx^a = C Hy^b. We consider the case a = b + 1 (the remaining case is similar). Again using the method of characteristics we obtain that H is generically equivalent to (63). The case a = b − 1 is similar. Thus we obtain that locally the functions are generically equivalent to one of the functions in (62), (63) and (64). Since H is a smooth function, the only possibility to glue these local pieces is when H coincides with one of these functions everywhere on Ω. This finishes the proof of the proposition. □

4.5. Other examples for the Gaussian measure. Further we will always assume that a, b > 0, a + b ≥ 1, |a − b| ≤ 1 and dµ is the Gaussian measure (3). Whenever a + b > 1 we will always assume that ∫_{R^n} x dµ = 0.


For any real p we define

‖f‖_{L^p(dµ)} = (∫_{supp(f)} |f|^p dµ)^{1/p}.

We set that f = 0 almost everywhere if and only if ‖f‖_{L^p} = 0. If p = 0 then we set ‖f‖_{L^p} = exp(∫_{supp(f)} ln|f| dµ).
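A small numerical illustration (a sketch with an ad hoc test function) of the convention for p = 0: for the standard Gaussian measure on the line, the L^p quasinorms approach exp(∫ ln|f| dµ) as p → 0.

```python
# Sketch: the p -> 0 limit of ||f||_{L^p(dmu)} for the one-dimensional standard Gaussian measure.
import numpy as np

xs = np.linspace(-10.0, 10.0, 20001)
dx = xs[1] - xs[0]
w = np.exp(-xs**2/2)/np.sqrt(2*np.pi)*dx        # Riemann-sum weights for the Gaussian measure

f = 2.0 + np.sin(xs)                            # an arbitrary positive test function

geometric_mean = np.exp(np.sum(np.log(f)*w))    # exp( int ln f dmu )
for p in [1.0, 0.1, 0.01, 0.001]:
    print(p, np.sum(f**p*w)**(1.0/p), geometric_mean)
# the printed p-quasinorms should approach `geometric_mean` as p decreases
```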

4.5.1. Young’s functions. Corollary 5. Let p and q be the arbitrary real numbers. The following inequality holds     Z x − y y ess sup f (67) g dµ(x) ≥ k f kL p (dµ) kgkLq (dµ) n a b Rn y∈R

for all Borel measurable f ∈ L p (dµ) and g ∈ Lq (dµ) if and only if pa2 + qb2 ≤ 1. We notice that the case p = a1 , q =

1 b

with a + b = 1 recovers the Prékopa–Leindler inequality.

Proof. Without loss of generality we assume that ‖f‖_p, ‖g‖_q > 0. First let us obtain (67) for bounded f and g. We start with the case p, q ≠ 0. Set H(x, y) = x^{1/p} y^{1/q} on some bounded closed rectangular domain Ω ⊂ int(R²₊). Then (1) holds if and only if pa² + qb² ≤ 1. Indeed, notice that (2) takes the form

a² Hxx/Hx² + (1 − a² − b²) Hxy/(Hx Hy) + b² Hyy/Hy² = (1 − pa² − qb²)/(x^{1/p} y^{1/q}) ≥ 0.

Finally, in (1) we take the test functions f̃(x) = |f(x)|^p and g̃(y) = |g(y)|^q. Thus we obtain (67) for bounded functions f, g which are uniformly separated from zero, i.e., f, g ≥ ε for some ε > 0. The general case of bounded f, g follows by considering f̃ = |f| + ε and g̃ = |g| + ε; sending ε → 0 and using the dominated convergence theorem we obtain (67) for f, g. If p or q is zero, then we consider the new exponents pε = p − ε, qε = q − ε for ε > 0 sufficiently small so that pε, qε ≠ 0 for all ε < ε₀. Since the map s → ‖f‖_{L^s(dµ)} is increasing on R, we have that if (67) holds for p, q then it holds for pε, qε. Thus pε a² + qε b² ≤ 1, and sending ε → 0 we obtain pa² + qb² ≤ 1. For the converse we notice that the inequality pa² + qb² ≤ 1 implies the inequality pε a² + qε b² ≤ 1. Since pε, qε ≠ 0 we obtain (67) with pε and qε. Finally, notice that lim_{ε→0} ‖f‖_{L^{pε}} = ‖f‖_{L^p} and lim_{ε→0} ‖g‖_{L^{qε}} = ‖g‖_{L^q}. Arbitrary f ∈ L^p and g ∈ L^q can be approximated by bounded f_n ≤ |f| and g_n ≤ |g| with f_n → |f| in L^p and g_n → |g| in L^q. Since ess sup_y |f((x − y)/a) g(y/b)| ≥ ess sup_y |f_n((x − y)/a) g_n(y/b)| almost everywhere, we obtain the desired result. □
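A symbolic spot check (a sketch) of the computation used in the proof: for H(x, y) = x^{1/p} y^{1/q} with positive symbols p, q, the left-hand side of (2) reduces to (1 − pa² − qb²)/(x^{1/p} y^{1/q}).

```python
# Sketch: symbolic check of the identity used in the proof of Corollary 5.
import sympy as sp

x, y, a, b, p, q = sp.symbols('x y a b p q', positive=True)
H = x**(1/p)*y**(1/q)
Hx, Hy = sp.diff(H, x), sp.diff(H, y)
lhs = (a**2*sp.diff(H, x, 2)/Hx**2
       + (1 - a**2 - b**2)*sp.diff(H, x, y)/(Hx*Hy)
       + b**2*sp.diff(H, y, 2)/Hy**2)
target = (1 - p*a**2 - q*b**2)/(x**(1/p)*y**(1/q))
print(sp.simplify(lhs - target))   # expected: 0
```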

4.5.2. Minkowski’s functions. Corollary 6. Let p, q, r > 0. Then

  

 y  x−y

(68) +g

ess sup f

≥ k f kL p + kgkLq

y∈Rn a b r L √ √ p q for all nonnegative f ∈ L and g ∈ L if and only if 0 < p, q ≤ 1 and r ≥ 1 − (a p − 1 + b q − 1)2 . 1

1

Proof. Indeed, consider H(x, y) = (x^{1/p} + y^{1/q})^r. Let s = y^{1/q} x^{−1/p}. Then notice that (2) takes the form

(69)  (a² s²(1 − p) + s(b²(1 − q) + a²(1 − p) − 1 + r) + b²(1 − q)) · 1/(r s (x^{1/p} + y^{1/q})^r)

(70)  = ((a s √(1 − p) − b √(1 − q))² + s(r − 1 + (a √(1 − p) + b √(1 − q))²)) · 1/(r s (x^{1/p} + y^{1/q})^r).
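Before analyzing the sign, here is a numerical spot check (a sketch, with arbitrary sample values) of the algebra behind (69)–(70): the left-hand side of (2) for this H agrees with the displayed expression.

```python
# Sketch: spot-check that the PDI for H = (x^(1/p) + y^(1/q))^r equals the expression in (69)-(70).
import sympy as sp

x, y, a, b, p, q, r = sp.symbols('x y a b p q r', positive=True)
H = (x**(1/p) + y**(1/q))**r
Hx, Hy = sp.diff(H, x), sp.diff(H, y)
lhs = (a**2*sp.diff(H, x, 2)/Hx**2
       + (1 - a**2 - b**2)*sp.diff(H, x, y)/(Hx*Hy)
       + b**2*sp.diff(H, y, 2)/Hy**2)
s = y**(1/q)*x**(-1/p)
rhs = (a**2*s**2*(1 - p) + s*(b**2*(1 - q) + a**2*(1 - p) - 1 + r) + b**2*(1 - q)) \
      / (r*s*(x**(1/p) + y**(1/q))**r)
for vals in [{x: 1.7, y: 0.9, a: 0.8, b: 0.6, p: 0.5, q: 0.7, r: 0.3},
             {x: 0.4, y: 2.3, a: 0.9, b: 0.4, p: 0.2, q: 0.9, r: 1.5}]:
    print(sp.N((lhs - rhs).subs(vals)))   # expected: ~0 up to rounding
```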


For the quantity in (69) to be nonnegative it is necessary that p, q ≤ 1. We can assume that p, q ≠ 1, since otherwise the conclusion follows easily. Finally, (70) implies that (1) holds if and only if r ≥ 1 − (a√(1 − p) + b√(1 − q))². Now we consider (1) with the test functions f̃ = f^p, g̃ = g^q and we obtain (68). □

It is interesting to mention that if a√(1 − p) + b√(1 − q) = 1 then we can take r → 0 in (68); since lim_{r→0} ‖h‖_{L^r} = exp(∫ ln|h| dµ), we obtain the following corollary.

Corollary 7. Let p, q > 0 be such that a√(1 − p) + b√(1 − q) = 1. Then

∫_{R^n} ess sup_y ln( f((x − y)/a) + g(y/b) ) dµ ≥ ln( ‖f‖_{L^p} + ‖g‖_{L^q} )

for all nonnegative f ∈ L^p and g ∈ L^q.

5. A BOUNDARY VALUE PROBLEM: BACK TO THE EHRHARD INEQUALITY

Let I and J be bounded closed subsets of the real line R. By the symbol A(Ω) we denote all pairs of Borel measurable functions (f, g) such that f : R^n → I and g : R^n → J. In other words, (f, g) ∈ A(Ω) if and only if

( f (x), g(y)) : Rn × Rn → Ω

Let dµ be a probability measure on R^n such that dµ = ϕ(x) dx for some positive Borel measurable function ϕ. Let M : Ω → R be an arbitrary Borel measurable function. Fix some a, b > 0. We consider the following extremal problem:

B(u, v, a, b, M, Ω, dµ) = inf_{(f,g)∈A(Ω)} { ∫_{R^n} ess sup_y M(f((x − y)/a), g(y/b)) dµ(x) : ∫_{R^n} f dµ = u, ∫_{R^n} g dµ = v }.

We will omit the variables a, b, M, Ω, dµ and simply write B(u, v) instead of B(u, v, a, b, M, Ω, dµ). Since dµ is a probability measure, it is easy to notice that B(u, v) is defined on the bounded rectangular domain conv(Ω), i.e., the convex hull of I × J. The goal of the paper is to find a good lower bound for B(u, v) under the assumptions that dµ = e^{−|x|²/2}/(2π)^{n/2} dx, |a − b| ≤ 1 and a + b ≥ 1.

Proposition 5. Let H ∈ C³(conv(I × J)) be such that H ≤ M on I × J, H satisfies (2) and Hx, Hy never vanish in conv(I × J). Then B(u, v) ≥ H(u, v) on conv(I × J).

Proof. Indeed, pick any (f, g) ∈ A(Ω) such that ∫ f dµ = u and ∫ g dµ = v. Then

∫_{R^n} ess sup_y M(f((x − y)/a), g(y/b)) dµ(x) ≥ ∫_{R^n} ess sup_y H(f((x − y)/a), g(y/b)) dµ(x) ≥ H(∫ f dµ, ∫ g dµ).

Finally we take the infimum over all such test functions f, g and we obtain B(u, v) ≥ H(u, v).



Since (1) is closed under taking the maximum, it is reasonable to seek the maximal function H such that H ≤ M on I × J and H satisfies (2). This procedure, together with Proposition 5, allows one to find good lower bounds for B(u, v).

Next we consider the particular case when I × J = {0, 1}², M(0, 0) = M(0, 1) = M(1, 0) = 0 and M(1, 1) = 1. Then we have

(71)  B(u, v) = inf_{A⊂R^n, B⊂R^n} { µ(ess(aA + bB)) : µ(A) = u, µ(B) = v },

where ess(aA + bB) is understood as the essential sum of the sets A and B (see [10]). Then finding the function B becomes the following boundary value problem: find the maximal function H on conv({0, 1}²) = [0, 1]² which satisfies (2) and the obstacle condition H(0, 0), H(0, 1), H(1, 0) ≤ 0 and H(1, 1) ≤ 1. One should seek such functions among those for which the partial differential inequality in (2) becomes an equality. The solution of the PDE (2) reduces to a Laplacian eigenvalue problem, and sometimes to a backwards heat equation (see [16]).


It turns out that the Ehrhard function H(u, v) = Φ(aΦ⁻¹(u) + bΦ⁻¹(v)) is the best possible. Thus this algorithm indicates how to find Ehrhard's function by solving a PDE problem with the obstacle condition M over the square [0, 1]².
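As a consistency check (a sketch, not part of the text), one can verify symbolically that the Ehrhard function turns the PDI in (2) into an identity; we parametrize u = Φ(s), v = Φ(t) and differentiate through the chain rule, so no explicit inverse of Φ is needed.

```python
# Sketch: the Ehrhard function H(u, v) = Phi(a Phi^{-1}(u) + b Phi^{-1}(v)) makes the PDI in (2) vanish.
import sympy as sp

s, t = sp.symbols('s t', real=True)
a, b = sp.symbols('a b', positive=True)
Phi = lambda z: (1 + sp.erf(z/sp.sqrt(2)))/2     # standard normal distribution function

u, v, H = Phi(s), Phi(t), Phi(a*s + b*t)         # u = Phi(s), v = Phi(t)
us, vt = sp.diff(u, s), sp.diff(v, t)            # du/ds, dv/dt

Hu = sp.diff(H, s)/us                            # chain rule: dH/du
Hv = sp.diff(H, t)/vt                            # dH/dv
Huu = sp.diff(Hu, s)/us
Hvv = sp.diff(Hv, t)/vt
Huv = sp.diff(Hu, t)/vt

pdi = a**2*Huu/Hu**2 + (1 - a**2 - b**2)*Huv/(Hu*Hv) + b**2*Hvv/Hv**2
print(sp.simplify(pdi))                                        # expected: 0
print(sp.N(pdi.subs({s: 0.3, t: -0.7, a: 0.6, b: 0.7})))       # expected: ~0
```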

6. APPENDIX

The following remarkable result belongs to Borell [6].

Theorem B. Let ϕ : R^n × R^n → R^n be a continuously differentiable function such that

ϕ = (ϕ¹, . . . , ϕⁿ);   x_i = (x_i¹, . . . , x_iⁿ) for i = 1, 2;

ϕ^k(x₁, x₂) = ϕ^k(x₁^k, x₂^k),   k = 1, . . . , n;

∂ϕ^k/∂x_i^k > 0,   i = 1, 2,   k = 1, . . . , n.

Let f, g, h ≥ 0 and f, g, h ∈ L¹_loc(R^n). Further suppose Φ : [0, ∞) × [0, ∞) → [0, ∞] is a continuous 1-homogeneous function, increasing in each variable. Then the inequality

(72)  ∫*_{R^n} h 1_{ϕ(A,B)} dm ≥ Φ( ∫*_{R^n} f 1_A dm, ∫*_{R^n} g 1_B dm )

holds for all nonempty A, B ⊂ R^n if and only if there are sets Ω₁, Ω₂ ⊂ R^n with m(R^n \ Ω₁) = m(R^n \ Ω₂) = 0 such that

(73)  h(ϕ(x, y)) ∏_{k=1}^{n} ( (∂ϕ^k/∂x^k) ρ_k + (∂ϕ^k/∂y^k) η_k ) ≥ Φ( f(x) ∏_{k=1}^{n} ρ_k, g(y) ∏_{k=1}^{n} η_k )

for all x ∈ Ω₁, y ∈ Ω₂, ρ₁, . . . , ρ_n > 0 and for every η₁, . . . , η_n > 0. Moreover, if (72) holds then Ω₁ = supp f and Ω₂ = supp g will do.

Borell obtained the theorem in a more general setting, where one can include an arbitrary number of test functions and ϕ can be defined only on some subdomains of R^n. Let us consider the particular case n = 1. Since in the current paper we are interested in when inequalities of the form h(ax + by) ≥ H(f(x), g(y)) imply their integral version ∫ h ≥ H(∫ f, ∫ g), in order to apply Borell's result we should take ϕ(x, y) = ax + by for x, y ≥ 0. Then (73) takes the following form: h(ax + by)(aρ + bη) ≥ Φ(f(x)ρ, g(y)η). The form (72) reduces to the form (1) if h(t) = sup_{ax+by=t} Φ(f(x)1_A, g(y)1_B) and h is supported on aA + bB. This may happen if and only if Φ(0, 0) = Φ(0, 1) = Φ(1, 0) = 0. Since A and B are arbitrary, we obtain that h(ax + by) = Φ(f(x), g(y)). Therefore the last condition takes the form

(74)  Φ(f(x), g(y))(aρ + bη) ≥ Φ(f(x)ρ, g(y)η).

Thus if (74) holds for all η, ρ > 0 and all nonnegative f(x), g(y), then we obtain the integral inequality

(75)  ∫*_{R} sup_{ax+by=t} Φ(f(x), g(y)) dt ≥ Φ( ∫_{R} f dx, ∫_{R} g dx )   for all nonnegative f, g ∈ L¹.

Since sup_{ax+by=t} Φ(f(x), g(y)) may not be measurable, we should understand the integral on the left hand side of (75) as an upper integral.

Proposition 6. Let Φ ∈ C¹(int(R²₊)) ∩ C(R²₊) be 1-homogeneous, nonnegative and increasing in each variable. Assume Φ(0, 0) = Φ(0, 1) = Φ(1, 0) = 0. If Φ satisfies (74) for all positive ρ, η, f(x), g(y), and with some positive a, b such that a + b = 1, then Φ(x, y) = Cx^a y^b.
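Before the proof, a quick numerical spot check (a sketch with arbitrary samples) that Φ(x, y) = x^a y^{1−a} does satisfy (74): after dividing by x^a y^{1−a}, (74) becomes the weighted AM–GM inequality aρ + bη ≥ ρ^a η^{1−a}.

```python
# Sketch: random check of (74) for Phi(x, y) = x^a y^(1-a) with a + b = 1.
import numpy as np

rng = np.random.default_rng(3)
for _ in range(100000):
    a = rng.uniform(0.01, 0.99)
    b = 1.0 - a
    x, y, rho, eta = rng.uniform(0.01, 10.0, size=4)
    lhs = x**a*y**b*(a*rho + b*eta)        # Phi(x, y) (a rho + b eta)
    rhs = (x*rho)**a*(y*eta)**b            # Phi(x rho, y eta)
    assert lhs >= rhs*(1 - 1e-12), (a, x, y, rho, eta)
print("no counterexample found in the sample")
```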


Proof. Since Φ is 1-homogeneous, Φ(p, q) = p m(q/p) for some nonnegative increasing function m. Inequality (74) simplifies to m(s)(a + bt) ≥ m(st) for all s, t > 0. Let st = u and s = v; then we obtain m(v)(av + bu) ≥ m(u)v. Set u = v + ε. Then by Taylor expansion we obtain that for sufficiently small ε we have m(v)v + m(v)bε ≥ vm(v) + vm′(v)ε + o(ε). Since ε can be negative as well, we obtain bm(v) = vm′(v) and hence m(v) = Cv^b for some C > 0. Therefore Φ(p, q) = Cp^a q^b. □

Thus the proposition shows that in the particular case ϕ = ax + by the functions which satisfy the assumption (74) of Borell's theorem, and hence would give us the integral inequality (75), are of the form Φ(x, y) = x^a y^b. The reader will recognize that this is an instance of the Prékopa–Leindler inequality. Notice that this also confirms our results: in Subsection 4.2 we found that Φ has to be a convex function or a function of the form (47), (48) and (49). Since in the application of Borell's theorem we require that Φ(0, 0) = Φ(0, 1) = Φ(1, 0) = 0 and Φ ≥ 0, the only possibility is Φ(x, y) = x^a y^b. Indeed, a 1-homogeneous convex nonnegative function Φ(x, y) on R²₊ with values zero at the points (0, 0), (0, 1) and (1, 0) must be identically zero.

REFERENCES

[1] K. Ball, F. Barthe, A. Naor, Entropy jumps in the presence of a spectral gap, Duke Math. J. 119 (2003), no. 1, 41–63.
[2] K. Ball, K. Böröczky, Stability of the Prékopa–Leindler inequality, Mathematika 56 (2010), no. 2, 339–356.
[3] F. Barthe, N. Huet, On Gaussian Brunn–Minkowski inequalities, arXiv:0804.0886.
[4] S. Bobkov, Extremal properties of half-spaces for log-concave distributions, Ann. Probab. 24 (1996), no. 1, 35–48.
[5] S. G. Bobkov, M. Ledoux, From Brunn–Minkowski to Brascamp–Lieb and to logarithmic Sobolev inequalities, GAFA 10, 1028–1052.
[6] C. Borell, Convex set functions in d-space, Period. Math. Hungar. 6 (1975), no. 2, 111–136.
[7] C. Borell, The Ehrhard inequality, 337 (2003), no. 10, 663–666.
[8] C. Borell, Inequalities of the Brunn–Minkowski type for Gaussian measures, 140 (2007), no. 1, 195–205.
[9] H. J. Brascamp, E. H. Lieb, Best constants in Young's inequality, its converse and its generalization to more than three functions, Advances in Mathematics 20, no. 2, 151–173.
[10] H. J. Brascamp, E. H. Lieb, On extensions of the Brunn–Minkowski and Prékopa–Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation, Journal of Functional Analysis 22 (1976), no. 4, 366–389.
[11] D. Cordero-Erausquin, On Berndtsson's generalization of Prékopa's theorem, Math. Z. 249 (2005), 401–410.
[12] D. Cordero-Erausquin, R. J. McCann, M. Schmuckenschläger, A Riemannian interpolation inequality à la Borell, Brascamp and Lieb, Invent. Math. 146 (2001), 219–257.
[13] D. Cordero-Erausquin, B. Maurey, Some extensions of the Prékopa–Leindler inequality using Borell's stochastic approach, arXiv:1512.05131.
[14] A. Ehrhard, Symétrisation dans l'espace de Gauss, Math. Scand. 53 (1983).
[15] R. J. Gardner, The Brunn–Minkowski inequality, Bulletin of the American Mathematical Society 39, no. 3, 355–405.
[16] P. Ivanisvili, A. Volberg, Bellman partial differential equation and the hill property for classical isoperimetric problems, preprint, arXiv:1506.03409.
[17] P. Ivanisvili, A. Volberg, Hessian of Bellman functions and uniqueness of the Brascamp–Lieb inequality, J. London Math. Soc. 92 (2015), no. 3, 657–674.
[18] P. Ivanisvili, Inequality for Burkholder's martingale transform, Analysis & PDE 8 (2015), no. 4, 765–806.
[19] R. Latala, A note on the Ehrhard inequality, Studia Mathematica 118, no. 2, 169–174.
[20] R. Latala, On some inequalities for Gaussian measures, preprint, arXiv:0304343.
[21] M. Ledoux, Remarks on Gaussian noise stability, Brascamp–Lieb and Slepian inequalities, Geometric aspects of functional analysis, Lecture Notes in Mathematics 2116 (Springer, Berlin, 2014), 309–333.
[22] M. Ledoux, M. Talagrand, Probability in Banach Spaces, Springer, 1991.
[23] L. Leindler, On a certain converse of Hölder's inequality. II, Acta Sci. Math. (Szeged) 33 (1972), 217–223.
[24] J. Neeman, A multidimensional version of noise stability, Electronic Communications in Probability 19, 1–10.
[25] A. V. Pogorelov, Differential Geometry, Noordhoff, 1959.
[26] A. Prékopa, Logarithmic concave measures with applications to stochastic programming, Acta Sci. Math. (Szeged) 32 (1971), 301–316.
[27] R. van Handel, The Borell–Ehrhard game, arXiv:1605.00285.

Department of Mathematics, Kent State University, Kent, OH 44240, USA
E-mail address: [email protected]