RANDOM PERMUTATIONS WITHOUT MACROSCOPIC CYCLES
arXiv:1712.04738v1 [math.PR] 13 Dec 2017
VOLKER BETZ, HELGE SCHÄFER, AND DIRK ZEINDLER

Abstract. We consider uniform random permutations of length n conditioned to have no cycles above a certain sub-macroscopic length, in the limit of large n. Since in unconstrained uniform random permutations most of the indices are in cycles of macroscopic length, this is a singular conditioning in the limit. Nevertheless, we obtain a fairly complete picture of the cycle number distribution at various lengths. Depending on the scale on which cycle numbers are studied, our results include Poisson convergence, a central limit theorem, a shape theorem, and two different functional central limit theorems.
1. Introduction

Uniform random permutations are among the oldest and best understood models of probability theory. One of their most prominent properties is that almost all indices are in macroscopic cycles: for all $\varepsilon > 0$, the probability that a given index of a uniform permutation of length $n$ is in a cycle of length less than $\varepsilon n$ converges to $\varepsilon$ as $n \to \infty$. Classical results about uniform random permutations include the convergence of the renormalized cycle structure towards a Poisson-Dirichlet distribution [16, 20], convergence of joint cycle numbers towards independent Poisson random variables in total variation distance [3], and a central limit theorem for cumulative cycle numbers [12]. With so much known about uniform permutations, it is natural to ask what can be said about variations of the model. If we are interested in probability measures that are still invariant under transpositions, the only way to deform the uniform measure is to impose constraints or penalizations on the number of cycles of given lengths. Several ways of doing this have been investigated. Firstly, one can introduce cycle weights, so that the weight of a permutation with $C_j$ cycles of length $j$ is proportional to $\prod_{j=1}^{n} \theta_j^{C_j}$. If $\theta_j = \theta > 0$ for all $j$, this is the Ewens model [14] that has applications in genetics. Permutations with more general, but fixed cycle weights have been investigated in [8, 13]. The situation is much more difficult when the cycle weights may depend on the size $n$ of the permutation, but also in this case there are some results [10]. A more drastic way of changing the measure is to condition on the absence of cycles of some given length. Again, we can make the set $A \subset \mathbb{N}$ of forbidden cycle lengths independent of $n$, in which case the theory goes under the name of $A$-permutations [21, 22].
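Before turning to constrained models, we note that the classical Poisson behaviour of cycle counts quoted above is easy to check by brute force for small $n$; the following sketch (our own illustration, not part of the original text) verifies the exact identity $E[C_j] = 1/j$ for the uniform measure on $S_6$:

```python
import math
from fractions import Fraction
from itertools import permutations

def cycle_counts(perm):
    """Map a permutation in one-line notation to {cycle length j: C_j}."""
    n = len(perm)
    seen = [False] * n
    counts = {}
    for start in range(n):
        if not seen[start]:
            length, i = 0, start
            while not seen[i]:
                seen[i] = True
                i = perm[i]
                length += 1
            counts[length] = counts.get(length, 0) + 1
    return counts

n = 6
total = {j: 0 for j in range(1, n + 1)}
for perm in permutations(range(n)):
    for j, c in cycle_counts(perm).items():
        total[j] += c

# For the uniform measure on S_n, E[C_j] = 1/j holds exactly for every j <= n.
expectations = {j: Fraction(total[j], math.factorial(n)) for j in total}
print(expectations)
```

The identity is exact (not only asymptotic), since the number of $(\sigma, \text{j-cycle})$ pairs is $n!/j$ for every $j \le n$.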
The case where the set of forbidden cycles itself depends on $n$ is less well understood, and the purpose of the present paper is to investigate this case in the most natural situation, namely where cycles above a certain ($n$-dependent) length are not allowed. For example, we might restrict to permutations whose cycle lengths do not exceed $n^\beta$ for some $\beta < 1$. Numerical studies [6] suggest that permutations with algebraic behavior of longest cycles occur naturally in the study of two-dimensional spatial random permutations. Even though in that situation there is certainly no hard constraint in place, this gives further motivation for the study of constrained permutations.

Our results can be paraphrased as follows. We consider the uniform measure on permutations of length $n$ with cycles of length less than $\alpha(n)$, where $\alpha(n)$ is bounded above and below by some power laws. For cycles of order less than $\alpha(n)/\log n$, we find that they behave just like those of unconstrained permutations. At the scale $\alpha(n)/\log n$, the influence of the restriction starts to manifest itself in the sense that, as $n \to \infty$, the expected cycle numbers converge to zero more slowly than they would in unrestricted permutations. At the scale $c\,\alpha(n)$, $0 \le c < 1$, the restriction becomes fully manifest, and if $\alpha(n)$ diverges more slowly than $\sqrt{n}$, diverging numbers of cycles occur for lengths corresponding to sufficiently large $c$. In these cases, a central limit theorem holds. Finally, we investigate the scale where most of the cycles live. Due to the length constraint, there must be at least $n/\alpha(n)$ cycles, and we show that almost all of them live on the scale $\alpha(n) + \alpha(n)\frac{\log t}{\log n}$, $0 < t < 1$. On that scale, the cumulative cycle numbers satisfy a limit shape theorem, and their fluctuations around that limit shape satisfy a functional central limit theorem towards a Brownian bridge.

The proofs of our results are based on the saddle point method of asymptotic analysis. In particular, we benefit from the precise estimates given by Manstavicius and Petuchovas [18] for the probability that an unconstrained permutation has no long cycles. While it is clear that such information must be useful for our purposes, it is surprising that these estimates, and extensions of the methods by which they are proved, provide such a complete picture of the situation.

Let us give an outline of the paper. In Section 2, we state our assumptions and results. Section 3 discusses the relevant saddle point method in our context and presents a general asymptotic equality from which almost all our further results will be derived. Section 4 then contains those proofs.

2010 Mathematics Subject Classification: 60F17, 60F05, 60C05. Key words and phrases: random permutation, cycle structure, cycle weights, functional limit theorem, limit shape.

2. Main results

2.1. Notation and standing assumptions. Let $\alpha : \mathbb{N} \to \mathbb{N}$ be a sequence of integers such that there exist $a_1, a_2 \in (0,1)$ with

(2.1)  $n^{a_1} \le \alpha(n) \le n^{a_2}$.
For $n \in \mathbb{N}$, let $S_{n,\alpha}$ be the set of permutations where all cycles have length $\alpha(n)$ or less, and let $P_{n,\alpha}$ be the uniform measure on $S_{n,\alpha}$. We write $E_{n,\alpha}$ for the expectation with respect to $P_{n,\alpha}$. We will be interested in the (joint) distribution of the random variables $C_m : S_{n,\alpha} \to \mathbb{N}_0$ that map a permutation $\sigma$ to the number of cycles of length $m$ that $\sigma$ has. The length $m$ will often depend on $n$, but we will sometimes suppress this dependence from the notation, as well as the dependence of $\alpha$ on $n$. When two sequences $(a_n)$ and $(b_n)$ are asymptotically equivalent, i.e. if $\lim_{n\to\infty} a_n/b_n = 1$, we write $a_n \sim b_n$. We also use the usual $O$ and $o$ notation: $f(n) = O(g(n))$ means that there exists some constant $c > 0$ so that $|f(n)| \le c|g(n)|$ for large $n$, while $f(n) = o(g(n))$ means that for all $c > 0$ there exists $n_c \in \mathbb{N}$ so that the inequality holds for all $n > n_c$.

2.2. Expected cycle lengths. The most basic characteristics of the $C_m$ are their expected values. Let $x_{n,\alpha}$ be the unique positive solution of the equation

(2.2)  $n = \sum_{j=1}^{\alpha(n)} x_{n,\alpha}^j$,

and define

(2.3)  $\mu_m(n) := \frac{x_{n,\alpha}^m}{m}$.

Proposition 2.1. For all sequences $m = (m(n))_{n\in\mathbb{N}}$ with $m(n) \le \alpha(n)$ for all $n$, we have

$E_{n,\alpha(n)}\bigl[C_{m(n)}\bigr] \sim \mu_{m(n)}(n)$

as $n \to \infty$. Furthermore,

(2.4)  $\log(m\,\mu_m) = m\log x_{n,\alpha} = \frac{m}{\alpha}\Bigl(\log\frac{n}{\alpha} + \log\log\frac{n}{\alpha}\Bigr) + O\Bigl(\frac{\log\log n}{\log n}\Bigr)$

for large $n$.
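Equation (2.2) pins down $x_{n,\alpha}$ only implicitly, but it is straightforward to solve numerically. The sketch below (our own illustration; the function names are not from the paper) uses bisection on $[1, n^{1/\alpha}]$, which brackets the root: the left endpoint gives $\alpha - n < 0$, while at the right endpoint the single term $x^\alpha$ already equals $n$:

```python
import math

def saddle_point(n, alpha, tol=1e-12):
    """Solve (2.2): n = sum_{j=1}^alpha x^j, for the unique positive root x_{n,alpha}."""
    f = lambda x: sum(x ** j for j in range(1, alpha + 1)) - n
    lo, hi = 1.0, n ** (1.0 / alpha)   # f(lo) = alpha - n < 0, f(hi) >= x^alpha - n = 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mu(m, x):
    """mu_m(n) = x_{n,alpha}^m / m, cf. (2.3)."""
    return x ** m / m

n = 10**6
alpha = 1000                    # alpha(n) = n^{1/2}, i.e. beta = 1/2
x = saddle_point(n, alpha)
# For small m, m * mu_m is close to 1 (classical regime); at m = alpha it is
# large, consistent with regime 3 below for beta = 1/2.
print(x, 1 * mu(1, x), alpha * mu(alpha, x))
```

This also illustrates Lemma 4.1 below: $x_{n,\alpha}$ is only slightly above 1, yet $x_{n,\alpha}^{\alpha}$ is of order $(n/\alpha)\log(n/\alpha)$.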
An example illustrates the amount of information that we can already extract from this simple result. We fix $\beta \in (0,1)$ and let $\alpha(n) = n^\beta$. Equation (2.4) then reads

$\log(m\,\mu_m) = m\,n^{-\beta}\bigl((1-\beta)\log n + \log\log n + \log(1-\beta)\bigr) + o(1).$

We now have the following asymptotic regimes:
1. For $m(n) = o(n^\beta/\log n)$, we have $\lim_{n\to\infty}\mu_{m(n)}(n)\,m(n) = 1$. This is exactly what happens for the corresponding expected values in uniform permutations, and in particular the limiting behavior of $\mu$ is independent of $\beta$. We could call this the classical regime.

2. For $m(n)$ such that

$\lim_{n\to\infty} m(n)\,\frac{\log n}{n^{\beta}} = y \ge 0,$

we find that

(2.5)  $\lim_{n\to\infty} \mu_{m(n)}(n)\,m(n) = e^{y(1-\beta)}.$

In this case, the expected value $\mu(n)$ is still inversely proportional to $m(n)$, but the factor of proportionality starts to increase; put differently, while in unconstrained permutations the expected number of indices in cycles of length $m$ is equal to one for all $m$, in constrained permutations it now starts to increase to infinity as $y$ becomes large. The constraint is starting to be felt! Note that in this regime the expected number of cycles still converges to zero, though not as quickly as in unconstrained permutations. If we are looking for the sequence $m(n)$ for which the fastest decay of expected cycle numbers occurs, we have to rewrite (2.5) as

$\mu_{m(n)}(n) \sim \frac{\log n}{y\,n^{\beta}}\, e^{y(1-\beta)}$

and minimize in $y$. We find $y = 1/(1-\beta)$ and $\mu_{m(n)}(n) \sim e\,(1-\beta)\,\frac{\log n}{n^{\beta}}$.
3. The next regime occurs when we put $m = c\,n^\beta$ for $0 < c \le 1$. Then

$\log \mu_m = \bigl(c(1-\beta) - \beta\bigr)\log n + c\log\log n + c\log(1-\beta) - \log c + o(1).$

We see that $\mu_m \to 0$ when $c < \beta/(1-\beta)$, and $\mu_m \to \infty$ when $c \ge \beta/(1-\beta)$. So on this scale, the transition from finite cycle counts to infinite ones occurs. However, the case of infinite cycle counts can only occur if there exists $c \in (0,1]$ with $c \ge \beta/(1-\beta)$, which means that $\beta \le 1/2$. Intuitively, it is clear why $\beta = 1/2$ is the borderline: when $\beta > 1/2$, we only need to put $n^{1-\beta}$ cycles into the system, but have $n^{\beta} \gg n^{1-\beta}$ possible lengths at which to put them. So there is no need to put too many at the same length. The situation is reversed when $\beta < 1/2$: too many cycles have to be put at too few possible lengths, so it is unavoidable to put infinitely many at some of them.

4. The fourth regime occurs when we zoom in on the case of critical $c$ above: let

$c = c(n) = \frac{\beta}{1-\beta} - \frac{\beta\log\log n}{(1-\beta)^2\log n} + \frac{q}{(1-\beta)\log n}$

for $q \in \mathbb{R}$, and let $m(n) = c(n)\,n^\beta$. Then

$\lim_{n\to\infty} \log \mu_{m(n)}(n) = q + \frac{1}{1-\beta}\log(1-\beta) - \log\beta,$

and so $\lim_{n\to\infty} \mu_{m(n)}(n) = e^{q}\,(1-\beta)^{\frac{1}{1-\beta}}/\beta$. So we have a second (very narrow) regime with finite cycle counts!
Another interesting example in this context is $\alpha(n) = \sqrt{n\log n}$. In this case, only the regimes 1, 2 and 4 occur, but there are no cycle lengths such that $\mu(n) \to \infty$.

2.3. Joint cycle count distributions. We will now investigate the joint distributions of the random variables $C_j$. We start with the strongest result, which also has the most restrictive assumptions. Recall that the total variation distance of two probability measures $P$ and $\tilde P$ on a discrete probability space $\Omega$ is simply given by $\|P - \tilde P\|_{TV} = \sum_{\omega\in\Omega} \bigl(P(\omega) - \tilde P(\omega)\bigr)_+$.

Theorem 2.2. Let $b = (b(n))_n$ be a sequence so that $b(n) = o\bigl(\alpha(n)(\log n)^{-1}\bigr)$. Let $P_{n,b(n),\alpha}$ be the distribution of $(C_1, \ldots, C_{b(n)})$ under $P_{n,\alpha}$, and let $\tilde P_{b(n)}$ be the distribution of independent Poisson-distributed random variables $(Y_1, \ldots, Y_{b(n)})$ with $E_{\tilde P_{b(n)}}(Y_j) = \frac{1}{j}$ for all $j \le b(n)$. Then there exists $C < \infty$ so that for all $n \in \mathbb{N}$, we have

$\|P_{n,b(n),\alpha} - \tilde P_{b(n)}\|_{TV} \le C\Bigl(\frac{\alpha(n)}{n} + \frac{b(n)\log n}{\alpha(n)}\Bigr).$

Remarks: 1. We compare our result to the classical situation of unrestricted uniform permutations. There, it is known that the total variation distance between the cycle counts up to $b(n)$ and independent Poisson random variables with means $(1/k)_{1\le k\le b(n)}$ converges to zero iff $b(n) = o(n)$. This was proven by Arratia and Tavaré, see [4, Theorem 2]. So as in the example of the previous subsection, most cycle lengths behave just as they would in unconstrained permutations.
2. As for the rate of convergence, it has been proved by Barbour [5], and independently by Diaconis and Pitman (1986) in some unpublished lecture notes, that in the unrestricted case the total variation distance is bounded above by $2b(n)/n$ for all $n$. This result was then improved by Arratia and Tavaré [3, Theorem 2]: they show that there exists a function $F$ with $\log F(x) \sim -x\log x$ as $x \to \infty$ so that the total variation distance is bounded above by $F(n/b(n))$. So the rate of convergence is indeed exponential. This fast decay rate appears to be special to the uniform measure. The decay rate for all other known measures is at most algebraically fast, including the case we study in this paper. A well-known example is for instance the Ewens measure. It was shown by Arratia, Barbour and Tavaré, see [1, Theorems 3 and 5], that the total variation distance is $O(1/n)$ and is bounded from below by $c\,\bigl(\frac{n}{b(n)}\log\frac{n}{b(n)}\bigr)^{-1}$, where $c > 0$ is some constant (as long as we are not considering the special case of the uniform measure).

We can slightly relax the condition $b(n) = o(\alpha(n)(\log n)^{-1})$ in Theorem 2.2 if we only consider convergence of finite-dimensional distributions. What is more, we can in this case apply a 'tilt', as we would in large deviations theory, in order to get a better understanding of those cases where $\mu_m \to 0$ in Proposition 2.1 and thus the limiting distribution of $C_{m(n)}$ is trivial.

For the cycle number $C_k$ and $\nu \in \mathbb{R}_0^+$, consider the tilted cycle number $C_k^{(\nu)}$, which is the $\mathbb{N}_0$-valued random variable with

$P_{n,\alpha}\bigl[C_k^{(\nu)} = l\bigr] = \frac{1}{Z}\,\frac{e^{\nu}}{\nu^{l}}\,P_{n,\alpha}[C_k = l]$

for all $l$, where $Z$ is the normalization. If we are dealing with several random variables, we tilt them simultaneously in the following way:

$P_{n,\alpha}\bigl[C_{m_1}^{(\nu_1)} = l_1, \ldots, C_{m_k}^{(\nu_k)} = l_k\bigr] = \frac{1}{Z}\prod_{j=1}^{k}\frac{e^{\nu_j}}{\nu_j^{l_j}}\,P_{n,\alpha}\bigl[C_{m_1} = l_1, \ldots, C_{m_k} = l_k\bigr].$
Theorem 2.3. Let $(m_1(n))_n, \ldots, (m_k(n))_n$ be sequences with $m_j(n) \le \alpha(n)$ for all $n$ and all $j$. Assume that for all $j \le k$,

(2.6)  $\limsup_{n\to\infty} \mu_{m_j(n)}(n) < \infty.$

Then

$\bigl(C_{m_1}^{(\mu_{m_1})}, \ldots, C_{m_k}^{(\mu_{m_k})}\bigr) \xrightarrow{d} (Z_1, \ldots, Z_k),$

where the $Z_j$ are independent Poisson-distributed random variables with $E[Z_j] = 1$.

Remarks: 1. From equation (2.4) and our standing assumption (2.1), we find that a sufficient condition for (2.6) is that there exists $c < \frac{a_1}{1-a_1}$ so that for all $j$, $m_j(n) \le c\,\alpha(n)$, where $a_1$ is given by (2.1).
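The choice $\nu = \mu_m$ in Theorem 2.3 can be motivated by a one-line computation: tilting a Poisson($\mu$) law by $e^\nu/\nu^l$ multiplies the weight $\mu^l/l!$ by $\nu^{-l}$, so the tilted law is Poisson($\mu/\nu$), which is Poisson(1) at $\nu = \mu$. A quick numerical check of this heuristic (our own illustration, not the proof of the theorem):

```python
import math

def tilt_poisson(mu, nu, lmax=60):
    """Apply the tilt P^(nu)[l] = (1/Z) * (e**nu / nu**l) * P[l] to a Poisson(mu) law."""
    weights = [(math.exp(nu) / nu ** l) * math.exp(-mu) * mu ** l / math.factorial(l)
               for l in range(lmax)]
    z = sum(weights)
    return [w / z for w in weights]

# Even for a nearly degenerate Poisson(0.05), tilting with nu = mu gives Poisson(1).
tilted = tilt_poisson(mu=0.05, nu=0.05)
poisson_one = [math.exp(-1) / math.factorial(l) for l in range(60)]
deviation = max(abs(a - b) for a, b in zip(tilted, poisson_one))
print(deviation)
```

The truncation at `lmax` is harmless here since the Poisson(1) tail beyond 60 is far below machine precision.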
2. If the $\mu_{m_j}$ converge as $n \to \infty$, it follows that the random vector $(C_{m_1}, \ldots, C_{m_k})$ converges in distribution to a vector of independent Poisson random variables $(Z_1, \ldots, Z_k)$ with the mean of $Z_j$ given by $\lim_{n\to\infty}\mu_{m_j}$. It is not hard to deduce from (2.2) that $\lim_{n\to\infty} x_{n,\alpha} = 1$, so convergence of $m$ implies convergence of $\mu_m$. However, this case is already covered in Theorem 2.2 and resembles the unconstrained situation. On the other hand, we have seen in the example of Subsection 2.2 that convergence of $\mu_m$ is possible even when $m$ diverges.

3. For $m$ such that $\mu_m \to 0$, the distribution of $C_m$ converges to the trivial Poisson distribution with mean 0, but just as in large deviations theory, the tilt allows us to extract much more information than that. For instance, it follows that for all $j$, the probability $P(C_m = j)$ decays like $\mu_m^{\,j}$.

We now treat the case of diverging expected cycle numbers. Here, the standard rescaling leads to a central limit theorem.

Theorem 2.4. Let $(m_1(n))_n, \ldots, (m_k(n))_n$ be sequences with $m_j(n) \le \alpha(n)$ for all $n$ and all $j$. Assume that $\mu_{m_j(n)}(n) \to \infty$ as $n \to \infty$ for all $j$. Assume finally that in (2.1), we have $a_1 > 1/7$. Define

$\widetilde C_{m_j} := \frac{C_{m_j} - \mu_{m_j}}{\sqrt{\mu_{m_j}}}.$
Then

$\bigl(\widetilde C_{m_1}, \ldots, \widetilde C_{m_k}\bigr) \xrightarrow{d} (N_1, \ldots, N_k) \quad\text{as } n \to \infty,$

where $(N_j)_{j=1}^k$ are independent standard normal random variables.

Remarks:

1.) The condition $\alpha(n) \ge n^{1/7+\delta}$ is required for technical reasons. We believe that by a more detailed investigation of the corresponding saddle point equation, this condition can be removed and that the theorem holds under condition (2.1).

2.) One might expect that the cycle numbers are mod-Poisson convergent under the assumptions of Theorem 2.4. This convergence is much stronger than a central limit theorem, and we do not have a proof for it. Details about mod-Poisson convergence can be found for instance in [17].
2.4. Cumulative cycle numbers. Here we investigate the random variable

$K_m = \sum_{j=1}^{m} C_j,$

i.e. the number of cycles with length below a certain threshold. Since no cycle can be larger than $\alpha(n)$, the total number of cycles $K_{\alpha(n)}$ is at least $n/\alpha(n)$. In [7] it is shown that indeed $K_{\alpha(n)} \sim \frac{n}{\alpha(n)}$, and so the random variable $\frac{K_{m(n)}}{n/\alpha(n)}$ gives the fraction of cycles that have length up to $m(n)$. The regime in which this fraction converges to a finite limit is given by

(2.7)  $b_t(n) := \max\Bigl(\alpha(n) + \log(t)\,\frac{\alpha(n)}{\log\frac{n}{\alpha(n)}},\; 0\Bigr), \qquad 0 \le t \le 1.$
We have the following limit shape of the random function $t \mapsto K_{b_t(n)}$:

Theorem 2.5. For each $\epsilon > 0$,

(2.8)  $P_{n,\alpha}\Bigl[\sup_{t\in[0,1]}\Bigl|\frac{K_{b_t(n)}}{n/\alpha(n)} - t\Bigr| > \epsilon\Bigr] \to 0 \quad\text{as } n \to \infty.$

Remarks:

1.) When $t = 1$, then $b_t(n) = \alpha(n)$, so we recover the fact that there are asymptotically exactly $n/\alpha(n)$ cycles in the permutation. The limit $t \to 0$ shows that no positive fraction of cycles lives below the scale defined by (2.7).

2.) Numerical simulations indicate that convergence in Theorem 2.5 is significantly faster if we substitute $\tilde b_t(n) := \max\bigl(\alpha(n) + \log(t)\,\frac{\alpha(n)}{\log(\frac{n}{\alpha(n)}\log\frac{n}{\alpha(n)})},\, 0\bigr)$ for $b_t(n)$. But since we do not have precise error terms justifying this choice, we stick with the simpler form given in Equation (2.7).

3.) A similar theorem can be proved for the number of indices instead of the number of cycles. If we set $S_m = \sum_{j=1}^{m} jC_j$, then trivially $S_{\alpha} = n$, and we can prove that

(2.9)  $P_{n,\alpha}\Bigl[\sup_{t\in[0,1]}\Bigl|\frac{S_{b_t(n)}}{n} - t\Bigr| > \epsilon\Bigr] \to 0 \quad\text{as } n \to \infty.$
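To get a feeling for the scale (2.7), one can simply tabulate $b_t(n)$ for moderate $n$. The snippet below (illustrative only) shows that the whole range $0 < t \le 1$ is squeezed into a window of width of order $\alpha(n)/\log(n/\alpha(n))$ just below $\alpha(n)$, and that the truncation at 0 is active for small $t$:

```python
import math

def b_t(t, n, alpha):
    """The scale (2.7): b_t(n) = max(alpha + log(t) * alpha / log(n / alpha), 0)."""
    if t <= 0.0:
        return 0.0
    return max(alpha + math.log(t) * alpha / math.log(n / alpha), 0.0)

n = 10**6
alpha = round(n ** 0.7)   # beta = 0.7, so log(n/alpha) = 0.3 * log(n)
for t in (0.01, 0.1, 0.5, 0.9, 1.0):
    print(t, b_t(t, n, alpha))
```

For these parameters $\log(n/\alpha(n)) \approx 4.1$, so already $t = 0.01$ is pushed to 0, while $t \in [0.1, 1]$ spans roughly the top half of $[0, \alpha(n)]$.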
As before, we can subtract the mean and renormalize. The result is the following functional central limit theorem for the fluctuations of the function $K_{b_t(n)}$:

Theorem 2.6. Let

$L_t(n) := \frac{K_{b_t(n)} - \sum_{j=1}^{b_t(n)} \frac{x_{n,\alpha}^j}{j}}{\sqrt{n/\alpha(n)}}.$
Then $(L_t(n))_{t\in[0,1]}$ converges to a Brownian bridge in $D[0,1]$, where $D[0,1]$ is the space of càdlàg functions on $[0,1]$, endowed with the Skorohod topology.

Remarks: 1. We did not actually subtract the mean, but rather the expression $\sum_{j=1}^{b_t(n)} \frac{x_{n,\alpha}^j}{j}$. By Proposition 2.1, it is tempting to claim that this is equal to $E[K_{b_t(n)}]$, but we have to be careful since each term in Proposition 2.1 carries a relative error and these may pile up. However, once we have proved Theorem 2.6, we can conclude that $E(L_t(n)^2)$ is bounded in $n$ for all $t$, and thus $L_t(n)$ in particular converges in $L^1$. It now follows by taking expectations that we may indeed replace the sum in the statement of the theorem by $E[K_{b_t(n)}]$.

2. As above, we can do the same construction for the indices instead of the cycles. With $S_m$ as in the third remark after Theorem 2.5, we have that

$\tilde L_t(n) := \frac{S_{b_t(n)} - \sum_{j=1}^{b_t(n)} x_{n,\alpha}^j}{\sqrt{n\,\alpha(n)}}$

converges to a Brownian bridge in $D[0,1]$. The proof is similar to the one of Theorem 2.6, so it is omitted.

3. When $t = 1$ in Theorem 2.6, the limiting variance is zero. However, it has been shown in [7] that there exists a different rescaling of the variance so that the Gaussian fluctuations persist: in this case,

(2.10)  $\frac{K_{\alpha(n)} - \sum_{j=1}^{\alpha(n)} \frac{x_{n,\alpha}^j}{j}}{\sqrt{n/\bigl(\alpha(n)(\log(n/\alpha(n)))^2\bigr)}} \xrightarrow{d} N(0,1).$

Of course, no such statement can hold for $S_{\alpha(n)}$, since $S_{\alpha(n)} - \sum_{j=1}^{\alpha(n)} x_{n,\alpha}^j = S_{\alpha(n)} - n = 0$.
4. For unrestricted permutations, Delaurentis and Pittel [12] show that the stochastic process

(2.11)  $\Bigl(\frac{\sum_{j=1}^{\lfloor n^t\rfloor} C_j - t\log(n)}{\sqrt{\log(n)}}\Bigr)_{t\in[0,1]}$

converges weakly to Brownian motion in $[0,1]$. Interestingly, this holds for restricted permutations as well, and we have already shown it! Indeed, the convergence in total variation distance from Theorem 2.2 is strong enough to show that for all $t < a_1$ (cf. (2.1)), convergence to Brownian motion also holds when the $C_j$ in (2.11) are those of constrained permutations. Thus in the case of constrained permutations, we actually have two functional central limit theorems: one for 'short' cycles, and one for the cycles very close to the maximal cycle length.

5. The convergence to a Brownian bridge is the consequence of the logarithmic time scale (2.7), which is convenient and natural in the sense that larger $t$ corresponds to longer cycle lengths, and the whole range of interesting cycle lengths is covered for $0 \le t \le 1$. A different option is the 'reversed linear' time scale, where 0 corresponds to all cycles, i.e. to cycle lengths up to $\alpha(n)$, and where the decrease in maximal cycle length is linear. This is achieved by setting $c_s(n) = b_{e^{-s}}(n)$. Then from Theorem 2.5 we conclude

$P_{n,\alpha}\Bigl[\sup_{s\in[0,\infty)}\Bigl|\frac{K_{c_s(n)}}{n/\alpha(n)} - e^{-s}\Bigr| > \epsilon\Bigr] \to 0 \quad\text{as } n \to \infty,$

i.e. there is an exponential limit shape. The fluctuations around this limit shape are still Gaussian, with covariance

$\lim_{n\to\infty} \operatorname{Cov}\bigl(L_{e^{-s}}(n), L_{e^{-s'}}(n)\bigr) = e^{-s'}\bigl(1 - e^{-s}\bigr) \qquad (s < s').$

A particularly appealing representation occurs if in addition we enhance the cumulative cycle numbers for large $s$ exponentially by defining $\bar K_{c_s(n)} = e^{s/2} K_{c_s(n)}$. Then the limit shape is still exponentially decaying (now as $e^{-s/2}$), and the limiting fluctuations have covariance $e^{-|s-s'|/2} - e^{-(s+s')/2}$. This means that they form a standard Ornstein-Uhlenbeck process. In particular, the variance of this process approaches 1 in the limit $s \to \infty$ of small cumulative cycle lengths.
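The covariances stated in this remark follow from the Brownian bridge covariance $\operatorname{Cov}(B_t, B_u) = t(1-u)$ for $t \le u$; we record the two-line computation (a routine check, not contained in the original text):

```latex
% For 0 < s < s', put t = e^{-s'} \le u = e^{-s}:
\operatorname{Cov}\bigl(B_{e^{-s}}, B_{e^{-s'}}\bigr) = e^{-s'}\bigl(1 - e^{-s}\bigr).
% After the enhancement \bar K_{c_s(n)} = e^{s/2} K_{c_s(n)}, the limiting
% fluctuations are X_s = e^{s/2} B_{e^{-s}}, with
\operatorname{Cov}(X_s, X_{s'})
  = e^{(s+s')/2}\, e^{-s'}\bigl(1 - e^{-s}\bigr)
  = e^{-(s'-s)/2} - e^{-(s+s')/2}
  = e^{-|s-s'|/2} - e^{-(s+s')/2}.
% As s, s' \to \infty the second term vanishes, leaving the stationary
% Ornstein--Uhlenbeck covariance e^{-|s-s'|/2}, with variance tending to 1.
```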
2.5. The asymptotic behaviour of the longest cycles. We can easily use Theorem 2.5 to study the asymptotic behaviour of the largest cycles. Since the proofs of this subsection are very short, we give them immediately. We denote by $\ell_1(\sigma)$ the length of the longest cycle in a permutation, by $\ell_2(\sigma)$ the length of the second longest cycle, and so on.

Theorem 2.7. For each $k \in \mathbb{N}$ we have

(2.12)  $\frac{1}{\alpha(n)}\,(\ell_1, \ell_2, \ldots, \ell_k) \xrightarrow{d} \underbrace{(1, 1, \ldots, 1)}_{k \text{ times}}.$

Proof. Note that we have

$\frac{1}{n/\alpha(n)}\sum_{b_t(n) < j \le \alpha(n)} C_j - (1-t) = \frac{K_{\alpha(n)} - K_{b_t(n)}}{n/\alpha(n)} - (1-t),$

so by Theorem 2.5,

$P_{n,\alpha}\Bigl[\Bigl|\frac{1}{n/\alpha(n)}\sum_{b_t(n) < j \le \alpha(n)} C_j - (1-t)\Bigr| > \epsilon\Bigr] \to 0.$

In particular, taking $t = 1/2$, the number of cycles of length greater than $b_{1/2}(n)$ diverges in probability, so for each fixed $k$, with high probability $\ell_k \ge b_{1/2}(n)$. The same argument applies for every fixed $t \in (0,1)$, and since $b_t(n)/\alpha(n) \to 1$ for each such $t$ while $\ell_j \le \alpha(n)$, the claim (2.12) follows. □

Theorem 2.8. Assume that $a_1 > 1/7$ and $a_2 < 1/2$ in (2.1). Then for each $k \in \mathbb{N}$,

(2.13)  $P_{n,\alpha}\bigl[(\ell_1, \ell_2, \ldots, \ell_k) \ne (\alpha(n), \ldots, \alpha(n))\bigr] \to 0 \quad\text{as } n \to \infty.$

Proof. It follows immediately from Theorem 2.4 that $C_{\alpha(n)}$ satisfies a central limit theorem after appropriate rescaling, since $\mu_{\alpha(n)}(n) \to \infty$ in this case. This implies in particular that $C_{\alpha(n)} \to \infty$ as $n \to \infty$ with high probability. Thus the number of cycles of length $\alpha(n)$ tends to infinity, which clearly implies (2.13), since each $\ell_j \le \alpha(n)$. □

As for Theorem 2.4, we believe that this theorem is indeed true whenever $\alpha(n) = O(n^{1/2})$ and $\alpha(n) \ge n^{\delta}$ for some $\delta > 0$.

3. Generating functions and the saddle-point method

Generating functions and their connection with analytic combinatorics form the backbone of the proofs in this paper. More precisely, we will determine formal generating functions for all relevant moment-generating functions and then use the saddle-point method to determine the asymptotic behaviour of these moment-generating functions as $n \to \infty$. Let $(a_n)_{n\in\mathbb{N}}$ be a sequence of complex numbers. Then its ordinary generating function is defined as the formal power series

$f(z) := \sum_{n=1}^{\infty} a_n z^n.$

The sequence may be recovered by formally extracting the coefficients, $[z^n] f(z) := a_n$, for any $n$. The first step is now to consider a special case of Pólya's Enumeration Theorem, see [19, §16, p. 17], which connects permutations with a specific generating function.
Lemma 3.1. Let $(q_j)_{j\in\mathbb{N}}$ be a sequence of complex numbers. We then have the following identity between formal power series in $z$:

(3.1)  $\exp\Bigl(\sum_{j=1}^{\infty} \frac{q_j z^j}{j}\Bigr) = \sum_{k=0}^{\infty} \frac{z^k}{k!} \sum_{\sigma\in S_k} \prod_{j=1}^{k} q_j^{C_j},$

where $C_j = C_j(\sigma)$ are the cycle counts. If either of the series in (3.1) is absolutely convergent, then so is the other one. Extracting the $n$th coefficient yields

(3.2)  $[z^n] \exp\Bigl(\sum_{j=1}^{\infty} \frac{q_j z^j}{j}\Bigr) = \frac{1}{n!} \sum_{\sigma\in S_n} \prod_{j=1}^{n} q_j^{C_j}.$
To see why this is useful for our purposes, note that when we set $q_j = 1$ whenever $j \le \alpha(n)$ and $q_j = 0$ otherwise, we obtain

(3.3)  $Z_{n,\alpha} := \frac{|S_{n,\alpha}|}{n!} = [z^n] \exp\Bigl(\sum_{j=1}^{\alpha} \frac{z^j}{j}\Bigr).$
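In practice $Z_{n,\alpha}$ is easy to compute: differentiating the generating function in (3.3) gives the recurrence $n\,Z_{n,\alpha} = \sum_{j=1}^{\min(\alpha,n)} Z_{n-j,\alpha}$ with $Z_{0,\alpha} = 1$. The following sketch (our own illustration; the helper names are not from the paper) implements this and cross-checks it against direct enumeration for $n = 6$, $\alpha = 3$:

```python
import math
from fractions import Fraction
from itertools import permutations

def Z(n, alpha):
    """Return [Z_{0,alpha}, ..., Z_{n,alpha}] via m Z_m = sum_{j=1}^{min(alpha,m)} Z_{m-j}."""
    z = [Fraction(1)]                      # Z_0 = 1 (the empty permutation)
    for m in range(1, n + 1):
        z.append(sum(z[m - j] for j in range(1, min(alpha, m) + 1)) / m)
    return z

def max_cycle_length(perm):
    n = len(perm)
    seen = [False] * n
    best = 0
    for s in range(n):
        if not seen[s]:
            length, i = 0, s
            while not seen[i]:
                seen[i] = True
                i = perm[i]
                length += 1
            best = max(best, length)
    return best

# Brute force: |S_{6,3}| = number of permutations of 6 elements with all cycles <= 3.
n, alpha = 6, 3
count = sum(1 for p in permutations(range(n)) if max_cycle_length(p) <= alpha)
assert Z(n, alpha)[n] * math.factorial(n) == count  # both equal 276
```

Exact rational arithmetic is used so that the comparison with the integer count is exact.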
When we now fix distinct numbers $1 \le m_k \le \alpha(n)$ for $1 \le k \le K$ and set $q_{m_k} = e^{s_k}$ and $q_j = 1$ for $j \ne m_k$, we arrive at the expression

(3.4)  $E_{n,\alpha}\Bigl[e^{\sum_{k=1}^{K} s_k C_{m_k}}\Bigr] = \frac{1}{Z_{n,\alpha}}\, [z^n] \exp\Bigl(\sum_{k=1}^{K} (e^{s_k}-1)\frac{z^{m_k}}{m_k}\Bigr) \exp\Bigl(\sum_{j=1}^{\alpha(n)} \frac{z^j}{j}\Bigr)$

for the moment-generating function of the cycle counts $C_{m_1}, \ldots, C_{m_K}$ under $P_{n,\alpha}$.
for the moment-generating function of the cycle counts Cm1 , ..., CmK under Pn,α . Another example arises if we choose qj = es for j ≤ bt (n) and qj = 1 for j > bt (n). The result is then the moment-generating function of the number of cycles of lengths less than or equal to bt (n) that is investigated in Theorems 2.5 and 2.6. We find bt (n) s j α(n) h Pbt (n) i X e z X zj sK 1 . (3.5) [z n ] exp + En,α e bt (n) = En,α es j=1 Cj = Zn,α j j j=1 j=bt (n)+1
Since the $b_t(n)$ diverge as $n \to \infty$, we have to rescale the random variables $K_{b_t(n)}$ by some sequence $\gamma(n)$, i.e. we consider $K_{b_t(n)}/\gamma(n)$. Such a rescaling of the random variable will actually appear as a rescaling of the parameter $s$. Also, we will need joint distributions of different $K_{b_{t_i}}$, $0 \le t_1 < t_2 < \ldots < t_m \le 1$. This is achieved by putting $q_j = e^{\sum_{l=i(j)}^{m} \frac{s_l}{\gamma(n)}}$, where $i(j) := \min\{1 \le l \le m : b_{t_l}(n) \ge j\}$. Intuitively, any index $l$ with $b_{t_l}(n) \ge j$ contributes a factor $\exp(s_l/\gamma(n))$ to $q_j$, since the number of cycles of length $j$ is counted in $K_{b_{t_l}(n)}$ in this case. We obtain

(3.6)  $E_{n,\alpha}\Bigl[e^{\sum_{i=1}^{m} \frac{s_i}{\gamma(n)} K_{b_{t_i}(n)}}\Bigr] = \frac{1}{Z_{n,\alpha}}\, [z^n] \exp\Bigl(\sum_{i=0}^{m} \sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)} \frac{e^{\sum_{\ell=i+1}^{m} \frac{s_\ell}{\gamma(n)}}}{j}\, z^j\Bigr),$
where $t_0 := 0$ and $t_{m+1} := 1$.

The way to extract the series coefficients from expressions such as (3.4) and (3.6) is the saddle point method, a standard tool in asymptotic analysis. The basic idea is to rewrite the expression (3.2) as a complex contour integral and choose the path of integration in a convenient way. The details of this procedure depend on the situation at hand and need to be worked out on a case by case basis. A general overview of the saddle-point method can be found in [15, page 551]. We now treat the most general case of the saddle point method that is relevant for the present situation. Let $q = (q_{j,n})_{1\le j\le\alpha(n),\, n\in\mathbb{N}}$ be a triangular array. We assume that all $q_{j,n}$ are nonnegative and define $x_{n,q}$ as the unique positive solution of

(3.7)  $n = \sum_{j=1}^{\alpha(n)} q_{j,n}\, x_{n,q}^j.$
Let further

$\lambda_{p,n} := \lambda_{p,n,\alpha,q} := \sum_{j=1}^{\alpha(n)} q_{j,n}\, j^{p-1}\, x_{n,q}^j,$

where $p$ is a natural number. Due to Equation (3.7),

(3.8)  $\lambda_{p,n} \le n\,(\alpha(n))^{p-1}$

holds for all $p \ge 1$. Let us write $a_n \approx b_n$ when there exist constants $c, C > 0$ such that $c\,b_n \le a_n \le C\,b_n$ for large $n$. We will call a triangular array $q$ admissible if the following three conditions are met:

(i): We have

(3.9)  $\alpha(n)\log x_{n,q} \approx \log\frac{n}{\alpha(n)}.$

(ii): We have

(3.10)  $\lambda_{2,n} \approx n\,\alpha(n).$
(iii): There exists a sequence $n \mapsto b(n)$ with $b(n)/\alpha(n) < 1 - \delta$ for some $\delta > 0$, and such that $q_{j,n} \ge c > 0$ for all $j \ge b(n)$ and some constant $c > 0$.

Note that condition (i) implies in particular that $\lim_{n\to\infty} x_{n,q} = 1$. Let $B_r(0)$ denote the circle with midpoint 0 and radius $r$ in the complex plane. We will call a family of complex-valued functions $f_n$ admissible if the following three conditions are met:

(i): there exists $\delta > 0$ such that $f_n$ is holomorphic on $B_{x_{n,q}+\delta}(0)$ for large enough $n$.

(ii): There exists $K < \infty$ with

(3.11)  $\sup_{z\in\partial B_{x_{n,q}}(0)} |f_n(z)| \le n^K\, |f_n(x_{n,q})|$

for all large enough $n \in \mathbb{N}$.

(iii): With

(3.12)  $|||f_n|||_n := n^{-5/12}\,\alpha(n)^{-7/12} \sup_{|\theta|\le n^{-5/12}\alpha(n)^{-7/12}} \frac{\bigl|f_n'\bigl(x_{n,q}\, e^{i\theta}\bigr)\bigr|}{|f_n(x_{n,q})|},$

we have $\lim_{n\to\infty} |||f_n|||_n = 0$.

We are now in the position to formulate our general saddle point result.

Proposition 3.2. Let $q$ be an admissible triangular array, and $(f_n)$ an admissible family of functions. Then

$[z^n]\, f_n(z) \exp\Bigl(\sum_{j=1}^{\alpha(n)} \frac{q_{j,n}}{j}\, z^j\Bigr) = f_n(x_{n,q})\, \frac{e^{\lambda_{0,n}}}{x_{n,q}^n \sqrt{2\pi\lambda_{2,n}}} \Bigl(1 + O\Bigl(\frac{\alpha(n)}{n}\Bigr)\Bigr)\bigl(1 + O(|||f_n|||_n)\bigr).$

Proof. Cauchy's formula gives

(3.13)  $M_n := [z^n]\, f_n(z) \exp\Bigl(\sum_{j=1}^{\alpha(n)} \frac{q_{j,n}}{j}\, z^j\Bigr) = \frac{1}{2\pi i} \oint_{\partial B_r(0)} f_n(z) \exp\Bigl(\sum_{j=1}^{\alpha(n)} \frac{q_{j,n}}{j}\, z^j\Bigr) \frac{dz}{z^{n+1}}$

for any $r$ such that $f_n$ is holomorphic on $B_r(0)$. Condition (i) on $f_n$ guarantees that we can take $r = x_{n,q}$ for large enough $n$. We then rewrite

$M_n = \frac{1}{2\pi x_{n,q}^n} \int_{-\pi}^{\pi} f_n\bigl(x_{n,q}\, e^{i\theta}\bigr) \exp\Bigl(\sum_{j=1}^{\alpha(n)} \frac{q_{j,n}}{j}\, x_{n,q}^j\, e^{ij\theta} - in\theta\Bigr)\, d\theta.$
For the remainder of the proof, we will write $x$ instead of $x_{n,q}$ and $\alpha$ instead of $\alpha(n)$ for lighter notation. We define

(3.14)  $g_n(\theta) := \sum_{j=1}^{\alpha} \frac{q_{j,n}}{j}\bigl(e^{ij\theta} - 1\bigr)x^j - in\theta$

and obtain

$M_n = \frac{\exp\Bigl(\sum_{j=1}^{\alpha} \frac{q_{j,n}}{j}\, x^j\Bigr)}{2\pi x^n} \int_{-\pi}^{\pi} f_n\bigl(x e^{i\theta}\bigr) \exp\bigl(g_n(\theta)\bigr)\, d\theta.$

Note that $g_n(0) = g_n'(0) = 0$, $g_n^{(p)}(0) = i^p \lambda_{p,n}$ for $p \ge 2$, and $\bigl|g_n^{(p)}(\theta)\bigr| \le \lambda_{p,n}$.

Let $\theta_0 := n^{-5/12}\,\alpha(n)^{-7/12}$. For $|\theta| < \theta_0$, equation (3.8) implies that $\lambda_{p,n}|\theta|^p \le (n/\alpha)^{1-5p/12}$. Therefore a Taylor expansion about 0 gives

$g_n(\theta) = -\frac{\lambda_{2,n}}{2}\theta^2 - i\frac{\lambda_{3,n}}{6}\theta^3 + O\bigl(\lambda_{4,n}\theta^4\bigr)$

and

(3.15)  $\exp(g_n(\theta)) = \exp\Bigl(-\frac{\lambda_{2,n}}{2}\theta^2\Bigr)\Bigl(1 - i\frac{\lambda_{3,n}}{6}\theta^3 + O\bigl(\lambda_{3,n}^2\theta^6 + \lambda_{4,n}\theta^4\bigr)\Bigr).$

As for $f_n$, we have

$f_n\bigl(x e^{i\theta}\bigr) = f_n(x) + i\int_0^{\theta} f_n'\bigl(x e^{i\varphi}\bigr)\, x e^{i\varphi}\, d\varphi.$

Estimating the modulus of the integrand in the second term by its maximum and using assumption (3.12), we find that on $[-\theta_0, \theta_0]$,

$f_n\bigl(x e^{i\theta}\bigr) = f_n(x)\bigl(1 + O(|||f_n|||_n)\bigr).$

Putting things together, we have

$\int_{-\theta_0}^{\theta_0} f_n\bigl(x e^{i\theta}\bigr)\exp(g_n(\theta))\, d\theta = f_n(x)\int_{-\theta_0}^{\theta_0} e^{-\frac{\lambda_{2,n}\theta^2}{2}}\Bigl(1 + O\bigl(\lambda_{3,n}^2\theta^6 + \lambda_{4,n}\theta^4\bigr)\Bigr)\, d\theta + f_n(x)\int_{-\theta_0}^{\theta_0} e^{-\frac{\lambda_{2,n}\theta^2}{2}}\, O(|||f_n|||_n)\, d\theta.$

By (3.10), $\lambda_{2,n}\theta_0^2 \approx n^{1/6}\alpha^{-1/6}$, which diverges as $n \to \infty$. The standard estimate on Gaussian tails gives that for all $m \in \mathbb{N}$,

$\int_{-\theta_0}^{\theta_0} e^{-\frac{\lambda_{2,n}\theta^2}{2}}\, d\theta = \int_{-\infty}^{\infty} e^{-\frac{\lambda_{2,n}\theta^2}{2}}\, d\theta + O\bigl(\lambda_{2,n}^{-m}\bigr) = \frac{\sqrt{2\pi}}{\sqrt{\lambda_{2,n}}} + O\bigl(\lambda_{2,n}^{-m}\bigr).$

A scaling argument, (3.8) and assumption (3.10) give

$\int_{-\theta_0}^{\theta_0} e^{-\frac{\lambda_{2,n}\theta^2}{2}}\, \lambda_{3,n}^2\,|\theta|^6\, d\theta \le \frac{\sqrt{2\pi}}{\sqrt{\lambda_{2,n}}}\, O\Bigl(\frac{\lambda_{3,n}^2}{\lambda_{2,n}^3}\Bigr) = \frac{\sqrt{2\pi}}{\sqrt{\lambda_{2,n}}}\, O\Bigl(\frac{\alpha}{n}\Bigr)$

and

$\int_{-\theta_0}^{\theta_0} e^{-\frac{\lambda_{2,n}\theta^2}{2}}\, \lambda_{4,n}\,\theta^4\, d\theta \le \frac{\sqrt{2\pi}}{\sqrt{\lambda_{2,n}}}\, O\Bigl(\frac{\lambda_{4,n}}{\lambda_{2,n}^2}\Bigr) = \frac{\sqrt{2\pi}}{\sqrt{\lambda_{2,n}}}\, O\Bigl(\frac{\alpha}{n}\Bigr).$

Altogether, we find that

$\int_{-\theta_0}^{\theta_0} f_n\bigl(x e^{i\theta}\bigr)\exp(g_n(\theta))\, d\theta = f_n(x)\sqrt{\frac{2\pi}{\lambda_{2,n}}}\,\Bigl(1 + O\Bigl(\frac{\alpha}{n}\Bigr)\Bigr)\bigl(1 + O(|||f_n|||_n)\bigr).$
What remains to be shown is that

(3.16)  $\int_{|\theta|\ge\theta_0} f_n\bigl(x e^{i\theta}\bigr)\exp(g_n(\theta))\, d\theta = O\Bigl(f_n(x)\,\frac{\alpha(n)}{n\sqrt{\lambda_{2,n}}}\Bigr).$

We have

$-\Re g_n(\theta) = \sum_{j=1}^{\alpha} \frac{q_{j,n}}{j}\bigl(1 - \cos(j\theta)\bigr)x^j.$

For $\theta_0 \le \theta < \pi/\alpha$, due to $-\partial_\theta \Re g_n(\theta) > 0$, we have

(3.17)  $-\Re g_n(\theta) \ge -\Re g_n(\theta_0) \approx \theta_0^2\,\lambda_{2,n} \approx \Bigl(\frac{n}{\alpha}\Bigr)^{1/6}$

by assumption (3.10). For $\theta \ge \pi/\alpha$, let us first assume that $q_{j,n} \ge c > 0$ for all $n$ and $j$, i.e. $b(n) = 1$ in assumption (iii). We use that

$-\Re g_n(\theta) = \sum_{j=1}^{\alpha} \frac{q_{j,n}}{j}\bigl(1 - \cos(j\theta)\bigr)x^j \ge \frac{c}{\alpha}\sum_{j=1}^{\alpha}\bigl(1 - \cos(j\theta)\bigr)x^j =: c\, r_n(\theta)$

and

(3.18)  $r_n(\theta) = \frac{1}{\alpha}\Bigl(x\,\frac{x^{\alpha}-1}{x-1} - \Re\Bigl(x e^{i\theta}\,\frac{x^{\alpha}e^{i\theta\alpha}-1}{x e^{i\theta}-1}\Bigr)\Bigr) \ge \frac{2}{\pi^2}\,\frac{\theta^2}{(x-1)^2+\theta^2}\,\frac{x^{\alpha+1}}{\alpha(x-1)} - \frac{2x}{\alpha(x-1)}.$

The calculations for the final inequality can e.g. be found in [18, Lemma 12]. By (3.9), there exist $c_1, c_2 > 0$ with

$c_1 \log\frac{n}{\alpha} \le \alpha\log x \le c_2 \log\frac{n}{\alpha}.$

Thus $x \sim 1$, and $x - 1 \sim \log x \ge \frac{c_1}{\alpha}\log\frac{n}{\alpha}$. So the second term on the right hand side of (3.18) converges to zero. For the first term, we use that $\theta^2/((x-1)^2+\theta^2)$ is monotone increasing in $\theta$, and find an asymptotic lower bound of the form

(3.19)  $\frac{2}{\pi^2}\,\frac{\pi^2\alpha^{-2}}{c_2^2\,\alpha^{-2}\log^2\frac{n}{\alpha} + \pi^2\alpha^{-2}}\;\frac{x^{\alpha+1}}{c_2\log\frac{n}{\alpha}} \sim \frac{2\,x^{\alpha+1}}{c_2^3\,\log^3\frac{n}{\alpha}}.$

Since $x^{\alpha} \ge \bigl(\frac{n}{\alpha}\bigr)^{c_1}$, and using condition (3.11), we conclude that when $|\theta| \ge \theta_0$, $\bigl|f_n(x e^{i\theta})\, e^{g_n(\theta)}\bigr|$ vanishes faster than all powers of $1/n$. This shows the claim in the case $b(n) = 1$. For the case of general $b(n)$, we have
(3.20)  $-\Re g_n(\theta) \ge \frac{1}{\alpha}\sum_{j=1}^{\alpha} q_{j,n}\bigl(1-\cos(\theta j)\bigr)x^j = c\, r_n(\theta) + \frac{1}{\alpha}\sum_{j=1}^{\alpha}\bigl(q_{j,n}-c\bigr)\bigl(1-\cos(\theta j)\bigr)x^j \ge c\, r_n(\theta) - \frac{2c}{\alpha}\sum_{j=1}^{b(n)} x^j \ge c\, r_n(\theta)\Bigl(1 - \frac{2\,b(n)\,x^{b(n)}}{r_n(\theta)\,\alpha}\Bigr).$

By assumption, $b(n)/\alpha \le 1-\delta$ for some $\delta > 0$, and then

$x^{b(n)-\alpha} \le \Bigl(\frac{n}{\alpha}\Bigr)^{\frac{c_1}{\alpha}(b(n)-\alpha)} \le \Bigl(\frac{n}{\alpha}\Bigr)^{-c_1\delta}.$
Thus, by applying (3.19), the bracket on the right hand side of (3.20) converges to 1 as $n \to \infty$, and the proof is finished. □

4. Proofs of the main results

We establish most of our results by calculating the moment generating function, i.e. we determine $E\bigl[e^{\sum_i s_i X_i}\bigr]$ for random variables $X_i$. We study the moment generating function only in the sector given by $s_i \ge 0$ for all $i$, since this simplifies the argumentation with respect to the saddle point method. In the cases we consider, it is a consequence of [23] that pointwise convergence of the moment generating function in this sector is sufficient to establish weak convergence of the joint distribution of the involved random variables.

To begin with, we state two results that have been proved elsewhere, and that show that the triangular array $q$ with $q_{j,n} = 1$ for all $j \le \alpha(n)$ is admissible.

Lemma 4.1. We have, as $n \to \infty$,

(4.1)  $\alpha(n)\log(x_{n,\alpha}) = \log\Bigl(\frac{n}{\alpha(n)}\log\frac{n}{\alpha(n)}\Bigr) + O\Bigl(\frac{\log(\log(n))}{\log(n)}\Bigr).$
In particular, $x_{n,\alpha} \ge 1$, $\lim_{n\to\infty} x_{n,\alpha} = 1$ and

$x_{n,\alpha}^{\alpha(n)} \sim \frac{n}{\alpha(n)}\,\log\frac{n}{\alpha(n)}.$
This Lemma is a reformulation of Lemma 4.11 in [7], which in turn follows [18]. In the latter reference, the Lemma is actually shown for more general functions α. Lemma 4.2. As n → ∞,
α(n)
X j=1
jxjn,α ∼ nα(n).
This result has been proved in Lemma 9 in [18]. It may also be derived as a special case of Lemma 4.6.

4.1. Proof of Proposition 2.1. Equation (2.4) follows directly from Lemma 4.1. By Equation (3.4), the moment generating function of $C_{m(n)}$ is given by
$$\mathbb{E}_{n,\alpha}\left[e^{sC_{m(n)}}\right] = \frac{1}{Z_{n,\alpha}}[z^n]\exp\left(\left(e^s-1\right)\frac{z^{m(n)}}{m(n)}\right)\exp\left(\sum_{j=1}^{\alpha(n)}\frac{z^j}{j}\right).$$
Differentiating with respect to $s$ and setting $s=0$, we obtain
$$\mathbb{E}_{n,\alpha}\left[C_{m(n)}\right] = \frac{1}{Z_{n,\alpha}}[z^n]\,\frac{z^{m(n)}}{m(n)}\exp\left(\sum_{j=1}^{\alpha(n)}\frac{z^j}{j}\right).$$
We may now apply Proposition 3.2 with $f_n(z)=\frac{z^{m(n)}}{m(n)}$ and $q_{j,n}=1$ for $1\le j\le\alpha(n)$. The admissibility of $(f_n)$ is a consequence of $m(n)\le\alpha(n)=o\left(n^{5/12}\alpha(n)^{7/12}\right)$. The fact that $q$ is admissible follows from Lemmata 4.1 and 4.2. The claim
$$\mathbb{E}_{n,\alpha}\left[C_{m(n)}\right] \sim \frac{x_{n,\alpha}^{m(n)}}{m(n)} = \mu_{m(n)}(n)$$
is therefore proved. □
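The asymptotics in Lemmata 4.1 and 4.2 can be illustrated numerically. The following sketch is an illustration only, not part of the argument: the values of $n$ and $\alpha$ are arbitrary choices, and since the asymptotic ratios converge slowly, only the orders of magnitude are checked. It solves the saddle point equation $n=\sum_{j=1}^{\alpha}x^j$ by bisection and compares $x_{n,\alpha}^{\alpha}$ and $\sum_{j=1}^{\alpha} j\,x_{n,\alpha}^j$ with their predicted orders.

```python
# Numerical illustration of Lemmata 4.1 and 4.2 (not part of the proof).
# n and alpha are arbitrary experiment parameters; the asymptotic ratios
# converge only slowly, so we merely check that they are of order one.
import math

def saddle_point(n, alpha):
    """Unique positive solution x > 1 of sum_{j=1}^alpha x^j = n (n > alpha)."""
    f = lambda x: sum(x**j for j in range(1, alpha + 1)) - n
    lo, hi = 1.0, 1.1  # for n >> alpha the solution is close to 1
    while f(hi) < 0:   # enlarge the bracket if necessary
        hi *= 1.1
    while hi - lo > 1e-13:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

n, alpha = 10**6, 10**3
x = saddle_point(n, alpha)
print(x > 1.0)                                              # x_{n,alpha} > 1
ratio1 = x**alpha / ((n / alpha) * math.log(n / alpha))     # Lemma 4.1
ratio2 = sum(j * x**j for j in range(1, alpha + 1)) / (n * alpha)  # Lemma 4.2
print(round(ratio1, 2), round(ratio2, 2))                   # both of order one
```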
4.2. Proof of Theorem 2.2. We follow the ideas in [3], where the case of uniform permutations is treated. Let $(Z_k)_k$ be independent random variables with $Z_k\sim\mathrm{Poi}\left(\frac{1}{k}\right)$ for $k\in\mathbb{N}$ and let
(4.2)
$$T_{b_1 b_2} := \sum_{k=b_1+1}^{b_2} k\,Z_k.$$
Let $C^b=(C_1,C_2,\ldots,C_b)$ be the vector of the cycle counts up to length $b$, $Z^b=(Z_1,Z_2,\ldots,Z_b)$, and $a=(a_1,a_2,\ldots,a_b)$ a vector. A cornerstone for investigating the classical case of uniform random permutations is the so-called conditioning relation [2, Equation (1.15)]
(4.3)
$$P_n\left[C^b=a\right] = P\left[Z^b=a\,\middle|\,T_{0n}=n\right].$$
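The conditioning relation (4.3) can be checked by brute force for a small $n$. The following sketch (an illustration with $n=4$, not part of the proof) compares the exact cycle-type distribution of a uniform permutation with the conditioned Poisson law.

```python
# Brute-force check of the conditioning relation (4.3) for n = 4: the cycle
# counts of a uniform permutation of S_4 have the same law as independent
# Z_k ~ Poi(1/k) conditioned on sum_k k*Z_k = 4. Illustration only.
import math
from itertools import permutations
from collections import Counter

n = 4

def cycle_counts(perm):
    """Return (C_1, ..., C_n), the cycle counts of a permutation of {0,...,n-1}."""
    seen, counts = set(), [0] * n
    for start in range(n):
        if start not in seen:
            length, j = 0, start
            while j not in seen:
                seen.add(j); j = perm[j]; length += 1
            counts[length - 1] += 1
    return tuple(counts)

uniform = Counter(cycle_counts(p) for p in permutations(range(n)))

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam**k / math.factorial(k)

# P[Z_1 = a_1, ..., Z_n = a_n]; the cycle types are exactly the a with L(a) = n
conditioned = {}
for a in uniform:
    p = 1.0
    for k, ak in enumerate(a, start=1):
        p *= poisson_pmf(1.0 / k, ak)
    conditioned[a] = p
total = sum(conditioned.values())

for a in uniform:
    lhs = uniform[a] / math.factorial(n)   # P_n[C^b = a]
    rhs = conditioned[a] / total           # P[Z^b = a | T_{0n} = n]
    print(a, abs(lhs - rhs) < 1e-12)
```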
Since the measure of random permutations without long cycles satisfies
$$P_{n,\alpha} = P_n\left[\,\cdot\,\middle|\,C_{\alpha(n)+1}=\ldots=C_n=0\right],$$
an analogue of Equation (4.3) holds for $b\le\alpha(n)$:
(4.4)
$$P_{n,\alpha}\left[C^b=a\right] = P\left[Z^b=a\,\middle|\,T_{0\alpha(n)}=n\right].$$
Let $L(a) := \sum_{k=1}^{b(n)} k\,a_k$. For $a\in\mathbb{N}^{b(n)}$ with $L(a)=r$, independence of the $Z_k$ gives
$$P\left[Z^{b(n)}=a\,\middle|\,T_{0\alpha(n)}=n\right] = \frac{P\left[Z^{b(n)}=a\right]\,P\left[T_{b(n)\alpha(n)}=n-r\right]}{P\left[T_{0\alpha(n)}=n\right]}.$$
Define $P_{n,b(n),\alpha}$ and $\tilde P_{b(n)}$ as in Theorem 2.2, and let $d_{b(n)} := \|P_{n,b(n),\alpha} - \tilde P_{b(n)}\|_{TV}$. Then Equation (4.4) implies
$$d_{b(n)} = \sum_{a\in\mathbb{N}^{b(n)}}\left(P\left[Z^{b(n)}=a\right] - P\left[Z^{b(n)}=a\,\middle|\,T_{0\alpha(n)}=n\right]\right)_+$$
$$= \sum_{r=0}^{\infty}\sum_{a:\,L(a)=r} P\left[Z^{b(n)}=a\right]\left(1-\frac{P\left[T_{b(n)\alpha(n)}=n-r\right]}{P\left[T_{0\alpha(n)}=n\right]}\right)_+$$
$$= \sum_{r=0}^{\infty} P\left[T_{0b(n)}=r\right]\left(1-\frac{P\left[T_{b(n)\alpha(n)}=n-r\right]}{P\left[T_{0\alpha(n)}=n\right]}\right)_+$$
$$\le P\left[T_{0b(n)}\ge\rho_n b(n)+1\right] + \sum_{r=0}^{\rho_n b(n)} P\left[T_{0b(n)}=r\right]\left(1-\frac{P\left[T_{b(n)\alpha(n)}=n-r\right]}{P\left[T_{0\alpha(n)}=n\right]}\right)_+,$$
where $\rho_n>0$ is arbitrary for now. In [3, Lemma 8] it is shown that
$$P\left[T_{0b(n)}\ge b(n)\rho_n\right] \le \left(\frac{\rho_n}{e}\right)^{-\rho_n}.$$
So $P\left[T_{0b(n)}\ge b(n)\log n\right]$ decays faster than any power of $n$. For the second term, we estimate
$$\sum_{r=0}^{\rho_n b(n)} P\left[T_{0b(n)}=r\right]\left(1-\frac{P\left[T_{b(n)\alpha(n)}=n-r\right]}{P\left[T_{0\alpha(n)}=n\right]}\right)_+ \le \max_{0\le r\le\rho_n b(n)}\left(1-\frac{P\left[T_{b(n)\alpha(n)}=n-r\right]}{P\left[T_{0\alpha(n)}=n\right]}\right)_+.$$
The proof is then concluded by plugging $\rho_n=\log n$ into the estimate of the lemma below.

Lemma 4.3. Let $b(n)=o\left(\frac{\alpha(n)}{\log(n)}\right)$ and $\rho_n=O(\log(n))$. Then,
$$\max_{1\le r\le\rho_n b(n)}\left(1-\frac{P\left[T_{b(n)\alpha(n)}=n-r\right]}{P\left[T_{0\alpha(n)}=n\right]}\right)_+ = O\left(\left(\frac{b(n)}{\alpha(n)}+\frac{\alpha(n)}{n}\right)\log(n)\right)$$
as $n\to\infty$.
Proof. It is easily verified that the probability generating function of $T_{b_1 b_2}$ is given by
$$\exp\left(\sum_{j=b_1+1}^{b_2}\frac{z^j-1}{j}\right)$$
(cf. [3]). Therefore,
$$P\left[T_{b(n),\alpha(n)}=m\right] = [z^m]\exp\left(\sum_{j=b(n)+1}^{\alpha(n)}\frac{z^j-1}{j}\right).$$
In particular,
$$P\left[T_{b(n),\alpha(n)}=n-r\right] = [z^{n-r}]\,e^{-\sum_{j=b(n)+1}^{\alpha(n)}\frac{1}{j}}\,e^{\sum_{j=b(n)+1}^{\alpha(n)}\frac{z^j}{j}} = e^{-\sum_{j=b(n)+1}^{\alpha(n)}\frac{1}{j}}\,[z^n]\,z^r\,e^{\sum_{j=b(n)+1}^{\alpha(n)}\frac{z^j}{j}}$$
and
$$P\left[T_{0,\alpha(n)}=n\right] = e^{-\sum_{j=b(n)+1}^{\alpha(n)}\frac{1}{j}}\,[z^n]\,e^{\sum_{j=1}^{b(n)}\frac{z^j-1}{j}}\,e^{\sum_{j=b(n)+1}^{\alpha(n)}\frac{z^j}{j}}.$$
Since the factors $e^{-\sum_{j=b(n)+1}^{\alpha(n)}\frac{1}{j}}$ will cancel in the quotient of the two terms, we see that we are in the situation of Proposition 3.2. We have $q_{j,n}=1$ if $j\ge b(n)+1$ and $q_{j,n}=0$ otherwise, and $f_1(z)=z^r$ and $f_2(z)=e^{\sum_{j=1}^{b(n)}\frac{z^j-1}{j}}$, respectively. The saddle point $x_{n,b,\alpha}$ is the unique positive solution of
$$n = \sum_{j=b(n)+1}^{\alpha(n)} x_{n,b,\alpha}^j.$$
When $x_{n,\alpha}$ is the saddle point with $q_{j,n}=1$ for all $j$, we easily see that
(4.5)
$$x_{n,\alpha} \le x_{n,b,\alpha} \le x_{n,\alpha/2}$$
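The probability generating function of $T_{b_1 b_2}$ stated at the beginning of the proof can be verified numerically. The sketch below (arbitrary small parameters, illustration only) convolves the laws of the summands $kZ_k$ and compares the result with the power-series coefficients of $\exp\left(\sum_{j=b_1+1}^{b_2}\frac{z^j-1}{j}\right)$.

```python
# Check of the probability generating function of T_{b1,b2} = sum k*Z_k with
# Z_k ~ Poi(1/k) independent. b1, b2 and the truncation order N are arbitrary.
import math

b1, b2, N = 2, 6, 30

# Law of T up to N by convolving the laws of the k*Z_k.
dist = [1.0] + [0.0] * N
for k in range(b1 + 1, b2 + 1):
    pk = [math.exp(-1.0 / k) * (1.0 / k)**m / math.factorial(m)
          for m in range(N // k + 1)]          # P[Z_k = m]
    new = [0.0] * (N + 1)
    for s, ps in enumerate(dist):
        for m, p in enumerate(pk):
            if s + k * m <= N:
                new[s + k * m] += ps * p
    dist = new

# Coefficients of exp(g(z)) with g(z) = sum_{j=b1+1}^{b2} z^j / j, via the
# standard recurrence F_m = (1/m) sum_{i=1}^m i*g_i*F_{m-i} for F = exp(G),
# then multiplied by the constant factor exp(-sum 1/j).
g = [0.0] * (N + 1)
for j in range(b1 + 1, b2 + 1):
    g[j] = 1.0 / j
F = [1.0] + [0.0] * N
for m in range(1, N + 1):
    F[m] = sum(i * g[i] * F[m - i] for i in range(1, m + 1)) / m
norm = math.exp(-sum(1.0 / j for j in range(b1 + 1, b2 + 1)))

print(all(abs(dist[m] - norm * F[m]) < 1e-12 for m in range(N + 1)))
```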
for large $n$. Lemma 4.1 now shows that
$$\log\frac{n}{\alpha(n)} \sim \alpha\log x_{n,\alpha} \le \alpha\log x_{n,b,\alpha} \le \alpha\log x_{n,\alpha/2} \sim 2\log\frac{n}{\alpha(n)},$$
so $\alpha\log x_{n,b,\alpha}\approx\log\frac{n}{\alpha(n)}$, and similarly Lemma 4.2 shows that $\lambda_{2,n}\approx n\,\alpha(n)$. Thus $q$ is admissible. In the admissibility of $f_1$ and $f_2$, only condition (iii) is not trivial. For $f_1$, we have
$$|||f_1||| \le r\,n^{-\frac{5}{12}}\alpha(n)^{-\frac{7}{12}} = O\left(\frac{b(n)\log n}{n^{5/12}\alpha(n)^{7/12}}\right),$$
and for $f_2$, we find
$$|||f_2||| \le n^{-\frac{5}{12}}\alpha(n)^{-\frac{7}{12}}\sum_{j=0}^{b(n)-1}x_{n,b,\alpha}^j \le n^{-\frac{5}{12}}\alpha(n)^{-\frac{7}{12}}\,b(n)\,x_{n,b,\alpha}^{b(n)}.$$
By equation (4.5),
(4.6)
$$b(n)\log x_{n,b,\alpha} \le \frac{2b(n)}{\alpha}\cdot\frac{\alpha}{2}\log x_{n,\alpha/2} \sim \frac{2b(n)}{\alpha}\log\frac{n}{\alpha(n)} = o(1),$$
so $|||f_2||| = O\left(n^{-5/12}\alpha(n)^{-7/12}\right)$. We can now apply Proposition 3.2, and find
$$\frac{P\left[T_{b(n)\alpha(n)}=n-r\right]}{P\left[T_{0\alpha(n)}=n\right]} = \frac{f_1(x_{n,b,\alpha})}{f_2(x_{n,b,\alpha})}\left(1+O\left(\frac{\alpha(n)}{n}+\frac{b(n)\log n}{n^{5/12}\alpha(n)^{7/12}}\right)\right),$$
uniformly in $1\le r\le\rho_n b(n)$. Writing $x$ instead of $x_{n,b,\alpha}$ for brevity, we find
$$0 \le \log f_2(x) = \sum_{j=1}^{b(n)}\frac{x^j-1}{j} = \sum_{j=0}^{b(n)-1}\int_1^x v^j\,dv \le (x-1)\,b(n)\,x^{b(n)}.$$
Since $x-1 \le x_{n,\alpha/2}-1 \sim \frac{2\log(n/\alpha(n))}{\alpha(n)}$ and $x^{b(n)}=O(1)$ by (4.6), we find that
$$(x-1)\,b(n)\,x^{b(n)} = O\left(\frac{b(n)}{\alpha(n)}\log n\right),$$
and so $1\le f_2(x)\le 1+O\left(\frac{b(n)}{\alpha(n)}\log n\right)$. Since $f_1(x)\ge1$, we find that
$$\frac{f_1(x)}{f_2(x)} \ge 1+O\left(\frac{b(n)}{\alpha(n)}\log n\right),$$
and thus
$$\frac{P\left[T_{b(n)\alpha(n)}=n-r\right]}{P\left[T_{0\alpha(n)}=n\right]} \ge 1+O\left(\left(\frac{b(n)}{\alpha(n)}+\frac{\alpha(n)}{n}\right)\log n\right),$$
uniformly in $r$. The Lemma is proved. □
4.3. Proof of Theorem 2.3. Write $\mu_j := \mu_{m_j}$ and $\tilde C_{m_j} := C_{m_j}^{(\mu_{m_j})}$. Let $s_j\ge0$. According to the definition of the tilted random variables, we have
$$\mathbb{E}_{n,\alpha}\left[\exp\left(\sum_{j=1}^k s_j\tilde C_{m_j}\right)\right] = \sum_{l_1=1}^{\infty}\cdots\sum_{l_k=1}^{\infty}\exp\left(\sum_{j=1}^k s_j l_j\right)P_{n,\alpha}\left[\tilde C_{m_1}=l_1,\ldots,\tilde C_{m_k}=l_k\right]$$
$$= \frac{1}{Z}\sum_{l_1=1}^{\infty}\cdots\sum_{l_k=1}^{\infty}\prod_{j=1}^k\frac{\exp(s_j l_j+\mu_j)}{\mu_j^{l_j}}\,P_{n,\alpha}\left[C_{m_1}=l_1,\ldots,C_{m_k}=l_k\right]$$
$$= \frac{\exp\left(\sum_{j=1}^k\mu_j\right)}{Z}\sum_{l_1=1}^{\infty}\cdots\sum_{l_k=1}^{\infty}\prod_{j=1}^k\exp\left[l_j(s_j-\log\mu_j)\right]P_{n,\alpha}\left[C_{m_1}=l_1,\ldots,C_{m_k}=l_k\right]$$
$$= \frac{\exp\left(\sum_{j=1}^k\mu_j\right)}{Z}\,\mathbb{E}_{n,\alpha}\left[\exp\left(\sum_{j=1}^k(s_j-\log\mu_j)C_{m_j}\right)\right].$$
Here, the normalization $Z$ depends on $n$. By Equation (3.4),
(4.7)
$$\mathbb{E}_{n,\alpha}\left[\exp\left(\sum_{j=1}^k(s_j-\log\mu_j)C_{m_j}\right)\right] = \frac{1}{Z_{n,\alpha}}[z^n]\exp\left(\sum_{j=1}^k\left(e^{s_j-\log\mu_j}-1\right)\frac{z^{m_j}}{m_j}\right)\exp\left(\sum_{i=1}^{\alpha(n)}\frac{z^i}{i}\right).$$
Let $f_n(z) := \exp\left(\sum_{j=1}^k\left(e^{s_j-\log\mu_j}-1\right)\frac{z^{m_j}}{m_j}\right)$. If $s_j\ge\log\mu_j$, we obtain
$$\Re\left[\left(e^{s_j-\log\mu_j}-1\right)\frac{z^{m_j}}{m_j}\right] \le \left(e^{s_j-\log\mu_j}-1\right)\frac{x_{n,\alpha}^{m_j}}{m_j}$$
for $z\in\mathbb{C}$ with $|z|=x_{n,\alpha}$. If $0\le s_j<\log\mu_j$, since $\limsup_{n\to\infty}\mu_j<\infty$ for all $j$, we have
$$\left(e^{s_j-\log\mu_j}-1\right)\frac{x_{n,\alpha}^{m_j}}{m_j} \ge -\mu_j \ge \mu_j-K_0 \ge \Re\left[\left(e^{s_j-\log\mu_j}-1\right)\frac{z^{m_j}}{m_j}\right]-K_0$$
for some constant $K_0>0$, for all $j$, $n$, and $z$ with $|z|=x_{n,\alpha}$. We conclude that
(4.8)
$$|f_n(z)| \le K f_n(x_{n,\alpha})$$
for some constant $K>0$ for all $n$ and $z$ with $|z|=x_{n,\alpha}$. Differentiating $f_n$ with respect to $z$ yields
$$f_n'(z) = \sum_{j=1}^k\left(e^{s_j-\log\mu_j}-1\right)z^{m_j-1}\,f_n(z).$$
Due to Equation (4.8),
$$\frac{|f_n'(z)|}{|f_n(x_{n,\alpha})|} \le K\sum_{j=1}^k\left|e^{s_j-\log\mu_j}-1\right||z|^{m_j-1}$$
for $z$ with $|z|=x_{n,\alpha}$. If $s_j\ge\log\mu_j$, we have
$$\left|e^{s_j-\log\mu_j}-1\right||z|^{m_j-1} \le \frac{e^{s_j}}{\mu_j}x_{n,\alpha}^{m_j-1} \le e^{s_j}m_j \le e^{s_j}\alpha(n).$$
If $0\le s_j<\log\mu_j$, a short calculation yields
$$\left|e^{s_j-\log\mu_j}-1\right||z|^{m_j-1} \le x_{n,\alpha}^{m_j-1} \le \mu_j m_j \le \mu_j\alpha(n).$$
Since $\limsup_{n\to\infty}\mu_j<\infty$ for all $j$, it follows that
$$|||f_n|||_n = n^{-\frac{5}{12}}\alpha(n)^{-\frac{7}{12}}\sup_{z\in\partial B_{x_{n,\alpha}}(0)}\frac{|f_n'(z)|}{|f_n(x_{n,\alpha})|} \longrightarrow 0$$
as $n\to\infty$. So $(f_n)$ is admissible. By choosing $q_{i,n}=1$ for $1\le i\le\alpha(n)$, $q$ is admissible by Lemmata 4.1 and 4.2. We may therefore apply Proposition 3.2 to Equation (4.7). We obtain
$$\mathbb{E}_{n,\alpha}\left[\exp\left(\sum_{j=1}^k s_j\tilde C_{m_j}\right)\right] \sim \frac{\exp\left(\sum_{j=1}^k\mu_j\right)}{Z}\,f_n(x_{n,\alpha}) = \frac{\prod_{j=1}^k\exp\left(e^{s_j}\right)}{Z}.$$
By setting $s_j=0$ for all $j$, we may deduce $Z\to e^k$ as $n\to\infty$, and the claim is proved. □
4.4. Proof of Theorem 2.4. We now turn to the case of diverging expectation. The following proposition states the most general result in this regime.

Proposition 4.4. Let $m_j:\mathbb{N}\to\mathbb{N}$ for $1\le j\le k$ be such that $m_j(n)\le\alpha(n)$ for large $n$, and let $\mu_j(n) := \frac{x_{n,\alpha}^{m_j(n)}}{m_j(n)}$. If $\mu_j(n)\to\infty$ and $n^{-\frac{5}{12}}\alpha(n)^{-\frac{7}{12}}\frac{x_{n,\alpha}^{m_j(n)}}{\sqrt{\mu_j(n)}}\to0$ for all $j$, then
$$\lim_{n\to\infty}\mathbb{E}_{n,\alpha}\left[\exp\left(\sum_{j=1}^k s_j\frac{C_{m_j(n)}-\mu_j(n)}{\sqrt{\mu_j(n)}}\right)\right] = \prod_{j=1}^k\exp\left(\frac{s_j^2}{2}\right)$$
for all $s_j\ge0$.

Proof. Starting from Equation (3.4), we need to apply Proposition 3.2. Again $q_{i,n}:=1$ if $1\le i\le\alpha(n)$, so $q$ is admissible. Clearly,
$$f_n(z) := \exp\left(\sum_{j=1}^k\left(\exp\left(\frac{s_j}{\sqrt{\mu_j(n)}}\right)-1\right)\frac{z^{m_j(n)}}{m_j(n)}\right)\exp\left(-\sum_{j=1}^k s_j\sqrt{\mu_j(n)}\right)$$
is the natural choice. What needs to be shown for admissibility is that
$$\lim_{n\to\infty}|||f_n|||_n = \lim_{n\to\infty}n^{-\frac{5}{12}}\alpha(n)^{-\frac{7}{12}}\sup_{z\in\partial B_{x_{n,\alpha}}(0)}\frac{|f_n'(z)|}{|f_n(|z|)|} = 0.$$
We compute that
$$n^{-\frac{5}{12}}\alpha(n)^{-\frac{7}{12}}\sup_{z\in\partial B_{x_{n,\alpha}}(0)}\frac{|f_n'(z)|}{|f_n(|z|)|} \le n^{-\frac{5}{12}}\alpha(n)^{-\frac{7}{12}}\sum_{j=1}^k\left(\exp\left(\frac{s_j}{\sqrt{\mu_j(n)}}\right)-1\right)x_{n,\alpha}^{m_j(n)-1} = O\left(n^{-\frac{5}{12}}\alpha(n)^{-\frac{7}{12}}\sum_{j=1}^k\frac{x_{n,\alpha}^{m_j(n)}}{\sqrt{\mu_j(n)}}\right).$$
So, according to our assumptions, $(f_n)$ is admissible and we may apply the proposition. From
$$f_n(x_{n,\alpha}) = \exp\left(\sum_{j=1}^k\left(-s_j\sqrt{\mu_j(n)}+\mu_j(n)\left(\frac{s_j}{\sqrt{\mu_j(n)}}+\frac{s_j^2}{2\mu_j(n)}+O\left(\frac{s_j^3}{\mu_j(n)^{3/2}}\right)\right)\right)\right)$$
$$= \exp\left(\sum_{j=1}^k\frac{s_j^2}{2}\left(1+O\left(\frac{1}{\mu_j(n)^{1/2}}\right)\right)\right) \to \exp\left(\sum_{j=1}^k\frac{s_j^2}{2}\right),$$
we then conclude the claim. □
Proof of Theorem 2.4. We only have to show that
$$n^{-\frac{5}{12}}\alpha(n)^{-\frac{7}{12}}\frac{x_{n,\alpha}^{m_j(n)}}{\sqrt{\mu_j(n)}} \to 0$$
for all $j$; then we may apply Proposition 4.4. Consider
$$\frac{x_{n,\alpha}^{m_j(n)}}{\sqrt{\mu_j(n)}} = \sqrt{m_j(n)}\,x_{n,\alpha}^{m_j(n)/2} \le \sqrt{\alpha(n)}\,x_{n,\alpha}^{\alpha(n)/2} = O\left(\sqrt{n\log\frac{n}{\alpha(n)}}\right),$$
which holds by Lemma 4.1. We conclude that
$$n^{-\frac{5}{12}}\alpha(n)^{-\frac{7}{12}}\frac{x_{n,\alpha}^{m_j(n)}}{\sqrt{\mu_j(n)}} = O\left(n^{\frac{1}{12}}\alpha(n)^{-\frac{7}{12}}\left(\log\frac{n}{\alpha(n)}\right)^{\frac{1}{2}}\right) \subset o(1),$$
since there is some $\delta>0$ such that $\frac{1}{\alpha(n)}=O\left(n^{-\frac{1}{7}-\delta}\right)$. The claim is proved. □
4.5. Proofs of Theorems 2.5 and 2.6 and Equation (2.9). This section deals mainly with the proofs concerning the limit shape and the fluctuations of cumulative cycle counts. The last point is the proof of the limit shape for indices. The proofs of the limit shape and of the convergence of the finite-dimensional distributions of the fluctuations will apply the saddle point method. Note that, according to Equation (3.6), we need to calculate moment generating functions of the form
(4.9)
$$M_{n,\gamma}(s) = \mathbb{E}_{n,\alpha}\left[\exp\left(\sum_{i=1}^m\tilde s_i\sum_{k=1}^{b_{t_i}(n)}C_k\right)\right] = \frac{1}{Z_{n,\alpha}}[z^n]\exp\left(\sum_{i=0}^m e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}\frac{z^j}{j}\right)$$
with $t_0:=0$ and $t_{m+1}:=1$. Here, $\gamma$ is a function, and the tilde indicates the rescaling of the variable $s$: $\tilde s := \frac{s}{\gamma(n)}$. Since the random variables in question diverge, we will always assume for the rescaling that $\gamma(n)\to\infty$ as $n\to\infty$. In the terms of Proposition 3.2, this means that $f_n=1$ and $q_{j,n}=e^{\sum_{l=i(j)}^m\frac{s_l}{\gamma(n)}}$, where $i(j):=\min\{1\le l\le m : b_{t_l}(n)\ge j\}$. Intuitively, any index $l$ with $b_{t_l}(n)\ge j$ contributes a factor $\exp(s_l/\gamma(n))$ to $q_{j,n}$, since the number of cycles of length $j$ is counted in $K_{b_{t_l}(n)}$ in this case. The saddle point of this problem is, up to rescaling, given by the unique positive solution $x_n(s):=x_{n,\alpha,t}(s)$ of
(4.10)
$$n = \sum_{i=0}^m e^{\sum_{\ell=i+1}^m s_\ell}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}(x_n(s))^j.$$
Note that $x_n(0)=x_{n,\alpha}$. In order to apply Proposition 3.2, we now have to verify that $q$ is admissible. This is done in Lemmata 4.5 and 4.6. The lemmata provide more detailed information than is necessary for proving admissibility, because it will be of importance for investigating the moment generating function more closely.

Lemma 4.5. Let $\tilde s := \frac{s}{\gamma(n)}$ with $\gamma(n)\to\infty$, and let $t=(t_i)_{1\le i\le m}$ with $0=t_0\le t_1<\ldots<t_i<\ldots<t_m\le t_{m+1}=1$ and $s_i\ge0$ for all $1\le i\le m$. Then
(4.11)
$$\alpha(n)\log(x_n(\tilde s)) \sim \log\frac{n}{\alpha(n)}$$
locally uniformly in $s$. In particular,
(4.12)
$$\lim_{n\to\infty}x_n(\tilde s) = 1$$
locally uniformly in $s$.
Proof. Since $s_i\ge0$ for all $i$, comparing Equations (2.2) and (4.10) yields
(4.13)
$$\hat x_n \le x_n(\tilde s) \le x_{n,\alpha(n)},$$
where $\hat x_n$ is the unique positive solution of
$$n\,e^{-\sum_{i=1}^m\tilde s_i} = \sum_{j=1}^{\alpha(n)}\hat x_n^j.$$
Applying Lemma 4.1 to $x_{n,\alpha(n)}$ yields
$$\limsup_{n\to\infty}\frac{\alpha(n)\log(x_n(\tilde s))}{\log\frac{n}{\alpha(n)}} \le 1$$
by monotonicity. By a slightly more general version of Lemma 4.1 (cf. [18, Lemma 9]), we also have
$$\alpha(n)\log(\hat x_n) \sim \log\frac{n\,e^{-\sum_{i=1}^m\tilde s_i}}{\alpha(n)} \sim \log\frac{n}{\alpha(n)}$$
as $n\to\infty$, due to $\gamma(n)\to\infty$. Equation (4.11) then follows from
$$\liminf_{n\to\infty}\frac{\alpha(n)\log(x_n(\tilde s))}{\log\frac{n}{\alpha(n)}} \ge 1.$$
Now Equation (4.12) is a direct consequence. □

Lemma 4.6. Let $\tilde s := \frac{s}{\gamma(n)}$ with $\gamma(n)\to\infty$ and $t=(t_1,\ldots,t_m)^T$ with $0\le t_1<\ldots<t_m\le1$ for $m\in\mathbb{N}$. Then, locally uniformly in $s=(s_1,\ldots,s_m)^T\in[0,\infty)^m$,
$$\lambda_{2,n} = n\,\alpha(n) + O\left(\frac{n\,\alpha(n)}{\log\frac{n}{\alpha(n)}}\right).$$
Proof. W.l.o.g., let $0<t_1<1$ and $m=1$. As the following calculations will show, larger values of $m$ pose no particular problem, since they only produce additional terms of similar structure, and $b_{t_k}(n)\sim\alpha(n)$ for all $k\ge1$ in this case. Moreover, let $x := x_{n,\alpha,t}(\tilde s)$. Then
$$\lambda_{2,n} = e^{\tilde s_1}\sum_{j=1}^{b_{t_1}(n)}j\,x^j + \sum_{j=b_{t_1}(n)+1}^{\alpha(n)}j\,x^j = x\frac{d}{dx}\left(e^{\tilde s_1}\sum_{j=1}^{b_{t_1}(n)}x^j + \sum_{j=b_{t_1}(n)+1}^{\alpha(n)}x^j\right) = x\frac{d}{dx}\left(e^{\tilde s_1}x\,\frac{1-x^{b_{t_1}(n)}}{1-x} + x^{b_{t_1}(n)+1}\,\frac{1-x^{\alpha(n)-b_{t_1}(n)}}{1-x}\right).$$
We calculate the relevant terms separately and obtain
$$x\frac{d}{dx}\left(e^{\tilde s_1}\sum_{j=1}^{b_{t_1}(n)}x^j\right) = e^{\tilde s_1}\sum_{j=1}^{b_{t_1}(n)}x^j - e^{\tilde s_1}b_{t_1}(n)\frac{x^{b_{t_1}(n)+1}}{1-x} + e^{\tilde s_1}x^2\,\frac{1-x^{b_{t_1}(n)}}{(1-x)^2}$$
and
$$x\frac{d}{dx}\sum_{j=b_{t_1}(n)+1}^{\alpha(n)}x^j = (b_{t_1}(n)+1)\sum_{j=b_{t_1}(n)+1}^{\alpha(n)}x^j - (\alpha(n)-b_{t_1}(n))\frac{x^{\alpha(n)+1}}{1-x} + x^{b_{t_1}(n)+2}\,\frac{1-x^{\alpha(n)-b_{t_1}(n)}}{(1-x)^2}.$$
By Equations (2.7) and (4.10), as well as $\gamma(n)\to\infty$, it follows that
$$-e^{\tilde s_1}b_{t_1}(n)\frac{x^{b_{t_1}(n)+1}}{1-x} + (b_{t_1}(n)+1)\sum_{j=b_{t_1}(n)+1}^{\alpha(n)}x^j = n\,\alpha(n) + O\left(\frac{n\,\alpha(n)}{\log\frac{n}{\alpha(n)}}\right).$$
Also by Equations (2.7) and (4.10), we conclude that
$$(\alpha(n)-b_{t_1}(n))\frac{x^{\alpha(n)+1}}{1-x} = O\left(\frac{n\,\alpha(n)}{\log\frac{n}{\alpha(n)}}\right).$$
Equation (4.10) yields
$$e^{\tilde s_1}\sum_{j=1}^{b_{t_1}(n)}x^j = O(n).$$
By Lemma 4.5, we have
$$\frac{1}{1-x} \sim -\frac{\alpha(n)}{\log\frac{n}{\alpha(n)}}.$$
The claim then follows from applying Equation (4.13) and Lemma 4.1 to the remaining terms:
$$e^{\tilde s_1}x^2\,\frac{1-x^{b_{t_1}(n)}}{(1-x)^2} + x^{b_{t_1}(n)+2}\,\frac{1-x^{\alpha(n)-b_{t_1}(n)}}{(1-x)^2} = O\left(\frac{n\,\alpha(n)}{\left(\log\frac{n}{\alpha(n)}\right)^2}\right). \quad\square$$
Having proved that $q$ is admissible, Proposition 3.2 yields, for $\gamma(n)\to\infty$ and $t=(t_1,\ldots,t_m)^T$, that locally uniformly in $s=(s_1,\ldots,s_m)^T\in[0,\infty)^m$,
$$M_{n,\gamma}(s) := \mathbb{E}_{n,\alpha}\left[\exp\left(\sum_{i=1}^m\tilde s_i K_{b_{t_i}(n)}\right)\right] = \frac{1}{Z_{n,\alpha}}\frac{1}{\sqrt{2\pi n\,\alpha(n)}}\exp\left[h_n(s)\right](1+o(1)),$$
where $Z_{n,\alpha}$ is the normalizing constant as in (3.3) such that $M_{n,\gamma}(0)=1$, and
(4.14)
$$h_n(s) := h_{n,\alpha,t,\gamma}(s) := \sum_{i=0}^m e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}\frac{(x_{n,\alpha,t}(\tilde s))^j}{j} - n\log(x_{n,\alpha,t}(\tilde s)).$$
The next step is to extract more information by investigating the functions $h_n$. The proofs will rest on a Taylor expansion of $h_n$ about 0, so we need expressions and asymptotics for the derivatives of $h_n$. We will prove in Section 4.6, for $\gamma(n)\to\infty$:
(i) $s\mapsto h_n(s)$ is infinitely often differentiable,
(ii) $\partial_{\tilde s_i}h_n(0) = \sum_{j=1}^{b_{t_i}(n)}\frac{x_{n,\alpha}^j}{j} = t_i\frac{n}{\alpha(n)}(1+o(1))$,
(iii) $\partial_{\tilde s_{i_2}}\partial_{\tilde s_{i_1}}h_n(0) = t_{i_2}(1-t_{i_1})\frac{n}{\alpha(n)}(1+o(1))$ for $i_2\le i_1$,
(iv) $\partial_{\tilde s_{i_2}}\partial_{\tilde s_{i_1}}h_n(s) = O\left(\frac{n}{\alpha(n)}\right)$ locally uniformly in $s$,
(v) $\partial_{\tilde s_{i_3}}\partial_{\tilde s_{i_2}}\partial_{\tilde s_{i_1}}h_n(s) = O\left(\frac{n}{\alpha(n)}\right)$ locally uniformly in $s$.
Due to $M_{n,\gamma}(0)=1$, we therefore arrive at
(4.15)
$$M_{n,\gamma}(s) = \exp\left(\nabla h_n(0)\cdot\tilde s + O\left(\frac{n}{\alpha}|\tilde s|^2\right)\right)(1+o(1))$$
and
(4.16)
$$M_{n,\gamma}(s) = \exp\left(\nabla h_n(0)\cdot\tilde s + \frac{1}{2}\left\langle\tilde s,\,H h_n(0)\,\tilde s\right\rangle + O\left(\frac{n}{\alpha}|\tilde s|^3\right)\right)(1+o(1)),$$
which hold locally uniformly in $s$.
So, by Equation (4.15),
(4.17)
$$\lim_{n\to\infty}\mathbb{E}_{n,\alpha}\left[\exp\left(\sum_{i=1}^m s_i\frac{K_{b_{t_i}(n)}}{n/\alpha(n)}\right)\right] = \lim_{n\to\infty}M_{n,\frac{n}{\alpha(n)}}(s) = \exp\left(\sum_{i=1}^m s_i t_i\right),$$
and, by Equation (4.16),
(4.18)
$$\lim_{n\to\infty}\mathbb{E}_{n,\alpha}\left[\exp\left(\sum_{i=1}^m s_i\,\frac{K_{b_{t_i}(n)}-\sum_{j=1}^{b_{t_i}(n)}\frac{x_{n,\alpha}^j}{j}}{\sqrt{n/\alpha(n)}}\right)\right] = \lim_{n\to\infty}M_{n,\sqrt{n/\alpha(n)}}(s)\exp\left(-\nabla h_n(0)\cdot\frac{s}{\sqrt{n/\alpha(n)}}\right) = \exp\left(\frac{1}{2}s^T A(t)\,s\right),$$
where $A(t)=(A_{i_1,i_2})$ is symmetric with $A_{i_1,i_2}=t_{i_2}(1-t_{i_1})$ for $i_2\le i_1$. Note that $A(t)$ is the covariance matrix of the Brownian bridge. We can now give the

Proof of Theorem 2.5. We follow the arguments of the proof of Corollary 3.4 in [11]. Let $\epsilon>0$ and choose $0=t_0<t_1<\ldots<t_l=1$ such that $t_{j+1}-t_j<\frac{\epsilon}{2}$. Then, due to monotonicity,
$$\left|\frac{K_{b_t(n)}}{n/\alpha(n)}-t\right|>\epsilon$$
for some $t\in[0,1]$ implies the existence of an index $j$ such that
$$\left|\frac{K_{b_{t_j}(n)}}{n/\alpha(n)}-t_j\right|>\frac{\epsilon}{2}.$$
So
(4.19)
$$P_{n,\alpha}\left[\sup_{t\in[0,1]}\left|\frac{K_{b_t(n)}}{n/\alpha(n)}-t\right|>\epsilon\right] \le \sum_{j=1}^l P_{n,\alpha}\left[\left|\frac{K_{b_{t_j}(n)}}{n/\alpha(n)}-t_j\right|>\frac{\epsilon}{2}\right]$$
holds. By Equations (4.17) and (4.18), each summand in (4.19) converges to 0, and the claim is therefore proved. □

Equation (4.18) establishes the convergence of the finite-dimensional distributions of the fluctuations to those of the Brownian bridge. In order to show that, under $P_{n,\alpha}$, the fluctuations
$$(L_n(t))_{t\in[0,1]} = \left(\frac{K_{b_t(n)}-\sum_{j=1}^{b_t(n)}\frac{x_{n,\alpha}^j}{j}}{\sqrt{n/\alpha(n)}}\right)_{t\in[0,1]}$$
converge as a process to the Brownian bridge, we also have to prove tightness; then Theorem 2.6 is proved. We will apply the criterion that
(4.20)
$$\mathbb{E}_{n,\alpha}\left[|L_n(t)-L_n(t_1)|^2\,|L_n(t_2)-L_n(t)|^2\right] = O\left(|t_2-t_1|^2\right)$$
for $t_1\le t\le t_2$, which is an instance of [9, Equation (13.14)].
Proposition 4.7. The sequence of processes $(L_n(t))_{0\le t\le1}$ under $P_{n,\alpha}$ is tight in $D[0,1]$.

The proof of the proposition needs Lemma 4.8. Since some of its results will be needed in Section 4.6, we prove more than is strictly necessary in the present context.

Lemma 4.8. Let $0\le t_1<t_2\le1$. Then
(4.21)
$$\sum_{j=b_{t_1}(n)+1}^{b_{t_2}(n)}x_{n,\alpha}^j \sim (t_2-t_1)\,n$$
and
(4.22)
$$\sum_{j=b_{t_1}(n)+1}^{b_{t_2}(n)}\frac{x_{n,\alpha}^j}{j} \sim (t_2-t_1)\,\frac{n}{\alpha(n)}.$$

Proof. We start with Equation (4.21). Let $0<t<1$. We first prove
(4.23)
$$\sum_{j=1}^{b_t(n)}x_{n,\alpha}^j \sim t\,n.$$
Equation (4.21) then follows easily due to (2.2). Since $x_{n,\alpha}>1$, we have
$$\int_0^{b_t(n)}x_{n,\alpha}^v\,dv \le \sum_{j=1}^{b_t(n)}x_{n,\alpha}^j \le \int_1^{b_t(n)+1}x_{n,\alpha}^v\,dv.$$
By Lemma 4.1, we obtain
$$\int_1^{b_t(n)+1}x_{n,\alpha}^v\,dv \sim \int_0^{b_t(n)}x_{n,\alpha}^v\,dv.$$
It therefore remains to be shown that
$$\int_0^{b_t(n)}x_{n,\alpha}^v\,dv = \frac{(x_{n,\alpha})^{b_t(n)}-1}{\log(x_{n,\alpha})} \sim t\,n.$$
According to Lemma 4.1, both
$$(x_{n,\alpha})^{\alpha(n)} = \frac{n}{\alpha(n)}\log\frac{n}{\alpha(n)}\,(1+o(1))$$
and
$$\frac{1}{\log(x_{n,\alpha})} \sim \frac{\alpha(n)}{\log\left(\frac{n}{\alpha(n)}\log\frac{n}{\alpha(n)}\right)} \sim \frac{\alpha(n)}{\log\frac{n}{\alpha(n)}}$$
hold. Since
$\theta_0$ vanishes faster than any power of $1/n$. For $|\theta|\le\theta_0$, apply
$$e^{ij\theta}-1 = O(j\theta)$$
and $e^{ij\theta}=O(1)$. Then,
$$G_{n,t_1,t}\left(x_{n,\alpha}e^{i\theta}\right)G_{n,t,t_2}\left(x_{n,\alpha}e^{i\theta}\right) = O\left(\left(\theta^2\sum_{j=b_{t_1}(n)+1}^{b_t(n)}x_{n,\alpha}^j + \theta\sum_{j=b_{t_1}(n)+1}^{b_t(n)}\frac{x_{n,\alpha}^j}{j}\right)\left(\theta^2\sum_{j=b_t(n)+1}^{b_{t_2}(n)}x_{n,\alpha}^j + \theta\sum_{j=b_t(n)+1}^{b_{t_2}(n)}\frac{x_{n,\alpha}^j}{j}\right)\right).$$
Due to Equation (3.15), we have
$$\exp(g_n(\theta)) = O\left(\exp\left(-\frac{\lambda_{2,n}}{2}\theta^2\right)\right).$$
By substituting $v=\sqrt{\lambda_{2,n}}\,\theta$, we therefore obtain
$$\int_{-\theta_0}^{\theta_0}\theta^k\exp(g_n(\theta))\,d\theta = O\left(\frac{1}{\lambda_{2,n}^{\frac{k+1}{2}}}\right)$$
because of the moments of the normal distribution. By linearity of the integral, as well as the definition of $Z_{n,\alpha}$ and Lemmata 4.6 and 4.8, we conclude
$$\int_{-\theta_0}^{\theta_0}G_{n,t_1,t}\left(x_{n,\alpha}e^{i\theta}\right)G_{n,t,t_2}\left(x_{n,\alpha}e^{i\theta}\right)\exp(g_n(\theta))\,d\theta = O\left((t-t_1)^2+(t-t_1)(t_2-t)+(t_2-t)^2\right) = O\left((t_2-t_1)^2\right).$$
The last line holds due to $0\le t_1\le t\le t_2\le1$. The claim is proved. □
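The Gaussian moment bound $\int_{-\theta_0}^{\theta_0}\theta^k\exp(-\lambda\theta^2/2)\,d\theta = O(\lambda^{-(k+1)/2})$ invoked in this proof can be sanity-checked numerically; the sketch below (illustration only, with arbitrarily chosen parameters and $k=2$) compares the truncated integral with the closed form of the full Gaussian moment.

```python
# Numerical check (illustration only): for k = 2 the full Gaussian integral
# equals sqrt(2*pi) * lam^{-3/2}, and the truncation to [-theta0, theta0]
# changes it only negligibly, so the integral is O(lam^{-(k+1)/2}).
import math

def truncated_moment(k, lam, theta0, steps=200_000):
    # midpoint rule for the integral of theta^k * exp(-lam*theta^2/2)
    h = 2.0 * theta0 / steps
    total = 0.0
    for i in range(steps):
        theta = -theta0 + (i + 0.5) * h
        total += theta**k * math.exp(-lam * theta**2 / 2.0) * h
    return total

k = 2
for lam in (1e2, 1e4, 1e6):
    val = truncated_moment(k, lam, theta0=1.0)
    exact = math.sqrt(2.0 * math.pi) * lam**(-(k + 1) / 2.0)
    print(lam, round(val / exact, 4))   # ratio close to 1 for each lam
```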
Having proved Theorem 2.5, we may now also deduce the existence of the limit shape for indices.

Proof of Equation (2.9). By following the same steps as in the proof of Theorem 2.5, we arrive at an analogue of Equation (4.19):
$$P_{n,\alpha}\left[\sup_{t\in[0,1]}\left|\frac{S_{b_t(n)}}{n}-t\right|>\epsilon\right] \le \sum_{j=1}^l P_{n,\alpha}\left[\left|\frac{S_{b_{t_j}(n)}}{n}-t_j\right|>\frac{\epsilon}{2}\right].$$
Due to $S_{b_0(n)}=0$ and $S_{b_1(n)}=n$, we only have to consider $j$ with $0<t_j<1$. Let $K_{b_t(n),\alpha(n)} := \sum_{j=b_t(n)+1}^{\alpha(n)}C_j$ and $S_{b_t(n),\alpha(n)} := \sum_{j=b_t(n)+1}^{\alpha(n)}jC_j$. Then, for $t\in(0,1)$, we have
$$\left|\frac{S_{b_t(n)}}{n}-t\right| = \left|1-t-\frac{S_{b_t(n),\alpha(n)}}{n}\right| = \left|1-t-\frac{K_{b_t(n),\alpha(n)}}{n/\alpha(n)}\cdot\delta_n\right|$$
$$\le \left|\frac{K_{b_t(n)}}{n/\alpha(n)}-t\right| + \left|1-\frac{K_{\alpha(n)}}{n/\alpha(n)}\right| + \frac{K_{b_t(n),\alpha(n)}}{n/\alpha(n)}\cdot|\delta_n-1|$$
(4.30)
$$\le \left|\frac{K_{b_t(n)}}{n/\alpha(n)}-t\right| + \left|1-\frac{K_{\alpha(n)}}{n/\alpha(n)}\right| + \frac{K_{\alpha(n)}}{n/\alpha(n)}\cdot|\delta_n-1|,$$
where $\delta_n$ is a random variable satisfying
$$\frac{b_t(n)}{\alpha(n)} \le \delta_n \le 1.$$
From $b_t(n)\sim\alpha(n)$, by Equation (2.7), we deduce that $\delta_n\to1$ uniformly. Therefore, by Theorem 2.5, the individual terms in Equation (4.30) converge to 0 in probability. □

4.6. Properties of $h_n$. This section provides the proofs of the five properties of $h_n$ and its derivatives stated in Section 4.5. Let $\tilde s := \frac{s}{\gamma(n)}$ with $\gamma(n)\to\infty$ and $t=(t_1,\ldots,t_m)^T$ for $m\in\mathbb{N}$ throughout this section. Recall that
$$h_n(s) = \sum_{i=0}^m e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}\frac{(x_{n,\alpha,t}(\tilde s))^j}{j} - n\log(x_{n,\alpha,t}(\tilde s)).$$
Property (i), which states that $h_n$ is infinitely often differentiable, can thus be deduced from the differentiability of the saddle point.

Lemma 4.9. Under the assumptions of Lemma 4.5, $x_{n,\alpha,t}$ is infinitely often differentiable with respect to $\tilde s$.

Proof. Consider the function
$$F(\tilde s,x) := \sum_{i=0}^m e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}x^j - n$$
for positive $x$, which is motivated by Equation (4.10). Then $F$ is infinitely often differentiable in both $\tilde s$ and $x$, and $\partial_x F$ is invertible for all $x>0$. The claim now follows from the implicit function theorem. □

The derivatives of $h_n$ can now be calculated. Fix $i_3\le i_2\le i_1$ and let $x_n(\tilde s) := x_{n,\alpha,t}(\tilde s)$. We obtain
(4.31)
$$\partial_{\tilde s_{i_1}}h_n(s) = \sum_{i=0}^{i_1-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}\frac{(x_n(\tilde s))^j}{j},$$
(4.32)
$$\partial_{\tilde s_{i_2}}\partial_{\tilde s_{i_1}}h_n(s) = \frac{\partial_{\tilde s_{i_2}}x_n(\tilde s)}{x_n(\tilde s)}\sum_{i=0}^{i_1-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}(x_n(\tilde s))^j + \sum_{i=0}^{i_2-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}\frac{(x_n(\tilde s))^j}{j},$$
and
(4.33)
$$\partial_{\tilde s_{i_3}}\partial_{\tilde s_{i_2}}\partial_{\tilde s_{i_1}}h_n(s) = \left(\frac{\partial_{\tilde s_{i_3}}\partial_{\tilde s_{i_2}}x_n(\tilde s)}{x_n(\tilde s)} - \frac{\partial_{\tilde s_{i_3}}x_n(\tilde s)}{x_n(\tilde s)}\,\frac{\partial_{\tilde s_{i_2}}x_n(\tilde s)}{x_n(\tilde s)}\right)\sum_{i=0}^{i_1-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}(x_n(\tilde s))^j$$
$$+ \frac{\partial_{\tilde s_{i_2}}x_n(\tilde s)}{x_n(\tilde s)}\sum_{i=0}^{i_3-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}(x_n(\tilde s))^j + \frac{\partial_{\tilde s_{i_3}}x_n(\tilde s)}{x_n(\tilde s)}\,\frac{\partial_{\tilde s_{i_2}}x_n(\tilde s)}{x_n(\tilde s)}\sum_{i=0}^{i_1-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}j\,(x_n(\tilde s))^j$$
$$+ \frac{\partial_{\tilde s_{i_3}}x_n(\tilde s)}{x_n(\tilde s)}\sum_{i=0}^{i_2-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}(x_n(\tilde s))^j + \sum_{i=0}^{i_3-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}\frac{(x_n(\tilde s))^j}{j}.$$
In order to prove properties (ii) to (v), we need to understand the derivatives of the saddle point.

Lemma 4.10. Let $\tilde s := \frac{s}{\gamma(n)}$ with $\gamma(n)\to\infty$, and let $x_n(\tilde s)=x_{n,\alpha,t}(\tilde s)$ be the unique positive solution of Equation (4.10). Fix $i_2\le i_1$. Then
$$\frac{\partial_{\tilde s_{i_1}}x_n(\tilde s)}{x_n(\tilde s)} = -\frac{\sum_{i=0}^{i_1-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}(x_n(\tilde s))^j}{\lambda_{2,n}}.$$
Moreover,
$$\frac{\partial_{\tilde s_{i_1}}x_n(\tilde s)}{x_n(\tilde s)} = O\left(\frac{1}{\alpha(n)}\right)$$
and
$$\frac{\partial_{\tilde s_{i_2}}\partial_{\tilde s_{i_1}}x_n(\tilde s)}{x_n(\tilde s)} - \frac{\partial_{\tilde s_{i_2}}x_n(\tilde s)}{x_n(\tilde s)}\,\frac{\partial_{\tilde s_{i_1}}x_n(\tilde s)}{x_n(\tilde s)} = O\left(\frac{1}{\alpha(n)}\right)$$
locally uniformly in $s$.
Proof. Differentiating Equation (4.10) with respect to $\tilde s_{i_1}$ yields
$$0 = \sum_{i=0}^{i_1-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}(x_n(\tilde s))^j + \frac{\partial_{\tilde s_{i_1}}x_n(\tilde s)}{x_n(\tilde s)}\sum_{i=0}^m e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}j\,(x_n(\tilde s))^j.$$
So
$$\frac{\partial_{\tilde s_{i_1}}x_n(\tilde s)}{x_n(\tilde s)} = -\frac{\sum_{i=0}^{i_1-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}(x_n(\tilde s))^j}{\sum_{i=0}^m e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}j\,(x_n(\tilde s))^j} = O\left(\frac{1}{\alpha(n)}\right)$$
by Equation (4.10) and Lemma 4.6. W.l.o.g., let $i_2\le i_1$. Differentiating once more, now with respect to $\tilde s_{i_2}$, we obtain
$$\frac{\partial_{\tilde s_{i_2}}\partial_{\tilde s_{i_1}}x_n(\tilde s)}{x_n(\tilde s)} - \frac{\partial_{\tilde s_{i_2}}x_n(\tilde s)}{x_n(\tilde s)}\,\frac{\partial_{\tilde s_{i_1}}x_n(\tilde s)}{x_n(\tilde s)} = -\frac{\sum_{i=0}^{i_2-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}(x_n(\tilde s))^j}{\sum_{i=0}^m e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}j\,(x_n(\tilde s))^j}$$
$$- \frac{\partial_{\tilde s_{i_2}}x_n(\tilde s)}{x_n(\tilde s)}\cdot\frac{\sum_{i=0}^{i_1-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}j\,(x_n(\tilde s))^j}{\sum_{i=0}^m e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}j\,(x_n(\tilde s))^j}$$
$$+ \frac{\left(\sum_{i=0}^{i_1-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}(x_n(\tilde s))^j\right)\left(\sum_{i=0}^{i_2-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}j\,(x_n(\tilde s))^j\right)}{\left(\sum_{i=0}^m e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}j\,(x_n(\tilde s))^j\right)^2}$$
$$+ \frac{\left(\sum_{i=0}^{i_1-1}e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}(x_n(\tilde s))^j\right)\frac{\partial_{\tilde s_{i_2}}x_n(\tilde s)}{x_n(\tilde s)}\sum_{i=0}^m e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}j^2\,(x_n(\tilde s))^j}{\left(\sum_{i=0}^m e^{\sum_{l=i+1}^m\tilde s_l}\sum_{j=b_{t_i}(n)+1}^{b_{t_{i+1}}(n)}j\,(x_n(\tilde s))^j\right)^2}.$$
Applying Lemma 4.6, Equation (3.8), and the first result to each term, we conclude that
$$\frac{\partial_{s_{i_2}}\partial_{s_{i_1}}x_n(s)}{x_n(s)} - \frac{\partial_{s_{i_2}}x_n(s)}{x_n(s)}\,\frac{\partial_{s_{i_1}}x_n(s)}{x_n(s)} = O\left(\frac{1}{\alpha(n)}\right)$$
locally uniformly in $s$. □
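The first formula of Lemma 4.10 can be sanity-checked by a finite-difference computation; the sketch below (illustration only, with $m=1$ and arbitrarily chosen $n$, $\alpha$, and $b_{t_1}(n)$) solves the saddle point equation numerically and compares the derivative formula with a central difference.

```python
# Finite-difference check of the saddle point derivative in Lemma 4.10 for
# m = 1. The parameters n, alpha, b are arbitrary small experiment values.
import math

n, alpha, b = 10_000, 50, 20   # b plays the role of b_{t_1}(n)

def solve_x(s):
    """Unique positive solution of n = e^s * sum_{j<=b} x^j + sum_{b<j<=alpha} x^j."""
    f = lambda x: (math.exp(s) * sum(x**j for j in range(1, b + 1))
                   + sum(x**j for j in range(b + 1, alpha + 1)) - n)
    lo, hi = 1.0, 1.5
    while hi - lo > 1e-14:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

s = 0.1
x = solve_x(s)
num = math.exp(s) * sum(x**j for j in range(1, b + 1))
lam2 = (math.exp(s) * sum(j * x**j for j in range(1, b + 1))
        + sum(j * x**j for j in range(b + 1, alpha + 1)))
formula = -num / lam2                      # (d/ds x_n(s)) / x_n(s) by Lemma 4.10
h = 1e-6
fd = (math.log(solve_x(s + h)) - math.log(solve_x(s - h))) / (2 * h)
print(abs(fd - formula) < 1e-6)            # central difference matches
```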
Property (ii) is now a direct consequence of Equation (4.31) and Lemma 4.8; (iii) and (iv) follow from Equation (4.32) and Lemmata 4.10 and 4.8. Property (v) can easily be deduced from Equation (4.33) and Lemmata 4.10 and 4.8.

Acknowledgements. H.S. is supported by a PhD scholarship from Deutsche Telekom Stiftung.
References
[1] R. Arratia, A. D. Barbour, and S. Tavaré. Poisson process approximations for the Ewens sampling formula. Ann. Appl. Probab., 2(3):519–535, 1992.
[2] R. Arratia, A. D. Barbour, and S. Tavaré. Logarithmic combinatorial structures: a probabilistic approach. EMS Monographs in Mathematics. European Mathematical Society (EMS), Zürich, 2003.
[3] R. Arratia and S. Tavaré. The cycle structure of random permutations. The Annals of Probability, pages 1567–1591, 1992.
[4] R. Arratia and S. Tavaré. Limit theorems for combinatorial structures via discrete process approximations. Random Structures Algorithms, 3(3):321–345, 1992.
[5] A. D. Barbour. [Poisson approximation and the Chen-Stein method]: Comment. Statistical Science, 5(4):425–427, 1990.
[6] V. Betz. Random permutations of a regular lattice. Journal of Statistical Physics, 155(6):1222–1248, 2014.
[7] V. Betz and H. Schäfer. The number of cycles in random permutations without long cycles is asymptotically Gaussian. ALEA, 14:427–444, 2017.
[8] V. Betz, D. Ueltschi, and Y. Velenik. Random permutations with cycle weights. The Annals of Applied Probability, 21(1):312–331, 2011.
[9] P. Billingsley. Convergence of Probability Measures. Wiley, 1999.
[10] L. V. Bogachev and D. Zeindler. Asymptotic statistics of cycles in surrogate-spatial permutations. Communications in Mathematical Physics, 334(1):39–116, 2015.
[11] A. Cipriani and D. Zeindler. The limit shape of random permutations with polynomially growing cycle weights. ALEA Lat. Am. J. Probab. Math. Stat., 12(2):971–999, 2015.
[12] J. Delaurentis and B. Pittel. Random permutations and Brownian motion. Pacific Journal of Mathematics, 119(2):287–301, 1985.
[13] N. M. Ercolani and D. Ueltschi. Cycle structure of random permutations with cycle weights. Random Structures Algorithms, 44(1):109–133, 2014.
[14] W. J. Ewens. The sampling theory of selectively neutral alleles. Theoret. Population Biol., 3:87–112, 1972.
[15] P. Flajolet and R. Sedgewick. Analytic Combinatorics. Cambridge University Press, 2009.
[16] J. F. C. Kingman. The population structure associated with the Ewens sampling formula. Theoret. Population Biology, 11(2):274–283, 1977.
[17] E. Kowalski and A. Nikeghbali. Mod-Poisson convergence in probability and number theory. Int. Math. Res. Not. IMRN, 2010(18):3549–3587, 2010.
[18] E. Manstavičius and R. Petuchovas. Local probabilities for random permutations without long cycles. Electron. J. Combin., 23(1):Paper 1.58, 25, 2016.
[19] G. Pólya. Kombinatorische Anzahlbestimmungen für Gruppen, Graphen und chemische Verbindungen. Acta Mathematica, 68:145–254, 1937.
[20] A. A. Shmidt and A. M. Vershik. Limit measures arising in the asymptotic theory of symmetric groups. Theory Probab. Appl., 22(1):70–85, 1977.
[21] A. L. Yakymiv. Random A-permutations: convergence to a Poisson process. Mathematical Notes, 81(5-6):840–846, 2007.
[22] A. L. Yakymiv. Limit theorem for the general number of cycles in a random A-permutation. Theory of Probability & Its Applications, 52(1):133–146, 2008.
[23] A. L. Yakymiv. A generalization of the Curtiss theorem for moment generating functions. Mathematical Notes, 90(5):920–924, 2011.