arXiv:1710.08160v1 [math.PR] 23 Oct 2017
BROWN MEASURE AND ASYMPTOTIC FREENESS OF ELLIPTIC AND RELATED MATRICES

KARTICK ADHIKARI AND ARUP BOSE
Abstract. We show that independent elliptic matrices converge to freely independent elliptic elements. Moreover, the elliptic matrices are asymptotically free with deterministic matrices under appropriate conditions. We compute the Brown measure of the product of elliptic elements. It turns out that this Brown measure is the same as the limiting spectral distribution.
1. Introduction

Asymptotic ∗-freeness (in short, freeness) of a sequence of random matrices, as the dimension increases, was introduced by Voiculescu [21]. It is a central object of study in free probability and has since been studied extensively; see [6], [12], [18], [20], [22] and others. In particular, it is known that under suitable assumptions, independent standard unitary matrices and deterministic matrices are asymptotically free, and so are independent Wigner and deterministic matrices. Further, let Y_{p×n} be a p × n random rectangular matrix whose entries are i.i.d. Gaussian random variables with mean zero and variance one. It is known (see Theorem 5.2 in [5]) that independent copies of (1/n) Y_{p×n} Y′_{n×p} are also asymptotically free.

A generalisation of the Wigner matrix that has attracted recent attention is the elliptic matrix, where the entries are as in a Wigner matrix except that the (i, j)th and (j, i)th entries have a correlation which is the same across all pairs. This is clearly a non-symmetric matrix. One may also allow some specific pattern of correlation instead of a constant correlation; we call the latter a generalised elliptic matrix. It is known that under suitable conditions the limiting spectral distribution (LSD) of the elliptic matrix is the uniform distribution on an ellipse whose axes depend on the value of the correlation. See [8], [10], [15].

We first show that under suitable conditions, the sequence of generalised elliptic matrices converges in ∗-distribution (see Theorem 1). In particular, any sequence of elliptic matrices converges to an elliptic element. Next we show that under appropriate conditions, independent elliptic matrices, with possibly different correlation values, converge jointly in ∗-distribution and are asymptotically free. They are also asymptotically free of an appropriate collection of deterministic matrices. See Theorem 2.
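For readers who want a quick numerical picture of the elliptic law mentioned above, here is a small simulation sketch (not part of the paper's argument; the matrix size, correlation value and construction are our choices). It samples one n × n elliptic matrix with correlation ρ and checks that almost all eigenvalues of A/√n lie in the ellipse with semi-axes 1 + ρ and 1 − ρ.

```python
import numpy as np

# Sample an n x n elliptic matrix: entries standard Gaussian, with
# E[a_ij a_ji] = rho for i != j, and check the elliptic law empirically.
rng = np.random.default_rng(0)
n, rho = 500, 0.5

G1 = rng.standard_normal((n, n))
G2 = rng.standard_normal((n, n))
A = np.triu(G1)                                              # a_ij for i <= j
A += np.tril(rho * G1.T + np.sqrt(1 - rho**2) * G2.T, -1)    # correlated a_ji

ev = np.linalg.eigvals(A / np.sqrt(n))
# Ellipse with semi-axes 1 + rho (real) and 1 - rho (imaginary),
# slightly enlarged to absorb finite-n fluctuations.
inside = (ev.real / (1 + rho))**2 + (ev.imag / (1 - rho))**2 <= 1.05
print(f"fraction of eigenvalues inside the ellipse: {inside.mean():.3f}")
```

For i < j the pair (a_ij, a_ji) = (g₁, ρg₁ + √(1−ρ²)g₂) has the required correlation ρ and unit variances.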
The joint convergence should remain true for generalised elliptic matrices, but freeness is not expected to remain valid. To keep things simple we have not pursued these ideas.

Date: October 24, 2017. The work is partially supported by National Post-Doctoral Fellowship, India, with reference no. PDF/2016/001601, and also supported by J. C. Bose National Fellowship, Department of Science and Technology, Government of India.
Now consider the empirical spectral distribution (ESD) of (1/n) Y_{p×n} Y′_{n×p} when p/n → y ≠ 0. When the entries of Y_{p×n} are i.i.d. random variables with mean zero, variance one and all moments finite, this converges to the Marčenko–Pastur law almost surely. Other variations under weaker assumptions are also known; see [1], [14], [23] and [24]. Now suppose Y_{p×n} is elliptic. We show that the expected ESD still converges to the Marčenko–Pastur law. See Theorem 3. Again, for simplicity, we have not pursued the almost sure convergence of the ESD. We also show that independent copies of (1/n) Y_{p×n} Y′_{n×p} converge jointly in ∗-distribution and are asymptotically free. See Theorem 4.

The Brown measure of any element of a non-commutative probability space was introduced by Brown [4]. There has been a lot of work in the past decade on the connection between the limiting spectral distribution (LSD) of a sequence of random matrices and the Brown measure of the ∗-distribution limit of the sequence. Often they are not equal; a very simple example is given in [19]. However, often they are equal. For example, the i.i.d. matrix converges in ∗-distribution to the circular element and its LSD is the uniform distribution on the unit disc, which is indeed the Brown measure of the circular element; see [11]. The LSD of the elliptic matrix is the uniform probability measure on an ellipse. At the same time, the Brown measure of an elliptic element is also the uniform probability measure on an ellipse (see [2], [13]). Similarly, in [9] it has been shown that the LSD of bi-unitarily invariant random matrices is the Brown measure of the ∗-distribution limit. The LSD of a product of elliptic matrices has been calculated in [17]. On the other hand, from Theorem 2, we know that this product converges in ∗-distribution to a product of free elliptic elements. We calculate the Brown measure of such a product and show that it is the same as the LSD. See Theorem 5.
We introduce the basic definitions and facts in Section 2 and state our results in Section 3. In Sections 4 and 5 we give the proofs of Theorems 1 and 2 respectively. The proofs of Theorems 3 and 4 are given in Section 6, and the proof of Theorem 5 is presented in Section 7.
2. Preliminaries

We first recall some basic definitions and facts from free probability theory. A self-adjoint element s in a non-commutative probability space (NCP) (A, ϕ) is said to be a (standard) semi-circular element if

\[ \varphi(s^p) = \frac{1}{2\pi} \int_{-2}^{2} t^p \sqrt{4 - t^2}\, dt, \quad \text{for } p \in \mathbb{N}. \]

These are the moments of the probability density \(\frac{1}{2\pi}\sqrt{4 - t^2}\) on the interval [−2, 2] (see [16], p. 29). Elements (called random variables) a₁, a₂, …, aₙ ∈ A in an NCP (A, ϕ) are said to be free if for any polynomials p₁, p₂, …, p_k we have ϕ(p₁(a_{i(1)}) ⋯ p_k(a_{i(k)})) = 0 whenever ϕ(p_j(a_{i(j)})) = 0 for 1 ≤ j ≤ k and i(1) ≠ i(2) ≠ ⋯ ≠ i(k). An element c ∈ A is called circular if c = (1/√2)s₁ + (i/√2)s₂, where s₁ and s₂ are two free semi-circular elements. An element e ∈ A is called elliptic with parameter ρ
if \(e = \sqrt{\frac{1+\rho}{2}}\, s_1 + i \sqrt{\frac{1-\rho}{2}}\, s_2\), where s₁ and s₂ are free semi-circular elements. Note that ρ = 1 and ρ = 0 yield respectively the semi-circular and the circular element.

Let (A_n, ϕ_n)_{n≥1} be a sequence of NCPs, and let (a_i^{(n)})_{i∈I} be a collection of random variables from A_n which converges in ∗-distribution to some (a_i)_{i∈I} in (A, ϕ). Then (a_i^{(n)})_{i∈I} are said to be asymptotically free if (a_i)_{i∈I} are free.

Let A_n be the algebra of n × n random matrices whose entries have all moments finite. It is equipped with the tracial state ϕ_n(x) = (1/n) E Tr(x) for x ∈ A_n. A sequence of random matrices (A_n) from A_n converges in ∗-distribution to an element a ∈ A if for every choice of ε₁, ε₂, …, ε_k ∈ {1, ∗} we have

\[ \lim_{n\to\infty} \varphi_n(A_n^{\epsilon_1} \cdots A_n^{\epsilon_k}) = \varphi(a^{\epsilon_1} \cdots a^{\epsilon_k}). \]

Then we write \(A_n \xrightarrow{*\text{-dist}} a\). If A_n is in addition Hermitian, then the above condition is the same as saying that lim ϕ_n(A_n^k) = ϕ(a^k) exists for all non-negative integers k; then we say A_n converges to a in the distribution sense. The joint convergence of several sequences is expressed in an analogous manner.

There are other related notions of convergence of random matrices. Let A_n be an n × n random matrix with eigenvalues λ₁, …, λₙ. Then (1/n) Σ_{k=1}^{n} δ_{λ_k}, where δ_x denotes the Dirac delta measure at x, is the empirical spectral measure of A_n. Equivalently, the empirical spectral distribution (ESD) is given by
\[ F^{A_n}(x, y) = \frac{|\{k : \Re(\lambda_k) \le x,\ \Im(\lambda_k) \le y\}|}{n}, \qquad \text{for } x, y \in \mathbb{R}, \]
where | · | denotes cardinality and ℜ and ℑ denote the real and imaginary parts. Clearly F^{A_n} is a random distribution function. If, as n → ∞, it converges (almost surely) to a non-random distribution function F_∞ weakly, then F_∞ is said to be the almost sure limiting spectral distribution (LSD) of A_n. Often it is easier to show the convergence of the non-random expected ESD function E[F^{A_n}(x, y)]. This limit is also called the LSD, and it coincides with the earlier limit if both exist. If the sequence of matrices is Hermitian, then its convergence in the distribution sense yields the candidate LSD, whose moments are ϕ(a^k).

We now state a few well-known facts, which will be used in the proofs of our results. We need the following notation:

P₂(2k) := the set of all pair partitions of {1, 2, …, 2k},
NC₂(2k) := the set of non-crossing pair partitions of {1, 2, …, 2k},
γ := the cyclic permutation with one cycle, i.e., γ(i) = i + 1 for i = 1, …, 2k − 1 and γ(2k) = 1,
|A| := the cardinality of the set A,
γπ(r) := γ(π(r)) for π ∈ P₂(2k), where π(r) = s and π(s) = r if (r, s) ∈ π,
I_k := {(i₁, …, i_k) ∈ ℕ^k : 1 ≤ i₁, …, i_k ≤ n}.
Fact 1 (Moment-free cumulant formula). Let a₁, a₂, …, aₙ ∈ (A, ϕ). Then

\[ \varphi(a_1 a_2 \cdots a_n) = \sum_{\pi \in NC(n)} \prod_{V \in \pi} \kappa(V)[a_1, a_2, \ldots, a_n], \]
where NC(n) denotes the set of all non-crossing partitions of {1, …, n} and κ(V) denotes the usual multiplicative extension of the free cumulant function. See [16] for details on the definition of free cumulants and its multiplicative extension.

Fact 2 (Wick's formula). Let G₁, G₂, …, G_j be jointly Gaussian random variables with E[G_i] = 0 for i = 1, 2, …, j. Then

\[ E[G_1 G_2 \cdots G_j] = \sum_{\pi \in P_2(j)} \prod_{(r,s) \in \pi} E[G_r G_s]. \]
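Wick's formula is easy to check by simulation. The following sketch (our own illustration; the covariance matrix is an arbitrary choice) verifies the case j = 4, where the three pair partitions of {1, 2, 3, 4} give E[G₁G₂G₃G₄] = C₁₂C₃₄ + C₁₃C₂₄ + C₁₄C₂₃.

```python
import numpy as np

# Monte Carlo check of Wick's formula for j = 4 jointly Gaussian variables.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
C = B @ B.T / 4                      # an arbitrary 4 x 4 covariance matrix
G = rng.multivariate_normal(np.zeros(4), C, size=500_000)

mc = (G[:, 0] * G[:, 1] * G[:, 2] * G[:, 3]).mean()   # empirical moment
wick = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]
print(mc, wick)                      # the two values should be close
```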
Fact 3. Let π ∈ P₂(2k) and let |γπ| denote the number of blocks (cycles) of γπ. Then |γπ| ≤ k + 1, and equality holds if and only if π ∈ NC₂(2k).

For the proofs of the above three facts, refer to [16], pages 176, 360 and 367 respectively. To illustrate Fact 3, consider {1, …, 6}, so that k = 3. Let π = (13)(24)(56), a crossing pair partition. Then γπ = (14325)(6), using the convention that if (r, s) ∈ π then r = π(s) and s = π(r), so |γπ| = 2. On the other hand, if π = (12)(34)(56), which is non-crossing, then γπ = (135)(2)(4)(6) and |γπ| = 4.

Now we introduce the Brown measure. Let (A, ϕ) be an NCP. The Fuglede–Kadison determinant (see [7]) Δ(a) of a ∈ A is defined by

\[ \Delta(a) = \exp\Big[\frac{1}{2}\varphi(\log(a a^*))\Big]. \]

If a is not invertible, then Δ(a) := lim_{ε→0} Δ_ε(a), where Δ_ε(a) denotes the regularized Fuglede–Kadison determinant Δ_ε(a) = exp[½ ϕ(log(aa^* + ε²))] for ε > 0. The Brown measure of a ∈ A is defined by [4]

\[ \mu_a = \frac{1}{2\pi}\Big(\frac{\partial^2}{\partial(\Re\lambda)^2} + \frac{\partial^2}{\partial(\Im\lambda)^2}\Big) \log \Delta(a - \lambda) = \frac{1}{\pi}\,\frac{\partial^2}{\partial\lambda\,\partial\bar\lambda} \log \Delta(a - \lambda). \]

One can show that μ_a is in fact a probability measure on ℂ. Consider any n × n matrix A_n. Then

\[ \Delta(A_n) = |\det A_n|^{1/n} \quad \text{and} \quad \mu_{A_n} = \frac{1}{n}\sum_{i=1}^{n} \delta_{\lambda_i} \]
respectively, where λ₁, …, λₙ are the eigenvalues of A_n. So the Brown measure of a matrix is its ESD. See [19] for details. However, even when the ESD converges, there is no guarantee that the limit is the Brown measure of the limit element.

3. Main results

The following assumption is basic to us.
Assumption 1. The variables {a_ii : 1 ≤ i} ∪ {(a_ij, a_ji) : 1 ≤ i < j} form a collection of independent random variables which satisfy E[a_ij] = 0 and E[a_ij²] = 1 for all i, j, and have uniformly bounded moments of all orders. Moreover,
(i) (Elliptic matrix) E[a_ij a_ji] = ρ for 1 ≤ i ≠ j ≤ n;
(ii) (Generalised elliptic matrix) E[a_ij a_ji] = ρ_{|i−j|} for 1 ≤ i ≠ j ≤ n.

Note that (i) is a particular case of (ii). Any rectangular matrix X_{p×n} is defined to be elliptic or generalised elliptic by adapting the above definition in the obvious way. In the literature an elliptic matrix is usually assumed to also satisfy the condition that {a_ii : 1 ≤ i} and {(a_ij, a_ji) : 1 ≤ i < j} are collections of i.i.d. random variables. The first result is on the ∗-distribution limit for generalised elliptic matrices.
Theorem 1. Suppose A_n is a sequence of generalised elliptic matrices whose entries satisfy Assumption 1(ii). Suppose further that {ρ_i} satisfies: for all ε₁, …, ε_{2k} ∈ {1, ∗} and for all π ∈ NC₂(2k),

\[ \lim_{n\to\infty} \frac{1}{n^{k+1}} \sum_{I_{2k}} \prod_{(r,s)\in\pi} \rho_{|i_r - i_s|}^{\delta_{\epsilon_r \epsilon_s}}\, \delta_{i_r i_{s+1}}\, \delta_{i_s i_{r+1}} < \infty. \tag{1} \]

Then A_n/√n converges in ∗-distribution. Moreover, if ρ_i ≡ ρ for all i, then (1) holds and the limit is elliptic with parameter ρ.

Condition (1) also holds if ρ_{|i−j|} = f(|i−j|/n), where f, with 0 ≤ f ≤ 1, is a bounded continuous function on [0, 1]. We elaborate on this in Section 4.

The next result gives asymptotic freeness of independent elliptic matrices with deterministic matrices.

Theorem 2. Let A_n^{(1)}, A_n^{(2)}, …, A_n^{(m)} be m independent elliptic random matrices (with possibly different correlations) whose entries satisfy Assumption 1(i). Then

\[ A_n^{(1)}, \ldots, A_n^{(m)} \xrightarrow{*\text{-dist}} e_1, \ldots, e_m, \]

where e₁, …, e_m are free and elliptic. In addition, suppose that A_n^{(1)}, A_n^{(2)}, …, A_n^{(m)} are Gaussian, and D_n^{(1)}, D_n^{(2)}, …, D_n^{(ℓ)} are constant matrices such that

\[ D_n^{(1)}, \ldots, D_n^{(\ell)} \xrightarrow{*\text{-dist}} d_1, \ldots, d_\ell, \quad \text{as } n \to \infty, \]

for some d₁, …, d_ℓ ∈ (A, ϕ). Then

\[ A_n^{(1)}, \ldots, A_n^{(m)}, D_n^{(1)}, \ldots, D_n^{(\ell)} \xrightarrow{*\text{-dist}} e_1, \ldots, e_m, d_1, \ldots, d_\ell, \]

where e₁, …, e_m, {d₁, …, d_ℓ} are free.
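A numerical illustration of the freeness in Theorem 2 (our own sketch; sizes and correlations are arbitrary choices): for free centered elements, the alternating mixed moment ϕ(e₁e₂e₁e₂) vanishes by the definition of freeness, so for two independent elliptic matrices the trace moment below should be near zero.

```python
import numpy as np

# phi_n(A1 A2 A1 A2) for independent elliptic matrices should vanish
# as n grows, matching phi(e1 e2 e1 e2) = 0 for free elliptic elements.
rng = np.random.default_rng(2)

def elliptic(n, rho, rng):
    """One n x n elliptic matrix, already scaled by 1/sqrt(n)."""
    G1 = rng.standard_normal((n, n))
    G2 = rng.standard_normal((n, n))
    A = np.triu(G1)
    A += np.tril(rho * G1.T + np.sqrt(1 - rho**2) * G2.T, -1)
    return A / np.sqrt(n)

n = 400
A1, A2 = elliptic(n, 0.3, rng), elliptic(n, 0.7, rng)
mixed = np.trace(A1 @ A2 @ A1 @ A2) / n
print(mixed)            # should be small (the limit is 0)
```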
Now we move to rectangular random matrices.

Theorem 3. Suppose X_{p×n} is an elliptic rectangular random matrix whose entries satisfy Assumption 1(ii). If p/n → y > 0 as p → ∞, then X̄_p := (1/n) X_{p×n} X′_{n×p} converges to a free Poisson element of rate y. Its expected ESD converges to the corresponding Marčenko–Pastur law with parameter y.

Now we claim that independent matrices of the form X̄_p are asymptotically free.

Theorem 4. Let X̄_p^{(1)}, …, X̄_p^{(m)} be independent and as described in Theorem 3, with possibly different correlation parameters. If p/n → y > 0 as p → ∞, then X̄_p^{(1)}, …, X̄_p^{(m)} are asymptotically free.
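The content of Theorem 3 can be glimpsed numerically: even with correlated (i, j)/(j, i) pairs, the first two spectral moments of (1/n) X X′ match the Marčenko–Pastur law with parameter y, namely m₁ = 1 and m₂ = 1 + y. This is a sketch with our own parameter choices (and a particular way of building a rectangular elliptic matrix from a square one), not the paper's construction.

```python
import numpy as np

# Build an elliptic p x n matrix (p <= n) by truncating an n x n elliptic
# matrix, then compare moments of W = X X'/n with Marchenko-Pastur moments.
rng = np.random.default_rng(3)
p, n, rho = 300, 600, 0.5
y = p / n

G1 = rng.standard_normal((n, n))
G2 = rng.standard_normal((n, n))
S = np.triu(G1) + np.tril(rho * G1.T + np.sqrt(1 - rho**2) * G2.T, -1)
X = S[:p, :]                       # rectangular elliptic matrix

W = X @ X.T / n
m1 = np.trace(W) / p               # MP moment: 1
m2 = np.trace(W @ W) / p           # MP moment: 1 + y
print(m1, m2)
```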
We now give the Brown measure of a product of free elliptic elements.

Theorem 5. Let k ≥ 2 and let e₁, …, e_k be k free elliptic elements in (A, ϕ) with possibly different parameters. Then the Brown measure μ_k of e₁ ⋯ e_k is rotationally invariant and is described by the probabilities

\[ \mu_k(\{z : |z| \le t\}) = \begin{cases} t^{2/k} & \text{for } t \le 1, \\ 1 & \text{for } t > 1. \end{cases} \]
Note that this Brown measure is also the LSD of E₁ ⋯ E_k, where E₁, …, E_k are independent elliptic matrices as defined in [17]. The result does not hold for k = 1, as the Brown measure of an elliptic element is the uniform distribution on an ellipse. So the condition k ≥ 2 is crucial.
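Since the Brown measure in Theorem 5 coincides with the LSD of the matrix product, one can see the radial law t^{2/k} in a simulation. The sketch below (our own; it takes ρ = 0, in which case the elliptic matrix is simply an i.i.d. Gaussian matrix) compares the empirical radial CDF of the eigenvalues of a product of k = 2 such matrices with t^{2/k} = t.

```python
import numpy as np

# Eigenvalue radii of a product of k independent (rho = 0) elliptic
# matrices, each scaled by 1/sqrt(n); radial CDF should approach t^(2/k).
rng = np.random.default_rng(4)
n, k = 500, 2
P = np.eye(n)
for _ in range(k):
    P = P @ (rng.standard_normal((n, n)) / np.sqrt(n))

r = np.abs(np.linalg.eigvals(P))
for t in (0.3, 0.6, 0.9):
    print(t, (r <= t).mean())      # empirical CDF vs t^(2/k) = t
```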
4. Proof of Theorem 1

We first make the following remark to clarify condition (1). For any π ∈ NC₂(2k) and ε_i ∈ {1, ∗}, 1 ≤ i ≤ 2k, define T(π) = |{(r, s) ∈ π : δ_{ε_r ε_s} = 1}|.

Remark 1. We show how the limit condition (1) in the statement of Theorem 1 can be verified in two cases.

(I) First consider the case where ρ_i ≡ ρ for some |ρ| ≤ 1, and let π ∈ NC₂(2k). Then the quantity in (1) equals

\[
\lim_{n\to\infty} \frac{1}{n^{k+1}} \sum_{I_{2k}} \prod_{(r,s)\in\pi} \rho^{\delta_{\epsilon_r\epsilon_s}}\, \delta_{i_r i_{s+1}}\, \delta_{i_s i_{r+1}}
= \lim_{n\to\infty} \frac{1}{n^{k+1}} \sum_{I_{2k}} \prod_{(r,s)\in\pi} \rho^{\delta_{\epsilon_r\epsilon_s}}\, \delta_{i_r i_{\gamma\pi(r)}}\, \delta_{i_s i_{\gamma\pi(s)}}
= \prod_{(r,s)\in\pi} \rho^{\delta_{\epsilon_r\epsilon_s}} \lim_{n\to\infty} \frac{1}{n^{k+1}} \sum_{I_{2k}} \prod_{r=1}^{2k} \delta_{i_r i_{\gamma\pi(r)}}
= \rho^{T(\pi)}.
\]

The last equality follows from Fact 3. Hence the limit condition (1) holds.

(II) Suppose ρ_i^{(n)} = f(i/n) for some continuous function f with |f| ≤ 1. Then the limit in condition (1) can be written explicitly in terms of a function, depending on π, of i.i.d. uniform random variables U₁, U₂, … on [0, 1]. We explain this by examples. Note that

\[ \prod_{(r,s)\in\pi} \delta_{i_r i_{s+1}}\, \delta_{i_s i_{r+1}} = \prod_{(r,s)\in\pi} \delta_{i_r i_{\gamma\pi(r)}}\, \delta_{i_s i_{\gamma\pi(s)}} = \prod_{r=1}^{2k} \delta_{i_r i_{\gamma\pi(r)}}. \]

The product is non-zero exactly when all the indices with index from the same block of γπ are equal. Let π = (16)(25)(34) ∈ NC₂(6). Then γπ = (1)(26)(35)(4), and \(\prod_{(r,s)\in\pi} \delta_{i_r i_{s+1}} \delta_{i_s i_{r+1}} = 1\) if i₂ = i₆ and i₃ = i₅. Thus, for π = (16)(25)(34), we have

\[
\lim_{n\to\infty} \frac{1}{n^{4}} \sum_{I_6} \prod_{(r,s)\in\pi} \rho_{|i_r - i_s|}^{\delta_{\epsilon_r\epsilon_s}}\, \delta_{i_r i_{s+1}}\, \delta_{i_s i_{r+1}}
= \lim_{n\to\infty} \frac{1}{n^{4}} \sum_{I_4} f\Big(\Big|\frac{i_1}{n} - \frac{i_2}{n}\Big|\Big)^{\delta_{\epsilon_1\epsilon_6}} f\Big(\Big|\frac{i_2}{n} - \frac{i_3}{n}\Big|\Big)^{\delta_{\epsilon_2\epsilon_5}} f\Big(\Big|\frac{i_3}{n} - \frac{i_4}{n}\Big|\Big)^{\delta_{\epsilon_3\epsilon_4}}
= E\big[f(|U_1 - U_2|)^{\delta_{\epsilon_1\epsilon_6}} f(|U_2 - U_3|)^{\delta_{\epsilon_2\epsilon_5}} f(|U_3 - U_4|)^{\delta_{\epsilon_3\epsilon_4}}\big],
\]

as f is a bounded continuous function. Similarly, it can be shown that, for π = (12)(34)(56), γπ = (135)(2)(4)(6) and ε₁, …, ε₆ ∈ {1, ∗},

\[
\lim_{n\to\infty} \frac{1}{n^{4}} \sum_{I_6} \prod_{(r,s)\in\pi} \rho_{|i_r - i_s|}^{\delta_{\epsilon_r\epsilon_s}}\, \delta_{i_r i_{s+1}}\, \delta_{i_s i_{r+1}}
= E\big[f(|U_1 - U_2|)^{\delta_{\epsilon_1\epsilon_2}} f(|U_1 - U_3|)^{\delta_{\epsilon_3\epsilon_4}} f(|U_1 - U_4|)^{\delta_{\epsilon_5\epsilon_6}}\big].
\]

The limit in condition (1) can be calculated explicitly in the same way in all other cases.
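The cycle computations of γπ used in Fact 3 and in Remark 1 are purely mechanical and can be checked by a short script (our own illustration, not from the paper): γ is the one-cycle permutation (1 2 … 2k), π acts as an involution, and |γπ| is the number of cycles of the composition.

```python
# Verify the cycle counts |gamma.pi| for the pair partitions used above.

def cycle_count(perm):
    """Number of cycles of a permutation given as a dict i -> perm(i)."""
    seen, count = set(), 0
    for i in perm:
        if i not in seen:
            count += 1
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return count

def gamma_pi(pairs, two_k):
    """The permutation gamma.pi on {1, ..., 2k} for a pair partition pi."""
    gamma = {i: i % two_k + 1 for i in range(1, two_k + 1)}
    pi = {}
    for r, s in pairs:
        pi[r], pi[s] = s, r
    return {i: gamma[pi[i]] for i in pi}

print(cycle_count(gamma_pi([(1, 3), (2, 4), (5, 6)], 6)))  # crossing: 2
print(cycle_count(gamma_pi([(1, 2), (3, 4), (5, 6)], 6)))  # non-crossing: 4
print(cycle_count(gamma_pi([(1, 6), (2, 5), (3, 4)], 6)))  # non-crossing: 4
```

The non-crossing partitions attain the maximum k + 1 = 4 cycles, the crossing one does not, in agreement with Fact 3.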
Before proving Theorem 1 we state the following lemmas, which will be used in its proof. The proofs of the lemmas are given at the end of this section. The first lemma gives the moments of an elliptic element.

Lemma 1. Let e be an elliptic element with parameter ρ in an NCP (A, ϕ). Then, for ε₁, …, ε_p ∈ {1, ∗},

\[ \varphi(e^{\epsilon_1} e^{\epsilon_2} \cdots e^{\epsilon_p}) = \begin{cases} \sum_{\pi \in NC_2(2k)} \rho^{T(\pi)} & \text{if } p = 2k, \\ 0 & \text{if } p = 2k + 1. \end{cases} \]
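Lemma 1 is easy to evaluate by brute force for small p. The sketch below (our own illustration) enumerates the pair partitions of {1, …, 2k}, keeps the non-crossing ones, and sums ρ^{T(π)}; for instance ϕ(e e e^* e^*) = 1 + ρ² and ϕ(e e^* e e^*) = 2 for any ρ.

```python
from itertools import combinations

def pairings(elems):
    """All pair partitions of a list of distinct, sorted elements."""
    if not elems:
        yield []
        return
    a, rest = elems[0], elems[1:]
    for i, b in enumerate(rest):
        for p in pairings(rest[:i] + rest[i + 1:]):
            yield [(a, b)] + p

def noncrossing(pairing):
    """A pair partition is non-crossing iff no two pairs interlace."""
    return not any(a < c < b < d or c < a < d < b
                   for (a, b), (c, d) in combinations(pairing, 2))

def phi(eps, rho):
    """Lemma 1: sum of rho^{T(pi)} over non-crossing pairings,
    T(pi) = #{(r, s) in pi : eps_r == eps_s}."""
    total = 0.0
    for pi in pairings(list(range(len(eps)))):
        if noncrossing(pi):
            T = sum(eps[r] == eps[s] for r, s in pi)
            total += rho ** T
    return total

rho = 0.5
print(phi("11**", rho))   # phi(e e e* e*) = 1 + rho^2 = 1.25
print(phi("1*1*", rho))   # phi(e e* e e*) = 2.0
```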
The next lemma is key to proving Theorem 1 and will be used repeatedly. We use the notation a′(r, s) = δ_{i_r i_s} δ_{i_{r+1} i_{s+1}} and b′(r, s) = δ_{i_r i_{s+1}} δ_{i_s i_{r+1}}.
Lemma 2. Consider a′(r, s) and b′(r, s) as defined above. Then

\[ \lim_{n\to\infty} \frac{1}{n^{k+1}} \sum_{I_{2k}} \sum_{\ell=1}^{k} \sum_{1 \le j_1 < \cdots} \]

for t > 0, we get
\[ S_k(t) = 1 - t^{-1/k}. \tag{22} \]
Clearly, ϕ(c₁ ⋯ c_k c_k^* ⋯ c₁^*) = 1 and ϕ((c₁ ⋯ c_k c_k^* ⋯ c₁^*)^{−1}) = ∞, as |c_i| = (c_i c_i^*)^{1/2} has the quarter-circle distribution. Therefore, using (22) in Result 3, we have

\[ \mu_k(\{z : |z| \le t\}) = \begin{cases} t^{2/k} & \text{for } t \le 1, \\ 1 & \text{for } t > 1. \end{cases} \]

Hence the result.
It remains to prove Lemma 4.

Proof of Lemma 4. Since e₁ ⋯ e_k e_k^* ⋯ e₁^* and c₁ ⋯ c_k c_k^* ⋯ c₁^* are self-adjoint elements, it is enough to show that all their moments agree. From the proof of Lemma 1, we have κ₂(e, e) = κ₂(e^*, e^*) = ρ and κ₂(e, e^*) = κ₂(e^*, e) = 1. It is easy to see that κ₂(c, c) = κ₂(c^*, c^*) = 0 and κ₂(c, c^*) = κ₂(c^*, c) = 1. We write

\[ (e_1 \cdots e_k e_k^* \cdots e_1^*)^n = e_{\tau_1}^{\epsilon_1} e_{\tau_2}^{\epsilon_2} \cdots e_{\tau_{2kn}}^{\epsilon_{2kn}}, \]

where τ_{2mk+j} = τ_{2(m+1)k−j+1} = j, ε_{2mk+j} = 1 and ε_{(2m+1)k+j} = ∗ for j = 1, …, k and m = 0, …, n − 1. For a positive integer n, applying the moment-free cumulant formula, and since the higher order free cumulants vanish, we have

\[ \varphi((e_1 \cdots e_k e_k^* \cdots e_1^*)^n) = \sum_{\pi \in NC_2(2nk)} \prod_{(r,s)\in\pi} \kappa_2(e_{\tau_r}^{\epsilon_r}, e_{\tau_s}^{\epsilon_s}), \]

where τ_r, τ_s ∈ {1, …, k} and ε_r, ε_s ∈ {1, ∗}. Observe that if (r, s) ∈ π ∈ NC₂(2nk), then one of {r, s} is odd and the other is even. Further, if e_{τ_r} appears at an odd place then e_{τ_r}^* appears only at an even place, and vice versa. Indeed, consider the
word e₁ ⋯ e_k e_k^* ⋯ e₁^*. Here e_r appears at the r-th place and e_r^* appears at the (2k + 1 − r)-th place. Clearly, if e_r appears at an odd place then e_r^* appears at an even place, and vice versa. Therefore κ₂(e_{τ_r}^{ε_r}, e_{τ_s}^{ε_s}) is of the form either κ₂(e, e^*) or κ₂(e^*, e), and in both cases it equals 1. Thus, as these mixed free cumulants are the same for circular and elliptic elements, we have

\[ \varphi((e_1 \cdots e_k e_k^* \cdots e_1^*)^n) = \sum_{\pi \in NC_2(2nk)} \prod_{(r,s)\in\pi} \kappa_2(c_{\tau_r}^{\epsilon_r}, c_{\tau_s}^{\epsilon_s}) = \varphi((c_1 \cdots c_k c_k^* \cdots c_1^*)^n). \]
This completes the proof.
References
[1] Zhidong Bai and Jack W. Silverstein, Spectral analysis of large dimensional random matrices, second ed., Springer Series in Statistics, Springer, New York, 2010. MR 2567175
[2] Serban Belinschi, Piotr Śniady, and Roland Speicher, Eigenvalues of non-Hermitian random matrices and Brown measure of non-normal operators: Hermitian reduction and linearization method, arXiv preprint arXiv:1506.02017 (2015).
[3] Arup Bose, Rajat Subhra Hazra, and Koushik Saha, Patterned random matrices and method of moments, Proceedings of the International Congress of Mathematicians, Volume IV, Hindustan Book Agency, New Delhi, 2010, pp. 2203–2231. MR 2827968
[4] L. G. Brown, Lidskii's theorem in the type II case, Geometric methods in operator algebras (Kyoto, 1983), Pitman Res. Notes Math. Ser., vol. 123, Longman Sci. Tech., Harlow, 1986, pp. 1–35. MR 866489
[5] M. Capitaine and M. Casalis, Asymptotic freeness by generalized moments for Gaussian and Wishart matrices. Application to beta random matrices, Indiana Univ. Math. J. 53 (2004), no. 2, 397–431. MR 2060040
[6] Ken Dykema, On certain free product factors via an extended matrix model, J. Funct. Anal. 112 (1993), no. 1, 31–60. MR 1207936
[7] Bent Fuglede and Richard V. Kadison, Determinant theory in finite factors, Ann. of Math. (2) 55 (1952), 520–530. MR 0052696
[8] F. Götze, A. Naumov, and A. Tikhomirov, On minimal singular values of random matrices with correlated entries, Random Matrices Theory Appl. 4 (2015), no. 2, 1550006, 30. MR 3356884
[9] Alice Guionnet, Manjunath Krishnapur, and Ofer Zeitouni, The single ring theorem, Ann. of Math. (2) 174 (2011), no. 2, 1189–1217. MR 2831116
[10] H. J. Sommers, A. Crisanti, H. Sompolinsky, and Y. Stein, Spectrum of large random asymmetric matrices, Phys. Rev. Lett. 60 (1988), 1895–1898.
[11] Uffe Haagerup and Flemming Larsen, Brown's spectral distribution measure for R-diagonal elements in finite von Neumann algebras, J. Funct. Anal. 176 (2000), no. 2, 331–367. MR 1784419
[12] Fumio Hiai and Dénes Petz, Asymptotic freeness almost everywhere for random matrices, Acta Sci. Math. (Szeged) 66 (2000), no. 3-4, 809–834. MR 1804226
[13] F. Larsen, Brown measures and R-diagonal elements in finite von Neumann algebras, Ph.D. Thesis, Department of Mathematics and Computer Science, University of Southern Denmark (1999).
[14] V. A. Marčenko and L. A. Pastur, Distribution of eigenvalues in certain sets of random matrices, Mat. Sb. (N.S.) 72 (114) (1967), 507–536. MR 0208649
[15] Hoi H. Nguyen and Sean O'Rourke, The elliptic law, Int. Math. Res. Not. IMRN (2015), no. 17, 7620–7689. MR 3403996
[16] Alexandru Nica and Roland Speicher, Lectures on the combinatorics of free probability, London Mathematical Society Lecture Note Series, vol. 335, Cambridge University Press, Cambridge, 2006. MR 2266879
[17] Sean O'Rourke, David Renfrew, Alexander Soshnikov, and Van Vu, Products of independent elliptic random matrices, J. Stat. Phys. 160 (2015), no. 1, 89–119. MR 3357969
[18] Dimitri Shlyakhtenko, Limit distributions of matrices with bosonic and fermionic entries, Free probability theory (Waterloo, ON, 1995), Fields Inst. Commun., vol. 12, Amer. Math. Soc., Providence, RI, 1997, pp. 241–252. MR 1426842
[19] Piotr Śniady, Random regularization of Brown spectral measure, J. Funct. Anal. 193 (2002), no. 2, 291–313. MR 1929504
[20] Roland Speicher, Free convolution and the random sum of matrices, Publ. Res. Inst. Math. Sci. 29 (1993), no. 5, 731–744. MR 1245015
[21] Dan Voiculescu, Limit laws for random matrices and free products, Invent. Math. 104 (1991), no. 1, 201–220. MR 1094052
[22] ———, A strengthened asymptotic freeness result for random matrices with applications to free entropy, Internat. Math. Res. Notices (1998), no. 1, 41–63. MR 1601878
[23] Kenneth W. Wachter, The strong limits of random matrix spectra for sample matrices of independent elements, Ann. Probability 6 (1978), no. 1, 1–18. MR 0467894
[24] Y. Q. Yin, Limiting spectral distribution for a class of random matrices, J. Multivariate Anal. 20 (1986), no. 1, 50–68. MR 862241

Stat-Math Unit, Indian Statistical Institute, Kolkata 700108, India
E-mail address: kartickmath [at] gmail.com

Stat-Math Unit, Indian Statistical Institute, Kolkata 700108, India
E-mail address: bosearu [at] gmail.com