Random Matrix Product States and the Principle of Maximum Entropy


arXiv:1201.6324

David Pérez-García (Madrid)

joint work with

Benoît Collins (Ottawa, Lyon)

Carlos González-Guillén (Madrid)

The Principle of Maximum Entropy

• Origin: Laplace's "Principle of Insufficient Reason": two events are to be assigned equal probabilities if there is no reason to think otherwise.

• No knowledge = uniform distribution. But how do we include prior information in the problem?

• If the prior information is invariance under enough symmetries (a compact group), there is a uniquely defined probability distribution: the Haar measure. What if not?
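For concreteness, the Haar measure on the unitary group can be sampled numerically. A minimal sketch, assuming NumPy, using the standard QR-with-phase-fix construction (Mezzadri's recipe):

```python
import numpy as np

def haar_unitary(n, rng=None):
    """Sample an n x n unitary from the Haar measure via a QR decomposition."""
    rng = np.random.default_rng() if rng is None else rng
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Multiply each column of Q by the phase of the matching diagonal entry of R;
    # without this fix the distribution of Q is not exactly Haar.
    d = np.diag(r)
    return q * (d / np.abs(d))

U = haar_unitary(4)
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: U is unitary
```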

Information theory comes into play

• Shannon’48: Entropy is the “only” measure of uncertainty.

• Principle of maximum entropy (Jaynes'57): among all probability distributions compatible with our prior information, the best choice is the one that maximizes the entropy.

• Easily recovers the predictions of thermodynamics: the prior information is the expectation value of an observable (a linear restriction).

• Controversy: how far can one push this principle?
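For a single linear restriction the maximization can be carried out explicitly with Lagrange multipliers, recovering the Gibbs distribution (a standard derivation, spelled out here for concreteness):

```latex
\text{maximize } S(p) = -\sum_i p_i \log p_i
\quad\text{subject to}\quad \sum_i p_i = 1, \qquad \sum_i p_i E_i = \langle E\rangle.

% Stationarity of L = S - \alpha\,(\sum_i p_i - 1) - \beta\,(\sum_i p_i E_i - \langle E\rangle):
\frac{\partial L}{\partial p_i} = -\log p_i - 1 - \alpha - \beta E_i = 0
\;\Longrightarrow\;
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_j e^{-\beta E_j},

% with \beta fixed by the energy constraint: the Boltzmann--Gibbs distribution.
```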





Concentration of measure comes into play

• Popescu et al.'06, a consequence of Levy's lemma: imposing some energy constraint (linear restrictions), and given a sufficiently small subsystem of the universe, almost every pure state is such that the subsystem is approximately in the maximum-entropy state.

• Recovers the predictions of thermodynamics.

• Recent theoretical and experimental work seems to validate the principle of maximum entropy in relaxation processes.
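This concentration effect is easy to see numerically: draw a Haar-random pure state on a small system coupled to a large environment and check that the reduced state is nearly maximally mixed. A sketch assuming NumPy; the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 2, 200  # small subsystem A, large environment B

# A Haar-random pure state on C^dA (x) C^dB is a normalized complex Gaussian vector.
psi = rng.standard_normal((dA, dB)) + 1j * rng.standard_normal((dA, dB))
psi /= np.linalg.norm(psi)

rhoA = psi @ psi.conj().T                        # partial trace over B
dist = 0.5 * np.abs(np.linalg.eigvalsh(rhoA - np.eye(dA) / dA)).sum()
print(dist)  # small: rhoA is close to the maximally mixed state I/dA
```

The trace distance shrinks as the environment dimension dB grows, in line with Levy's lemma.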

What if we consider less standard prior information?

1)  1D quantum spin systems

2)  Finite range and homogeneous interactions

3)  Zero temperature

Starting point: Condensed matter physics gives a set of quantum states with a natural way of sampling (Haar measure).

Averaging (Weingarten Calculus) + Concentration of measure

The conclusion: maximum entropy in the accessible region, except with very small probability.

Matrix Product States

In a 1D quantum spin chain, each site has an associated finite-dimensional Hilbert space with basis $\{|i\rangle\}_{i=1}^{d}$. A generic state is

$$|\psi\rangle = \sum_{i_1,\dots,i_N=1}^{d} c_{i_1 i_2 \cdots i_N}\, |i_1,\dots,i_N\rangle$$

A matrix product state (MPS) is one whose coefficients take the form $c_{i_1\cdots i_N} = \mathrm{tr}(A_{i_1}\cdots A_{i_N})$ for $D\times D$ matrices $A_1,\dots,A_d$.



Matrix Product States

Hastings'07 proved that zero-temperature states of 1D quantum spin chains with local interactions can be well approximated by MPS with matrices of size poly(N).

The homogeneity of the interactions translates into taking the matrices site-independent in the bulk.

If we trace out the systems at the boundary, then one obtains the state

Moreover, we can assume w.l.o.g.:

1. (boxes are isometries)

2. , for some diagonal, positive matrix

3. The identity is the only fixed point of the transfer operator.
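The "boxes are isometries" condition can be checked numerically: build the d matrices from a random isometry and verify the left-canonical condition $\sum_i A_i^\dagger A_i = \mathbb{1}$. A sketch assuming NumPy; dimensions are illustrative:

```python
import numpy as np

d, D = 2, 4
rng = np.random.default_rng(2)

# A random isometry V: C^D -> C^(d*D), i.e. D orthonormal columns (the "box").
z = rng.standard_normal((d * D, D)) + 1j * rng.standard_normal((d * D, D))
V, _ = np.linalg.qr(z)               # reduced QR: V.conj().T @ V == I_D
A = V.reshape(d, D, D)               # slice V into d matrices A_0, ..., A_{d-1}

# Left-canonical (isometry) condition: sum_i A_i^dagger A_i = identity
S = sum(A[i].conj().T @ A[i] for i in range(d))
print(np.allclose(S, np.eye(D)))     # True
```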

Ensemble of MPS

ρl

Theorem: Let the MPS be taken at random with the Haar measure. Then ρ_l is approximately the maximum-entropy state compatible with the constraints, except with probability exponentially small in D.

Weingarten function

Definition (Collins'03): Wg(n, σ) is a function of an integer n and a permutation σ ∈ S_p, given (for n ≥ p) by

$$\mathrm{Wg}(n,\sigma) = \frac{1}{p!^{2}} \sum_{\lambda \vdash p} \frac{\chi^{\lambda}(1)^{2}\,\chi^{\lambda}(\sigma)}{s_{\lambda,n}(1)}$$

Theorem (Collins'03): Let n be a positive integer and $i=(i_1,\dots,i_p)$, $i'=(i'_1,\dots,i'_p)$, $j=(j_1,\dots,j_p)$, $j'=(j'_1,\dots,j'_p)$ be p-tuples of positive integers from $\{1,\dots,n\}$. Then

$$\int_{U(n)} U_{i_1 j_1}\cdots U_{i_p j_p}\,\overline{U}_{i'_1 j'_1}\cdots \overline{U}_{i'_p j'_p}\,dU = \sum_{\sigma,\tau \in S_p} \delta_{i_1 i'_{\sigma(1)}}\cdots\delta_{i_p i'_{\sigma(p)}}\,\delta_{j_1 j'_{\tau(1)}}\cdots\delta_{j_p j'_{\tau(p)}}\,\mathrm{Wg}(n,\tau\sigma^{-1})$$
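For p = 1 the formula reduces to $\mathbb{E}[U_{ij}\overline{U}_{kl}] = \delta_{ik}\delta_{jl}\,\mathrm{Wg}(n,\mathrm{id})$ with $\mathrm{Wg}(n,\mathrm{id}) = 1/n$, which is easy to verify by Monte Carlo. A sketch assuming NumPy; the sampler is the standard QR construction:

```python
import numpy as np

def haar_unitary(n, rng):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))       # phase fix -> exactly Haar

rng = np.random.default_rng(3)
n, samples = 4, 20000
# Estimate E|U_11|^2 over Haar-random unitaries.
m = np.mean([abs(haar_unitary(n, rng)[0, 0]) ** 2 for _ in range(samples)])
print(m)  # close to Wg(n, id) = 1/n = 0.25
```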

Theorem (Collins-Śniady): For any σ ∈ S_p, |Wg(n, σ)| is upper bounded by $C_p\, n^{-p-|\sigma|}$, where |σ| is the minimal number of transpositions whose product is σ.


Theorem: Let with . Then, there exists a constant C depending only on k such that for any


Theorem (Montanaro): For any … and …

Collins and Nechita (2010) introduced a graphical calculus to simplify the computation of averages of polynomials over the unitary group. Consider a polynomial P(U) of degree n in the entries of U and Ū; then


• The function … is holomorphic in a neighborhood of zero.

• As …, for any … we have that ….

• Writing …, we obtain the Cauchy estimate $a_{i,\lambda} \le e\, p^{2i}$.
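The Cauchy estimate invoked here is the standard one from complex analysis; the specific radius and bound below are inferred from the stated inequality (an assumption, since the slide's formulas were lost):

```latex
% If f(z) = \sum_{i \ge 0} a_i z^i is holomorphic on |z| \le r with |f| \le M there,
% Cauchy's integral formula gives
|a_i| = \left|\frac{1}{2\pi i}\oint_{|z|=r} \frac{f(z)}{z^{i+1}}\,dz\right| \le \frac{M}{r^{i}}.
% Taking r = p^{-2} and M = e yields a_{i,\lambda} \le e\, p^{2i}.
```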

Using that:

• $a_{i,\lambda} \le e\, p^{2i}$

• …

… can be bounded by a constant C.

Proof of the main theorem

• Compute … and ….

• Compute the Lipschitz constants of both functions.

• Apply the concentration of measure phenomenon to both functions.

• Argue about concentration of the function ….
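The concentration step can be illustrated numerically: a 1-Lipschitz function of a uniform point on the sphere $S^{n-1}$ has fluctuations of order $1/\sqrt{n}$ (Levy's lemma). A sketch assuming NumPy, using the first coordinate as the Lipschitz function:

```python
import numpy as np

rng = np.random.default_rng(4)
stds = []
for n in (10, 100, 1000):
    x = rng.standard_normal((5000, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # 5000 uniform points on S^{n-1}
    stds.append(x[:, 0].std())                      # f(x) = x_1 is 1-Lipschitz
    print(n, stds[-1])  # fluctuations shrink roughly like 1/sqrt(n)
```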

Thank You
