Transport Phenomena in Eukarya: Continuous and ...

2 downloads 0 Views 426KB Size Report
Apr 4, 2005 - coin game, designed to illustrate the so called Parrondo's Paradox. But it is also a simple minded illustration of the lever arm model for this type ...
Transport Phenomena in Eukarya: Continuous and Discrete Models for Molecular Motors M. Chipot, J. Dolbeault, D. Heath, D. Kinderlehrer, M. Kowalczyk April 4, 2005

1

Introduction

These lecture notes are the summary of lectures originally given in the CIMPA Summer School in Valdivia in January of 2004. The school was attended by graduate students of Mathematics, Physics and Biology and the purpose the lectures was to introduce this diverse audience to the theory of molecular motors based on the ratchet models. To to introduce the basic model of this theory it was necessary to present some concepts and tools of Mechanics, Statistical Mechanics, Probability and PDE’s. As it turns out our considerations touch upon relatively new developments in Mathematics, especially in the theory of Monge-Kantorovich mass transport problem. Most of the material of those notes that are directly related to the molecular motors comes from the author’s own work done in the collaboration with M. Chipot, D. Kinderlehrer, D. Heath and J. Dolbeault. We will also describe recent results of Chipot, Hastings and Kinderlehrer, [7]. In addition, especially to present material of more experimental nature we use other works such as The Way Things Move: Looking Under the Hood of Molecular Motor Proteins by R. Vale and R. Miligan [36], Experimental Studies of the Myosin-Actin Motor by Y. Ishii, E. Esaki and T. Yanagida [18] and Mechanics of Motor Proteins and the Cytoskeleton by J. Howard (physics and biology of molecular motors); [17], Topics in Optimal Transportation by C. Villani (the theory of mass transport) [37]. An interested reader is referred to the list below for further references. This notes are organized as follows. Firstly we present a summary of what is know about the molecular motors and their environment from the physical and chemical point of view. Secondly we give an overview of the randomness, diffusion and Brownian motion. Then we present the basic ratchet model for the molecular motor, the so called flashing ratchet. We explain how this model can be obtained on a basis of some macroscale considerations using the mass transport. Then we present the analysis of the long time behavior of the flashing ratchet and show that it represents transport. Lastly we consider some semi-discrete and discrete ratchet models.

1

2 2.1

Environment in which molecular motors move What are the molecular motor proteins?

Leaving cells have the ability to generate motion and forces. The motion of the muscles, motion of a cell and transport of cargoes within the cell itself are generated by protein molecules whose movement is in turn driven by chemical reactions. Thus molecular motor proteins are enzymes that are specialized to work as motors. These motors are tiny, nanoscale size engines with a complicated internal structure that changes dynamically during the motion thanks to the energy obtained from the hydrolysis of a phosphate bond of adenosine triphosphate (ATP). One of the most important properties of the molecular motors is the directionality of their motion. Indeed motors always move along their tracks in the direction specific for a given motor: most motors move from the minus to the plus end of the track although there are known examples of motors moving in the opposite direction. For instance the motion of muscles is driven by molecular motors called myosin (discovered in 1864). Myosin cross-bridges are attached to thick filaments with two heads that attach and detach from the thin actin filaments. When in contact with actin myosin moves the actin filaments by the rotating motion of its lever arm. This highly synchronized motion requires a conformational change of the motor protein. The mechanism described here is known as the lever-arm model. In addition to muscle contraction myosin are involved in cytokinesis, cell movement, membrane transport and other various other biological activities. To perform for example muscle contraction a cooperative and synchronized action of many motors moving in the same direction is required. In other words without the directionality myosin could not be responsible for muscle functions. As we have mentioned above, directionality is actually a universal characteristic of the motors. Smaller intracellular cargoes are driven by molecular motors called kinesins that move unidirectionally along protein microtubules. Conventional kinesin (discovered 1985) moves procesively for a distance of order of micrometers stepping from one subunit of the microtubule to the other. Unlike myosin, which with few exceptions detaches itself from actin after one step, kinesin motor may take hundreds steps before detaching from the mictrotubule. One possible explanation of why this happens is that during the motion only one head of the kinesin motor is attached to the microtubule. Thus kinesin moves forward by alternating the position of each of the two heads: the trailing head steps over the forward head to become the forward head and so on. This model is known as the hand-over-hand model.

2.2

Various experiments with an assembly of motors

Besides the macroscale manifestations of the work done by the molecular motors one can observe the motion of molecular motors in various experiments. For instance in motility assays one can observe a manifestation of the work done by the whole assembly of motors. In this type of experiment both motors and filaments are present in a suitably treated solution. Actin filaments are in addition fluorescent and their motion can be observed under the microscope. If motors are are attached to a substrate one observe a sliding 2

motion of the actin. With the help of other techniques, for instance using optical traps or microneedles one can perform single motor experiments. This allows to measure the force exerted by one motor [18]. Average velocities, number of steps along the tracks and other quantities can also be measured. Some other the basic quantities can be deduced on a basis of fairly simply elementary physics considerations that take into account easy to measure values like the weight of proteins etc. We will present some of this in the next section.

2.3

Molecular motor in its environment

The arm-lever model for myosin and hand-over-hand model for conventional kinesin use commonly known analogs to describe a complicated sequence events such as chemical reactions and conformational changes that lead to a molecular motor making a step along its track [36]. Indeed, finding the right metaphors allows to visualize, conceptualize, discuss and investigate events that take place in the unfamiliar nano-scale by placing them within the range of the very familiar macro-scale language. Mathematical models for the molecular motors serve a similar purpose however their language is the language of physical quantities such as force and energy, and mathematical concepts such as function and probability. At the end their purpose is the same since those are the very same concepts that are used to describe common macroscale events, like for instance an object falling to the ground. If this object is an apple falling from the tree then describing, through every day world analogs, all the chemical reaction that lead to the growth of the apple and in consequence increase in weight, will not be a satisfactory explanation of why the apple eventually fell. What is the missing element in this ”model” ? Clearly it is the ”environment” of the apple since, as everyone we know, the apple falls from the tree because it is attracted by a much larger body in its vicinity i.e. the earth. This may sound too obvious now but this is only because we are all used to the Newtonean language- 400 or more hundreds year this would not be an obvious proposition at all. (An inspiring discussion of those issues in a much more general context can be found in [21]). One of the purpose of these lecture notes is to put the molecular motor and its motion within its proper environment. Such as the prevailing force for the apple falling from the tree is the gravity, in the nanoscale world of the living cell the prevailing factor which can not be ignored are thermal fluctuations. In the first part of the lectures we will assume that a typical representative of the family of molecular motors is a protein with mass 100 kDa. Based on some straightforward physical considerations we can estimate for instance an average displacement of such a protein due to thermal fluctuation to be of the order of 10 µm per s. Experiments with molecular motors on the other hand suggest that their velocity is of the order of several µm/s [18]. Thus the two quantities are of comparable size. Following further this line of thought we can estimate the size of the thermal energy to be ≈ kT ≈ 4.116 pN×nm. On the other hand the energy obtained from the hydrolysis of single AT P molecule is ≈ 80 pN×nm [18]. This again are comparable quantities. One of the most important properties of thermal fluctuations is their random character. In other words the location, velocity etc of a particle moving due to thermal fluctuations are not deterministic and can only be given with certain probability. Given its environment 3

the mystery of the molecular motor can be summarized in the following problem: how is it possible that a motor moves progressively for several hundreds steps in a pre-determined direction ? The question stated this way implicitly assumes that each motor moves in the same way. However in the experiment while most motors move in the direction of the plus end of the track some are seen to make few steps in the opposite direction and others detach very quickly from the track. Thus it is more correct to speak of the assembly of motors and state the question for some ”average” motor. In the recent years a number of scientists [1, 2, 3, 20, 33, 34, 35] have put forward a theory according to which molecular motors are able to harness thermal fluctuations and use them to advance along the track in the desired direction. The energy necessary to achieve this without violating the Second Law of Thermodynamics is provided through the hydrolysis of AT P (see [31] for the discussion of the energetics of molecular motors); while the unidirectionality of the motion is the result of the polarity of the motor-track interaction (see [16] where this issue is discussed for a synthetic molecular motor). Models based on these ideas belong to the larger category of ratchet models and have been studied in the context of various other physical phenomena: transport and separation devices for mesoscopic particles, microfluidic pumping, the photoalignment of liquid crystals, to name a few [9, 24, 25, 32]. Experimental evidence of the applicability of the ratchet model for molecular motors can be found for instance in [18, 26, 27]. In the second part of these lectures we will introduce some mathematical concepts necessary to formulate the basic ratchet model called the flashing ratchet. In this model it is assumed that each step in the motion of a single molecular motor has two alternating phases: the on phase during which the motor is tightly attached to the track and the diffusion phase during which the motor diffuses freely along the track. Indeed some types of myosin motors and mutant one headed kinesin motors appear to move, roughly speaking, in this way [18, 26, 27]. We will study the properties of the flashing ratchet and show that it predicts the unidirectional transport of mass. This provides some verification of the ratchet model since we can say that to some extent it can predict what is seen experimentally. In particular the flashing ratchet if properly ”tuned” has the efficiency of more than 90%, by this we mean that more than 90% of the motors will move for a long time in the desired direction. This is a quite remarkable fact since during the diffusion phase the motor is subject onely to random, undirected fluctuations. One of the main differences between the flashing ratchet model and the hand-over-hand (kinesin) and the lever arm model (myosin) is the presence of the diffusion step in the motion of molecular motors. The latter models assume that during the whole motion a motor is tightly attached to the track [36]. This being the case one can not forget that, given the environment of the motor and its scale, thermal fluctuation will have, undesirable o desirable, influence on the motion. Later in these notes we will present a two state ratchet model (see [20, 7] and the references therein) based in the idea that in each of it conformational steps a molecular motor is attached to the track but it also experiences thermal fluctuations. It turns out that such model also predicts a high efficiency transport. 
It also teaches us something about the level of coordination required for the motion since the location of the motor during the conformational change seems to be crucial to achieving such a transport and overcome the randomness thermal fluctuations. 4

Finally, we will describe a discrete ratchet model. This is an intellectual construct, a coin game, designed to illustrate the so called Parrondo’s Paradox. But it is also a simple minded illustration of the lever arm model for this type of myosins whose along the track motion is not processive. As we will see the unidirectional motion (or gain of the capital) is achieved by resetting in each step the system to its previous state in a manner similar to that of a screw with stripped thread.

2.4

Forces, elasticity, damping, work in the nanoscale

Proteins and other micromolecules are small, hence the inertial forces are small. On the other hand the viscous forces are large and dominate the mechanical responses. Various other forces play also an important role, as the table below shows (this table and a lot of information of this section has be adopted form [17]). On the molecular scale forces are measured in pico Newtons (1pN=10−12 N). Types of forces Elastic Covalent Viscous Colisional Thermal Gravity Centrifugal Electrostatic, Van der Waals Magnetic

Magnitudes 1100 pN 10 000 pN 1–1000 pN 10−9 –10−12 pN 100–1000 pN 10−9 pN < 10−3 pN 1–100 pN 10−6 pN

Below we will discus some of the forces mentioned above and their influence in more detail. 2.4.1

Elastic forces

If a body with stiffness κ is distorted from the resting length by the quantity x the force applied is equal to: F = κx. For a protein, and in particular a molecular motor one can assume that κ ≈ 1 mN/m=1 pn/nm, which leads to the force F ≈ 1pn if the protein is stretched 1 nm (1 nm = 10−9 m). 2.4.2 Viscous forces. An object moving in a liquid experiences a viscous (drag) force from the liquid. If v relative velocity of the object and γ the drag coefficient then F = γv Denoting η viscosity of the liquid and applying the Stokes law we get: γ = 6πrη 5

For a protein of mass 100 kDa, with radius r = 3 nm we have γ = 60 pNs/m. Thermal speed of such protein ≈ 8 m/s, results in the viscous force ≈ 480 pN. (1 Da= 1.66 × 10−24 g=mass of 1 hydrogen atom). 2.4.3 Collisional and thermal forces. We recall that by the Newton’s Second Law the force is equal to the rate of change of the momentum, namely d F = (mv). dt In its environment a molecular motor is constantly bombarded by other smaller molecules, for example water molecules. The mass of such molecule is ≈ 30 × 10−27 kg and its average speed is average speed ≈ 600 m/s which yields the momentum ≈ 18 × 10−24 kg m/s. Then force resulting from one elastic collision with water molecule can be calculated to be ≈ 36 × 10−12 pN. This is an extremely small number compared with other forces. However a large number of collisions takes place and the resulting thermal force on a 100 kDa protein is on the order of the viscous force ≈ 500 pN. 2.4.4 Spring and dashpot In a very simplified mechanistic model a molecular motor protein can be thought of as a spring. As it was pointed out above the viscous forces that the motor proteins experience are relatively large and therefore they are, as springs, overdamped. On the other hand the inertial terms in the equation for the spring can be ignored since the gravity forces, as shown in the table above are relatively small. Therefore the applied force is opposed only by the sum of the viscous and and elastic forces: F =γ

dx + κx dt

Assuming F to be a constant force we get x(t) =

F (1 − e−t/τ ), κ

τ = γ/κ

This formula allows us to find the time scale of protein conformational changes. We observe that the time in which the protein will relax from the strained conformation is of the order of τ . For a typical protein with mass m≈ 100 kDa we have that γ ≈ 60 pNs/m, κ ≈ 4 pN/m and the relaxation time τ ≈ 15 ns. Chemical changes take place in pico seconds and thus we see that the conformational changes are much slower. 2.4.5 Work, energy, heat We recall that the total work done by the force F (s) is given by the formula ! x0 w= F (s) ds 0

Usually as the unit of work one uses: 1 Joule =1 N×m, however on the nanoscale it is more appropriate to use 10−21 J = 1 pN×nm.

6

Integrating the equation for the overdamped spring: ! x0 ! x0 ! x0 dx dx + γ κx dx = F dx dt 0 0 0 we get, after changing the variables ! t ! x0 1 γ v 2 ds + κx20 = F dx. 2 0 0 This can be written as: heat+Potential Energy = work. This is nothing else but the First Law of Thermodynamics for the mass and dashpot system at rest (or an approximate statement for the overdamped system). General form of this law, taking into account the inertial forces, is Kinetic Energy+heat+Potential Energy=work: ! t ! x0 1 2 1 2 2 mv + γ v ds + κx0 = F dx. 2 2 0 0

2.5

Thermal forces and Diffusion

Thermal forces acting on proteins arise from a large number of collisions with water and other proteins in the in their environment (which has a character of a very viscous fluid). In general the movement of particles resulting from a large number of collisions with other particles is called thermal motion. An object subject to such movement is said to have thermal energy. The forces resulting from the collisions are randomly directed and the motion resulting from them is characterized by rapid changes in the direction. On a macroscopic scale the effect described is seen as diffusion. The diffusion of a single particle is called Brownian motion. 2.5.1

Boltzmans Law

A given particle always tends to the energetically more favorable state i.e. a state with the lower energy. However due to the molecular collisions particles do not spend all the time at the lowest energy levels but actually move around them. Putting things more graphically they are being bumped in and out of the wells of the energy. Since the collisions are random events we should speak of the probability of finding a particle on the given level of the energy. The Boltzman Law says that if particles are in thermal equilibrium then, " # # " $ Ui 1 Ui , Z= exp − pi = exp − Z kT kT i

where pi is the probability of finding a particle in a state i, Ui is the energy of state i. Constant Z is called partition function, k is the Boltzman constant, T temperature. Given that k = 1.38 × 10−23 J/K, at 25C (≈ 298.15 K), we have for instance kT= 4.116 × 10−21 = 4.116pN×nm. This quantity gives the order of the thermal energy since the probability of finding particles on higher energy level (say 100 times higher) is extremely small. 7

For our purpose it is convenient to use Boltzman Law to define the equilibrium systems. Namely we say that a given system is at at equilibrium if Boltzman Law holds. If a system is at equilibrium then this implies that it is at steady state i.e. the average properties of the system do not change in time. But not the other way: steady state allows constant flux of particles while equilibrium not. 2.5.2

Diffusion as a random walk.

Diffusion is a result of abrupt, random changes of direction of motion that are caused by collisions with surrounding molecules. If molecules are moving in random directions then they move from the areas of high concentration to the areas of low concentration. The rate of movement of molecules per unit area is called the concentration flux. Below we will compare two quantities: the concentration and the flux of particles. Concentration [molecules/m3 ] = c(x, t) Flux [molecules/unit area × sec] = J(x, t) One relation between the two is given by the Ficks law and says: 1 J(x + ∆x)∆t = [c(x) − c(x + ∆x)]∆x 2 The right hand side of this equation is the average change concentration between two points: x and x + ∆x; the left hand side is the change in the flux through x + ∆x in a short time interval ∆t. This can be recast as: J(x + ∆x) = −D

c(x + ∆x) − c(x) , ∆x

D≡

∆x2 ∆t

Another relation between the concentration and the flux is the continuity equation. It accounts for the change in the concentration due to the flux through a given area A: ∆c =

AJ(x, t)∆t − AJ(x + ∆x)∆t , A∆x

which in turn can be written as: J(x + ∆x, t) − J(x, t) ∆c =− ∆t ∆x Passing to the limit: ∆x → 0, ∆t → 0,

D=

we get from the Fick’s Law J(x, t) = −D while the continuity equation yields ∂J ∂c =− ∂t ∂x 8

∆x2 = const. ∆t ∂c ∂x

hence we get the Diffusion equation ∂2c ∂c =D 2 ∂t ∂x

(1)

To find out the unit of the diffusion coefficient we recall that the flux J is measured in [molecules/unit area×sec] while the concentration c is given in [molecules/m3 ]. This means that diffusion coefficient D is measured in [m2 /s]. 2.5.3

Fokker-Planck equation.

Assuming that the total number of particles involved is constant in time we can write the diffusion equation for the probability:

instead of c. This yields:

c(x, t) p(x, t) = % c dV

∂p ∂2p =D 2 ∂t ∂x Let now F (x, t) be an external force which forces molecules to move with velocity v = F/γ. This motion (directed) is called the drift and it is superimposed on the diffusion (random). To understand how the introduction of the drift modifies the diffusion equation (1) we define the probability flux as follows j(x, t) = −D

∂p F + p(x, t). ∂x γ

From the previous considerations we get: ∂2p ∂p ∂ =D 2 − ∂t ∂x ∂x

"

F p γ

#

Typically F ia a potential force, i.e. we have F =−

∂U ∂x

and the above equation is written in the form ∂2p ∂ ∂p =D 2 + ∂t ∂x ∂x

"

∂U p ∂x

#

(2)

This equation is called Fokker-Planck equation and it is a basic model we will work with here.

9

2.5.4

Determining the diffusion coefficient

For a system at steady state the so called Einstein relation holds: −D

dp 1 dU − p = j0 = const. dx γ dx

On the other hand for a system at equilibrium we have j0 = 0 hence & ' U (x) " # ! exp − Dγ U (x) , Z = exp − p(x) = Z Dγ p here is called the Gibss distribution. The equilibrium state is defined above as that at which the Boltzman Law holds, thus ' & (x) exp − UkT p(x) = Z Consequently,

kT γ

D=

For example the diffusion coefficients of ions can be calculated from this formula to be D ≈ 1.33 × 10−9 m2 /s at 25 C. It means that on the average an ion diffuses 1µm in 1 ms. On the other hand for the 100 kDa protein we get that D ≈ 60 × 10−12 m2 /s. 2.5.5

Special solution to the diffusion equation.

We will consider the special case of the Fokker-Planck equation taking ψ ≡ 0 i.e. the equation pt = Dpxx . We will look for a solution in the form p(x, t) = w(t)v(

x2 ) t

After some calculations we get v " = Ce −s/4D , hence v = Ce−x

s= 2 /4dt

Choosing a normalizing constant C so that p(x, t) = √

%

,

x2 , t

w" 1 =− w 2t

w = t−1/2

p = 1 we get

1 2 e −x /4Dt 4πDt

Function p is the Gaussian with expectation µ = 0 and variance σ 2 = 2Dt. Probability density p(x, t) describes the probability of finding a particle at location x at time t provided 10

that at t = 0 this particle was located at the origin x = 0. Indeed with p defined above we have that: pt = Dpxx , −∞ < x < ∞, t > 0

p(x, 0) = δ0

Using the value of the diffusion coefficient D ≈ 60 × 10−12 m2 /s for a 100 kD protein we find that on average it diffuses approximately 8µm in 1 and 80µm in 1 min. Taking into account that for instance the average speed of myosin molecules is of the order of several µm/s we see that the displacements coming from thermal fluctuations and those which are proper for the motion of molecular motors are of similar orders. 2.5.6

Langevin equation

Taking another point of view one can describe Brownian motion and consequently the diffusion process looking at individual particles and their trajectories. We recall that the motion of the particles is due to collisions with other particles and that the force resulting from those collisions and acting on a particle has random character. To be more precise we let x(t) denote the location of a particle at time t and suppose that this particle is moving subject to a random force f (t) and a drift ψ. dx = −ψx + f (t) dt We assume that: (1) f is independent on x; (2) average of f is 0; (3) f oscillates rapidly relative to changes of x(t). Such f is called white noise. To make the definition of f rigorous actually requires quite a complicated construction. For example it can be shown that f satisfying (1)–(3) can not be a differentiable function of t. In the theory of Stochastic Processes a suitable function, which is a stochastic process, called Brownian motion and denoted Bt , is defined. The above equation is then written in the form: dx = −ψx dt + 2DdBt This is the so called Langevin equation. One can write formally a solution to this equation with x(0) = x0 : ! t ! t x(t) = x0 − ψx (x(s), s) ds + 2D dBs 0

%t

0

Again one has to make sense of the integral 0 dBs (Itˆ o integral) and define the exact meaning in which the above formula represents a solution to the initial value problem for the Langevin equation. This issues are beyond the scope of our notes and we refer the interested reader to [13] for further details. Given a particle whose trajectory is described by the Langevin equation one can try to find the probability that the particle at time t is located at x given that at t = 0 it is located at x0 . It turns out that the probability density is described by: Dpxx + (ψx p)x = pt , p(x, 0) = δx−x0 i.e. we obtain the Fokker-Planck equation. 11

3 3.1

Ratchet mechanism and molecular motors Modelling molecular motors

In order to develop a mathematical model for the molecular motor we will work under the following basic assumptions: (1) A motor is subject to thermal fluctuations. (2) A motor consumes the energy energy (ATP-ADP hydrolization). (3) The motor/potential interaction has polar character resulting in unidirectional motion. Starting from those general principles denoting by x the position of the motor then its motion is described by the following Langevin equation: γ

dx = −∂x ψ(x, t) + f (t) dt

Where f (t) is the random force satisfying (f (t)) = 0 (mean 0), ( f (t)f (t)) = 2γT δt−t i.e. f (t), f (t) at t *= t are independent (Brownian motion). As we have mentioned above f corresponds to random fluctuations. Function ψ(x, t) is periodic, asymmetric in x and fluctuating in t. Introducing such ψ we model the transduction of the chemical energy into motion. In the simplest case ψ oscillates between a periodic, asymmetric potential and ψ = 0 (see Figure 1 below). In general ψ is assumed to be smooth in x. If the oscillations are in addition periodic in t then we speak of the flashing ratchet. With some abuse of notation we will denote ( ψ(x), 0 ≤ t < Ton ψ(x, t) = 0, Ton ≤ t < T

where Ton time on or time of transport; Tdif f = T − Ton diffusion time or time off. As we have said above ψ(x) is asymmetric-by this we mean that the wells of the potential ψ are located in an asymmetric way in within the periodic cell of ψ. The asymmetry of ψ in the on phase is crucial here. This is a way we represent the polarity of the motor/filament interaction. Without the asymmetry our model would not have represented transport. In addition left or right moving motors can be modelled with the same system and the only difference between them is the change of the asymmetry of the location of the wells of ψ. Flashing ratchet will be the basic model for the molecular motors we will study. It belongs to a family called Brownian ratchets. These are models for phenomena in which directed motion in a periodic medium occurs in the presence of an unbiased on average forces and interactions. Examples vary from molecular motors and molecular pumps to Janossy effect in liquid crystals. Historically, a Brownian ratchet, as a abstract construct, can be traced back to a lecture of Feynman [12]. The idea that the ratchet mechanism may be behind the motion of molecular motors has it source in the works of Huxley in the 50’s. Experiments of Okada and Hirokawa on single headed kinesin motors suggest the flashing ratchet as an adequate model to explain processive motion of this motor [26, 27]. More 12

8 16

14 6

12

10

4

8

6

2

4

2

0

0.2

0.4

0.6

0.8

0

1

0.2

0.4

0.6

0.8

1

Figure 1: A periodic asymmetric potential ψ and its Gibbs distribution (left) and the periodic state at two different instants (right).

complicated two state ratchet was used by Badoual, Julicher and Prost to explain qualitatively [4] experiments of Endow and Higuchi on motility assays [10]. This model suggest that in a long time, roughly speaking, the system is close to periodic. Other ratchet models include tilting ratchets, hamiltonian ratchets, quantum ratchets however they arise in different contexts and we will not consider them here. 3.1.1

Fokker-Planck description of the flashing ratchet.

We have explained above duality between the Stochastic ODE (Langevin equation) and the Fokker-Planck equation. Using that this we derive the Fokker-Planck model for the flashing ratchet in the form (equivalent to the basic Langevin equation) : σ 2 uxx + (ψx u)x = ut , ux + ψx u = 0, u(x, 0) = u0 ,

0 < x < 1, t > 0 x = 0, x = 1 %1 0 u0 = 1

(3)

Where ψ is the flashing, smooth potential. We have conveniently rescale the domain here so that x ∈ (0, 1). We will further assume that ψx = 0 at x = 0, 1 so that the boundary condition in (1) is Neumann. In Figure 1 we show a 4 well potential and its Gibbs state as well as the results of long time simulations for the flashing ratchet based on this potential.

4

A variational principle for molecular motors

The purpose of this section is to derive the flashing ratchet model again. The idea is to view the molecular motor more as a mechanical device, or rather an assembly of such devises. 13

Their motion has two components: the mechanical one corresponds to the conformational change of an individual motor. The second component of the motion is due to the thermal fluctuations. Our starting point will be the total energy of such an assembly and the evolution equation (3) will at the end be obtained through a variational principle associated with the energy functional. To develop this theory some preliminaries are needed.

4.1

Kantorovich-Wasserstein distance and its relation to transport

Let f, f ∗ be two probability densities i.e. ! ! f= f∗ = 1 Ω



and φ be a map that rearrange into f i.e. φ is one-to-one and ! ! φ : Ω → Ω, ζ(y)f (y) dy = ζ(φ(x))f ∗ (x) dx, ∀ζ ∈ C(Ω) f∗





A good way to make the concept of mass rearrangement is to think of moving a pile of dirt whose distribution is given by f ∗ into a hole given by f according to the plan given by φ. From now on we will assume that Ω = (0, 1). If in addition φ is strictly increasing and φ(0) = 0, φ(1) = 1 then we say that f is the push forward of f ∗ by φ and φ is the associated transfer function. Sometimes we denote f ∗ = φ# [f ]. Let χA be a characteristic function of A ⊂ Ω, ! ! f (y) dy = f ∗ (x) dx φ−1 (A)

A

or with E = φ(A),

!

f (y) dy =

φ(E)

in particular if E = (0, x)

!

φ(x)

f (y) dy =

0

In one dimension

! !

f ∗ (x) dx E x

f ∗ (x" ) dx"

0

F (φ(x)) = F ∗ (x) where F, F ∗ are distribution functions of f, f ∗ respectively. Thus φ(x) = F −1 (F ∗ (x)) Keeping the analogy with moving dirt assume (following Monge) that we have a rearrangement φ∗ and that the total work associated with this rearrangement is ! 1 ∗ I[φ ] = |x − φ∗ (x)|2 f ∗ (x) dx 0

i.e. the cost is proportional to distance2 moved. 14

Consider the problem of minimizing the total cost. It turns out that the optimal mass transfer plan is given by φ described above I[φ∗ ] I[φ] = min ∗ φ

Moreover I[φ] defines a distance on the set of probability measures called KantorovichWasserstein distance ! 1 ∗ 2 |x − φ(x)|2 f ∗ (x) dx, φ(x) = F −1 (F ∗ (x)) d(f, f ) = 0

Let now f = f (x, t), 0 < t < τ be a family of probability measures, let f ∗ (x) be a probability measure and let be φ(x, t) the associated transfer functions so that ! 1 ! 1 ζ(x)f (x, t) dx = ζ(φ(x, t))f ∗ (x) dx 0

0

i.e f (·, t) = φ# (·, t)[f ∗ ]. It follows !

φ(x,t)

f (y, t) dy =

0

!

x

f ∗ (x" ) dx"

0

Differentiating with respect to x f (φ(x, t), t)

∂φ = f ∗ (x) ∂x

Differentiating once again with respect to t fξ (φ(x, t), t)

∂φ ∂φ ∂φ ∂2φ + ft (φ(x, t), t) + f (φ(x, t), t)) =0 ∂t ∂x ∂x ∂t∂x

We define a velocity v(φ, t) =

∂φ ∂t

so that ∂2φ ∂φ = vξ (φ, t) ∂t∂x ∂x

which gives the following continuity equation ft + (vf )ξ = 0 One can also check the converse i.e. start with the continuity equation and define a rearrangement φ(x, t) between f (x, t) and f ∗ by φt = v(φ, t). By the result of Benamou-Brenier [5] ! τ! ∗∗ ∗ 2 d(f , f ) = τ min v 2 (x, t)f (x) dx v

0



where ft + (vf )x = 0 15

in Ω = (0, 1) and f (x, 0) = f ∗ (x),

f (x, τ ) = f ∗∗ (x)

To see this we use a map φ(x, t) that rearranges f ∗ into f (x, t) !

1

!

v(ξ, t)2 f (ξ, t) dξ =

0

so τ

! τ! 0

by

1

v(φ(x, t), t)2 f ∗ (x) dx

0

1

v(ξ, t) f (ξ, t) dξdt = τ 2

0

! τ! 0

1

φt (x, t)2 f ∗ (x) dxdt

0

v(φ(x, t), t) = φt (x, t) Since f (x, 0) = f ∗ (x) then φ(x, 0) = id hence ! τ |φt (x, t)| dt |x − φ(x, τ )| ≤ 0

≤ τ

1/2

"!

τ

0

φt (x, t) dt 2

#1/2

by Schwarzs inequality This yields: ∗

∗∗ 2

d(f , f )

!

1

|x − φ(x, t)|2 f ∗ (x) dx ! τ! 1 φt (x, t)2 f ∗ (x) dxdt ≤ τ 0 0 ! τ! 1 v(ξ, t)2 f (ξ, t) dξdt = τ ≤

0

0

hence ∗

∗∗ 2

d(f , f ) ≤ inf τ v

0

! τ! 0

1

v(ξ, t)2 f (ξ, t) dξdt

0

Choosing φ(x, t) = x + t(φ∗ (x) − x)/τ , where φ∗ is the optimal transfer plan between f ∗ and f ∗∗ we get the equality above. This gives the result of Benamou and Brenier. 4.1.1

Dissipation and the Kantorovich-Wasserstein distance

Recall that the motion of a mass-spring-dashpot system without forcing is given by mξ "" + γξ " + κξ = 0,

ξ(0) = x, ξ " (0) = ν

The conservation of energy (the First Law of Thermodynamics) gives ! τ κ 1 " 2 m|ξ (τ )| + γ |ξ " (t)|2 dt + |ξ(τ )|2 = const. 2 2 0 16

We consider an overdamped system i.e γ / mκ. Assume that the assembly of springs is initially distributed with density f ∗ (x). Define φ(x, t) = ξ(t). Then the energy dissipated in the system is ! ! τ

δ=γ

0

1

φt (x, t)2 f ∗ (x) dxdt

0

We also define the transported density f (x, t) by ! 1 ! 1 ζ(ξ)f (ξ, t) = ζ(φ(x, t))f ∗ (x) dx 0

0

and denote the terminal distribution f (x, τ ) = f ∗∗ (x). Set v(x, t) = φt (x, t). We get that f (x, t) satisfies the continuity equation ft + (vf )x = 0 and ! τ! 1 ! τ! 1 2 |v(x, t)| f (x, t) dxdt ≥ γ inf |v(x, t)|2 f (x, t) dxdt δ = γ 0

=

v

0

γ d(f ∗∗ , f ∗ )2 τ

0

0

(4)

Similarly as in the verification of the Benamou-Brenier result we can show that the inf is actually attained for some choice of the initial velocity field ν = ν(x) for the assembly of springs. Namely, the mass-spring-dashpot system has an explicit solution that can be written in the form φ(x, t, ν) = a(t)x + b(t)ν Let φ∗ be the optimal transfer plan between f ∗ and f ∗∗ . Set ν(x) =

φ∗ (x) − a(τ )x b(τ )

With this choice of ν, velocity v = φt yields an equality in (1). Summarizing: for an assembly of springs with initial distribution f ∗ and terminal distribution f ∗∗ the KantorovichWasserstein distance is a measure of the energy dissipated in the system due to heat in the sense ! τ! 1 γ γ inf φt (x, t, ν)2 f (x, t) dxdt = d(f ∗∗ , f ∗ )2 ν τ 0 0 where inf is taken over all initial velocity fields ν. 4.1.2

Dissipation principle

In this section we will summarize the results obtained in [6]. Given potential ψ consider the following energy functional defined for probability distributions f ! 1

E[f ] =

[σ 2 f log f + ψf ]

0

%1 log f represents the entropy of the system and 0 ψf is the potential Above, term 0 energy. Entropy measures the contribution of the diffusive motion to the total energy and %1

σ2f

17

σ 2 is the diffusion coefficient. Assume for brevity that γ = 1/2. Suppose further that the system (of springs) starts from a distribution f ∗ and relaxes to a distribution f ∗∗ during time t = τ . Dissipation principle we postulate requires: Energy dissipated due to heat + entropy + potential energy after relaxation is less then the Entropy + potential energy initially. In other words ! 1 1 d(f, f ∗ ) + [σ 2 f log f + ψf ] 2τ 0 ! 1 ≤ [σ 2 f ∗ log f ∗ + ψf ∗ ] 0

To achieve that we can minimize the expression on the left. Namely, given a probability density f ∗ we seek a probability density f such that ! 1 1 ∗ 2 d(f, f ) + [σ 2 f log f + ψf ] = min 2τ 0 Since f = f ∗ is an admissible test function and d(f ∗ , f ∗ ) = 0 the requirement that the dissipation decreases when the system relaxes is automatically satisfied. 4.1.3

Dissipation principle and Fokker-Planck equation

Now we will show that the energy consideration we have presented above lead eventually, after a suitable limiting procedure to the same model PDE model as (3). To this end we consider the following implicit scheme: given τ > 0, and a probability distribution u(0) determine u(k) , k = 1, such that ( ) 1 1 d(u(k−1) , u(k) )2 + E[u(k) ] = min d(u(k−1) , u)2 + E[u] u 2τ 2τ where E[u] =

!

1

[σ 2 u log u + ψu]

0

Let

u(τ ) (x, t)

be defined by

u(τ ) (x, t) = u(k) (x) for t ∈ [kτ, (k + 1)τ ), x ∈ (0, 1). The result of Jordan, Otto, Kinderlehrer [19] states that: (1) there exists a unique solution to this scheme; (2) as τ approaches 0, u(τ ) converges strongly in L1 ((0, t) × (0, 1)) to the solution of σ 2 uxx + (ψx u)x = ut with the natural boundary conditions at x = 0, 1. We refer the reader to [28, 29, 23, ?, ?] and their references for further information on the connection between the gradient flows with respect to the Wasserstein distance and parabolic PDE’s. 18

4.2

Cooperative motion of molecular motors

Endow and Higuchi studied motility assays of mutant motors that lack directionality [10]. They observed that the filaments keep the given velocity for a while and then suddenly move in the opposite direction with similar velocity. Badoual, Julicher and Prost [4] studied this phenomenon theoretically. Motility assay is essentially a system of several rigidly coupled motors and they modelled it with a system of springs whose one end is fixed and the other moves freely along a saw tooth potential. They studied both directional (asymmetric potential) and non-directional (symmetric potential) motors. We concentrate on the directional case and consider the long time behavior of a motility assay. Monte Carlo simulations in [4] suggest that in the long time limit the springs cooperate to produce unidirectional motion. We will use even more simplified model in which the potential oscillates periodically. We let the motors move relative to the fixed filament represented by ψ. The initial distribution of motors is denoted by u(0) . This is a very idealized model which however suffices to study the following question: Can diffusion and asymmetry (of the potential) alone produce unidirectional transport? During the on phase we postulate that the system relaxes in such a way that our dissipation principle is satisfied ! 1 1 d(u, u(k−1) )2 + [σ 2 u log u + ψu] = min 2τ 0 During the off phase ψ = 0 and the dissipation principle now reads ! 1 1 (k−1) 2 d(u, u ) + σ 2 u log u = min 2τ 0

As it was described above this time stepping procedure leads to the flashing ratchet model with ψ flashing between ψ (Ton ) and 0 (Tdif f ) periodically with period T = Ton + Tdif f . More precisely we have, in the limit of small relaxation time τ , the following PDE: σ 2 uxx + (ψx (x, t)u)x = ut The significance of all this is that Fokker-Planck equation arises naturally both from stochastic considerations and mechanistic arguments based on the dissipation principle.

5

Long time behavior of the flashing ratchet

From now on we will consider mainly the flashing ratchet model as follows: σ 2 uxx + (ψx u)x = ut , ux = 0, u(x, 0) = u0 (x),

x ∈ (0, 1), t > 0 x = 0, 1 %1 0 u0 (x) = 1

We assume that ψ is a smooth potential, both in x and t and it oscillates in t with period T . The boundary conditions above are natural if ψx = 0, x = 0, 1. In this section we will describe some of the results obtained in [22] and [8]. 19

5.0.1

Some simple properties of the solution

Integrating the equation from 0 to 1 in x and using the boundary conditions we get: ! 1 ! 1 u(x, t) dx = u0 (x) dx = 1 0

0

Using the comparison function of the form v = minx u0 (x)e −M t with large M > 0 we see that u > 0 for all times provided that u0 > 0. Thus u(x, t) represents a probability density as expected. 5.0.2

The energy estimate and Poincar´ e inequality.

To understand the main idea of the considerations below we look at the following example uxx = ut ,

x ∈ (0, 1), t > 0

ux = 0,

%1

u(x, 0) = u0 (x),

0

x = 0, 1

u0 (x) dx = 1

Multiply the above equation by u and integrate by parts in x over (0, 1) ! 1 ! 1 d 1 2 − u2x dx = u dx 2 dt 0 0 Thanks to Poincar´e inequality applied to probability densities on (0, 1) ! 1 ! 1 2 ux dx ≥ C (u − 1)2 dx 0

which yields

1d 2 dt

or alternatively

!

1

0

d dt

0

(u − 1) dx ≤ −C 2

!

1 0

(u − 1)2 dx

( ) ! 1 2Ct 2 e (u − 1) dx ≤ 0 0

Integrating now over (0, t) gives ! 1 ! 1 2 −2Ct (u − 1) dx ≤ e (u0 − 1)2 dx 0

0

via Gronwall’s inequality. From this two conclusions can be made: (1) solution with initial data in L2 converges to 1 in L2 norm; (2) the rate of convergence is exponential in time. To obtain similar conclusions for the flashing ratchet model this approach has to be modified since in this case during the on phase we have to deal with the Fokker-Planck potential with a non-trivial potential ψ. Most of the deep mathematical results we will use below can be found in [37] and we refer the interested to this book and the references therein for more information. 20

5.0.3

Entropy and entropy production

To follow the above argument we will first state some inequalities that will replace Poincar´e inequality. These are deep results in analysis and their proofs, which are not easy, will be omitted. Let u, v be two functions in L2 . ! 1& ! 1 * & '*2 '2 *d u * u * * Σ2 [u|v] = − 1 v dx, I2 [u|v] = 2 * dx v * v dx v 0 0

Σ2 is called the relative entropy and I2 the relative entropy production. They satisfy the following inequality (weighted Poincar´e inequality): Σ2 [u|v] ≤ Cv I2 [u|v] We also define Σ1 , I1 (again called the entropy and entropy production) by Σ1 [u|v] = and

!

1 0

u log

&u' v

dx = 2

I1 [u|v] = 4

!

0

These quantities satisfy

!

0

1 +&

u '1/2 v

,2

log

& u '1/2 v

v dx

* & ' *2 1/2 * *d u * v dx * dx v *

1*

Σ1 [u|v] ≤ Cv I1 [u|v]

which is known as the logarithmic Sobolev inequality. It is clear that Σp , Ip can be defined appropriately for 1 < p < 2; we will not use these quantities here. Just like the Poincar´e inequality seems to be tailored for the heat equation the entropy-entropy production inequalities can be tailored to the Fokker-Planck equation with different potentials by choosing suitable weights. To see that we consider a time independent potential V = V (x) and the equation uxx + (Vx u)x = ut (5) Let u be a solution with initial data u0 of (5) with the natural boundary conditions. One can calculate then: ! & u ' d 1 d −V Σ1 [u|e ] = u log − V dx dt dt 0 e ! 1 - & . u ' ut log −V + 1 dx = e 0 ! 1 . d - −V & u ' . - & u ' log + 1 dx = e e−V x e −V 0 dx , ! 1 +& u '1/2 2 dx = −I1 [u|e −V ] = −4 −V e 0 x ≤ −CΣ1 [u|e −V ] 21

The logarithmic Sobolev inequality and Gronwall inequality give, similarly as for the heat equation, Σ1 [u|e −V ] ≤ e −Ct Σ1 [u0 |e −V ]

This implies exponential convergence of the solution to the Fokker-Planck equation to its Gibbs state (in the sense of the relative entropy). Now we will consider the flashing ratchet. We will mimic the same computation with the potential ψ = ψ(x, t), which is smooth and periodic in t with period T . For simplicity we will assume that σ = 1. Define uψ (x) = % 1 0

e −ψ(x) e −ψ(y) dy

Computing the time derivative of Σ1 [u|uψ ] in this case gives ! 1 u d Σ1 [u|uψ ] = −I1 [u|uψ ] − uψ,t dx dt 0 uψ ≤ −Cψ Σ1 [u|uψ ] + Kψ Gronwall’s inequality in turn yields Σ1 [u(T )|uψ ] ≤ Σ1 [u0 |uψ ]e −Cψ T + Kψ (1 − e −Cψ T )/Cψ Consider now a set of initial data u0 such that Σ1 [u0 |e−Cψ T ] ≤ Kψ /Cψ . From the above it follows that Σ1 [u(T )|uψ ] ≤ Kψ /Cψ This means that the map which to given initial data u0 (x) assigns solution u(x, T ) of the Fokker-Planck equation with the periodic potential leaves invariant set of functions ) ( ! 1 u = 1, Σ1 [u|uψ ] ≤ Kψ /Cψ Y = u | u ≥ 0, 0

The map u0 1→ u(T ) is called the time map and. From the above above considerations we see that the time map applies the set Y into itself. This is the basic set up for the application of the following abstract result of functional analysis called Shauders theorem: (1) if set Y is convex (2) and if it is also precompact then there exists a fixed point of the time map. A fixed point of time map is a periodic solution the the Fokker-Planck equation and thus we conclude that there exists a periodic solution for the flashing ratchet. We will from now on denote it by U = U (x, t). Below we show that U is the unique periodic solution and that all other solutions tend to U as time approaches ∞.

22

5.0.4

The rate of convergence to the periodic state

We consider now the relative entropy of two solutions to the flashing ratchet, say u1 and u2 . We will compute the entropy production of Σ1 [u1 |u2 ]. It turns out that calculations are very similar for Σ2 [u1 |u2 ] so we will omit them. For Σ1 [u1 |u2 ] we get: d Σ1 [u1 |u2 ] = −I1 [u1 |u2 ] dt Logarithmic Sobolev inequality (or weighted Poincare inequality) applied to Σ1 [u|U ] and I1 [u|U ] (or Σ2 [u|U ] and I2 [u|U ]) gives d Σ1 [u|U ] ≤ −CΣ1 [u|U ] dt hence This means:

Σ1 [u(t)|U (t)] ≤ e −Ct Σ1 [u0 |U ]

(1) any solution of the flashing ratchet approaches a periodic state in the sense of the relative entropy (2) periodic state is unique (if there are two periodic states U1 , U2 then from the above they have to get closer and closer one another) (3) the rate of convergence is exponential Since Σ2 [u|U ] =

!

0

1&

'2 u − 1 U dx ≥ C U

!

0

1

(u − U )2 dx

therefore we also have the exponential convergence to U in the L2 sense. In fact, the relative entropy Σ1 [u|U ] can be related to the L1 distance between u and U by the Csiszar-Kullback inequality, namely: +! 1 ,2 |u − U | dx Σ1 [u|U ] ≥ C 0

From which it follows that any solution to the flashing ratchet converges to the unique periodic state at an exponential rate in L1 sense. By another deep result called Talagrand inequality (a suitable version of it was proven by Bobkov and Goetze) one can relate Σ1 [u|U ] with the Wasserstein distance Σ1 [u|U ] ≥ Cd(u, U )2 , which implies exponential rate convergence in the sense of Wasserstein distance, namely we have shown / d(u, U ) ≤ Ce −Ct Σ1 [u0 |U ]

In summary, long time behavior of the flashing ratchet is determined by the properties of the periodic state U since any solution converges to it at an exponential rate in any reasonable norm. 23

5.0.5

Numerical simulations for the periodic state

Numerical simulations performed over sufficiently large time give a good approximation of the periodic state for the flashing ratchet (see Figure 1). From these simulations it can be seen that the efficiency of the flashing ratchet is 90% in the sense that mass accumulated the left of 1/2 =

!

1/2

U (x, t) dx > 0.9,

0

t→∞

% 1/2 Following the behavior of 0 U (x, t) dx over the time period T an interesting observation can be made: during the on phase there is a net flux of particles to the right whereas during the diffusion phase the is a net flux to the left i.e. we ”gain” during the diffusion and ”loose” during the transport.

6

The Mechanism of Transport

The purpose of this section is to provide further details about the behavior of the periodic state. To do this we will find a suitable approximation of this state in the singular limit, that is we will assume that the diffusion constant σ ≈ 0. At the end our procedure will yield the conditions that the potential ψ should satisfy for the flashing ratchet to produce transport.

6.1

Markov chain approximation

Assume that the N -well potential ψ is periodic with period 1/N and that between its maxima has exactly one, asymmetrically located minimum. A good model to keep in mind is a saw tooth potential with ”smooth” corners. Let x0 = 0, x1 = 1/N, . . . xi = i/N, . . . , xN = 1 be the location of the maxima and a1 , . . . , aN , xi−1 < ai < xi of the minima of ψ. Let gσ (x, t; a) be the fundamental solution of the heat equation with the Neumann boundary conditions i.e. gσ solves gt = σ 2 gxx ,

0 < x < 1, t > 0

g(x, 0) = δa ,

0 0 we can chose parameters σ, Ton , Tdif f such that $ d(U (·, Tdif f ), u∗i δai ) ≤ K0 ε 3U (·, T ) −

$

i

u∗i gσ (·, Tdif f ; ai )32L2

≤ K0 e−πσ

i

2T dif f

ε

where d(·, ·) denotes the Wasserstein distance between the probability measure. 6.1.3

Transport for the Markov chain.

We have just seen that it is possible to adjust the parameters To n, Td if f and σ (small) so that the flashing ratchet is ”tuned” to the Markov chain in the sense that the distribution of the mass for the periodic state is given, with good accuracy, by the stationary vector of the approximating Markov chain u∗ . Thus to show that the flashing ratchet yields unidirectional transport we should try to show the same property for its discrete counterpart. In turn, to verify transport for the Markov chain suffices to show that under suitable conditions we have for instance u∗i > u∗i+1 (transport to the left). For simplification we will consider the two state Markov chain that corresponds to the 2 well potential with wells at a and 1/2 + a, 0 < a < 1/4. Thus we have the transition matrix of the chain in the form: # " 1 − P1 P1 P = 1 − P2 P2 where P1 =

!

1/2

gσ (x, Tdif f ; a) dx,

P2 =

0

!

1

gσ (x, Tdif f ; 1/2 + a) dx

1/2

The stationary vector of P can be explicitly computed u∗1 =

1 − P2 , 2 − (P1 + P2 )

u∗2 =

1 − P1 2 − (P1 + P2 )

If 1 − P2 > 1 − P1 then u∗1 > u∗2 implying transport to the left. With the potential with wells at 1/8 and 5/8 *see Figure 2) we get 1 − P2 ≈ 0.1615 and 1 − P1 ≈ 0.0015 and u∗1 ≈ 0.99 > u∗2 ≈ 0.009. Observe that in general u∗1 (1 − P1 ) = u∗1 P12 = u∗2 (1 − P2 ) = u∗2 P21 and so P12 < P21 implies u∗1 > u∗2 . In general unidirectional transport is difficult to verify even for the Markov chain with more than 2 states since the stationary vector is difficult to compute. However it is interesting to observe that unidirectional transport is implied by a well know property of the Markov chain called detailed balance and the asymmetry of the potential. We say that detailed balance holds for the Markov chain given by the matrix P if u∗i Pij = u∗j Pji 27

4

3

y 2

1

0

0.2

0.4

0.6

0.8

1

x

Figure 2: A comparison between two Gaussians, one centered at 1/8 and the other at 5/8. (this property is trivial for the 2 state Markov chain). Detailed balance is still difficult to check however we believe (on the basis of numerical simulations) that it is true in general. It is not too hard to check that Pij < Pji

if

i 0 in each state. Time evolution of the whole system is then governed by the following system of coupled Fokker-Planck equations: ρ1,t = (σρ1,x + ψ1,x ρ1 )x − ν1 ρ1 + ν2 ρ2 , ρ2,t = (σρ2,x + ψ2,x ρ2 )x + ν1 ρ1 − ν2 ρ2 ,

σρi,x + ψi,x ρi = 0, ρi (x, 0) =

x = 0, 1, t > 0, i = 1, 2

ρ0i (x)

> 0,

(x, t) ∈ (0, 1) × (0, ∞) (x, t) ∈ (0, 1) × (0, ∞)

x ∈ (0, 1)

(6) (7) (8) (9)

Just like the flashing ratchet, the two state ratchet model can be derived on the basis of elementary considerations involving the probability flux of each species. Namely, denoting Ji = σρi,x + ψi,x ρi we see that (6)–(7) are equivalent to ρi,t = Ji,x − νi ρi + νi# ρi#

while (8) is the no-flux boundary condition. The similarity between the flashing ratchet and two state ratchet goes further since one can derive the two state ratchet as the gradient flow of a suitable functional in the topology of the Wasserstein distance. To explain this we let # " −ν1 ν1 , P (τ ) = id + τ ν ν= ν2 −ν2

where τ is a relaxation time. Notice that if ρ∗ = (ρ∗1 , ρ∗2 ) is an initial distribution of the two states then ρ = ρ∗ P (τ ) would be a distribution of the two states after time τ if the only effect present were the conversion between them. To take into account the effect of diffusion and transport we define $! 1 (σρi log ρi + ψi ρi ) dx. E[ρ] = i=1,2 0

The dissipation principle analogous to the one described above for the flashing ratchet takes (0) (0) (k) (k) form: given an initial distribution ρ(0) = (ρ1 , ρ2 ) determine a sequence ρ(k) = (ρ1 , ρ2 ), k = 1, . . . such that $ 1 (k) d(ρi , (ρ(k−1) P (τ ) )i )2 + E[ρ(k) ] = min . 2τ i=1,2

This implicit scheme yields the solution ρ to (6)–(9), namely if we define ρ(τ ) (x, t) = ρ(k) (x),

then ρ(τ ) → ρ on each finite time interval [6]. 29

kτ ≤ t < (k + 1)τ,

7.2

Long time behavior of the two state ratchet

Since the two state ratchet and the flashing ratchet can be derived within the framework of the dissipation principle one would hope that the similarity between them goes further and that we can analyze the long time behavior of (6)–(9) using the entropy-entropy production method. However the definition of the entropy and entropy functional for the system is not clear. Consequently in [7] another approach is taken to study the limiting state and the presence of transport for (6)–(9). We will now describe these results. The stationary state for the two state ratchet satisfies the following system (σρ1,x + ψ1,x ρ1 )x − ν1 ρ1 + ν2 ρ2 = 0, (σρ2,x + ψ2,x ρ2 )x + ν1 ρ1 − ν2 ρ2 = 0, σρi,x + ψi,x ρi = 0,

x ∈ (0, 1)

(10)

x = 0, 1, i = 1, 2

(12)

x ∈ (0, 1)

It is shown in [7] that (1) There exists a unique solution ρs for (10)–(12) such that ρsi > 0, i = 1, 2 and ρs2 ) dx = 1.

(11)

%1 0

(ρs1 +

(2) Let ρ(x, t) be a solution to (6)–(9). There exists constants M0 > 0 and γ > 0 such that sup |ρ(x, t) − ρs (x)| ≤ M0 e −γt . x∈(0,1)

Once we know that any solution converges, in fact exponentially fast in time, to the unique stationary state ρs to prove transport one only has to analyze the limiting state. We refer the reader to in [7] where the proof have been done in detail. Here we will only summarize their main result in a somewhat simplified version. Theorem 1 Assume that ψi , i = 1, 2 are saw-tooth, periodic and asymmetric, smooth potentials such that (a) ψ1" > 0 on each interval where ψ2" ≤ 0 and ψ2" > 0 on each interval where ψ1" ≤ 0. (b) At each minimum ai of ψi we have νi (ai ) > 0. Then there are constants K0 > 0 and c > 0 such that ρs1 (x) + ρs2 (x) ≤ Ke −c(x−ξ0 )/σ ,

x > ξ0 ,

where ξ0 denotes the first maximum of ψ2 .

To explain this results we observe that Assumption (a) means that the minima of the two potentials are asymmetrically located with respect to one another. An example (schematic) of this situation is presented in Figure 3. Assumption (b) means that the rates of chemical conversions should be non-zero at least near the wells of the potentials. Under these hypothesis the density ρs1 (x)+ρs2 (x) is exponentially decaying away from the first maximum of the potential ψ2 . This means, that for small σ most of the mass will be concentrated to the left of ξ0 . Of course it is easy to envision the situation when the transport of particles occurs from left to right instead. For that Assumption (a) should be modified to: ψ1" < 0 on each interval where ψ2" ≤ 0 and ψ2" < 0 on each interval where ψ1" ≤ 0. 30

2

1.5

1

0.5

0

0.2

0.4

0.6

0.8

1

x

Figure 3: Periodic, asymmetric potentials ψ1 (top) and ψ2 (bottom) of the type described in the Theorem.

8

Parrondo Game

Speaking of the molecular motor at the beginning, we used the analogy with a gambler who plays two different games in alternation, each of them corresponding to a different phase of the one-step cycle. We will now exploit this analogy further and consider a game that, on some abstract level, describes this situation. We will see that such a game has a surprising property, called Parrondo's paradox [30, 14]. The results described below can be found in [15].

8.0.1 The rules

Consider two distinct coin games. Game A is played with a fair coin, with probability 1/2 of winning or losing. Game B is played with two coins. One of them has probability of winning p < 1/2 and the other p′ > 1/2. When the capital we currently have at our disposal is a number divisible by 4, the coin whose probability of winning is p is tossed; otherwise we toss the other coin. The probabilities p (unfavorable) and p′ (favorable) are chosen so that the expectation of game B is zero (or perhaps even less than zero). Game A is obviously a fair game. The Parrondo game consists of playing games A and B in alternation, for example ABAB... or AABBAABBAA.... Playing game A alone does not lead to an accumulation of capital, and the same is true for game B. It turns out, however, that whatever way we alternate the two games we obtain a winning strategy, i.e. we accumulate capital. This is what we call Parrondo's paradox.
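A direct Monte Carlo rendering of these rules is sketched below (our own illustration). The concrete values of p and p′ anticipate the fairness discussion of Section 8.0.3: p′ is chosen so that game B, played on its own, has no long-run drift, and the initial capital is drawn uniformly over the residues mod 4, which is the initial distribution used later in this section.

    import numpy as np

    rng = np.random.default_rng(0)
    p = 0.2                                            # unfavorable coin
    pp = 1.0 / (1.0 + (p / (1.0 - p)) ** (1.0 / 3.0))  # ~0.614: removes the long-run drift of game B

    def average_gain(pattern, n_rounds=400, n_runs=20000):
        """Mean gain after n_rounds, repeating the given pattern of games, e.g. 'AB'."""
        capital = rng.integers(0, 4, size=n_runs)      # initial capital, uniform over residues mod 4
        gain = np.zeros(n_runs)
        for i in range(n_rounds):
            if pattern[i % len(pattern)] == "A":
                win_prob = np.full(n_runs, 0.5)        # game A: fair coin
            else:                                      # game B: the coin depends on the capital mod 4
                win_prob = np.where(capital % 4 == 0, p, pp)
            step = np.where(rng.random(n_runs) < win_prob, 1, -1)
            capital += step
            gain += step
        return gain.mean()

    print("A alone :", average_gain("A"))   # close to 0
    print("B alone :", average_gain("B"))   # close to 0
    print("ABAB... :", average_gain("AB"))  # clearly positive: the paradox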


8.0.2 Relation with the flashing ratchet

The Parrondo game can be realized by a pair of difference equations. To see this, let ρ^i_k be the probability of having capital k after the ith game. For game A we have
\[
\rho^{i+1}_k - \rho^i_k = \tfrac12\bigl(\rho^i_{k-1} - 2\rho^i_k + \rho^i_{k+1}\bigr),
\]
and for game B
\[
\rho^{i+1}_k - \rho^i_k = \tfrac12\bigl(\rho^i_{k-1} - 2\rho^i_k + \rho^i_{k+1}\bigr) + \tfrac12\bigl(b_{k+1}\rho^i_{k+1} - b_{k-1}\rho^i_{k-1}\bigr),
\]
where
\[
b_k = \begin{cases} -(2p-1), & k \equiv 0 \pmod 4, \\ -(2p'-1), & k \equiv 1,2,3 \pmod 4. \end{cases}
\]
We recognize this as a finite difference scheme for the flashing ratchet with time step τ = 1, spatial step h = 1 and σ = 1. The potential ψ is piecewise linear with slopes (2p − 1) and (2p′ − 1). Say that p and p′ are chosen such that
\[
E_B = \tfrac14(2p-1) + \tfrac34(2p'-1) = 0. \tag{13}
\]
A flashing ratchet can be found with a potential ψ whose slopes satisfy (13). Numerical simulations of this ratchet do not, however, lead to an accumulation of mass. Moreover, playing game B alone was found to be losing rather than fair. The problem lies in the definition of fairness given by (13).

8.0.3 Fairness

To generalize, assume that game B is played with four coins with probabilities of success p_k, k = 1, ..., 4. We play coin k if the present capital c satisfies c ≡ k − 1 (mod 4). Denote by
\[
(P_B)_{ij} = \Pr\bigl(\text{new capital} \equiv j-1 \ (\mathrm{mod}\ 4) \mid \text{current capital} \equiv i-1 \ (\mathrm{mod}\ 4)\bigr), \qquad 1 \le i,j \le 4,
\]
so that
\[
P_B = \begin{pmatrix} 0 & p_1 & 0 & 1-p_1 \\ 1-p_2 & 0 & p_2 & 0 \\ 0 & 1-p_3 & 0 & p_3 \\ p_4 & 0 & 1-p_4 & 0 \end{pmatrix}.
\]
The matrix P_B is the transition matrix of game B on the capital classes mod 4. Game A has the transition matrix (which depends only on the parity of the capital)
\[
P_A = \begin{pmatrix} 0 & 1/2 & 0 & 1/2 \\ 1/2 & 0 & 1/2 & 0 \\ 0 & 1/2 & 0 & 1/2 \\ 1/2 & 0 & 1/2 & 0 \end{pmatrix}.
\]
We will call game B fair (asymptotically) if its expectation with respect to the stationary distribution ρs of P_B is zero,
\[
E_{\mathrm{asymp}}\, B = \sum_k (2p_k - 1)\,\rho^s_k = 0.
\]
It turns out that this condition is equivalent to
\[
\rho^s_i (P_B)_{ij} = \rho^s_j (P_B)_{ji},
\]
which is the detailed balance condition for game B (which is itself a Markov chain). Another way of stating it is
\[
\prod_k \frac{p_k}{1-p_k} = 1.
\]
The naïve way of saying that B is a fair game is
\[
E_{\mathrm{unif}}\, B = \frac14 \sum_k (2p_k - 1) = 0;
\]
we call this the expectation with respect to the uniform distribution. Clearly E_asymp B = 0 and E_unif B = 0 are different conditions.
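The difference between the two notions is easy to check numerically; the following short script (our own illustration) computes the stationary distribution of P_B as its left eigenvector for eigenvalue 1 and compares the two expectations for coins chosen to satisfy the product condition.

    import numpy as np

    def PB(p):
        p1, p2, p3, p4 = p
        return np.array([[0.0, p1, 0.0, 1 - p1],
                         [1 - p2, 0.0, p2, 0.0],
                         [0.0, 1 - p3, 0.0, p3],
                         [p4, 0.0, 1 - p4, 0.0]])

    def stationary(P):
        # left eigenvector of P for the eigenvalue 1, normalized to a probability vector
        w, v = np.linalg.eig(P.T)
        s = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        return s / s.sum()

    # coins chosen so that prod_k p_k/(1 - p_k) = 1 (asymptotic fairness)
    p1 = 0.2
    pp = 1.0 / (1.0 + (p1 / (1.0 - p1)) ** (1.0 / 3.0))
    coins = [p1, pp, pp, pp]

    rs = stationary(PB(coins))
    Q = rs[:, None] * PB(coins)                       # Q_ij = rho^s_i (P_B)_ij
    print("E_asymp B :", sum((2 * pk - 1) * rk for pk, rk in zip(coins, rs)))  # ~ 0
    print("E_unif  B :", sum(2 * pk - 1 for pk in coins) / 4)                  # > 0
    print("detailed balance residual:", np.max(np.abs(Q - Q.T)))               # ~ 0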

8.0.4 The game

Playing game B means running a Markov chain with transition matrix P_B, and playing game A means running a Markov chain with transition matrix P_A. If we play the two games in succession, alternating between them, then the transition matrix P(k) of the kth game is given, for instance, by
\[
P(k) = P_A, \ k \ \text{odd}, \qquad P(k) = P_B, \ k \ \text{even}.
\]
If we start with the capital distributed according to ρ^0 = (ρ^0_1, ρ^0_2, ρ^0_3, ρ^0_4), then after k trials the distribution of the capital (mod 4) is
\[
\rho^k = \rho^0 P(1) \cdots P(k),
\]
and the expected gain at the kth trial, when game B is played at that trial, is
\[
E^{(k)} = \sum_i (2p_i - 1)\,\rho^{k-1}_i
\]
(when game A is played the expected gain is zero regardless of the distribution). It might seem that the matrices P_A and P_B have nothing to do with one another, but in fact
\[
P_B P_A = P_A^2 = \begin{pmatrix} 1/2 & 0 & 1/2 & 0 \\ 0 & 1/2 & 0 & 1/2 \\ 1/2 & 0 & 1/2 & 0 \\ 0 & 1/2 & 0 & 1/2 \end{pmatrix}.
\]

This means that the sequence of distributions ρ^k is periodic. Starting with ρ^0 = (1/4, 1/4, 1/4, 1/4) and playing the sequence BABA..., we get
\[
\begin{aligned}
\rho^1 &= \rho^0 P_B, & E^{(1)} &= E_B\rho^0 = \tfrac14\sum_i (2p_i-1), \\
\rho^2 &= \rho^0 P_B P_A = \rho^0 P_A^2 = \rho^0, & E^{(2)} &= E_A\rho^0 = 0, \\
\rho^3 &= \rho^0 P_B, & E^{(3)} &= E_B\rho^0 = \tfrac14\sum_i (2p_i-1), \\
\rho^4 &= \rho^0 P_B P_A = \rho^0 P_A^2 = \rho^0, & E^{(4)} &= E_A\rho^0 = 0,
\end{aligned}
\]
and so on.

Choosing game B to be asymptotically fair and such that its expectation with respect to the uniform distribution is positive therefore results in a winning strategy for the alternating sequence of (fair) games. Explicitly, we can choose p and p′ such that
\[
\frac{p}{1-p}\left(\frac{p'}{1-p'}\right)^3 = 1,
\]
so that game B is fair, but
\[
E_B\rho^0 = \frac14(2p-1) + \frac34(2p'-1) > 0.
\]
It suffices to take, for example, p = 0.2 and p′ ≈ 0.614. We observe that Parrondo's game is governed by a different mechanism than the flashing ratchet: playing game A resets the distribution back to the uniform one. Thus, in thinking about Parrondo's paradox, we should use the analogy of a screw whose threads are stripped rather than that of a ratchet mechanism. This does not mean that a different Parrondo game sharing the same mechanism as the flashing ratchet cannot be devised; in fact it is expected that games played with 5 or more coins are close analogs of the flashing ratchet.
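The computation above is easy to reproduce in a few lines (again our own illustration, not taken from [15]): the identity P_B P_A = P_A², the periodicity of ρ^k and the positive expected gain on every B trial all come out directly.

    import numpy as np

    p1 = 0.2
    pp = 1.0 / (1.0 + (p1 / (1.0 - p1)) ** (1.0 / 3.0))   # ~0.614, asymptotically fair partner of p1
    p = np.array([p1, pp, pp, pp])

    PB = np.array([[0.0, p[0], 0.0, 1 - p[0]],
                   [1 - p[1], 0.0, p[1], 0.0],
                   [0.0, 1 - p[2], 0.0, p[2]],
                   [p[3], 0.0, 1 - p[3], 0.0]])
    PA = np.array([[0.0, 0.5, 0.0, 0.5],
                   [0.5, 0.0, 0.5, 0.0],
                   [0.0, 0.5, 0.0, 0.5],
                   [0.5, 0.0, 0.5, 0.0]])

    print("PB PA == PA^2 ?", np.allclose(PB @ PA, PA @ PA))      # True

    rho = np.full(4, 0.25)        # uniform initial distribution of the capital mod 4
    total = 0.0
    for k in range(1, 11):        # play the sequence B A B A ... as in the text
        if k % 2 == 1:            # B is played: expected gain evaluated before the trial
            total += (2 * p - 1) @ rho
            rho = rho @ PB
        else:                     # A is played: its expected gain is zero
            rho = rho @ PA
        print(f"after trial {k}: rho = {np.round(rho, 3)}, cumulative expected gain = {total:.3f}")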

References

[1] A. Ajdari and J. Prost, Mouvement induit par un potentiel périodique de basse symétrie: diélectrophorèse pulsée, C. R. Acad. Sci. Paris t. 315, Série II (1992), 1653.

[2] R.D. Astumian and M. Bier, Fluctuation driven ratchets: molecular motors, Phys. Rev. Lett. 72 (1994), 1766.

[3] R.D. Astumian, Thermodynamics and kinetics of a Brownian motor, Science 276 (1997), 917–922.

[4] F. Badoual, F. Jülicher and J. Prost, Bidirectional cooperative motion of molecular motors, PNAS 99 (2001), 6696–6701.

[5] J.-D. Benamou and Y. Brenier, A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem, Numer. Math. 84 (2000), 375–393.

[6] M. Chipot, D. Kinderlehrer and M. Kowalczyk, A variational principle for molecular motors, preprint.

[7] M. Chipot, S. Hastings and D. Kinderlehrer, Transport in a molecular motor system, ESAIM: M2AN 38 no. 6 (2004), 1011–1034.

[8] J. Dolbeault, D. Kinderlehrer and M. Kowalczyk, Remarks about the flashing ratchet, in PDE and Inverse Problems, Contemp. Math. 362, AMS, Providence, RI (2004), 167–175.

[9] W. E and P. Palffy-Muhoray, Orientational ratchets and angular momentum balance in the Janossy effect, Mol. Cryst. Liq. Cryst. 320 (1998), 193–206.

[10] S.A. Endow and H. Higuchi, A mutant of the motor protein kinesin that moves in both directions on microtubules, Nature 406 (2000), 913–916.

[11] T.C. Elston and C.R. Doering, Numerical and analytical studies of nonequilibrium fluctuation-induced transport process, J. Stat. Phys. 83 nos. 3/4 (1996), 359–383.

[12] R.P. Feynman, R.B. Leighton and M. Sands, The Feynman Lectures on Physics, Addison-Wesley, Reading, MA, vol. 1, chpt. 46 (1963).

[13] C.W. Gardiner, Handbook of Stochastic Methods, Springer-Verlag, 3rd ed. (2004).

[14] G. Harmer and D. Abbott, Losing strategies can win by Parrondo's paradox, Nature 402 (1999), 864.

[15] D. Heath, D. Kinderlehrer and M. Kowalczyk, Discrete and continuous ratchets: from coin toss to molecular motor, Discrete and Continuous Dynamical Systems Ser. B 2 no. 2 (2002), 153–167.

[16] J.A. Hernández, E.R. Kay and D.A. Leigh, A reversible synthetic rotary molecular motor, Science 306 (2004), 1532–1537.

[17] J. Howard, Mechanics of Motor Proteins and the Cytoskeleton, Sinauer Associates, Inc., 2001.

[18] Y. Ishii, S. Esaki and T. Yanagida, Experimental studies of the myosin-actin motor, Appl. Phys. A 75 (2002), 325–330.

[19] R. Jordan, D. Kinderlehrer and F. Otto, The variational formulation of the Fokker-Planck equation, SIAM J. Math. Anal. 29 no. 1 (1998), 1–17.

[20] F. Jülicher and J. Prost, Cooperative molecular motors, Phys. Rev. Lett. 75 (1995), 2618.

[21] E. Fox Keller, Making Sense of Life: Explaining Biological Development With Models, Metaphors, and Machines, Harvard Univ. Press, Cambridge (2002).

[22] D. Kinderlehrer and M. Kowalczyk, Diffusion-mediated transport and the flashing ratchet, Arch. Rat. Mech. Anal. 161 (2002), 149–179.

[23] D. Kinderlehrer and N. Walkington, Approximation of parabolic equations based upon Wasserstein's variational principle, Math. Model. Numer. Anal. (M2AN) 33 no. 4 (1999), 837–852.

[24] T. Kosa, W. E and P. Palffy-Muhoray, Brownian motors in the photoalignment of liquid crystals, Int. J. Eng. Science 38 (2000), 1077–1084.


[25] H. Linke (Ed.), Ratchets and Brownian motors: Basics, Experiments and Applications, Appl. Phys. A 75 (special issue) (2002), 167.

[26] Y. Okada and N. Hirokawa, A processive single-headed motor: kinesin superfamily protein KIF1A, Science 283 (19 February 1999).

[27] Y. Okada and N. Hirokawa, Mechanism of the single headed processivity: diffusional anchoring between the K-loop of kinesin and the C terminus of tubulin, Proc. Nat. Acad. Sciences 97 no. 2 (2000), 640–645.

[28] F. Otto, Dynamics of labyrinthine pattern formation: a mean field theory, Arch. Rat. Mech. Anal. 141 (1998), 63–103.

[29] F. Otto, The geometry of dissipative evolution equations: the porous medium equation, Comm. PDE 26 (2001), 101–174.

[30] J.M.R. Parrondo, G. Harmer and D. Abbott, New paradoxical games based on Brownian motors, Phys. Rev. Lett. 85 no. 24 (2000), 5226–5229.

[31] J.M.R. Parrondo and B.J. de Cisneros, Energetics of Brownian motors: a review, Appl. Phys. A 75 (2002), 179–191.

[32] A. van Oudenaarden and S. Boxer, Brownian ratchets: molecular separation in lipid bilayers supported on patterned arrays, Science 285 (1999), 1046–1048.

[33] A. Parmeggiani, F. Jülicher, A. Ajdari and J. Prost, Energy transduction of isothermal ratchets: generic aspects and specific examples close and far from equilibrium, Phys. Rev. E 60 no. 2 (1999), 2127–2140.

[34] C.S. Peskin, G.B. Ermentrout and G.F. Oster, The correlation ratchet: a novel mechanism for generating directed motion by ATP hydrolysis, in Cell Mechanics and Cellular Engineering (V.C. Mow et al., eds.), Springer, New York, 1995.

[35] P. Reimann, Brownian motors: noisy transport far from equilibrium, Phys. Rep. 361 nos. 2–4 (2002), 57–265.

[36] R.D. Vale and R.A. Milligan, The way things move: looking under the hood of motor proteins, Science 288 (7 April 2000), 88–95.

[37] C. Villani, Topics in Optimal Transportation, Graduate Studies in Mathematics vol. 58, AMS, Providence, Rhode Island (2003).
