Electron transport in quantum waveguides


J. B. Wang and S. Midgley

School of Physics, The University of Western Australia 35 Stirling Highway, Crawley, WA 6009, AUSTRALIA Phone: (+61) 8 93803790, Fax: (+61) 8 93801014 email: [email protected]

Abstract

As electronic circuits become progressively smaller, down to the nanometre scale, device analysis based on classical or semi-classical transport theories no longer works, since the quantum wave nature of the electrons starts to play a dominant role. Contemporary advances in semiconductor fabrication technology have already allowed construction of nanostructured devices from 1 nm to 100 nm in size and confined in two, one and zero dimensions. This paper reviews recent work on electron transport and quantum interference in nano-electronic devices, focusing mainly on the theoretical and computational aspects. A general quantum waveguide theory is presented, and a wide range of computational methods for solving the corresponding Schrödinger equations are discussed in detail. This provides a basis for computer simulations of various quantum phenomena emerging in the nanometre domain.


Contents

I. Introduction
II. Theoretical model
III. Computational methods
   A. Time independent methods
      1. Mode matching
      2. Finite element
      3. Green's function
   B. Time dependent methods
      1. Finite difference
      2. Split operator and the Baker-Campbell-Hausdorff formula
      3. Time dependent Green's function
      4. Chebyshev propagation scheme
IV. Numerical issues
   A. Wave function splitting algorithm
   B. Numerical differentiation
   C. Numerical accuracy and stability
      1. Conservation of norm and energy
      2. Time reversal propagation
      3. Comparison with exact solution: free space propagation
      4. Comparison with exact solution: uniform magnetic field
V. Conclusions
Acknowledgments
References

I. INTRODUCTION

As electronic circuits get progressively smaller, approaching the nanometre scale, device analysis based on classical or semi-classical transport theories will eventually fail, since the quantum wave nature of the electrons starts to play a dominant role. Recent advances in semiconductor fabrication technology have already allowed construction of mesoscopic structures from 100 nm down to 1 nm in size and confined in two, one and zero dimensions. These nanostructures, typically made of silicon (Si), gallium arsenide (GaAs) or other semiconductor materials, will become the building blocks of next-generation electronics. Consequently, electron transport in quantum cavities, for which the term "quantum waveguide" is often used due to their similarity to acoustic, optical or electromagnetic waveguides, is receiving considerable attention worldwide.

Since the characteristic dimensions of nanometre-scale electronic devices are comparable to the wavelengths of electrons with energies from a few meV to a few eV, theoretical calculations of device properties require a full quantum mechanical treatment. For instance, the wavelength of an electron with an energy of 1 meV is about 40 nm, while an electron with an energy of 1 eV has a wavelength of approximately 1.2 nm. To put this size in perspective, consider a single atom of iron or silicon. The average radius of an iron atom is of the order of 0.14 nm, while that of silicon is of the order of 0.11 nm. A device 10 nm in size is therefore only about 100 atoms across. Even very simple nanostructures of this size, such as straight wires or square cavities, have very different electrical characteristics when compared to the corresponding classical structures.
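These wavelength estimates follow from the de Broglie relation λ = h/√(2mE) with the free-electron mass. A quick numerical check (a minimal sketch; the function name is ours, not from the text):

```python
import math

H = 6.62607015e-34      # Planck constant (J s)
ME = 9.1093837015e-31   # free-electron mass (kg)
EV = 1.602176634e-19    # 1 eV in joules

def de_broglie_wavelength_nm(energy_ev, mass=ME):
    """lambda = h / sqrt(2 m E), returned in nanometres."""
    e_joule = energy_ev * EV
    return H / math.sqrt(2.0 * mass * e_joule) * 1e9

print(de_broglie_wavelength_nm(1e-3))  # ~39 nm for a 1 meV electron
print(de_broglie_wavelength_nm(1.0))   # ~1.2 nm for a 1 eV electron
```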

Unavoidably, the next generation of ultrafast computers, with lower power consumption and more compact circuits, will need to take advantage of quantum mechanical phenomena. One possibility is for devices to operate by controlling the phase of a few electrons rather than the electron density, as in present-day devices. In this way, less energy is required and fast switching times can be achieved. Another possibility is to establish communications between quantum wires and quantum dot cells through the non-local nature of the electron waves, without the need for traditional wires to propagate information.

The basic operations of quantum wires and quantum transistors will play an increasingly important role in future semiconductor products. With the construction of such devices becoming possible within the last decade, many novel devices have already been developed, such as quantum wires [1–5], quantum resistors [6], resonant tunnelling diodes and band filters [7, 8], quantum transistors and stub tuners [9–11], quantum switches [12], and quantum sensors [13–16].

For example, it has been demonstrated that a single electron can be pumped around a circuit using a tunable double barrier potential [17–19]. A quantum transistor was effectively controlled by changing the side gate voltage to achieve on and off switching operations based on constructive and destructive quantum wave interference [9]. An application to quantum computing has also been suggested by Akis and Ferry [20], whereby a quantum waveguide array allows the computation of Fourier transforms using a parallel quantum approach. Bertoni et al. [21] have shown that a universal set of quantum logic gates can be realised using solid-state quantum bits based on coherent electron transport in quantum wires. Similarly, Harris et al. [22] and Gilbert et al. [23] examined the implementation of a quantum controlled-NOT gate using magnetically switched quantum waveguide qubits. They demonstrated that a quantum waveguide inverter gate, under the application of an appropriate magnetic field, shows promise for a semiconductor realisation of quantum computing. This is just the beginning of a new electronic era. These studies demonstrated fundamental quantum phenomena and raised the possibility of radically new electronic devices with fascinating physics.

A pressing challenge to theorists is to provide accurate predictions of quantum transport and interference in these nanostructures, including resonant tunnelling, low dimensionality effects, Aharonov-Bohm interference, the effects of magnetic and electric fields, and the coupling to classical systems. Detailed theoretical studies will provide the essential quantum mechanical basis for the design and analysis of quantum devices. They are also crucial for the eventual realisation of quantum computers, where quantum characteristics are utilised to provide a tremendous boost to computing power. In fact, the size of electronic components cannot be scaled down much further without a proper understanding of the quantum effects that emerge at the nanometre scale.

II. THEORETICAL MODEL

Nano-electronic devices are typically made up of many layers of semiconductor material, with metallic gates used to form the patterned structures. Depending on the semiconductor material and the layout of the metallic gates, devices with different properties can be constructed. The most common semiconductors in use are gallium arsenide (GaAs), aluminium gallium arsenide (AlGaAs) and, to a lesser extent, silicon (Si). Gallium arsenide and aluminium gallium arsenide are often used because of the high mobility that can be achieved. Silicon offers much higher purity with no doping atoms, resulting in more uniform lattices and less scattering, at the expense of mobility.

GaAs-AlGaAs heterojunctions, for example, result in a two-dimensional electron gas (2DEG) between the GaAs and AlGaAs layers. Since the Fermi energy of the AlGaAs is initially higher than that of the GaAs, electrons tend to spill from the AlGaAs into the GaAs at the interface. This deforms the band energies in the GaAs layer and causes a local increase in the electron density about the junction, as shown in Figures 1 and 2. These electrons are trapped in the triangular well. The large energy difference between the two levels E1 and E2 and the low thermal energy of the electrons result in an accumulation of electrons only at the first level E1. This means that motion in the z-direction is effectively prohibited and the electrons are confined to the x-y plane, giving rise to the name two-dimensional electron gas (2DEG).

To create various nanostructures, the conduction band can be depleted by applying a negative voltage to a metallic gate deposited on the surface of the AlGaAs. By controlling the bias on the gates, potentials of varying strengths can be constructed with controllable interactions and properties. Figure 3 shows a simple nanostructured device fabricated by Meirav, Kastner and Wind [24, 25], where the gray areas are metallic, the white area is insulating (AlGaAs) and the dark area is semiconducting (GaAs). When a negative voltage is applied to the two metal stripes on the top surface of the semiconductor, the accumulated electrons in the metal stripes form potential walls and barriers due to Coulomb interactions, which control the two-dimensional motion of the conduction electrons in the semiconductor.

Furthermore, many of the devices developed and studied operate at millikelvin temperatures. At such temperatures the mean free path of the electrons is considerably larger than the devices being studied. For example, at 0.1 K the propagating electrons normally have a mean free path exceeding 100 µm, which is considerably larger than the size of nanostructured devices [2]; electron transport in these devices can therefore be considered ballistic.

The dynamical properties of a quantum system are governed by the time dependent Schrödinger equation

$$i\hbar \frac{\partial \psi(r,t)}{\partial t} = H\psi(r,t) \,, \qquad (1)$$

where the system Hamiltonian is

$$H = \frac{1}{2m}\left(-i\hbar\nabla + eA(r)\right)^2 + V(r) \,, \qquad (2)$$

$r \equiv (x, y)$ represents the two dimensional spatial coordinates, $V$ is the electronic potential and $A$ is the magnetic vector potential, with the magnetic field $B = \nabla \times A$.

The above equation provides both spatial and phase information about the system at all times, and thus complete knowledge of all possible observables of the system under study. However, solving this equation for all electrons and particles in the semiconductor is not at all practical. Certain approximations need to be made that retain the physically most important aspects of the system. Guidance is taken from previous and current experimental and theoretical studies as to which effects can be neglected and which are important, with attempts made to understand their effects on the final solution.

In most nanostructured devices currently constructed, the carrier concentration typically ranges from 10^10/cm^2 to 10^12/cm^2, i.e. from 0.0001/nm^2 to 0.01/nm^2. Devices approximately 10 nm × 10 nm in size are often referred to as single electron devices since, on average, only one electron is within the device at any given instant [2, 26, 27]. Electron-electron interactions and correlations thus play a very minor role in determining the transport properties of such devices. Consequently, theoretical modelling of single electron transport through such devices closely matches the reality of these experiments. This approximation allows theoreticians to model the most important aspects of experiments without taking into account minute details, as alluded to by Rolf Landauer, who was quoted as saying: "Yes, in the vast phase space between heaven and hell there are undoubtedly some corners where Tomonaga–Luttinger liquids, Wigner crystals and Unicorns flourish. But you have to hunt for them!" [28]. The resultant wave function from the single-electron model has a probability amplitude that reflects the statistical behaviour of a large number of single electrons propagating through the device. By studying the transmission and reflection of a single electron wavefunction, experimental quantities such as the conductance and energy spectrum can be inferred, as detailed in the following sections.

As commented on in the introduction, the systems being studied are of the order of only a few hundred atoms in size, of the same order as the wavelength of an electron with a few eV of energy. It is quite conceivable that the potential felt by the transport electron would be very complicated and more lattice-like in its structure. However, it has been shown both theoretically and experimentally [29–31] that such lattice effects and other local effects can be approximated by an electron with a reduced or effective mass, which is material and temperature dependent. For homogeneous gallium arsenide at a few kelvin the effective mass $m^*$ is approximately $0.0667\, m_e$ [31, 32]. The Schrödinger equation then becomes

$$i\hbar \frac{\partial \psi(r,t)}{\partial t} = \left[ \frac{1}{2m^*}\left(-i\hbar\nabla + eA(r)\right)^2 + V(r) \right] \psi(r,t) \,, \qquad (3)$$

where $V$ represents the effective electronic potential and $A$ represents the magnetic vector potential.

A number of assumptions are made about the lattice: it is uniform; its interaction with the propagating electron is incorporated into the effective mass; and it is non-deforming due to screening (i.e. the presence of the conduction electron does not alter the shape of the lattice potential). Since the electrons and their conjugate 'exchange-correlation holes' in the Fermi gas are free to move, the conjugate exchange-correlation holes move toward the propagating electron. This leads to a surplus of positive charge near the propagating electron, thus screening the lattice potential from the propagating electron at some distance from the electron [33].

It is worth noting, though, that the effective mass is energy dependent. For example, for gallium arsenide,

$$m^*(E) = \left( 0.0665 + 0.0436\,E + 0.236\,E^2 - 0.147\,E^3 \right) m_e \,, \qquad (4)$$

where $E$ is the single-particle energy above the band edge in eV and is normally quite small [30, 34]. Chen and Bajaj [35] also studied the effect of two dimensional quantum confinement on the value of the effective mass, such as in nanoscale quantum wires. They found a large enhancement of the effective mass for wire widths below 10 nm. However, the effective mass of gallium arsenide approaches the bulk value of $0.0667\, m_e$ for patterned structures larger than 10 nm.
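Eq. (4) is straightforward to evaluate numerically; a minimal sketch (the function name is ours):

```python
def effective_mass_ratio(E):
    """m*(E)/me for bulk GaAs, Eq. (4); E in eV above the band edge."""
    return 0.0665 + 0.0436 * E + 0.236 * E**2 - 0.147 * E**3

print(effective_mass_ratio(0.0))   # 0.0665 at the band edge
print(effective_mass_ratio(0.1))   # slight increase 100 meV above the edge
```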

The main confinement potential is the electronic potential $V(r)$, which can be modelled by setting the potential height to different values that represent the different features of a device. For example, a double barrier potential, as shown in Figure 4, is formed by setting

$$V(x, y) = \frac{V_0}{\cosh^2\!\left((x - b)/a\right)} + \frac{V_0}{\cosh^2\!\left((x + b)/a\right)} \,, \qquad (5)$$

where $V_0$, $b$ and $a$ determine the height, position and width of the barriers. If required, an additional electric field can be applied across the device. This tilts the whole potential pattern described above by $V = e\varepsilon d$, for a distance $d$ along the device and an electric field strength $\varepsilon$, as shown in Figure 5.
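As an illustrative sketch, Eq. (5) plus a linear field tilt can be evaluated on a grid. The parameter values below (barrier height, width, position, field) are hypothetical, chosen only for demonstration:

```python
import numpy as np

def double_barrier(x, V0=0.3, a=2.0, b=10.0, efield=0.0):
    """Eq. (5) plus an optional linear tilt -e*eps*x; x in nm, energies in eV.
    V0, a, b and efield here are illustrative values, not from the text."""
    V = V0 / np.cosh((x - b) / a)**2 + V0 / np.cosh((x + b) / a)**2
    return V - efield * x    # tilt by an applied field (efield in V/nm)

x = np.linspace(-30.0, 30.0, 601)
V = double_barrier(x)
print(V.max())   # ~V0, since the two barriers barely overlap here
```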

An improvement to this process is to lay down a pattern for the device, including gates held at the ground potential, and then solve the two dimensional Poisson equation, $\nabla^2 V = -\rho/\epsilon$ (with $\epsilon$ the permittivity), in the region between the gates. A more complete solution would require solving the three dimensional equation (including changes in $\rho$ due to the layering of different materials) and then taking a two dimensional slice through the potential in the region where the 2DEG exists. However, such a full solution can be computationally intensive. In practice, the two dimensional solution provides most of the information required and allows the development of structures with characteristics similar to actual devices.

With the main device structure formed by the electric potential $V(r)$, more subtle quantum effects can be probed by applying an external magnetic field $B$. The application of a weak magnetic field allows the phase of the wave function to be altered without affecting the overall electron density distribution. For example, by applying a magnetic field to an Aharonov-Bohm ring [36, 37], the transmission through the ring can be made to undergo oscillations as the electron wave function constructively and destructively interferes in the output lead.

The magnetic field enters the Hamiltonian via the vector potential $A$, i.e.

$$H = \frac{1}{2m^*}\left(-i\hbar\nabla + eA(r)\right)^2 + V(r) \qquad (6)$$

$$\;\;\, = \frac{1}{2m^*}\left[ -\hbar^2\nabla^2 - i\hbar e\,\nabla\!\cdot\!A(r) - i\hbar e\,A(r)\!\cdot\!\nabla + e^2 A(r)^2 \right] + V(r) \,, \qquad (7)$$

where $A$ is defined by

$$B = \nabla \times A \,. \qquad (8)$$

For a given magnetic field B, there are an infinite number of ways to form the magnetic vector potential A (namely the different gauges), all of which should provide the same results for physically measurable quantities. Once the electronic potential V and the magnetic vector potential A are known, the system Hamiltonian H is set up and Eq. (3) can be solved to provide detailed dynamical information about the system under study. Many computational methods have been developed to solve the Schrödinger equation accurately and efficiently. Their strengths and limitations are reviewed in the following section.
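The gauge freedom can be checked numerically: for a uniform field $B\hat{z}$, both the Landau gauge $A = (-By, 0)$ and the symmetric gauge $A = (-By/2, Bx/2)$ reproduce the same $B = (\nabla \times A)_z$. A small sketch (function and variable names are ours):

```python
import numpy as np

B = 1.0  # uniform field strength (arbitrary units)

def curl_z(Ax, Ay, dx):
    """(curl A)_z = dAy/dx - dAx/dy on a uniform grid."""
    dAy_dx = np.gradient(Ay, dx, axis=1)   # axis 1 is the x direction
    dAx_dy = np.gradient(Ax, dx, axis=0)   # axis 0 is the y direction
    return dAy_dx - dAx_dy

x = np.linspace(-1.0, 1.0, 101)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)                   # 'xy' indexing: axis 0 = y, axis 1 = x

landau    = curl_z(-B * Y, 0 * X, dx)      # A = (-By, 0)
symmetric = curl_z(-B * Y / 2, B * X / 2, dx)  # A = (-By/2, Bx/2)
print(landau.mean(), symmetric.mean())     # both ≈ B
```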

III. COMPUTATIONAL METHODS

This section details the computational techniques developed and utilized by the quantum waveguide community in their study of electron transport in nanostructured devices and quantum information processing.

A. Time independent methods

Time independent methods are based on the separability of the time and spatial variables, which leads to the time-independent Schrödinger equation,

$$\left[ \frac{1}{2m^*}\left(-i\hbar\nabla + eA(r)\right)^2 + V(r) \right] \psi(r) = E\psi(r) \,, \qquad (9)$$

where $E$ and $\psi(r)$ are the eigenenergies and eigenfunctions of the system, respectively.

The textbook approach is to discretise the Hamiltonian operator onto a lattice of points, which converts the differential equation into a matrix equation, and then to solve the eigenvalue and eigenvector problem directly [38]. However, this is often the least efficient computational approach for this type of problem, because it results in a set of very large and poorly conditioned matrix equations. The following sections outline several more efficient and commonly used methods for solving the time independent Schrödinger equation.
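The textbook discretise-and-diagonalise approach can be sketched in a few lines for a 1D particle in a box (an illustrative toy case with $\hbar = m = 1$, not one of the devices discussed here); even in this simple setting the dense matrix grows quickly with the number of grid points:

```python
import numpy as np

def box_eigenenergies(n_pts=400, L=1.0, n_levels=3):
    """Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 for a particle in a
    box of length L with hard walls, diagonalised directly."""
    dx = L / (n_pts + 1)
    main = np.full(n_pts, 1.0 / dx**2)        # diagonal:  2/(2 dx^2)
    off  = np.full(n_pts - 1, -0.5 / dx**2)   # off-diag: -1/(2 dx^2)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:n_levels]

exact = [(n * np.pi)**2 / 2 for n in (1, 2, 3)]  # n^2 pi^2 / (2 L^2)
print(box_eigenenergies())   # close to the exact values above
```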

1. Mode matching

By splitting the potential into different regions (see Figure 6 for an example), known analytical solutions can be used to construct the total wave function for the complete system. This method uses the general solution in each individual region and then matches the solutions and their derivatives at the boundaries between regions. This leads to a set of coupled linear equations, which can be solved for the originally undetermined constants in the general solutions.

For example, in the three regions $j = 1, 2$ and $3$, with different transverse waveguide widths $w_j$ as shown in Figure 6, the Schrödinger equation becomes

$$-\frac{\hbar^2}{2m^*}\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) \psi^j(x, y) = E\psi^j(x, y) \,, \qquad (10)$$

since $V_1(x, y) = V_2(x, y) = V_3(x, y) \equiv 0$ inside the channel. The solution to the above equation with infinite boundary wall conditions is known analytically as

$$\psi^j(x, y) = \sum_n \left[ a_n^j \exp\left(-i k_n^j x\right) + b_n^j \exp\left(i k_n^j x\right) \right] \chi_n^j(y) \,, \qquad (11)$$

where the exponential terms represent the transmitted and reflected plane waves in the $x$ direction with wave number

$$k_n^j = \sqrt{ \frac{2Em^*}{\hbar^2} - \left( \frac{n\pi}{w_j} \right)^2 } \,, \qquad (12)$$

and

$$\chi_n^j(y) = \sqrt{\frac{2}{w_j}} \sin\!\left( \frac{n\pi}{w_j}(y - y_j) \right) \qquad (13)$$

are a set of transverse eigenstates in the confined channels. The coefficients $a_n^j$ and $b_n^j$ represent the amplitudes of the incident, transmitted and reflected waves for each transverse mode $\chi_n^j(y)$. They can be interconnected through a scattering matrix $S$,

$$\begin{pmatrix} b^1 \\ b^2 \\ b^3 \end{pmatrix} = \begin{pmatrix} S^{11} & S^{12} & S^{13} \\ S^{21} & S^{22} & S^{23} \\ S^{31} & S^{32} & S^{33} \end{pmatrix} \begin{pmatrix} a^1 \\ a^2 \\ a^3 \end{pmatrix} \,, \qquad (14)$$

where $a^j = \{a_1^j, a_2^j, ..., a_N^j\}$, $b^j = \{b_1^j, b_2^j, ..., b_N^j\}$, and the scattering matrix elements $S_{nm}^{ij}$ are determined by matching the wave functions and their derivatives at the boundaries. The transmission coefficients for electrons incident from port $i$ in mode $n$ to port $j$ in mode $m$ are given by [39, 40]

$$T_{nm}^{ij}(E) = \frac{k_m^j}{k_n^i} \left| S_{nm}^{ij} \right|^2 \,. \qquad (15)$$
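As a small numerical illustration of Eq. (12), the sketch below (assuming the GaAs effective mass and an illustrative 40 nm channel, values not taken from the text) lists which transverse modes propagate at a given energy; modes with imaginary $k_n$ are evanescent and flagged with NaN:

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
ME = 9.1093837015e-31    # kg
EV = 1.602176634e-19     # J
MSTAR = 0.0667 * ME      # GaAs effective mass

def mode_wavenumbers(E_ev, w_nm, n_max=10):
    """k_n = sqrt(2 E m*/hbar^2 - (n pi/w)^2), Eq. (12); NaN marks
    evanescent modes (imaginary k_n)."""
    w = w_nm * 1e-9
    k2 = 2 * MSTAR * E_ev * EV / HBAR**2 - (np.arange(1, n_max + 1) * np.pi / w)**2
    return np.where(k2 > 0, np.sqrt(np.abs(k2)), np.nan)

print(mode_wavenumbers(0.01, 40.0))  # 10 meV in a 40 nm channel: one open mode
```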

For a given bias voltage $V$ and temperature $T$, the current through the device (in the limit of linear response) can be calculated from

$$I(\mu_F, V, T) = -\frac{e}{\pi\hbar} \int_0^\infty \sum_{nm,ij} T_{nm}^{ij}(E) \left[ f^i(E - \mu_F^i - eV) - f^j(E - \mu_F^j) \right] dE \,, \qquad (16)$$

where $f^j$ represents the Fermi distribution and $\mu_F^j$ is the chemical potential at port $j$. The conductance between port $i$ and port $j$ is given by the Landauer-Büttiker formula

$$G^{ij}(E) = \frac{2e^2}{h} \sum_{nm} T_{nm}^{ij}(E) \,. \qquad (17)$$
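Eq. (17) reads directly as code: given a set of mode-to-mode transmission probabilities, the conductance is their sum in units of the conductance quantum $2e^2/h$. A minimal sketch (the function name is ours):

```python
E_CHARGE = 1.602176634e-19   # C
H_PLANCK = 6.62607015e-34    # J s

def conductance_siemens(transmissions):
    """Landauer-Buettiker conductance, Eq. (17): G = (2e^2/h) * sum T_nm."""
    g0 = 2 * E_CHARGE**2 / H_PLANCK   # conductance quantum, ~77.5 microsiemens
    return g0 * sum(transmissions)

# Three fully transmitting modes give three conductance quanta:
print(conductance_siemens([1.0, 1.0, 1.0]))
```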

Sometimes it is also instructive to calculate the local density of states by integrating the probability density in a given region $\Omega$ for a given incident energy $E$, i.e.

$$D(E) = \iint_\Omega \psi^*(x, y)\, \psi(x, y)\, dx\, dy \,. \qquad (18)$$

The mode matching method has been used by many researchers to study nanostructures of regular shapes, such as a two-port single stub [39], a four-port single branch coupler [39], a four-port double branch coupler [39], a crossed wire [41], single and double bends [42], serial stubs [43], and quantum waveguides with a rectangular impurity [44]. This method can also be extended to study electron transport under the influence of an external magnetic field [45–47]. Xu [45] calculated the conductance of lateral surface superlattices under the influence of a constant perpendicular magnetic field. Xia and Li [46] studied the electronic structure and transport properties of quantum rings in a magnetic field and demonstrated a dramatic change in transmission in the presence of an external magnetic field. Shi and Chen [47] made further studies of the AB rings by including a spin dependent term

$$H' = \frac{g e \hbar}{4 m^* c}\, \vec{\sigma} \cdot \vec{B} \qquad (19)$$

in the system Hamiltonian, where $g$ is the electron g-factor and $\vec{\sigma}$ denotes the Pauli matrices. As shown in their work, the period of the conductance oscillation decreases as the magnetic field increases.

The mode matching method relies on the fact that analytical solutions are known for all individual segments. However, the number of known solutions is small (e.g. the infinite well, step well, top hat well, rectangular quantum wire, constant magnetic field, and a small number of others), which greatly restricts the structures that can be studied using this method. It is possible to divide an arbitrarily complex system into many segments [40], each of which is approximated as a rectangular potential well of finite width. However, this can result in a very large matrix to be inverted, and the computation can become unstable.

2. Finite element

The finite element method aims to reduce the number of segments required in the calculation by using an irregular numerical mesh. In principle, the finite element method allows a potential of arbitrary shape to be studied [48–50]. By choosing a suitable set of basis functions (often piecewise polynomials or the Kronecker delta functions which are nonzero only in a local region of space), the potential, Laplacian and wave function can be discretised. This approximates functions locally and formulates the problem as functional evaluations on grid points on the boundary of each element. As a result, the partial differential equations are replaced by a set of algebraic equations.

In this method, the space is usually partitioned into a "device" region $\Omega_0$ and several "lead" regions $\Omega_1, \Omega_2, ..., \Omega_n$. The lead regions extend to infinity and can generally be regarded as straight wires with widths $\Gamma_1, \Gamma_2, ..., \Gamma_n$ and constant potential in the lead directions. Consequently, the wave function $\psi$ in each lead region $\Omega_i$ can be written as [48]

$$\psi(\xi_i, \eta_i) = \sum_{n=1}^{N^i} \left[ a_n^i \exp(-i k_n^i \eta_i) + b_n^i \exp(i k_n^i \eta_i) \right] \chi_n^i(\xi_i) + \sum_{n=N^i+1}^{\infty} b_n^i \exp(-k_n^i \eta_i)\, \chi_n^i(\xi_i) \,, \qquad (20)$$

where $\xi_i$ and $\eta_i$ are the transverse and longitudinal coordinates in lead $\Omega_i$, $\chi_n^i(\xi_i)$ is the $n$th eigenstate with eigenenergy $E_n^i$ of the one-dimensional time-independent Schrödinger equation in the $\xi_i$ coordinate, $a_n^i$ are the coefficients of the incoming waves in the $\eta_i$ direction, $b_n^i$ are the coefficients of the out-going or vanishing waves, the wave vector $k_n^i = \sqrt{2m^* |E - E_n^i|/\hbar^2}$, $E$ is the electron incident energy, and $N^i$ is the maximum $n$ such that $E_n^i < E$. This equation is essentially the same as Eq. (11) but in the $\xi_i$ and $\eta_i$ coordinates.

The "device" region contains a spatially varying potential and is divided into small polygons, such as quadrilaterals or triangles, with their size and density varied according to the potential shape and boundary conditions. In this fashion, areas in which the potential changes rapidly can be computed with higher accuracy, while areas in which it changes very little (e.g. free space) can be computed with fewer polygons, thus increasing the speed of calculation. This is the main advantage of the finite element method over the simpler finite difference method, which uses a uniform rectangular mesh. The wave function can then be discretised on the nodal points of these polygons $r_1, r_2, ..., r_M$ and approximated as $\psi(r_i) = \sum_j u_j \phi_j(r_i)$, where, for example, $\phi_j(r_i) = \delta_{j,i}$ are the Kronecker delta functions and $u_j$ are the expansion coefficients. At the boundaries between the "device" and "lead" regions the wave function should be continuous and smooth, thus requiring the matching of the values of $\psi(\xi_i, \eta_i)$ and $\psi(r_i)$ and their derivatives. This leads, in the weak variational form, to a matrix equation to be solved,

$$(T + V + C)\, u = P \,, \qquad (21)$$

where the $(M \times M)$ matrices $T$ and $V$ are defined as

$$T_{i,j} = \frac{\hbar^2}{2m^*} \int_{\Omega_0} \left[ \partial_x \phi_i(r)\, \partial_x \phi_j(r) + \partial_y \phi_i(r)\, \partial_y \phi_j(r) \right] d^2r \,, \qquad (22)$$

and

$$V_{i,j} = \int_{\Omega_0} \left[ V(r) - E \right] \phi_i(r)\, \phi_j(r)\, d^2r \,. \qquad (23)$$

The partial matrix $C$ and partial vector $P$ are associated with the $M_i$ nodal points on the lead boundary $\Gamma_i$, and are given by

$$C_i = -\frac{\hbar^2}{2m^*} \sum_{n=1}^{N^i} i k_n^i\, \varphi_{i,n}^T \varphi_{i,n} + \frac{\hbar^2}{2m^*} \sum_{n=N^i+1}^{\infty} k_n^i\, \varphi_{i,n}^T \varphi_{i,n} \,, \qquad (24)$$

and

$$P_i = -\frac{\hbar^2}{2m^*}\, 2i \sum_{n=1}^{N^i} a_n^i k_n^i\, \varphi_{i,n}^T \,, \qquad (25)$$

where $\varphi_{i,n}$ is a $(1 \times M_i)$ matrix defined as

$$\varphi_{i,n} = \int_{\Gamma_i} \chi_n^i(\xi_i) \left[ \phi_1(\xi_i), \phi_2(\xi_i), ..., \phi_{M_i}(\xi_i) \right] d\xi_i \,. \qquad (26)$$

The (M × M ) matrix C and the M vector P are obtained by embedding Ci and Pi , respectively.

The matrix equation (21) is solved as an eigenvalue problem to provide the system wave function $\psi(r_i) = \sum_j u_j \phi_j(r_i)$. Technically speaking, this is a sparse matrix equation and can be solved using efficient sparse-matrix numerical techniques (for example, see [51, 52]). The transmission coefficient can then be evaluated through a scattering matrix as described in Section III A 1. Lent et al. [48] calculated the probability of transmission through a rectangular resonant cavity and a circular bend in a quantum channel using the finite element method. They used the circular geometry to demonstrate the flexibility of this method in treating non-rectilinear boundaries. Polizzi et al. [50] examined the current-voltage characteristics of a four-port single branch coupler, also based on finite element analysis. They obtained peaks and valleys in the current-voltage curves similar to those obtained by Burgnies et al. using the mode matching method [39]. Tachibana and Totsuji [49] used the finite element method to study a variety of structures as well as the effects of geometrical deformations. Wang et al. [53] extended this method further to include an external magnetic field and calculated the transmission coefficient of a stadium-shaped quantum structure as a function of the external magnetic field.
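As a toy illustration of the sparse-solve step (not the actual finite element matrices of Eq. (21), which depend on the mesh), one can solve a sparse tridiagonal system with SciPy; the system below, a 1D discrete Laplacian with a point source, is purely illustrative:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# A sparse tridiagonal system standing in for (T + V + C) u = P of Eq. (21).
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
P = np.zeros(n)
P[n // 2] = 1.0            # point source at the centre

u = spsolve(A, P)          # sparse LU solve, far cheaper than dense inversion
print(u[n // 2])           # peak of the discrete Green's function
```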

However, this method has its limitations. Firstly, the solution on each element is only an approximation to the actual wave function, often requiring very small elements in order to obtain accurate spatial and phase information. This is especially important for highly excited or continuum states, since their wave functions vary rapidly over the nanostructure. For this reason, the resulting set of algebraic equations becomes unavoidably large and difficult to solve, and the particular advantage of the finite element method over the finite difference method is lost. Secondly, changing the size of the elements and their density can produce high frequency noise, which may introduce considerable error into the phase of the wave function. Thirdly, for the Schrödinger equation, where the wave function is defined over all space for smooth potential barriers of finite height, it often proves difficult to define exact boundary conditions (i.e. the values of the wave function and its derivatives at the boundaries), resulting in an imprecise time independent model.

3. Green's function

The Green's function method transforms a system of differential equations and boundary conditions into integral equations over the internal boundaries of the system, which can then be discretised and solved numerically [27, 54–59]. To solve the time independent Schrödinger equation

$$\left[ \frac{1}{2m^*}\left(-i\hbar\nabla + eA(r)\right)^2 + V(r) - E \right] \psi(r) = 0 \,, \qquad (27)$$

with boundary condition

$$a(r)\, n(r) \cdot \left(-i\hbar\nabla + eA(r)\right) \psi(r) + b(r)\, \psi(r) = c(r) \,, \qquad (28)$$

one can first find the Green's function, which satisfies

$$\left[ \frac{1}{2m^*}\left(-i\hbar\nabla + eA(r)\right)^2 + V(r) - E \right] G^+(r, r') = \hbar\, \delta(r - r') \,, \qquad (29)$$

with boundary condition

$$a(r)\, n(r) \cdot \left(-i\hbar\nabla + eA(r)\right) G^+(r, r') + b(r)\, G^+(r, r') = 0 \,. \qquad (30)$$

The time independent wave function is then obtained by integrating over the boundary $S'$ with normal $n(r')$,

$$\Psi(r) = \frac{i}{2m^*} \int_{S'} dS'\, n(r') \cdot \left\{ \psi(r') \left[ -i\hbar\nabla' - eA(r') \right] G^+(r, r') - G^+(r, r') \left[ -i\hbar\nabla' + eA(r') \right] \psi(r') \right\} \,. \qquad (31)$$

If Dirichlet's boundary condition is used (i.e. $a(r) = 0$ and $b(r) = 1$), the above equation becomes

$$\Psi(r) = \frac{i}{2m^*} \int_{S'} dS'\, n(r') \cdot c(r') \left( -i\hbar\nabla' \right) G^+(r, r') \,. \qquad (32)$$

The Green's function $G^+(r, r')$ in a weak magnetic field can be approximated as [55]

$$G^+(r, r') \approx e^{i\theta(r, r')}\, G_0(r, r') \,, \qquad (33)$$

where $G_0(r, r')$ is the Green's function at zero magnetic field and the phase factor is

$$\theta(r, r') = -\frac{e}{\hbar} \int_{C(r' \to r)} A(\varsigma) \cdot \frac{\nabla G_0(\varsigma, r')}{|\nabla G_0(\varsigma, r')|}\, d\varsigma \,. \qquad (34)$$

The integration path $C(r' \to r)$ is along the gradient direction of $G_0$, and $\varsigma$ is a point on the integration line.

The Green's function method can provide solutions for arbitrarily complex geometries with sharp corners, as well as for highly excited states with many peaks and valleys. Koonen et al. and Novik et al. [58, 59] carried out a detailed analysis of the transmission probability of electrons injected and detected via quantum point contacts (QPC) in the presence of an impurity and an external magnetic field. They obtained the system wave function for the 2DEG region between the QPC injector and detector by using the Green's function method with Dirichlet's boundary conditions. Any potential fluctuations or impurities in this 2DEG region would cause spatial modification of this wave function, resulting in variations in the transmission probability of the electrons. By comparing their calculated results with experimentally observed transmission data, they obtained valuable information on the distribution, size, and energetic height of the impurity potential. Another strength of this method is that the Green's functions $G^+(r, r')$ are independent of the possibly very complicated potential boundary structures, and they are known functions for simple potential geometries. For example, the zero-magnetic-field Green's function is well known for regions where the electric potential is constant [55, 56]. In the general case, the Green's function is obtained by inverting the matrix

$$G^0(r, r') = \left[ -\frac{\hbar^2}{2m^*}\nabla^2 + V(r) - E \right]^{-1} \,, \qquad (35)$$

with an appropriate discretisation of the space and a representation of the potential and Laplacian. However, this matrix can be very large and poorly conditioned, rendering it difficult to factorise.
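In a discrete basis, this inversion amounts to $G^+(E) = [(E + i\eta)I - H]^{-1}$, with a small imaginary part $\eta$ selecting the retarded solution. A toy sketch with a hypothetical 2-site tight-binding Hamiltonian (names and values are ours, not from the text):

```python
import numpy as np

def retarded_green(H, E, eta=1e-6):
    """G^+(E) = [(E + i*eta) I - H]^(-1): a discretised matrix form of
    Eq. (35), with eta > 0 selecting the retarded Green's function."""
    return np.linalg.inv((E + 1j * eta) * np.eye(H.shape[0]) - H)

# Hypothetical 2-site Hamiltonian with hopping t = 1 (hbar = 1):
H = np.array([[0.0, -1.0],
              [-1.0, 0.0]])
G = retarded_green(H, E=0.0)
print(G[0, 1])   # site-to-site propagator at the band centre, ~1.0
```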

Venugopal et al. [60] introduced a mode-space Green's function method, which decouples the x and y variables in the 2D Schrödinger equation and provides the solution as an expansion of the system Hamiltonian in the decoupled mode space. This approach greatly reduces the size of the Green's function matrix, but it is only valid when the potential profile V(x, y) changes slowly in shape along the x or y direction. As shown in reference [60], the decoupled mode-space Green's function method provides accurate results for the current versus voltage curves when the perturbing potential is small, but it becomes less reliable when the potential profile is squeezed by a strong gate voltage.

If one is interested mainly in transport properties such as the transmission coefficients of quantum waveguides, a very efficient algorithm, namely the recursive Green's function method, can be used [61–67]. In this method the waveguide is divided into sections, and for each separate section the Green's function G₀ is either known analytically or readily available from numerical calculation. The Green's function G for the complete system is built up from G₀ by including one section at a time, starting with the section at the right end of the system, through repeated solution of the Dyson equation

    G = G₀ + G₀ V̂ G ,   (36)

where V̂ denotes the hopping potential between two adjacent sections. Figure 7 illustrates this recursive procedure [64], where the Green's function G^{A+B} for the composite system A + B can be obtained from the Green's functions G^A and G^B of the separate sections A and B, i.e.

    G^{A+B}_{jp} = G^B_{j,q+1} V_{q+1,q} (1 − G^A_{qq} V_{q,q+1} G^B_{q+1,q+1} V_{q+1,q})⁻¹ G^A_{qp} ,   (37)

    G^{A+B}_{pp} = G^A_{pp} + G^A_{pq} V_{q,q+1} G^B_{q+1,q+1} V_{q+1,q} (1 − G^A_{qq} V_{q,q+1} G^B_{q+1,q+1} V_{q+1,q})⁻¹ G^A_{qp} ,   (38)

where V_{q+1,q} = V_{q,q+1} = −ℏ²/(2m*∆x²) and ∆x is the numerical grid spacing. The above relationships are derived from the general Dyson equation (36). In the subsequent iteration, the A + B region becomes the new B region, and another section can be added to the calculation until all separate sections are included. Each iteration only requires the inversion of an N × N matrix, where N is the number of mesh points in a slice. This matrix is dramatically smaller than that of Eq. (35), and the recursive technique is numerically very stable.
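The recursive construction above can be illustrated with a minimal one-dimensional sketch (our own illustrative example, not taken from the papers cited here): a tight-binding chain in which each "slice" is a single site, the semi-infinite leads enter through analytic self-energies, and the system Green's function is assembled one slice at a time via the Dyson equation. All function names and parameter values are illustrative choices.

```python
import numpy as np

def lead_self_energy(E, t=1.0):
    """Retarded self-energy of a semi-infinite 1D tight-binding lead
    (zero on-site energy, hopping t), valid inside the band |E| < 2|t|."""
    # The surface Green's function g solves g = 1/(E - t^2 g); the retarded
    # root is the one with negative imaginary part.
    g = (E - 1j * np.sqrt(4.0 * t**2 - E**2)) / (2.0 * t**2)
    return t**2 * g

def transmission(E, onsite, t=1.0):
    """Transmission through a chain of slices with on-site energies `onsite`,
    assembled one slice at a time via the Dyson equation, cf. Eq. (36)."""
    sigma = lead_self_energy(E, t)
    g_nn = 1.0 / (E - onsite[0] - sigma)    # leftmost slice with left lead
    g_1n = g_nn                             # propagator from slice 1 to slice n
    for eps in onsite[1:]:
        g_new = 1.0 / (E - eps - t**2 * g_nn)   # attach the next slice
        g_1n = g_1n * t * g_new
        g_nn = g_new
    G_NN = 1.0 / (1.0 / g_nn - sigma)       # finally attach the right lead
    G_1N = g_1n * G_NN / g_nn               # Dyson correction for G_1N
    gamma = -2.0 * sigma.imag               # lead broadening, i(Sigma - Sigma†)
    return gamma**2 * abs(G_1N)**2          # T = Gamma_L Gamma_R |G_1N|^2

print(transmission(0.5, [0.0, 0.0, 0.0]))   # clean chain: transmission 1
print(transmission(0.5, [0.0, 2.0, 0.0]))   # barrier site: reduced transmission
```

For a clean chain the transmission is exactly unity inside the band, which provides a convenient correctness check; inserting a barrier site reduces it, as expected physically.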


The transmission and reflection coefficients can be readily obtained from

    t_{nm} = −i 2V √(sin θ_n sin θ_m) e^{i(θ_m L − θ_n R)} ⟨n|G_{RL}|m⟩ ,   (39)

    r_{nm} = −√(sin θ_n / sin θ_m) e^{i 2(θ_m + θ_n)L} ( i 2V sin θ_m ⟨n|G_{LL}|m⟩ + δ_{nm} ) ,   (40)

where L and R are indices of sections on the left and right of the quantum waveguide, and θ_n ≡ k_n ∆x. Note that the Green's function in the above equations is retarded, which means

    G⁺(E) ≡ lim_{η→0⁺} [ E − H + iη ]⁻¹ .   (41)

Sols et al. [63, 64] investigated the possibility of using the semiconductor T-structure as a quantum modulated transistor. The basic working principle is illustrated in Figure 8, where VS is the source, VD is the drain and VG is the side gate voltage. The side gate VG can be adjusted to control the effective depth L* of the side stub, which in turn changes the relative phase of path 1 and path 2 of the wave function. In other words, by changing the magnitude of the gate voltage, one can modulate the phase difference between the two paths to achieve transistor-like action. Using the recursive Green's function method, Sols et al. [63, 64] confirmed the prediction based on quantum interference that the system undergoes a transition from complete transmission through to complete reflection. The exact side gate voltage required for the device to operate correctly is highly dependent on the width and length of the stub structure, as well as the energy of the electron being transported. He et al. [68] studied the coherent electron transport through a quantum point contact (QPC) in the presence of an atomic force microscope (AFM) tip using the recursive Green's function method. The effects of the AFM tip position on the conductance plateaus of the QPC were examined. It was found that the calculated results were consistent with the experimental data of Topinka et al. [69].

Rotter et al. [67, 70] also studied ballistic quantum transport using the recursive Green’s function method. In particular, they included an external magnetic field in their formulation. Through the decomposition of nonseparable structures into separable subregions that are joined by recursive solutions of the Dyson equation, they were able to obtain transmission coefficients and scattering wave functions very efficiently. This allowed them to study systems in states with high mode numbers and in the presence of high magnetic fields.

It is worth noting that the recursive Green’s function method is not designed to provide the wave function in the device region [53, 67, 70]. Although this is possible in principle through Eq. (31), the calculation of the wave function would require the Green’s function throughout the entire region, while the transmission and reflection coefficients only require the Green’s function connecting the inlets and outlets of the device. If the Green’s function G+ (r, r0 ) was to be evaluated everywhere in the entire region, the recursive Green’s function method would be just as costly as a direct inversion of the matrix given by Eq. (35).

B. Time dependent methods

Although the time-independent methods described above have yielded much insight into the quantum properties of nanostructured waveguides, they are severely limited in providing information on the transient behaviour of the system under study. In principle, the general solution of the time-dependent Schrödinger equation can be constructed by a complete expansion over all allowed stationary energy eigenfunctions, but this often requires a very large number of discrete eigenstates as well as an integration over the continuous part of the energy spectrum. In addition, the separation of time and spatial variables implies an explicitly time-independent Hamiltonian, and thus the electronic transport properties can only be analyzed under steady-state conditions if the time-independent Schrödinger equation is used. Much remains to be studied in the temporal response of quantum electronic devices, which is of particular importance for analyzing high-speed quantum transistors and quantum switches.

In these situations one needs to solve directly the time dependent Schr¨odinger equation, which is stated as an initial value problem rather than a boundary condition problem. This time-dependent approach has a more natural correspondence to reality, i.e. starting from an initial state of the system and advancing it in time under the influence of internal and external interaction potentials, which can be either time-independent or time-dependent. In other words, it provides information on transient behaviours and allows direct visualisation of the transport process, where one can “watch” a system evolve in real time and as a result monitor intermediate stages of the process of interest. The insight thus afforded allows for a more intuitive understanding of the system being studied than is otherwise available. As an initial value problem, it is also comparatively easy to implement, flexible, and versatile in treating a large variety of quantum problems. In particular, it removes the need for boundary conditions, which can be difficult to implement in the time-independent methods for arbitrarily shaped potentials, as mentioned in the above section.

The time dependent Schrödinger equation has the formal solution

    ψ(r, t + ∆t) = U(∆t) ψ(r, t) ,   (42)

where U(∆t) = exp(−iH∆t/ℏ) is a unitary propagation operator. Since the wave function ψ(r, t) contains complete quantum-mechanical information about the system under study, one can derive from it all possible observables such as reflection and transmission coefficients,

electric current and current density, lifetime of trapped states, phase shifts, etc. The energy spectrum or the density of states can also be obtained from the time propagation of system wave functions by using the time-energy Fourier transform [71].

Although this general solution has been known for a long time (see, for example, [72, 73]), computational techniques for treating this exponential propagator have been slow to develop, and practical calculations have had to await the arrival of powerful computers. The difficulty lies in the exponential propagation operator exp(−iH∆t/ℏ), which is in fact an infinite series expansion. Different approximations to the exponential time propagator, along with the technique used to evaluate the action of the Laplacian ∇² on the wave function, lead to different time evolution schemes.

1. Finite difference

Many different schemes have been employed to solve the time dependent Schrödinger equation. The simplest is the first-order finite-difference approximation to the exponential operator in Eq. (42), i.e.

    ψ(r, t + ∆t) = ψ(r, t) − (i/ℏ) H∆t ψ(r, t) ,   (43)

which is widely known as the Euler method. Since it only retains the first term of the Taylor expansion, it requires a very large number of extremely small time steps ∆t, sometimes over a few million, to develop a complete time evolution. It is neither symmetric with respect to time nor unitary, rendering it numerically unstable. It is therefore not a suitable scheme except for very simple cases. Konsek and Pearsall [74] used the Euler method to study the dynamics of electron transport through a one-dimensional resonant tunneling semiconductor nanostructure. By tuning the potential height of the double barrier, they observed "on" and "off" resonance wavepackets, which are transmitted and reflected, respectively.
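The instability of the Euler scheme is easy to demonstrate: for each eigencomponent of H with eigenvalue λ, the amplification factor per step is |1 − iλ∆t/ℏ| = √(1 + (λ∆t/ℏ)²) > 1, so the norm grows at every step. A small sketch with a random Hermitian matrix standing in for H (ℏ = 1; all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2.0          # small Hermitian model Hamiltonian

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0                        # normalised initial state
dt = 0.01

for _ in range(1000):
    psi = psi - 1j * dt * (H @ psi)  # Euler step, Eq. (43)

norm = np.linalg.norm(psi)
print(norm)   # grows above 1: the Euler propagator is not unitary
```

Even with this fairly small time step the norm drifts visibly upward, illustrating why the method is unsuitable for long propagations.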

To avoid this numerical instability, McCullough and Wyatt [75] used the Crank-Nicolson approximation, a first-order difference scheme combined with a unitarised approximation to the time evolution propagator:

    ψ(r, t + ∆t) = [ 2 − (i/ℏ)H∆t ] / [ 2 + (i/ℏ)H∆t ] ψ(r, t) .   (44)

This method is unitary and numerically very stable.
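A minimal sketch of one-dimensional Crank-Nicolson propagation, Eq. (44): the Cayley form (2 − iH∆t/ℏ)/(2 + iH∆t/ℏ) is exactly unitary for Hermitian H, so the norm is preserved to rounding error. The grid, potential, and packet parameters below are illustrative choices (ℏ = m* = 1):

```python
import numpy as np

n, dx, dt = 200, 0.1, 0.05
x = (np.arange(n) - n // 2) * dx
V = np.where(np.abs(x) < 1.0, 0.5, 0.0)      # small square barrier

# Tridiagonal finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + V.
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Cayley form of Eq. (44): (2 + i H dt) psi_new = (2 - i H dt) psi_old.
A = 2.0 * np.eye(n) + 1j * dt * H
B = 2.0 * np.eye(n) - 1j * dt * H

psi = np.exp(1j * 2.0 * x - (x + 5.0) ** 2 / 4.0)   # Gaussian wavepacket
psi /= np.linalg.norm(psi)

for _ in range(100):
    psi = np.linalg.solve(A, B @ psi)        # one implicit Crank-Nicolson step

print(np.linalg.norm(psi))   # stays at 1 to rounding error: unitary scheme
```

In production code the linear system would of course be solved with a tridiagonal solver rather than a dense one; the dense solve here keeps the sketch short.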

Using the Crank-Nicolson method, Mohaidat [76] solved the one-dimensional Schrödinger equation

    iℏ ∂ψ(z, t)/∂t = −(ℏ²/2) ∂/∂z [ (1/m*(z)) ∂ψ(z, t)/∂z ] ,   (45)

which describes electron transport dynamics through heterostructures such as ultrathin gate oxides in metal-oxide-semiconductor (MOS) or metal-insulator-metal (MIM) structures. Note that the effective mass m*(z) is a function of z due to the different materials in the heterostructure. The numerical computation starts with a free electron wavepacket of the form

    ψ(z, t = 0) = A exp(ikz) exp(−(z − z₀)²/(4σ²)) ,   (46)

(46)

where A is a normalisation constant, k is the wavepacket’s central wave number, σ defines the wavepacket’s spatial width, and this initial wavepacket is placed at z0 on the metal side of the system. The wavepacket is advanced in time according to Eq. (44) until it has emerged out of the insulator barrier at time t = tf . The tunnelling current density is then given by h ¯ J= 2m(z) i

∂ψ(z, tf ) ∂ψ ∗ (z, tf ) ψ (z, tf ) − ψ(z, tf ) ∂z ∂z ∗

24

!

,

(47)

and is found to exhibit oscillatory behaviour. The oscillation amplitude decreases as the oxide insulator thickness or the semiconductor effective mass increases.

However the Crank-Nicolson method involves the inversion of large and poorly conditioned matrices, which can be prohibitively expensive in terms of computer memory and CPU time. To overcome this problem, Asker and Cakmak [77] developed an explicit second-order differencing (SOD) scheme i ψ(r, t + ∆t) − ψ(r, t − ∆t) = −2 H∆tψ(r, t) , h ¯

(48)

which is also known as the “leap-frog” method. The SOD method is unitary, symmetric in time, and demonstrated to be conditionally stable.

Igarashi et al. [78] utilised the “leap-frog” method to study the motion of an electron wavepacket in rectangular and stadium-shaped domains under the influence of a perpendicular uniform magnetic field. They observed complex spatial patterns in the electron probability density in the stadium, an indication of quantum chaos. Glutsch et al. [79–81] used this method to study the optical absorption in semiconductor quantum structures and the Zener tunnelling effect of excitons in shallow superlattices. Also using the “leap-frog” method, Endoh et al. examined a two-dimensional electron wavepacket passing through a single slit [82], a double slit [83], and a narrow semicircular constrictions [84].

The drawback of the Crank-Nicolson method and the “leap-frog” method is that the associated truncation error is proportional to (∆t)2 . For this reason, the time step ∆t for each propagation still has to be very small and, therefore, the number of steps required for modeling a complete scattering event is very large. Although both the Crank-Nicolson and the “leap-frog” methods conserve the norm and energy, errors will accumulate in the phase. For 25

this reason, higher order finite difference methods have been developed, of which the truncation error is proportional to (∆t)n where n > 2. Buffington et al. [85] and Odero et al. [86] utilised a high order terminated Taylor expansion to solve the time dependent Schr¨odinger equation for hydrogen electron scattering. They expanded the unitary propagator as  ∞ X 1

n

i U(∆t) = − H∆t h ¯ n=0 n!

.

(49)

This allows larger time steps to be used, which reduces the number of steps required to reach the asymptotic limit. However, for each time step, it involves many more terms in the series expansion and thus greater computational complexity that can, in turn, slow down the calculation and bring about numerical errors. The convergence rate of the Taylor expansion is slow. Odero et al. [86] carried out a systematic study of these two competing factors and found the optimal time step for their calculation is around 0.005 a.u. and a single calculation requires about 3000 time steps.

There are other explicit and implicit propagation schemes based on a Taylor expansion of the time evolution operator. However, all these methods require small time steps and thus suffer the same problem of error accumulation, which can cause severe distortion of the wave packets at long propagation times.

2.

Split operator and the Baker-Campbell-Hausdorff formula

Another notable propagation scheme is the split operator (SPO) method devised by Feit et al. [87], which was later used by many researchers to solve time dependent Schr¨odinger equation and time-dependent Kohn-Sham equation (see, for example, [88–91]). This scheme

26

splits the exponential function into three parts i H0 ∆t i V ∆t i H0 ∆t ψ(r, t + ∆t) = exp − exp − exp − ψ(r, t) , 2¯ h h ¯ 2¯ h 











(50)

where H0 = −¯ h2 ∇2 /(2m∗ ). Note that the Laplacian ∇2 and potential V operators are diagonal in momentum and coordinate space, respectively. This scheme thus takes advantage of the ease of treating operators in their diagonal representations. The expansion error is determined by the next higher commutator between the potential and the kinetic energy operators and will vary with the value of these terms. This method is unconditionally stable and norm preserving since only unitary operators are involved. It has been used widely and, in some cases, successfully in propagating wave packets under the influence of system Hamiltonians. Nevertheless, this scheme neglects the commutators between the potential and kinetic energy operators and thus introduces error in both energy and phase of the wave function. The magnitude of the inaccuracy depends strongly on the system under investigation [92]. It is perhaps for this reason that the split operator scheme has not been used, as far as we know, in studying electron transport through quantum cavities.

Rau et al. [93, 94] developed a method utilising the Baker-Campbell-Hausdorff formula to expand Eq. (42) as i i ψ(r, t + ∆t) = exp(− H0 ∆t) exp(− V ∆t) h ¯ h ¯  i 1 × exp − 2 [H0 , V ]∆t2 + 3 [H0 , [H0 , V ]]t3 2¯ h 6¯ h  i 3 + 3 [V, [V, H0 ]]t . . . ψ(r, t) . 3¯ h

(51)

For certain potentials the expansion terminates, while for some potentials the expansion can be terminated after a certain number of terms with reasonable convergence. For those systems that the expansion converges or terminates, the solution obtained is exact and 27

easily computed numerically. Unfortunately, the expansion does not generally converge or terminate and is not practical when used in computations for arbitrarily shaped electronic and magnetic potentials.

3.

Time dependent Green’s function

Similar to the time-independent Green’s functions discussed earlier for solving the timeindependent Schr¨odinger equation, Green’s functions can also be used to study the time dependent Schr¨odinger equation. A general formulation can be derived where the final solution is given by [27] ψ(r, t) =

Z

d3 r0

Z

t

t0

dt0 K(r, t; r0 , t0 )ψ(r0 , t0 ) ,

(52)

where K(r, t; r0 , t0 ) is considered as a time evolution propagator. The retarded Green’s function, representing propagation forward in time, is defined as Gr (r, t; r0 ; t0 ) = − i θ(t − t0 )hK(r, t; r0 ; t0 )i = − i θ(t − t0 )h{ψ(r, t), ψ † (r0 , t0 )}i ,

(53)

and the advanced Green’s function, representing propagation backward in time, is defined as Ga (r, t; r0 ; t0 ) = i θ(t0 − t)hK(r, t; r0 ; t0 )i = i θ(t0 − t)h{ψ † (r0 , t0 ), ψ(r, t)}i ,

(54)

where θ is the Heaviside step function and the angle brackets refer to an ensemble average.

The Schr¨odinger equation for the Green’s functions (either retarded or advanced) is !

∂ ¯ δ(r − r0 )δ(t − t0 ) . ih ¯ − ∇2 − V (r) G(r, t; r0 ; t0 ) = h ∂t

28

(55)

A Fourier transform in both space and time yields G(p, ω) =

h ¯ , h ¯ ω − E(p) ± i η

(56)

where p is the momentum, ω is the frequency, E(p) is the energy eigenvalue of ∇2 + V , and η is of an infinitesimal value. In general, there are a set of energy eigenstates and the actual Green’s function is a sum over this set of states.

Again, computation of these eigenvalues proves prohibitive for quantum systems with complicated boundaries. The spatial Fourier transform allows easy representation of derivatives in H0 , but the representation of spatial potentials can lead to the summation of a large number of terms in p. For simple potentials with a simple integrable representation, solutions can be readily obtained. As an example, Meir et al. [95–97] modelled the electron transport through a two-level quantum dot using the Anderson Hamiltonian [98] H=

X

k,σ c†k,σ ck,σ +

X

σ c†σ cσ + U n↑ n↓ +

σ

σ,k∈L,R

X

(Vk,σ c†k,σ cσ + H.c.) ,

(57)

σ,k∈L,R

where c†k,σ (ck,σ ) creates (destroys) an electron with momentum k and spin σ in either the left (L) or the right (R) lead, c†σ (cσ ) creates (destroys) an electron with spin σ on the quantum dot, U n↑ n↓ describes the Coulomb interaction between the two localised spins, and the last term represents the hopping between the leads and the dot.

The conductance can then be calculated, in the linear response regime, using the Landauer formula   e2 X Z ΓLσ (ω)ΓR 1 σ (ω) r J= dω[fL (ω) − fR (ω)] L − =Gσ (ω) , h ¯ σ Γσ (ω) + ΓR π σ (ω)

where ΓL(R) (ω) = 2π σ

P

k∈L(R)

(58)

|Vk,σ |2 δ(ω − k,σ ), fL(R) (ω) is the Fermi-Dirac distribution in

the left (right) lead, Grσ (ω) is the Fourier transform of the retarded Green’s function and, in this case, Grσ (t) = − i θ(t)h{cσ (t), c†σ (0)}i. 29

The calculated conductance obtained by Meir et al. [95] is in good agreement with experimental data for a narrow electronic channel in GaAs interrupted by two controllable potential barriers. The experimental details were given in [24]. Yi and Wang [99] extended the work of Meir et al. to include multiple energy levels in the self-consistent field approximation. However, for more complex Hamiltonian systems such as arbitrarily shaped potentials with soft walls, the time-dependent Green’s function method often required a very large sum of terms. Consequently, the problem is transformed from one where the Laplacian is difficult to deal with to a situation where the potentials are difficult to model.

4.

Chebyshev propagation scheme

A more accurate and stable method is the Chebyshev propagation scheme, pioneered by Tal-Ezer and Kosloff [100–104] and used extensively in quantum chemistry. Wang and Scholz [105] applied this scheme to one-dimensional scattering of several model potentials and their results were in excellent agreement with exact solutions. Wang and Midgley [106– 111] extended the work to two dimensions to study electron transport in nano quantum waveguides.

The Chebyshev scheme approximates the exponential time propagator by a Chebyshev polynomial expansion, 

Ψ(r, t0 + ∆t) = exp −

X N i f (Emax + Emin )∆t an (α)Tn (− i H)Ψ(r, t0 ), 2¯ h n=0

(59)

where Emax and Emin are upper and lower bounds on the sampled eigenvalues of the system Hamiltonian H, α = (Emax − Emin )t/(2¯ h), an (α) = 2Jn (α) except for a0 (α) = J0 (α), Jn (α) are the Bessel functions of the first kind, Tn are the Chebyshev polynomials, and the 30

normalised Hamiltonian is defined as ˜= H

1 [2H − Emax − Emin ] . Emax − Emin

(60)

The normalisation of the Hamiltonian ensures that the expansion of the Chebyshev polynomials is convergent. Since the Bessel functions fall to zero exponentially as n increases beyond α (see Figure 9), it follows that terminating the expansion at N > α would yield accurate results. Note that α is proportional to the time step t and so is the number of terms required in the expansion. Since the time step t can be arbitrarily large, this scheme is often used as a one-step propagator to cover the complete interaction. In practice, rather than having N fixed beforehand, the expansion is built up until the norm of the wave function converges to within some specified threshold. Note that the Chebyshev expansion is not unitary by definition, and therefore convergence of the norm is a strong indicator of accuracy and this procedure gives an algorithm that efficiently adapts computational effort to give a specified accuracy. ˜ on the initial wave function ψ(r, 0) can be evaluated The action of the operator Tn (− i H) using the following recurrence relation: ˜ ˜ n (− i H)ψ(r, ˜ ˜ Tn+1 (− i H)ψ(r, 0) = −2 i HT 0) + Tn−1 (− i H)ψ(r, 0) ,

(61)

with ˜ T0 (− i H)ψ(r, 0) = ψ(r, 0)

(62)

˜ ˜ T1 (− i H)ψ(r, 0) = − i Hψ(r, 0) .

(63)

The calculation therefore boils down to a series of applications of the scaled Hamiltonian H̃ to the wave function. Since the wave function ψ(x, y, t) contains complete quantum mechanical information about the system under study, one can derive from it all possible observables, such as the reflection (R) and transmission (T) coefficients, which in turn provide the experimentally measurable conductance via the Landauer formula.
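The scheme of Eqs. (59)-(63) can be sketched on a small Hermitian matrix standing in for H (ℏ = 1; all parameter choices illustrative). To keep the example self-contained, the Bessel coefficients are computed from their integral representation rather than a special-function library:

```python
import numpy as np

def bessel_j(n, a, m=4000):
    """J_n(a) from J_n(a) = (1/pi) * int_0^pi cos(n*th - a*sin(th)) d(th),
    evaluated with the trapezoidal rule (spectrally accurate here)."""
    th = np.linspace(0.0, np.pi, m)
    f = np.cos(n * th - a * np.sin(th))
    h = th[1] - th[0]
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1]) / np.pi

def chebyshev_propagate(H, psi, t):
    """One-step Chebyshev propagation of psi under exp(-i H t), Eqs. (59)-(63)."""
    w = np.linalg.eigvalsh(H)
    Emin, Emax = w[0], w[-1]
    Hn = (2.0 * H - (Emax + Emin) * np.eye(len(psi))) / (Emax - Emin)  # Eq. (60)
    alpha = (Emax - Emin) * t / 2.0
    N = int(alpha) + 40                    # terminate safely beyond n = alpha

    phi_prev = psi.astype(complex)         # T_0(-i Hn) psi, Eq. (62)
    phi = -1j * (Hn @ phi_prev)            # T_1(-i Hn) psi, Eq. (63)
    acc = bessel_j(0, alpha) * phi_prev + 2.0 * bessel_j(1, alpha) * phi
    for n in range(2, N + 1):
        phi, phi_prev = -2j * (Hn @ phi) + phi_prev, phi   # recurrence, Eq. (61)
        acc = acc + 2.0 * bessel_j(n, alpha) * phi
    return np.exp(-1j * (Emax + Emin) * t / 2.0) * acc

# Compare against exact diagonalisation for a random Hermitian matrix.
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2.0
psi0 = np.zeros(6, dtype=complex)
psi0[0] = 1.0

w, U = np.linalg.eigh(H)
t = 5.0
exact = U @ (np.exp(-1j * w * t) * (U.conj().T @ psi0))
err = np.linalg.norm(chebyshev_propagate(H, psi0, t) - exact)
print(err)   # agrees with the exact result in a single large time step
```

A single Chebyshev step covers the whole interval t = 5 to near machine precision, which is the point of the method: the cost grows only linearly with α while the error falls off exponentially once N exceeds α.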

Using the time-dependent Chebyshev scheme, Wang and Midgley examined the electrical conductivity of various nanostructured devices, such as resonant barriers and cavities [106, 108], quantum wires [110], quantum transistors [112], and the Aharonov-Bohm ring [109]. For example, Midgley and Wang [110] calculated the electrical conductance of a quantum wire connected to the source and drain by flanges. The geometry used in the calculations is similar to those shown in Kane et al. [2] and Sohn et al. [113]. Consistent with earlier experimental and theoretical studies [2, 114–116], the calculated conductance changes in a step-like fashion with respect to the wire width.

Midgley and Wang [111] extended the work further to include external magnetic fields in their simulation. While the main device structure is formed by the patterned electric potential V, more subtle quantum effects can be probed by applying an external magnetic field B. The application of a weak magnetic field allows the phase of the wave function to be altered without affecting the overall electron density distribution. For example, by applying a magnetic field to the Aharonov-Bohm ring, the transmission through the ring can be made to undergo oscillations as the electron wave function constructively and destructively interferes in the output lead [36, 37, 117–119].

A schematic diagram of the Aharonov-Bohm ring is shown in Figure 10(a), with the confinement potential depicted in Figure 10(b). Electron transport between the source and the drain depends on quantum waves propagating around the two arms of the nano ring and undergoing interference with each other. Understanding the phase relationship between the various parts of the quantum waves will allow such devices to be constructed and used with accuracy. Figures 11 and 12 show the effect of an external magnetic field on the wave function as it propagates through the device. As the comparison between Figures 11 and 12 demonstrates, a constant magnetic field of 0.018 T causes more of the wave packet to enter the ring, rather than reflecting off the central pillar of the potential. The transmission coefficients of several nano rings were calculated as a function of the external magnetic field strength. Aharonov-Bohm oscillations were observed in all cases, as shown in Figure 13. The same features were observed experimentally by Wiel et al. [37].

It is worth noting that, if this or any other time-dependent scheme is applied directly, calculating the transmission coefficients or conductance for a wide range of energies can still be computationally expensive, amounting to repeated propagations of an initial wave function with various mean energies in the given range. An efficient method for computing the transmission coefficients was developed by Yiu and Wang [120], which arrives at the entire transmission spectrum for a given range of energies after only one propagation of an initial wave function with a broad energy distribution. Manouchehri and Wang [121] recently extended this method to two dimensions.

IV. NUMERICAL ISSUES

Computer simulations of the dynamics of nanostructured devices often require large numerical grids to represent the system Hamiltonian and the wave functions. This can be prohibitively expensive in terms of memory and computational time. There are many optimisations that can be performed when writing computer code to increase accuracy and performance and to decrease memory usage. We discuss the actual implementation on a computer in this section, focusing in particular on the Chebyshev propagation scheme. However, many of the issues discussed here are common to other computational methods.

For example, several computational techniques can be used to reduce the effective propagation space, which decreases the computing time required to evaluate the action of the full Hamiltonian and also reduces the computer memory needed to store large temporary working arrays. One idea is to split the wave function into an interaction part in the vicinity of the potential and a free space part away from the potential. The wave function in the interaction region needs to be propagated using the full Hamiltonian, while the free space wave function can be propagated trivially over the larger grid using the free space propagator. Another idea is to remove those components of the wave function that have left the interaction region and are going to propagate further as free waves. By doing so, one can explore the time evolution of the slower components of the wave function still under the influence of the interaction potentials, without the complication arising from the reflection (in the case of Dirichlet, Cauchy or other such boundary conditions) or wrap-around (in the case of periodic boundary conditions) of the faster components at the grid boundaries. These techniques, together with numerical accuracy considerations, are discussed below in detail.


A. Wave function splitting algorithm

Heather and Metiu [122–124] suggested that for certain systems, namely those in which the potential function is highly localized, a considerable computational saving can be achieved by splitting the wave function into an interaction part in the vicinity of the potential and a free space part away from the potential (see Figure 14(a)). The wave function ψ(x) is multiplied by f(x) and 1 − f(x), where f(x) is a function that is 1 inside the interaction region and decays continuously to 0 near its boundaries, as shown in Figure 14(b). In this way ψ(x) is split into an interaction wave function, ψ_int(x) = f(x)ψ(x), and a free space wave function, ψ_free(x) = (1 − f(x))ψ(x). Nothing has been lost in this process, since one has only multiplied ψ(x) by 1 = f(x) + (1 − f(x)). Due to the linearity of the propagation operator U(∆t), one can propagate ψ_int(x) and ψ_free(x) separately and then combine the two parts at the end of the calculation.
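The splitting step itself can be sketched in a few lines; the window shape and widths below are our own illustrative choices, not those of [122–124]:

```python
import numpy as np

n = 512
x = np.linspace(-20.0, 20.0, n)
psi = np.exp(1j * 2.0 * x - x**2 / 8.0)     # some wave function on the grid

# Smooth window f(x): 1 inside the interaction region |x| < 8, decaying
# continuously to 0 over a splitting region of width ~2 (illustrative).
f = 0.5 * (np.tanh(8.0 - np.abs(x)) + 1.0)

psi_int = f * psi              # interaction part, propagated with the full H
psi_free = (1.0 - f) * psi     # free part, propagated with the free propagator

# Nothing is lost: f*psi + (1 - f)*psi reconstructs psi exactly.
print(np.max(np.abs(psi_int + psi_free - psi)))   # zero up to rounding
```

Because the split is a pointwise multiplication by f and 1 − f, recombination is exact, and the linearity of U(∆t) guarantees that propagating the two parts separately is equivalent to propagating the whole.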

For an electron propagating in free space, a very simple propagation method can be used since the propagation operator becomes [103]

    U(∆t) = exp(−iH∆t/ℏ) = exp(−ip²∆t/2m*ℏ) ,   (64)

which can be easily evaluated in momentum space. The propagation of a wave function in free space then involves (1) Fourier transformation of the initial wave function to momentum space; (2) multiplication by the above free space propagator; and (3) inverse Fourier transformation back to coordinate space. That is,

    ψ(x, t + ∆t) = ∫ exp(−ip²∆t/2m*ℏ) Ψ(p, t) exp(ipx) dp ,   (65)

where Ψ(p, t) is the Fourier transform of ψ(x, t), namely the momentum space wave function.

In this way, the free space propagation can be evaluated extremely quickly for an arbitrary length of ∆t.
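A sketch of the free-space propagation of Eq. (65) by FFT (ℏ = m* = 1; grid and packet parameters illustrative): a Gaussian packet with central wave number k₀ should drift by the group velocity times the time step, here in a single arbitrarily large step.

```python
import numpy as np

n, L = 1024, 200.0
dx = L / n
x = (np.arange(n) - n // 2) * dx
p = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)    # momentum grid

k0, sigma = 1.0, 2.0
psi = np.exp(1j * k0 * x - x**2 / (4.0 * sigma**2))
psi /= np.linalg.norm(psi)

dt = 20.0                                    # one large free-space step
# Eq. (65): FFT, multiply by the free propagator, inverse FFT.
psi = np.fft.ifft(np.exp(-1j * p**2 * dt / 2.0) * np.fft.fft(psi))

# The packet centre should have moved by the group velocity k0 times dt.
x_mean = np.sum(x * np.abs(psi) ** 2)
print(x_mean)   # close to k0 * dt = 20
```

The packet spreads as expected, but its centre moves ballistically; since the whole step is a single unitary phase multiplication in momentum space, the cost is independent of ∆t.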

The wave function in the interaction region needs to be propagated using the full Hamiltonian, but this operates on a much-reduced grid, leading to large memory and speed improvements. The two parts can then be recombined at the end of the calculation. This wave function splitting algorithm provides a substantial computational saving and thus allows much larger numerical grids to be used. This is essential for the time dependent study of systems possessing metastable states that take a long time to decay. In this case, a very large grid is required to avoid artificial boundary reflection of the faster components in the wavepacket.

The original algorithm developed by Heather and Metiu [122–124] treats the grid outside the interaction region as asymptotic: once part of the wave function enters this region it does not interact with the potential again for the duration of the calculation. Although useful for the type of calculations reported there, this approach has several limitations. Firstly, the wave function is required to start off inside the interaction region, thereby causing it to be larger than otherwise necessary. Secondly, using this method it is not possible to handle multiple interaction regions separated by regions of free space. Thirdly, the entire wave function is only assembled at the end of the calculation, negating one of the primary benefits of the time dependent approach, i.e. showing the time development of the system under study.

Falloon and Wang [125] presented a modified wave function splitting procedure that overcomes these difficulties by allowing ψ_int(x) and ψ_free(x) to recombine at each step of the propagation. The essence of the new approach is that ψ(x) is split into ψ_int(x) and ψ_free(x) as before, and each is propagated for the maximum time possible before either ψ_int(x) leaves the interaction region or ψ_free(x) reaches the potential. They are then recombined, split and propagated again. This process can be continued indefinitely, provided that no part of the wave function ever reaches the edge of the free space grid. At each recombination the complete wave function is obtained.

For this approach to work, the interaction grid needs to be divided into potential, splitting, and buffer regions, as shown in Figure 14(b). The potential region is the only place on the grid where V(x) is non-zero. On either side of this, the inner buffer regions provide a space through which ψ_free(x) can propagate inwards between splittings without encountering the potential. The splitting region is where f(x) changes from 1 to 0, and hence where the wave function is split. Outside of this, the two outer buffer regions provide space for ψ_int(x) to propagate between splittings without reaching the boundary of the interaction grid.

It is important to note that a significant part of the interaction grid is actually free space; this may seem wasteful, but it is necessary to ensure that ψ_int(x) and ψ_free(x) do not propagate beyond their permitted regions between splittings. For example, when a splitting is carried out, there may remain a substantial portion of ψ_free(x) in the outer buffer region. During propagation this may move inwards, and so the inner buffer region is required to ensure that it does not penetrate the potential region. Similarly, after a splitting there may be a large portion of ψ_int(x) in the inner buffer region. This could then propagate outwards through the splitting region, and hence the outer buffer region is required to prevent it from reaching the edge of the interaction grid. In other words, ψ_int(x) and ψ_free(x) overlap in the entire buffer region, allowing a smooth transition from one numerical grid to another.


The time taken for the wave function to traverse a given distance is determined by its highest momentum component, pmax = πn/L, where L and n are the free-space grid length and the number of grid points respectively. Both the inner and outer buffer regions should have the same length Lb, and the maximum propagation time between splittings is then given by

  tprop = Lb/pmax = Lb L/(πn) .   (66)

If the above criterion is satisfied, the wave function splitting scheme introduces little numerical error, as shown in Figure 15.
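As a concrete illustration, the criterion of Eq. (66) can be coded directly. The sketch below uses units with ħ = m = 1; the function name and parameter values are our own choices, not taken from the original implementation.

```python
import numpy as np

# Illustrative sketch of the propagation-time criterion, Eq. (66),
# in units with hbar = m = 1.
def max_safe_propagation_time(L, n, L_b):
    """Time before the fastest grid-representable component crosses a buffer of length L_b."""
    p_max = np.pi * n / L          # highest momentum representable on the grid
    return L_b / p_max             # t_prop = L_b * L / (pi * n)

# example: a 1024-point free-space grid of length 100 with buffers of length 10
t_prop = max_safe_propagation_time(L=100.0, n=1024, L_b=10.0)
```

Between recombinations one would propagate for at most `t_prop` before splitting again.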

The new procedure retains all the benefits of the earlier splitting algorithm, whilst being more useful for the study of quantum waveguides. It is particularly valuable when a "long time" solution is required, which demands a large spatial grid, most of it free of external potentials, to contain the fast components of the wave function.

One can also concentrate on the interaction region by removing the fast components of the wave function outside this region, using lines of no return or complex optical potentials in the Hamiltonian [103, 107, 126, 127]. The "lines of no return" method sets the wave function to zero at the lines, or multiplies it by a factor that decreases towards zero. However, this method introduces artificial reflection at the lines of no return and is therefore non-physical. A more attractive method is the complex absorbing potential, which absorbs wave function components just before they reach the grid boundaries. The complex potential is easy to implement, requires little extra computational power, and has been found to be very effective at absorbing the wave packet at the boundaries. Midgley and Wang [107] carried out a detailed analysis of various complex absorbing potentials employed to eliminate reflection or wrap-around of wave packets at numerical grid boundaries.
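As a minimal sketch of the idea (not the specific functional forms analysed in Ref. [107]), a purely imaginary potential confined to strips at the grid edges damps the wave function there while leaving the interior untouched. The quartic ramp, the strip width and the strength η below are our own illustrative choices:

```python
import numpy as np

# Sketch of a complex absorbing potential (CAP): a purely imaginary term
# -i*eta*s(x)**4 that rises smoothly from 0 to eta over strips of width
# `width` at each edge of the grid.
def absorbing_potential(x, width, eta):
    s = np.zeros_like(x)
    left, right = x[0] + width, x[-1] - width
    s[x < left] = (left - x[x < left]) / width      # ramp 0 -> 1 at the left edge
    s[x > right] = (x[x > right] - right) / width   # ramp 0 -> 1 at the right edge
    return -1j * eta * s**4

x = np.linspace(-50, 50, 512)
V_cap = absorbing_potential(x, width=10.0, eta=0.5)

# modulus of one time step of exp(-i*V*dt/hbar) with hbar = 1, dt = 0.1:
# equal to 1 in the interior, below 1 inside the absorbing strips
damp = np.abs(np.exp(-1j * V_cap * 0.1))
```

Adding `V_cap` to the Hamiltonian multiplies the wave function by this damping factor at every step, so components entering the strips are steadily absorbed.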


B. Numerical differentiation

A common numerical task is to evaluate the derivatives of a data set given on a numerical mesh. For example, in simulating the dynamics of quantum waveguides using the time dependent Schrödinger equation, the dominant computational effort is in the evaluation of the first and second-order spatial derivatives of the system wave functions, as discussed in the previous section.

In most practical work, derivatives are calculated using a finite-difference method. In the five-point finite-difference method (FD5), the second derivative of a data set (f1, f2, ..., fm, ..., fN−1, fN) is given by

  d²f/dx²|m = (−fm−2 + 16fm−1 − 30fm + 16fm+1 − fm+2) / (12Δx²) ,   (67)

where Δx is the grid spacing. If higher accuracy is required, the seven-point finite-difference method (FD7) can be applied:

  d²f/dx²|m = (2fm−3 − 27fm−2 + 270fm−1 − 490fm + 270fm+1 − 27fm+2 + 2fm+3) / (180Δx²) .   (68)

However, finite-difference methods are based on local approximations to the derivative operator, which introduces error. A more accurate and computationally efficient scheme is based on the discrete Fourier transform (DFT) [103, 133, 134], which uses the fact that if

  f(x1, x2, ..., xL) = ∫∫···∫ F(k1, k2, ..., kL) exp[2πi(k1x1 + k2x2 + ··· + kLxL)] dk1 dk2 ... dkL   (69)

for a function F(k1, k2, ..., kL), then the partial derivatives are

  f^(n1,n2,...,nL)(x1, x2, ..., xL) = ∫∫···∫ (2πik1)^n1 (2πik2)^n2 ··· (2πikL)^nL F(k1, k2, ..., kL) exp[2πi(k1x1 + k2x2 + ··· + kLxL)] dk1 dk2 ... dkL .   (70)

It follows that the derivatives can be evaluated by firstly calculating F(k1, k2, ..., kL) using the DFT, and secondly taking the inverse DFT of (2πik1)^n1 (2πik2)^n2 ··· (2πikL)^nL F(k1, k2, ..., kL).
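Equations (67) and (68) translate directly into vectorised stencils. A quick check on a Gaussian, whose second derivative is known analytically, confirms the expected accuracy ordering; the test function and grid below are our own choices:

```python
import numpy as np

# Five- and seven-point finite-difference second derivatives, Eqs. (67)-(68),
# applied to f(x) = exp(-x^2), whose exact second derivative is (4x^2 - 2)exp(-x^2).
def d2_fd5(f, dx):
    d2 = np.zeros_like(f)
    d2[2:-2] = (-f[:-4] + 16*f[1:-3] - 30*f[2:-2]
                + 16*f[3:-1] - f[4:]) / (12*dx**2)
    return d2

def d2_fd7(f, dx):
    d2 = np.zeros_like(f)
    d2[3:-3] = (2*f[:-6] - 27*f[1:-5] + 270*f[2:-4] - 490*f[3:-3]
                + 270*f[4:-2] - 27*f[5:-1] + 2*f[6:]) / (180*dx**2)
    return d2

x = np.linspace(-10, 10, 401)   # dx = 0.05
dx = x[1] - x[0]
f = np.exp(-x**2)
exact = (4*x**2 - 2) * np.exp(-x**2)

# compare on the interior points where both stencils are defined
err5 = np.max(np.abs(d2_fd5(f, dx)[3:-3] - exact[3:-3]))
err7 = np.max(np.abs(d2_fd7(f, dx)[3:-3] - exact[3:-3]))
```

As expected, the O(Δx⁶) FD7 stencil is substantially more accurate than the O(Δx⁴) FD5 stencil at the same grid spacing.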

For N data points of a one-dimensional function f(x), separated by uniform spacing Δx, where x ∈ [0, (N − 1)Δx], the continuous Fourier integral (70) can be approximated using the DFT

  F(kj) = Σ_{m=0}^{N−1} f(xm) exp(−2πi kj xm) Δx ,   (71)

where xm = mΔx. The corresponding inverse DFT is

  f(xm) = Σ_{j=0}^{N−1} F(kj) exp(2πi xm kj) Δk ,   (72)

where Δk = 1/(NΔx) and kj = jΔk.

For any sampling interval Δx, the maximum frequency in its Fourier transform is kmax = 1/(2Δx), which is the Nyquist critical frequency [135]. In other words, the valid frequency components are contained in F(kj) with j ∈ [0, N/2], while F(kj) = F*(kN−j) for j ∈ [N/2 + 1, N − 1], where * indicates complex conjugation. Consequently, the inverse Fourier transform given by Eq. (70) is discretized to a good approximation as

  f^(n)(xm) = Σ_{j=0}^{N−1} (2πi j̃Δk)^n F(kj) exp(2πi xm kj) Δk ,   (73)

where j̃ = j for j ∈ [0, N/2] and j̃ = j − N for j ∈ [N/2 + 1, N − 1]. Similarly, for a two-dimensional data set of N1 × N2 points, the discrete form of Eq. (70) is

  f^(n1,n2)(xm1, xm2) = Σ_{j1=0}^{N1−1} Σ_{j2=0}^{N2−1} (2πi j̃1Δk1)^n1 (2πi j̃2Δk2)^n2 F(kj1, kj2) exp[2πi(xm1 kj1 + xm2 kj2)] Δk1 Δk2 .   (74)

The DFT representation of the derivative operator is global, giving it improved accuracy over local representations. Moreover, the calculation of higher-order derivatives using the DFT scheme requires little extra computation in contrast to, for example, the finite-difference methods. The computer program is simple, its structure transparent, and it is easily extended to higher dimensions, with possibly different grid spacings for each dimension. The DFT scheme is also efficient thanks to the fast Fourier transform (FFT) algorithm, which scales as O(m ln m). With the use of modern parallel computers the scaling can be further reduced to O(ln m). Almost all books on parallel algorithms have a chapter on the FFT, and almost all supercomputers have FFT subroutines specifically tuned for their particular parallel architectures. Figure 16 shows the absolute error ∇²fexact − ∇²fnum in evaluating the second derivative of a Gaussian function. The numerical error of the DFT scheme was found to be nine and seven orders of magnitude smaller than that of the five-point and seven-point finite-difference methods, respectively. The DFT scheme was also checked for several arbitrarily deformed wave functions; the same accuracy is achieved in evaluating their derivatives as long as the wave functions approach zero at the boundaries.
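A sketch of the spectral second derivative of Eq. (73) using NumPy follows; the grid and test function are our own choices. Note that `np.fft.fftfreq` returns exactly the wrapped frequencies j̃Δk of the text:

```python
import numpy as np

# Spectral second derivative via the DFT, Eq. (73) with n = 2.
def d2_dft(f, dx):
    k = np.fft.fftfreq(len(f), d=dx)     # wrapped frequencies j~ * Delta_k (cycles/length)
    return np.real(np.fft.ifft((2j*np.pi*k)**2 * np.fft.fft(f)))

N, L = 256, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
f = np.exp(-x**2)                        # effectively zero at the boundaries
exact = (4*x**2 - 2) * np.exp(-x**2)
err_dft = np.max(np.abs(d2_dft(f, x[1]-x[0]) - exact))
```

Even at this coarse spacing (Δx ≈ 0.156), the global DFT derivative reaches near machine accuracy, in line with Table I.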

Wang [134] also examined the numerical errors of the FD5, FD7 and DFT methods in evaluating one-dimensional second-order derivatives as a function of the grid spacing Δx. As shown in Table I, the DFT remains highly accurate even at significantly larger Δx. In general, the DFT scheme is many orders of magnitude more accurate than the FD5 and FD7 methods.

However, it is important to note that the DFT scheme has a limitation: the Fourier transform implies periodic boundary conditions, in which the last point of the data set must connect smoothly to the starting point. In other words, if the fast Fourier transform (FFT) method is used to compute derivatives, care must be taken to ensure that the wave function is effectively zero along the boundaries at all times. Because the FFT method implies periodic boundary conditions, a wave packet crossing one boundary would re-emerge on the opposite side, giving rise to unwanted artefacts.

TABLE I: Numerical errors of the FD5, FD7 and DFT methods

  Δx     FD5          FD7           DFT
  0.01   1.3 × 10⁻⁸   5.0 × 10⁻¹¹   7.0 × 10⁻¹¹
  0.02   2.1 × 10⁻⁷   1.9 × 10⁻¹⁰   1.2 × 10⁻¹¹
  0.05   8.3 × 10⁻⁶   4.7 × 10⁻⁸    2.0 × 10⁻¹²
  0.1    1.3 × 10⁻⁴   2.9 × 10⁻⁶    5.9 × 10⁻¹³
  0.2    2.0 × 10⁻³   1.7 × 10⁻⁴    4.1 × 10⁻¹⁴
  0.5    6.2 × 10⁻²   2.5 × 10⁻²    6.6 × 10⁻⁵
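The wrap-around effect is easy to demonstrate: a packet pushed through the right-hand edge of a periodic FFT grid reappears on the left. The grid and packet parameters below are our own (ħ = m = 1, exact FFT kinetic propagator):

```python
import numpy as np

# Illustration of the periodic boundary condition implied by the FFT:
# a packet centred at x = 30 moving right with momentum 4 on a grid of
# length 100 travels 40 units in time t = 10, crossing the right edge
# and re-emerging near x = -30.
N, L = 256, 100.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=x[1]-x[0])
psi = np.exp(-(x - 30)**2/8 + 4j*x)
psi = np.fft.ifft(np.exp(-0.5j*k**2*10.0) * np.fft.fft(psi))   # propagate to t = 10

left_half = np.sum(np.abs(psi[x < 0])**2)
right_half = np.sum(np.abs(psi[x >= 0])**2)
```

Almost all of the probability ends up in the left half of the grid, even though the packet was launched to the right: the artefact the text warns against.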

C. Numerical accuracy and stability

The main concern with computational methods is how accurately they model the physical system and how stable the solutions are. If the results obtained are not reliable or stable, then the program is of little use. There is a deep and general concern about the accuracy of system wave functions obtained using time-dependent propagation approaches, since errors accumulated over many time steps (normally on the order of a million) may cause severe distortion of the wave packets. Even for a one-step time propagator, such as the Chebyshev scheme, errors may accumulate through repeated use of the recurrence relation Eq. (59). Typical numbers of iterations range from a few hundred to several thousand. To ensure that the time dependent solution accurately reflects the system being modelled, the calculated results need to be checked against a set of criteria.

1. Conservation of norm and energy

First of all, the norm of the wave function must be conserved throughout the time evolution, because the exact time evolution operator is unitary. The energy of the system should also remain constant throughout the time evolution. The preservation of norm and energy serves as a basic criterion for any propagation scheme. For the Chebyshev scheme these two attributes are particularly important, since the Chebyshev propagator is not unitary by construction, and thus conserves neither norm nor energy by definition. In this case, conservation of norm and energy provides a strong test of the propagation scheme and the computational model. Wang and Midgley [106] found that both the norm and the energy were conserved to one part in 10⁸, and in many cases to better than one part in 10¹⁵. By relaxing this stringent condition in computer programs, faster convergence can be achieved, yielding results with no noticeable difference in macroscopic observables, such as the transmission coefficient and conductance.
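The two conservation checks can be illustrated with a free wave packet advanced by the exactly unitary FFT kinetic propagator, a simplified stand-in for the Chebyshev propagator; units (ħ = m = 1) and parameters are our own choices:

```python
import numpy as np

# Conservation check: a free Gaussian packet is advanced for 1000 steps
# and the drift in norm and mean kinetic energy is monitored.
N, L, dt = 512, 100.0, 0.05
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)          # angular wavenumbers

psi = np.exp(-x**2/4 + 0.5j*x)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)    # normalise

def norm(p):
    return np.sum(np.abs(p)**2)*dx

def kinetic_energy(p):
    pk = np.fft.fft(p)
    return np.real(np.sum(0.5*k**2*np.abs(pk)**2) / np.sum(np.abs(pk)**2))

n0, e0 = norm(psi), kinetic_energy(psi)
step = np.exp(-0.5j*k**2*dt)                 # exp(-i k^2 dt / 2)
for _ in range(1000):
    psi = np.fft.ifft(step*np.fft.fft(psi))

norm_drift = abs(norm(psi) - n0)
energy_drift = abs(kinetic_energy(psi) - e0)
```

For this exactly unitary propagator both drifts stay at the level of FFT round-off over the full run, which is the benchmark a non-unitary scheme such as the Chebyshev expansion must be tested against.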

2. Time reversal propagation

Since the Schrödinger equation is symmetric with respect to time reversal, a stringent test of the reliability of the solution is to reverse the evolution in time; the wave function should then return to its initial state [136]. Mathematically speaking, the solution to the time dependent Schrödinger equation (given by Eq. (42)) can be reorganised as

  exp(iHΔt/ħ) ψ(x, y, t + Δt) = ψ(x, y, t) ,   (75)

due to the Hamiltonian being Hermitian. This represents the propagation of the final wave function ψ(x, y, t + Δt) for a time −Δt back to the initial wave function ψ(x, y, t). Theoretically, this reversely propagated wave function is identical to the initial wave function. If errors were accumulated along the way, the reverse propagation would lead to something quite different from the starting wave function. A comparison between the two wave functions therefore imposes a very stringent test on the stability of the computational method and its numerical accuracy.

Wang and Midgley [106] performed this test by propagating an initial wave function forward in time by an amount Δt and then propagating it backward in time by an amount −Δt. It was found that, for simple cases with a fine grid, the initial wave function can be reproduced to machine precision, a relative error of less than 10⁻¹⁵. For more complicated cases, such as a large space with applied electric and magnetic fields, the initial wave function can be reproduced to within a relative error of 10⁻⁸. The time propagation of the wave packet illustrates this behaviour (see Figure 17), where the wave packet is seen to spread, reflect and translate (under a magnetic field) to a final state, and then recombine and move back to its original form and position under time reversal. The difference between the initial wave function and the reversely propagated wave function is plotted in Figure 18.
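The test is compact to set up with the FFT kinetic propagator (again ħ = m = 1, with parameters of our own choosing): propagate forward, then backward with the complex-conjugate propagator, and compare with the initial state.

```python
import numpy as np

# Time-reversal test: 500 steps forward, 500 steps backward, then
# compare with the initial wave function.
N, L, dt, steps = 512, 100.0, 0.05, 500
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=x[1]-x[0])
psi0 = np.exp(-x**2/4 + 1j*x)

forward = np.exp(-0.5j*k**2*dt)
backward = np.conj(forward)                  # propagation by -dt, cf. Eq. (75)

psi = psi0.copy()
for _ in range(steps):
    psi = np.fft.ifft(forward*np.fft.fft(psi))
for _ in range(steps):
    psi = np.fft.ifft(backward*np.fft.fft(psi))

reversal_error = np.max(np.abs(psi - psi0))
```

Here the reversal error is limited only by floating-point round-off; any systematic error in a propagation scheme would show up as a much larger residual.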


3. Comparison with exact solution: free space propagation

By directly comparing the numerical results to known exact solutions for simple systems, computational errors can be further quantified. Several known solutions exist, for example, the electron propagation in free space (i.e. no external fields and no interactions).

A classical particle in free space travels with no change in its direction, velocity or acceleration. Similarly, a quantum particle in free space propagates with no change in direction, mean velocity or acceleration. Its wave function does, however, spread as it propagates. If a Gaussian wave packet is used as the initial wave function,

  ψ(x, y, 0) = (1/(2π wx wy))^(1/2) exp(−x²/(2wx²) − y²/(2wy²) + (i/ħ)px x + (i/ħ)py y) ,   (76)

the wave function some time later is given by [71, 137]

  ψ(x, y, t) = (16wx²wy²/π²)^(1/4) exp(iφ1 + iφ2) exp((i/ħ)px x + (i/ħ)py y)
    × exp[−(x − px t/m)² / (2wx² + 2iħt/m)] × exp[−(y − py t/m)² / (2wy² + 2iħt/m)] ,   (77)

where φ1 and φ2 are real and independent of x and y:

  φ1 = −θ1 − px² t/(2mħ) ,   (78)
  φ2 = −θ2 − py² t/(2mħ) ,   (79)
  tan(2θ1) = 2ħt/(m wx²) ,   (80)
  tan(2θ2) = 2ħt/(m wy²) .   (81)

This exact solution allows direct comparison with the numerical results obtained using the Chebyshev scheme. Such a comparison is a strong indication that the computational model is valid and that the computer program accurately reflects the model.

Figure 19 demonstrates how the wave function propagates in the direction of its initial momentum. Since the initial wave function was a Gaussian packet, the numerical results correctly display the spreading of the wave packet and the decrease of its height. Figure 20 shows the comparison of the probability distribution P = ψψ* with the analytic solution. The results show that the numerical and analytical solutions agree to almost machine precision (10⁻¹⁵).
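A one-dimensional analogue of Eqs. (76)–(77) makes the comparison concrete. The closed form below is the standard boosted spreading Gaussian for ħ = m = 1; the grid and packet parameters are our own choices:

```python
import numpy as np

# Free-space test against the exact spreading Gaussian (1-D, hbar = m = 1).
# Numerical propagation uses the exact FFT kinetic propagator.
N, L, w, p, t = 1024, 200.0, 2.0, 1.0, 20.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=x[1]-x[0])

psi0 = np.exp(-x**2/(2*w**2) + 1j*p*x)
psi_num = np.fft.ifft(np.exp(-0.5j*k**2*t) * np.fft.fft(psi0))

# exact solution: complex width a(t) = w^2 + i t, drifting centre x = p t
a = w**2 + 1j*t
psi_exact = np.sqrt(w**2/a) * np.exp(-(x - p*t)**2/(2*a) + 1j*p*x - 0.5j*p**2*t)

prob_error = np.max(np.abs(np.abs(psi_num)**2 - np.abs(psi_exact)**2))
```

The numerical probability distribution reproduces the analytic one, including the translation of the centre and the spreading (and lowering) of the packet.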

4. Comparison with exact solution: uniform magnetic field

A classical particle in a magnetic field undergoes circular motion. The same type of motion occurs for a wave packet in a constant uniform magnetic field, with its centre of mass following that of a classical particle. An analytical solution for the motion of a Gaussian wave packet in a uniform magnetic field was derived by Midgley and Wang [111], who followed some steps suggested by ter Haar [138]. The details are given below, and the derived exact solution is then compared with numerical results.

(1) Analytical solution

The Schrödinger equation for a charged particle moving in a constant uniform magnetic field is given by

  iħ ∂Ψ/∂t = −(ħ²/2m) ∇²Ψ + ωL (ħ/i) (x ∂Ψ/∂y − y ∂Ψ/∂x) + (1/2) m ωL² (x² + y²) Ψ ,   (82)

where the magnetic vector potential used is A = (b/2)(−y, x, 0) and ωL = eb/(2m) is the Larmor frequency. To find a solution, a rotating frame of reference is chosen:

  x = x′ cos(ωL t) + y′ sin(ωL t) ,   (83)
  y = −x′ sin(ωL t) + y′ cos(ωL t) ,   (84)
  t = t′ ,   (85)
  Ψ = Ψ′ .   (86)

In this rotating frame of reference, the operators become

  ∂/∂t′ = ∂/∂t + ωL (x ∂/∂y − y ∂/∂x) ,   (87)
  ∇′² = ∇² .   (88)

With these substitutions, the Schrödinger equation is transformed into that of a simple harmonic oscillator,

  iħ ∂Ψ′/∂t′ = −(ħ²/2m) ∇′²Ψ′ + (mωL²/2)(x′² + y′²)Ψ′ ,   (89)

with well known solutions. In two dimensions,

  Ψ′ ≡ ψ(x′, y′, t) = C Σ_{n,l} A_{n,l} χn(αx′) χl(αy′) exp(−iωL t (n + l + 1)) ,   (90)

where χn(αx) = cn exp(−α²x²/2) Hn(αx) are the eigenfunctions of the simple harmonic oscillator, A_{n,l} = Aˣn Aʸl are chosen to satisfy the initial conditions, with Aˣn and Aʸl referring to the x and y components of A_{n,l} respectively, Hn(αx) are the Hermite polynomials, α = √(mωL/ħ), and cn² = α/(2ⁿ n! √π).

Using the generating function for the Hermite polynomials,

  exp(−λ² + 2λη) = Σn (λⁿ/n!) Hn(η) ,   (91)

we have

  ψ(x′, y′, 0) = C exp[−(Aˣn cn n!)^(2/n) + 2(Aˣn cn n!)^(1/n) αx′] exp(−α²x′²/2)
    × exp[−(Aʸl cl l!)^(2/l) + 2(Aʸl cl l!)^(1/l) αy′] exp(−α²y′²/2) .   (92)

If the initial wave packet is given by a Gaussian,

  ψ(x′, y′, 0) = (1/√(wx wy π)) exp(−x′²/(2wx²) − y′²/(2wy²) + (i/ħ)px x′ + (i/ħ)py y′) ,   (93)

then equating the coefficients in Equations (92) and (93) gives

  1/√(wx wy π) = C exp[−(Aˣn cn n!)^(2/n) − (Aʸl cl l!)^(2/l)] ,   (94)
  −1/(2wx²) = −α²/2 ,   (95)
  −1/(2wy²) = −α²/2 ,   (96)
  i px/ħ = 2(Aˣn cn n!)^(1/n) α ,   (97)
  i py/ħ = 2(Aʸl cl l!)^(1/l) α .   (98)

From these, we obtain the following relations:

  wx = wy = 1/α ,   (99)
  Aˣn cn = (1/n!) (i px/(2ħα))ⁿ ,   (100)
  Aʸl cl = (1/l!) (i py/(2ħα))ˡ ,   (101)
  C = (α/√π) exp[−(px/(2ħα))² − (py/(2ħα))²] .   (102)

The evolution of the wave packet is therefore

  ψ(x′, y′, t) = C Σ_{n,l} (1/n!) (i px/(2ħα))ⁿ (1/l!) (i py/(2ħα))ˡ exp(−α²x′²/2) exp(−α²y′²/2) Hn(αx′) Hl(αy′) exp(−iωL t (n + l + 1)) .   (103)

By using the generating function for the Hermite polynomials once more, the above expression can be further simplified to

  ψ(x′, y′, t) = C exp[−λx² + 2αλx x′ − α²x′²/2 − λy² + 2αλy y′ − α²y′²/2 − iωL t] ,   (104)

where

  λx = (i px/(2ħα)) exp(−iωL t) ,   (105)
  λy = (i py/(2ħα)) exp(−iωL t) .   (106)

(2) Numerical results

As described earlier, the magnetic field enters the Hamiltonian via the vector potential A, i.e.

  H = (1/2m)(−iħ∇ − eA)² + eV ,   (107)

where A is defined by

  B = ∇ × A .   (108)

For a given magnetic field B, there are an infinite number of magnetic vector potentials A (namely the different gauges), all of which give the same results for physically measurable quantities. However, different gauges have different properties in their numerical implementation. For example, three solutions for the magnetic vector potential A corresponding to a constant magnetic field B = (0, 0, b) are

  A = b(−y, 0, 0) ,   (109)
  A = (b/2)(−y, x, 0) ,   (110)
  A = (b/2)(x − y, x − y, 0) .   (111)

A closer examination shows that Eq. (110) requires the most memory and computation, because it has two non-zero and non-symmetric x and y components, while Eq. (109) is not as numerically stable as Eqs. (110) and (111). The larger multiplication factor in Eq. (109), b rather than b/2 for the other two gauges, doubles the size of the minute numerical error emerging from the boundaries each time the Hamiltonian is applied to the wave function. Consequently, Eq. (111) is computationally more favourable than the other two.
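The gauge equivalence of Eqs. (109)–(111) can be checked numerically: each vector potential should reproduce B = (0, 0, b) under a centred-difference curl. The grid and field strength below are our own choices:

```python
import numpy as np

# Check that the three gauges of Eqs. (109)-(111) give the same field
# B_z = dAy/dx - dAx/dy = b, using finite differences.
b, n = 0.1, 64
x = np.linspace(-1, 1, n)
y = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, y, indexing='ij')    # axis 0 is x, axis 1 is y

gauges = [
    (b*(-Y),       np.zeros_like(X)),      # Eq. (109): A = b(-y, 0, 0)
    (b/2*(-Y),     b/2*X),                 # Eq. (110): A = b/2 (-y, x, 0)
    (b/2*(X - Y),  b/2*(X - Y)),           # Eq. (111): A = b/2 (x-y, x-y, 0)
]

dx, dy = x[1]-x[0], y[1]-y[0]
Bz = [np.gradient(Ay, dx, axis=0) - np.gradient(Ax, dy, axis=1)
      for Ax, Ay in gauges]
max_dev = max(np.max(np.abs(B - b)) for B in Bz)
```

Since all three potentials are linear in the coordinates, the finite-difference curl is exact up to round-off, and all three give B_z = b everywhere.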

There is a further requirement on the magnetic vector potential A, due to the way the action of the Laplacian operator on the wave functions is calculated: the potential should be continuous across the boundaries, and preferably zero along them, if the Fourier transform method is used to compute derivatives. Otherwise, any non-zero component of the wave packet crossing one boundary would re-emerge on the opposite side, giving rise to unwanted artefacts.

To satisfy this requirement, one can define

  A = (Ax(x, y), Ay(x, y), 0)   when x and y are not near a boundary,
  A = 0                         when x or y is near a boundary,   (112)

which is then smoothed by convolving with a Gaussian function or via a local averaging process. This magnetic vector potential A produces a constant magnetic field across the nanostructure under study, but deviates from a constant near the boundaries. In general, the smoothed vector potential can be expressed as

  A = (f(x, y), g(x, y), 0) .   (113)
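A sketch of such a construction follows; the tanh window applied to the symmetric gauge is our own illustrative smoothing choice (the text suggests Gaussian convolution or local averaging), and all parameters are assumptions:

```python
import numpy as np

# Windowed vector potential of Eqs. (112)-(113): the symmetric gauge
# A = b/2 (-y, x, 0) multiplied by a smooth window that takes A to zero
# near the grid boundaries.
b, L, n, edge = 0.1, 100.0, 256, 15.0
x = np.linspace(-L/2, L/2, n)
X, Y = np.meshgrid(x, x, indexing='ij')

def window(u):
    # ~1 in the interior, rolls smoothly to ~0 within `edge` of each boundary
    return 0.5*(np.tanh((u + L/2 - edge)/3.0) - np.tanh((u - L/2 + edge)/3.0))

W = window(X)*window(Y)
Ax, Ay = -b/2*Y*W, b/2*X*W          # f(x, y) and g(x, y) of Eq. (113)

# resulting field B_z = dAy/dx - dAx/dy
dx = x[1] - x[0]
Bz = np.gradient(Ay, dx, axis=0) - np.gradient(Ax, dx, axis=1)
center_dev = abs(Bz[n//2, n//2] - b)       # constant field in the interior
boundary_A = max(np.max(np.abs(Ax[0, :])), np.max(np.abs(Ax[:, 0])))
```

In the interior the field is the desired constant b to high accuracy, while the potential is effectively zero along the boundaries, as the FFT derivative scheme requires.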

Expanding Eq. (107) as

  H = (1/2m)(−ħ²∇² + iħe ∇·A + iħe A·∇ + e²A²) + eV   (114)

leads to

  H = (1/2m)[−ħ²∇² + 2iħe (g(x, y) ∂/∂y + f(x, y) ∂/∂x) + iħe (∂f(x, y)/∂x + ∂g(x, y)/∂y) + e²(f(x, y)² + g(x, y)²)] + eV ,   (115)

where f(x, y) and g(x, y) are smoothed functions, approximating the vector potential A in the region where propagation occurs while being zero at the boundaries.

For speed of computation, the components independent of ψ, that is,

  iħe (∂f(x, y)/∂x + ∂g(x, y)/∂y) + e²(f(x, y)² + g(x, y)²) ,   (116)

can be computed once and stored, while the components dependent on ψ, that is,

  2iħe (g(x, y) ∂/∂y + f(x, y) ∂/∂x) ,   (117)

must be computed each time the Hamiltonian is applied.

Evaluation of the ψ-dependent components requires six arrays to be stored (f(x, y), g(x, y), ∂/∂y, ∂/∂x, and the application of each partial derivative to the wave function) and two inverse Fourier transforms to be applied to the wave function. The net result is an increase in memory of seven arrays (one independent and six dependent components) and two extra inverse Fourier transforms (with the associated increase in multiplications), doubling the number of Fourier transforms computed.

The smoothing operation gives rise to moderately large spatial derivatives of the wave packet near the boundaries, which can introduce considerable numerical error if the wave packet approaches the boundary. In all calculations presented, the spatial domain is set to be sufficiently large so that the wave packet does not encounter the boundary region.

Figure 21(a) shows the electron wave packet propagating under an applied uniform constant magnetic field. The wave packet initially has energy 0.082 eV in the x direction and a momentum spread of 2.3%. The propagation is under the influence of an applied magnetic field of 0.1 T. The time step is chosen to be 0.023 ps, and a full circle takes 0.23 ps. The wave packet is specially chosen to have a momentum spread corresponding to the Larmor frequency, which ensures that its shape is preserved while propagating. As expected, the wave function follows the same path as that of a classical electron in the same magnetic field. The difference between the analytical solution and the computational solution is shown in Figure 21(b). The maximum error is about one part in a billion, which is close to machine accuracy. This provides strong evidence that the computational model is highly accurate and that the approximation made for the vector potential is valid.

The method described above is applicable to more than just uniform constant magnetic fields. As long as an appropriate vector potential can be constructed, arbitrary magnetic fields can be studied. By smoothing the vector potentials at the boundary, the computation proceeds with little error, as the results shown in this paper demonstrate. While the Chebyshev propagation scheme is only valid for time-independent potentials, time dependent magnetic fields may be studied by using many small time steps, each considerably smaller than the oscillation period of the magnetic field. In this fashion, the method demonstrated is extensible to arbitrarily shaped and time varying magnetic fields.


V. CONCLUSIONS

In this paper we have reviewed recent work on electron transport and quantum interference in nanostructured devices, focusing mainly on the theoretical and computational aspects. A general quantum waveguide theory is presented, and a wide range of computational methods for solving the corresponding Schrödinger equations are discussed in detail. This provides a basis for computer simulations of various quantum phenomena emerging at the nanometre scale, for example resonant tunnelling, resonant cavities, quantum interference and diffraction, Coulomb drag and Coulomb blockade, Aharonov-Bohm rings, quantum information processing and quantum logic manipulation, among many others.

The fabrication of nanostructures is currently underway at many research institutes around the world. Much of the research undertaken thus far concerns the actual fabrication of these devices and the observation of fundamental quantum phenomena. Recent efforts have started to focus on using these structures to build usable computing devices, such as quantum transistors, single-electron devices, quantum bits and logic gates, and on how these structures will interact with each other and their environment. Many studies have demonstrated non-intuitive behaviour and promising potential for novel electronic devices with unconventional functions. As the fabrication of these structures becomes more widely available and their properties better understood, they will become increasingly important in the electronics industry. If the time line described by Sohn [139] is used as a guide, mainstream silicon chip manufacturers will reach the nanometre scale in about 10 years' time.

However, studying these quantum phenomena systematically through experiments is difficult and very costly, because a new device would have to be fabricated for each geometric configuration. In this regard, computer simulations offer a very powerful way of obtaining detailed and often very accurate information about these systems. Through the use of computer code and an appropriate model description, potential problems and novel electronic devices may be identified and studied. Questions that previously could only be speculated on can now be addressed in great detail. With further development of theoretical models and new computational algorithms, considerable new information will be gained and added to the knowledge base of nano-electronic devices. This field of research is still in its infancy.

Acknowledgments

This work was supported by Australian Research Council, APAC (Australian Partnership for Advanced Computing National Facility) and iVEC (The Hub of Advanced Computing in Western Australia).


[1] T. Sugaya, J. P. Bird, M. Ogura, Y. Sugiyama, D. K. Ferry, and K. Y. Jang, App. Phys. Lett. 80, 434 (2002). [2] B. Kane, G. Facer, A. Dzurak, N. Lumpkin, R. Clark, L. Pfeiffer, and K. West, Appl. Phys. Lett. 72, 3506 (1998). [3] C. Dekker, Physics Today 52, 22 (1999). [4] A. Yacoby, H. L. Stormer, N. S. Wingreen, L. N. Pfeiffer, K. W. Baldwin, and K. W. West, Phys. Rev. Lett. 77, 4612 (1996). [5] Y. Hayamizu, M. Yoshita, S. Watanabe, H. A. L. Pfeiffer, and K. West, App. Phys. Lett. 81, 4937 (2002). [6] S. Frank, P. Poncharal, Z. L. Wang, and W. A. Heer, Science 280, 1744 (1998). [7] I. Kamiya, I. Tanaka, K. Tanaka, F. Yamada, Y. Shinozuka, and H. Sakaki, Physica E 13, 131 (2002). [8] A. K. Geim, P. C. Main, N. L. Scala, L. Eaves, T. J. Foster, P. H. Beton, J. W. Sakai, F. W. Sheard, and M. Henini, Phys. Rev. Lett. 72, 2061 (1994). [9] J. Appenzeller, C. Schroer, T. Schapers, A. v. d. Hart, A. Forster, B. Lengeler, and H. Luth, Phys. Rev. B 53, 9959 (1996). [10] J. Appenzeller and C. Schroer, J. Appl. Phys. 87, 3165 (2000). [11] P. Debray, O. E. Raichev, M. Rahman, R. Akis, and W. C. Mitchel, Appl. Phys. Lett. 74, 768 (1999). [12] A. S. Mel’nikov and V. M. Vinokur, Nature 415, 60 (2002). [13] K. Schwab, E. A. Henriksen, J. M. Worlock, and M. L. Roukes, Nature 404, 974 (2000).


[14] L. Kouwenhoven, Nature 403, 374 (2000). [15] S. Komiyama, O. Astafiev, V. Antonov, T. Kutsuwa, and H. Hirai, Nature 403, 405 (2000). [16] L. Kouwenhoven, Nature 403, 374 (2000). [17] J. M. Martinis and M. Nahum, Phys. Rev. Lett. 72, 904 (1994). [18] L. R. C. Fonseca, A. N. Korotkov, and K. K. Likharev, Appl. Phys. Lett. 69, 1858 (1996). [19] X. Jehl, M. W. Keller, R. L. Kautz, J. Aumentado, and J. M. Martinis, Phys. Rev. B 67, 165331 (2003). [20] R. Akis and D. K. Ferry, App. Phys. Lett. 79, 2823 (2001). [21] A. Bertoni, P. Bordone, R. Brunetti, C. Jacoboni, and S. Reggiani, Phys. Rev. Lett. 84, 5912 (2000). [22] J. Harris, R. Akis, and D. K. Ferry, App. Phys. Lett. 79, 2214 (2001). [23] M. J. Gilbert, R. Akis, and D. K. Ferry, App. Phys. Lett. 81, 4284 (2002). [24] U. Meirav, M. A. Kastner, and S. J. Wind, Phys. Rev. Lett. 65, 771 (1990). [25] M. A. Kastner, Physics Today pp. 24–31 (1993). [26] S. Datta, Electronic transport in mesoscopic systems (Cambridge University Press, 1995). [27] D. K. Ferry and S. M. Goodnick, Transport in nanostructures (Cambridge University Press, 1997). [28] R. Webb, Superlattices and Microstructures 23, 969 (1998). [29] K. Andres, R. Bhatt, P. Goalwin, T. Rice, and R. Walstedt, Phys. Rev. B 24, 244 (1981). [30] T. Pang and S. G. Louie, Phys. Rev. Lett. 65, 1635 (1990). [31] B. Tanner, Introduction to the Physics of Electrons in Solids (Cambridge University Press, 1995). [32] C. Kittel, Introduction to Solid State Physics (Wiley, New York, 1986).


[33] S. R. Elliot, The Physics and Chemistry of Solids (John Wiley and Sons, 1998). [34] U. Ekenberg, Phys. Rev. B 36, 6152 (1987). [35] R. Chen and K. Bajaj, Phys. Rev. B 50, 1949 (1994). [36] W. G. van der Wiel., S. D. Franceschi, T. Fujisawa, J. M. Elzerman, S. Tarucha, and L. P. Kouwenhoven, Science 289, 2105 (2000). [37] W. G. van der Wiel., Y. V. Nazarov, S. D. Franceschi, T. Fujisawa, J. M. Elzerman, E. W. G. M. Huizeling, S. Tarucha, and L. P. Kouwenhoven, Phys. Rev. B 67, 033307 (2003). [38] N. Giordano, Computational Physics (Prentice Hall: New Jersey, 1997). [39] L. Burgnies, O. Vanbesien, and D. Lippens, J. Phys. D: Appl. Phys. 32, 706 (1999). [40] D. Csontos and H. Q. Xu, Appl. Phys. Lett. 77, 2364 (2000). [41] R. L. Schult, D. G. Ravenhall, and H. W. Wyld, Phys. Rev. B 39, 5476 (1989). [42] A. Weisshaar, J. Lary, S. M. Goodnick, and V. K. Tripathi, J. Appl. Phys. 70, 355 (1991). [43] G. Jin, Z. Wang, A. Hu, and S. Jiang, J. Appl. Phys. 85, 1597 (1999). [44] C. Kim, A. M. Satanin, Y. S. Joe, and R. M. Cosby, Phys. Rev. B 60, 10962 (1999). [45] H. Q. Xu, Phys. Rev. B 52, 5803 (1995). [46] J. B. Xia and S. S. Li, Phys. Rev. B 66, 35311 (2002). [47] Y. Shi and H. Chen, Phys. Lett. A 299, 401 (2002). [48] C. S. Lent and D. J. Kirkner, J. Appl. Phys. 67, 6353 (1990). [49] H. Tachibana and H. Totsuji, J. Appl. Phys 79, 7021 (1996). [50] E. Polizzi, N. B. Abdallah, O. Vanbesien, and D. Lippens, J. Appl. Phys. 87, 8700 (2000). [51] Y. Saad, Numerical methods for large eigenvalue problems (Halsted Press, 1992). [52] R. B. Lehoucq and D. C. Sorensen, SIAM Journal On Matrix Analysis and Applications 17, 789 (1996).


[53] Y. Wang, J. Wang, and H. Guo, Phys. Rev. B 49, 1928 (1994). [54] E. Economou, Green’s functions in quantum physics (Springer-Verlag: Berlin, 1983). [55] M. Saito, M. Takats, M. Okada, and N. Yokoyama, Phys. Rev. B 46, 13220 (1992). [56] P. A. Knipp and T. L. Reinecke, Phys. Rev. B 54, 1880 (1996). [57] T. Ueta, Phys. Rev. B 60, 8213 (1999). [58] J. J. Koonen, H. Buhmann, and L. W. Molenkamp, Phys. Rev. Lett. 85, 2473 (2000). [59] E. G. Novik, H. Buhmann, and L. W. Molenkamp, Phys. Rev. B 67, 245302 (2003). [60] R. Venugopal, Z. Ren, S. Datta, M. S. Lundstrom, and D. Jovanovic, J. Appl. Phys. 92, 3730 (2002). [61] D. J. Thouless and S. Kirkpatrick, J. Phys. C 14, 235 (1981). [62] P. A. Lee and D. S. Fisher, Phys. Rev. Lett. 47, 882 (1981). [63] F. Sols, M. Macucci, U. Ravaioli, and K. Hess, Appl. Phys. Lett. 54, 350 (1989). [64] F. Sols, M. Macucci, U. Ravaioli, and K. Hess, J. Appl. Phys. 66, 3892 (1989). [65] T. Ando, Phys. Rev. B 44, 8017 (1991). [66] H. U. Baranger, D. P. DiVincenzo, R. A. Jalabert, and A. D. Stone, Phys. Rev. B 44, 637 (1991). [67] S. Rotter, J. Tang, L. Wirtz, J. Trost, and J. Burgd¨oorfer, Phys. Rev. B 62, 1950 (2000). [68] G. He, S. Zhu, and Z. D. Wang, Phys. Rev. B 65, 205321 (2002). [69] M. Topinka, B. LeRoy, S. Shaw, E. Heller, R. Westervelt, K. Maranowski, and A. Gossard, Science 289, 2323 (2000). [70] S. Rotter, B. Weingartner, N. Rohringer, and J. Burgd¨oorfer, Phys. Rev. B 68, 165302 (2003). [71] J. Feagin, Quantum Methods with Mathematica (Springer-Verlag: New York, 1994).


[72] M. Goldberger and K. Watson, Collision Theory (Wiley, New York, 1964). [73] J. Taylor, Scattering Theory: Quantum Theory of Nonrelativistic Collisions (Wiley, New York, 1972). [74] S. Konsek and T. Pearsall, Phys. Rev. B 67, 045306 (2003). [75] E. A. McCullough and R. E. Wyatt, J. Chem. Phys. 54, 3578 (1971). [76] J. Mohaidat, J. Appl. Phys. 90, 871 (2001). [77] A. Asker and A. S. Cakmak, J. Chem. Phys. 68, 2794 (1978). [78] H. Igarashi, T. Yoshikawa, and T. Honma, IEEE Transactions on Magnetics 35, 1518 (1999). [79] S. Glutsch, D. S. Chemla, and F. Bechstedt, Phys. Rev. B 54, 11592 (1996). [80] S. Glutsch, F. Bechstedt, B. Rosam, and K. Leo, Phys. Rev. B 63, 085307 (1996). [81] W. G. Schmidt, S. Glutsch, P. H. Hahn, and F. Bechstedt, Phys. Rev. B 67, 085307 (2003). [82] A. Endoh, S. Sasa, and S. Muto, Appl. Phys. Lett. 61, 52 (1992). [83] A. Endoh, S. Sasa, H. Arimoto, and S. Muto, J. Appl. Phys. 73, 998 (1993). [84] A. Endoh, S. Sasa, H. Arimoto, and S. Muto, J. Appl. Phys. 86, 6249 (1999). [85] G. D. Buffington, D. H. Madison, J. L. Peacher, and D. R. Schultz, J. Phys. B: At. Mol. Opt. Phys. 32, 2991 (1999). [86] D. O. Odero, J. L. Peacher, D. R. Schultz, and D. H. Madison, Phys. Rev. A 63, 0227081 (2001). [87] M. D. Feit, J. A. Fleck, and A. Steiger, J. Comp. Phys. 47, 412 (1982). [88] L. Zhang, J. Feagin, V. Engel, and A. Nakano, Phys. Rev. A 49, 3457 (1994). [89] O. Sugino and Y. Miyamoto, Phys. Rev. B 59, 2579 (1999). [90] N. Watanabe and M. Tsukada, Computer Physics Communications 142, 255 (2001). [91] L. Mouret, K. Dunseath, M. Terao-Dunseath, and J. Launay, J. Phys. B 36, L39 (2003).


[92] M. Braun, C. Meier, and V. Engel, Comp. Phys. Comm. 93, 152 (1996).
[93] A. R. P. Rau and K. Unnikrishnan, Phys. Lett. A 222, 304 (1996).
[94] A. R. P. Rau, Phys. Rev. Lett. 81, 4785 (1998).
[95] Y. Meir, N. S. Wingreen, and P. A. Lee, Phys. Rev. Lett. 66, 3048 (1991).
[96] Y. Meir and N. S. Wingreen, Phys. Rev. Lett. 68, 2512 (1992).
[97] Y. Meir, N. S. Wingreen, and P. A. Lee, Phys. Rev. Lett. 70, 2601 (1993).
[98] P. Anderson, Phys. Rev. 124, 41 (1961).
[99] L. Yi and J. Wang, Phys. Rev. B 63, 073304 (2001).
[100] H. Tal-Ezer and R. Kosloff, J. Chem. Phys. 81, 3967 (1984).
[101] C. Cerjan and K. C. Kulander, Comp. Phys. Comm. 63, 529 (1991).
[102] G. J. Kroes and D. Neuhauser, J. Chem. Phys. 105, 8690 (1996).
[103] N. Balakrishnan, C. Kalyanaraman, and N. Sathyamurthy, Physics Reports 280, 79 (1997).
[104] S. K. Gray and G. G. Balint-Kurti, J. Chem. Phys. 108, 950 (1998).
[105] J. B. Wang and T. Scholz, Phys. Rev. A 57, 3554 (1998).
[106] J. B. Wang and S. Midgley, Phys. Rev. B 60, 13668 (1999).
[107] S. Midgley and J. B. Wang, Phys. Rev. E 61, 920 (2000).
[108] J. B. Wang and S. Midgley, Physica B 284, 1962 (2000).
[109] S. Midgley and J. B. Wang, Aust. J. Phys. 53, 77 (2000).
[110] S. Midgley and J. B. Wang, Phys. Rev. B 64, 153304 (2001).
[111] S. Midgley and J. B. Wang, Phys. Rev. E 67, 046702 (2003).
[112] J. Wang, S. Midgley, and P. Falloon, Electron Transport in Mesoscopic Systems Conference Proceedings, p. 239 (Göteborg, Sweden, 1999).
[113] L. Sohn, L. P. Kouwenhoven, and G. Schön, Mesoscopic Electron Transport (Kluwer Academic Publishers, 1997).
[114] B. J. van Wees, H. van Houten, C. W. J. Beenakker, J. G. Williamson, L. P. Kouwenhoven, D. van der Marel, and C. T. Foxon, Phys. Rev. Lett. 60, 848 (1988).
[115] A. Szafer and A. D. Stone, Phys. Rev. Lett. 62, 300 (1989).
[116] D. Csontos and H. Q. Xu, Appl. Phys. Lett. 77, 2364 (2000).
[117] V. Fal'ko, J. Phys.: Condens. Matter 4, 3943 (1992).
[118] J. Pieper and J. Price, Phys. Rev. Lett. 72, 3586 (1994).
[119] M. Entin and M. Mahmoodian, J. Phys.: Condens. Matter 12, 6845 (2000).
[120] C. Yiu and J. Wang, J. Appl. Phys. 80, 4208 (1996).
[121] K. Manouchehri and J. B. Wang, J. Comput. Theor. Nanosci. 3, 249 (2006).
[122] R. Heather and H. Metiu, J. Chem. Phys. 86, 5009 (1987).
[123] R. Heather and H. Metiu, J. Chem. Phys. 88, 5497 (1988).
[124] R. Heather, Comput. Phys. Commun. 63, 446 (1991).
[125] P. Falloon and J. Wang, Comput. Phys. Commun. 134, 167 (2001).
[126] M. Child, Molecular Physics 72, 89 (1991).
[127] D. Neuhauser and M. Baer, J. Chem. Phys. 92, 3419 (1990).
[128] Á. Vibók and G. G. Balint-Kurti, J. Chem. Phys. 96, 7615 (1992).
[129] D. Macias, S. Brouard, and J. G. Muga, Chem. Phys. Lett. 228, 672 (1994).
[130] U. Riss and H.-D. Meyer, J. Phys. B: At. Mol. Opt. Phys. 28, 1475 (1995).
[131] T. Seideman and W. Miller, J. Chem. Phys. 96, 4412 (1992).
[132] J. Normand, A Lie Group: Rotations in Quantum Mechanics (Elsevier North-Holland, Amsterdam, 1980).
[133] R. Kosloff, J. Phys. Chem. 92, 2087 (1988).


[134] J. B. Wang, The Mathematica Journal 8(3), 1 (2002).
[135] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in Fortran 90 (Cambridge University Press, 1996).
[136] V. Mohan and N. Sathyamurthy, Comput. Phys. Rep. 7, 213 (1988).
[137] C. Cohen-Tannoudji, B. Diu, and F. Laloë, Quantum Mechanics, vol. 1 (Wiley: New York, 1977).
[138] D. ter Haar, Problems in Quantum Mechanics (Pion: London, 1975).
[139] L. Sohn, Nature 394, 131 (1998).


FIG. 1: Conduction band profile in the GaAs/AlGaAs heterojunction. (Figure labels: AlGaAs and GaAs regions; conduction band offset ∆Ec; subband energies E1 and E2; Fermi energy Ef; built-in potential qVbi; widths W, zd and ds; ionised donor charges marked +.)


FIG. 2: Example layout of a GaAs/AlGaAs heterojunction. (Figure labels: source, drain and gate contacts; doped GaAs cap layer; doped AlGaAs; undoped AlGaAs; undoped GaAs; semi-insulating GaAs; two-dimensional electron gas.)

FIG. 3: (a) Schematic diagram of a double barrier nanostructure (gray: metallic, white: insulator, dark: semiconductor); (b) the electric potential imposed on the conduction electrons in the semiconductor [24, 106].


FIG. 4: A double barrier potential.

FIG. 5: A double barrier potential plus an external electric field.


FIG. 6: Schematic diagram of a multi-port quantum waveguide. (Figure labels: lead widths w1, w2 and w3; mode amplitudes a1, b1; a2, b2; a3, b3 in the three leads.)


FIG. 7: Composite and separate Green’s functions in a single recursive step [64].

FIG. 8: Schematic diagram of the T-structure. (Figure labels: L*, Lx and W.)


FIG. 9: Absolute values of the Bessel functions Jn(α) on a logarithmic scale as a function of the order n, for α = 100 and α = 1000.
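The sharp cutoff visible in Fig. 9 is what makes the Chebyshev expansion practical: |Jn(α)| collapses super-exponentially once the order n exceeds the argument α, so the series can be truncated a little beyond n = α. A minimal numerical check of this behaviour (using SciPy; the tolerance and the helper name `chebyshev_truncation_order` are illustrative choices, not taken from the original code):

```python
import numpy as np
from scipy.special import jv

def chebyshev_truncation_order(alpha, tol=1e-12):
    """Smallest order n >= alpha at which |J_n(alpha)| first drops below tol."""
    n = int(np.ceil(alpha))
    while abs(jv(n, alpha)) > tol:
        n += 1
    return n

# |J_n(alpha)| is O(1) for n < alpha, then decays super-exponentially,
# so only slightly more than alpha expansion terms are ever needed.
for alpha in (100.0, 1000.0):
    n_cut = chebyshev_truncation_order(alpha)
    print(f"alpha = {alpha:6.0f}: truncate near n = {n_cut}")
```

Since α is proportional to the propagation time step, the number of Hamiltonian applications grows only linearly with the time interval, with a small fixed overhead.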


FIG. 10: (a) Side and top views of the device; the dark grey regions are negatively charged metal gates and the light grey area is positively charged. (b) The confinement potential imposed on the conduction electrons [107].


FIG. 11: Time evolution of an electron wave packet through a nanostructured ring at zero magnetic field.


FIG. 12: Time evolution of an electron wave packet through a nanostructured ring under the influence of an external magnetic field B = 0.018 T.


FIG. 13: Aharonov-Bohm oscillations in the transmission of an electron through a ring of radius 158 nm, 211 nm or 265 nm, as the applied magnetic field is varied from −0.05 T to 0.05 T. The oscillations are in agreement with those seen experimentally [37].
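The period of the oscillations in Fig. 13 corresponds to one flux quantum h/e threading the ring. A back-of-envelope estimate for the three radii (a sketch assuming an ideal one-dimensional ring of the quoted radius, ignoring the finite arm width; `ab_period` is a hypothetical helper):

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
E = 1.602176634e-19  # elementary charge (C)

def ab_period(radius_m):
    """Aharonov-Bohm field period Delta_B = (h/e) / (pi r^2) for an ideal ring."""
    return (H / E) / (math.pi * radius_m ** 2)

for r_nm in (158, 211, 265):
    print(f"radius {r_nm} nm: Delta_B = {ab_period(r_nm * 1e-9) * 1e3:.1f} mT")
```

The period shrinks with the square of the radius, so the larger rings oscillate faster in B, consistent with the trend in the figure.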


FIG. 14: (a) The numerical grid is broadly partitioned into the interaction grid, where the potential function is non-zero, and the free space grid, where it is zero. (b) The interaction grid is partitioned into the potential region, where V(x) is non-zero, the splitting regions, where f(x) changes continuously from 1 to 0, and the buffer regions, where ψint(x) and ψfree(x) overlap [125].


FIG. 15: (a) Transmission coefficients for the single hyperbolic barrier, using the Chebyshev scheme (dots) and the exact solution (line); (b) transmission coefficients for the double hyperbolic barrier, using the Chebyshev scheme (dots) and the mode matching method (line). The dashed line shows the energy spread of the initial wavepacket [125].
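The exact curve in Fig. 15(a) comes from the closed-form transmission of the hyperbolic (sech²) barrier, T(k) = sinh²(πka) / [sinh²(πka) + cosh²(π√|1 − 8V₀a²|/2)] with k = √(2E). A sketch in atomic units (ħ = m = 1), in the form valid for 8V₀a² > 1; the sample parameters below are illustrative, not those used in [125]:

```python
import math

def transmission_sech2(E, V0, a):
    """Closed-form transmission through the barrier V(x) = V0 / cosh^2(x/a),
    in atomic units (hbar = m = 1); this form holds for 8*V0*a**2 > 1."""
    k = math.sqrt(2.0 * E)
    s = math.sinh(math.pi * k * a) ** 2
    c = math.cosh(0.5 * math.pi * math.sqrt(abs(1.0 - 8.0 * V0 * a ** 2))) ** 2
    return s / (s + c)

# transmission climbs toward unity as the energy passes the barrier height
for energy in (0.05, 0.2, 0.5, 1.0):
    print(f"E = {energy}: T = {transmission_sech2(energy, V0=0.2, a=1.0):.4f}")
```

Sweeping this function over the energy grid of the wavepacket reproduces the solid curve against which the Chebyshev points are compared.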


FIG. 16: Absolute errors in evaluating the second derivative of a Gaussian function using (a) the five-point finite-difference method; (b) the seven-point finite-difference method; and (c) the Fourier transformation method [106].
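A one-dimensional version of the comparison in Fig. 16 takes only a few lines of NumPy; the grid size and interval below are illustrative:

```python
import numpy as np

n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]

f = np.exp(-x ** 2)                             # Gaussian test function
exact = (4.0 * x ** 2 - 2.0) * np.exp(-x ** 2)  # analytic second derivative

# five-point central finite difference (periodic wrap; harmless here
# because the Gaussian is negligible at the grid edges)
fd5 = (-np.roll(f, 2) + 16 * np.roll(f, 1) - 30 * f
       + 16 * np.roll(f, -1) - np.roll(f, -2)) / (12.0 * dx ** 2)

# Fourier (spectral) differentiation: multiply by -k^2 in k-space
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
spec = np.real(np.fft.ifft(-k ** 2 * np.fft.fft(f)))

print("five-point error:", np.max(np.abs(fd5 - exact)))
print("spectral error:  ", np.max(np.abs(spec - exact)))
```

On a grid fine enough to resolve the function, the spectral derivative is accurate essentially to machine precision, while the five-point stencil is limited by its O(dx⁴) truncation error, in line with the figure.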


FIG. 17: Propagation of the wave packet to a final state and then back to the initial state under time reversal, in a strong applied magnetic field of 0.4 T with a double barrier electrostatic potential.


FIG. 18: Relative error between the initial wave function and the wave function propagated forward and then backward in time, for the propagation displayed in Figure 17 (the peak error is of order 3.3 × 10^-8).
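The test in Figs. 17 and 18 can be mimicked in one dimension: propagate with a split-operator scheme, flip the sign of the time step, and compare against the initial state. Because each backward step is the exact algebraic inverse of the corresponding forward step, the residual measures only accumulated round-off. The double-barrier potential and all parameters below are illustrative, not those of the 2D simulation:

```python
import numpy as np

n, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)

# illustrative double-barrier potential (heights and widths are made up)
V = 2.0 * (np.exp(-(x - 3.0) ** 2 / 0.5) + np.exp(-(x + 3.0) ** 2 / 0.5))

def step(psi, dt):
    """One second-order split-operator step (hbar = m = 1)."""
    psi = np.exp(-0.5j * V * dt) * psi
    psi = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(psi))
    return np.exp(-0.5j * V * dt) * psi

# normalised Gaussian packet moving towards the barriers
psi0 = np.exp(-(x + 10.0) ** 2 / 2.0 + 2.0j * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

psi = psi0
for _ in range(2000):          # forward in time
    psi = step(psi, +0.002)
for _ in range(2000):          # backward in time
    psi = step(psi, -0.002)

print("max |psi - psi0| after time reversal:", np.max(np.abs(psi - psi0)))
```

Since every factor in the split-operator product is unitary, the norm is also conserved to round-off, which is the other consistency check discussed in Section IV.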


FIG. 19: The Gaussian wave packet propagates along the x axis, spreading as expected.


FIG. 20: Comparison of the numerical solution for the propagation of a Gaussian wave packet in free space with the known analytic solution (the maximum deviation is of order 2 × 10^-14).
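The free-space benchmark of Fig. 20 rests on the analytic solution for a spreading Gaussian. A one-dimensional sketch (ħ = m = 1; the grid parameters are illustrative) comparing exact kinetic-phase propagation in k-space with the closed form:

```python
import numpy as np

n, L, sigma, t = 2048, 80.0, 1.0, 2.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]

# normalised Gaussian at t = 0 (hbar = m = 1)
psi0 = (np.pi * sigma ** 2) ** -0.25 * np.exp(-x ** 2 / (2.0 * sigma ** 2))

# numerical propagation: exact free-particle kinetic phase in k-space
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
psi_num = np.fft.ifft(np.fft.fft(psi0) * np.exp(-0.5j * k ** 2 * t))

# analytic spreading Gaussian at time t
z = 1.0 + 1j * t / sigma ** 2
psi_an = (np.pi * sigma ** 2) ** -0.25 / np.sqrt(z) \
         * np.exp(-x ** 2 / (2.0 * sigma ** 2 * z))

print("max deviation:", np.max(np.abs(psi_num - psi_an)))
```

With the Gaussian well resolved on the grid, the deviation is at the round-off level, mirroring the 2D result quoted in the caption.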


FIG. 21: Time evolution of a Gaussian wave packet under the influence of a constant magnetic field B = 0.1 T [111]. Part (a) shows the propagation of the electron, with the circles above the wave packet indicating the classical particle trajectory. Part (b) shows the error between the computational propagation and the analytic result (of order 3 × 10^-9).
