Methods of Experimental Physics
Semester 1 2005
Lecture 1: Introduction to Measurement
1.1 Introduction
• Physics is an experimental science.
• Theories, no matter how beautiful or elegant, must agree with experiment.
• To be convincing, the agreement needs to be both quantitative and qualitative.

We are going to look at three papers, two old and one new, to demonstrate these facts (see the copies attached to the end of this document):

1. C. Davisson and L.H. Germer, Phys. Rev. 30(6), pp. 705 (1927) - for telling the truth.
2. A. Gaidarzhy et al., Phys. Rev. Lett. 94, 030402 (2005) - wow!
3. R.A. Millikan, Phys. Rev. VII(3), pp. 355 (1916) - experiment prevails over theory.

In addition, I want to tell you about a couple of developments in physics in recent times that have arisen from experimental measurements and that call into question the Standard Model (taken from http://www.science-park.info/particle/progress.html unless otherwise noted).
1.1.1 Muon's g-factor
Precise measurement of the muon's g-factor may point to new physics beyond the Standard Model. Since all leptons (as well as quarks) have intrinsic spin, they also have a magnetic moment, which is related to spin by the g-factor. According to the simple quantum theory prediction, g is exactly 2 for both the electron and the muon. In reality, however, radiative corrections (QED) need to be considered in order to take into account the continuous absorption and emission of short-lived 'virtual particles' by the electron or muon, which perturb the spin magnetic moment. These corrections depend very much on the existence of other particles, such as photons, electrons and also other particles yet undiscovered and not part of the Standard Model. One possible candidate for the latter are superpartners. According to supersymmetry (SUSY), every fundamental particle has a companion particle called its superpartner; for example, the partner of the muon is called the 'smuon'. To date there has been no firm evidence for supersymmetry.

The g-factor of the muon turns out to be about 2.0023. Instead of directly measuring g, particle physicists usually measure the anomalous g-factor, a = (g − 2)/2. For the electron, this quantity determined experimentally (electron a_e−|exp = 1.159 652 188 4 (43) × 10⁻³, positron a_e+|exp = 1.159 652 187 9 (43) × 10⁻³) is in agreement with the Standard Model prediction (a_e|th = 1.159 652 183 (42) × 10⁻³) to 9 decimal places, seemingly verifying that there is no need for any new particles outside the Standard Model. However, the muon is more than 200 times heavier than the electron, and its g-factor is 40,000 times more sensitive than that of the electron for detecting any new particles beyond the Standard Model. Two main factors make the g-factor measurement for the muon difficult: the larger radiative correction and the inherently short half-life (2 microseconds). Nevertheless, scientists at the Brookhaven National Laboratory have obtained a value of a_µ = 1.165 920 2 × 10⁻³. This differs from the Standard Model prediction by 2.8 standard deviations; statistically, there is a 99% probability that the measurement does not agree with the Standard Model. If the data are confirmed not to be subject to any systematic error, could SUSY be the most plausible explanation?
1.1.2 Constancy of the Constants
Dirac examined some fundamental ratios of the Universe in the 1930s and seemed to find some common factors of 1040 in these numbers (see Table 1). This convinced him that there could be some changes in the values of the constants with time (since time measured in his units was around 1040 , and therefore as time changes these other values might change as well) Property
Expression
Force Length
Coulomb Force/Gravitational Force size of the Universe/radius of the electron
Mass Energy Time
Number of Particles Universe’s Grav. pot. energy/mass of nucleon age of the Universe/atomic time fine structure constant
2
e 0 Gmp me ctage e2 /0 me c2 ρ0 c3 tage mp tage ((µ0 /4π)(e2 /me )) µ0 ce2 2h
Approx. Value 0.2 ×1040 4 ×1040 (1040 )2 ∼ 1 ∼ (1040 )0 4 ×1040 1.5 ln(1040 )
Table 1: Dirac's Big Numbers

This could suggest that the values of the constants might be changing at a fractional rate of 10⁻¹⁰ per year. What evidence is there for this type of esoteric thought? Unfortunately, it is impossible to make any experiment to test whether the standard of length is changing - why do I think this? One is forced to make measurements in which dimensionless ratios of constants are examined for change. The fine structure constant is one such dimensionless number (∼ 1/137), which determines the strength of interactions between charged particles and electromagnetic fields. The experimental tests for stability in time come in three different types: (i) there are geophysical limits from looking at the radioactive products of a natural nuclear reactor in Oklo, Africa, which has shown less than 10⁻¹⁵ change per annum in the fine structure constant; (ii) several laboratory clock experiments have shown a change below 2-4 × 10⁻¹⁵ per year; however, (iii) some astrophysical observations seem to have shown a positive change in the fine-structure constant since the time of the quasars (PRL 87, 091301 (2001)) - although this is now a matter of major dispute. It was found that the constant has actually increased slightly with time (∆α/α ∼ 1.1 × 10⁻⁵ over 10 billion years, a 3σ result). This discovery was derived from several independent measurements of the absorption spectra of distant quasars at different redshifts, compared with the wavelengths of today's spectral lines. Further measurements have been reported more recently by the same group (which confirm the earlier results with even more statistical strength), while two other groups doing the same thing have been unable to show the same effect. So far, no other evidence has been put forward for physical constants changing, although this behaviour is predicted in many of the unified theories of fundamental forces. Tighter experimental tests are required if one is going to find such effects.
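As a rough consistency check of the numbers quoted above (a sketch I have added; the values are simply those from the text), the claimed quasar result corresponds to a mean fractional drift of about 10⁻¹⁵ per year, i.e. right at the level of the Oklo and laboratory-clock limits:

    # Rough consistency check of the numbers quoted above (illustrative only).
    delta_alpha_over_alpha = 1.1e-5   # claimed fractional change over ~10 billion years
    elapsed_years = 10e9              # ~10 Gyr between the quasar epoch and today

    mean_rate_per_year = delta_alpha_over_alpha / elapsed_years
    print(f"mean fractional drift ~ {mean_rate_per_year:.1e} per year")   # ~1.1e-15 / yr

    # Compare with the limits quoted in the text
    oklo_limit = 1e-15      # geophysical (Oklo reactor) limit, per year
    clock_limit = 4e-15     # laboratory clock comparisons, per year
    print(f"Oklo limit ~ {oklo_limit:.0e} / yr, clock limit ~ {clock_limit:.0e} / yr")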
1.1.3 Matter in the Universe
see: http://www.eurekalert.org/features/doe/2005-02/d-bts020905.php

The Standard Model provides a precise framework for calculation, and matches experimental data exquisitely well. However, this close match only occurs if an additional unobserved particle, the Higgs boson, is added to the quarks and leptons. The need for additional particles is perhaps also suggested by the cosmic microwave background data, which indicate that only about 5% of the universe is composed of the quarks and leptons of the Standard Model of particle physics; there is six to seven times more mass in the form of invisible "dark matter". There are rather strong arguments, based on astronomical observations and cosmology, that this dark matter is made out of particles - but not out of any of the Standard Model ones. More importantly, the Standard Model contains no particles with the right properties to form the dark matter that seems to pervade the cosmos. Although the Higgs hasn't yet been observed, there must be some particle that plays its role: its indirect effects are already detectable. Quantum effects connect the mass of the W boson to those of the quarks, and since the top quark has a very large mass, it has a detectable impact on the W. The magnitude of this impact depends on the Higgs. In fact, the W mass is found to be shifted upwards in exactly the way
expected from the effects of a Higgs; if there were no Higgs, the W's mass should be significantly lower than what we measure. So the Standard Model plus the Higgs might well be complete; however, quantum mechanical effects predict that the mass of the Higgs is extremely large, while our best (indirect) evidence is that it is not - in fact, it probably lies just beyond the reach of our most recent experiments. One commonly proposed idea that addresses both of these issues is to embed the Standard Model inside a larger theory, supersymmetry. Supersymmetry is mathematically elegant and solves the Higgs mass issue. If supersymmetry proves true, then for every type of familiar particle there would be a new species of superparticle, with analogous properties but much more mass. The lightest superparticles would be stable, and large numbers of them would still be drifting through our universe - left over from the big bang, when particles and superparticles were presumably created in equal numbers. These leftover massive particles would then naturally form the dark matter. The only way to settle questions like this one is through experiment. We do not know whether there is a simple Higgs or something much more complicated. We cannot tell from cosmology whether the dark matter is made out of supersymmetric particles or not. We cannot theorize or compute our way to an answer: we must inquire directly of nature.
1.1.4 Six Numbers?
There has been a claim that there are some other fundamental quantities that have resulted in the Universe that we have ended up in, as well as in the possibility of life (Rees, M.J., Just Six Numbers). These six numbers are:

1. the ratio of electrical to gravitational force, ∼ 10³⁶: if it had been less than 10³⁴ then life couldn't have evolved (the life of the Universe is too short);
2. the proportion of energy released when hydrogen fuses, 0.7%: we couldn't exist if this were less than 0.6% or more than 0.8%;
3. the ratio of the matter density in the Universe to the critical density, ∼ 0.4: if much lower then galaxies could not have formed, and if much higher then the Universe would have collapsed long ago;
4. the need for a small value of the cosmological constant, which affects nothing on timescales shorter than 1 billion years but plays a critical role in the way our Universe looks now;
5. the ratio of the energy required to break up a galaxy to its rest mass energy (∼ 10⁻⁵): if this were smaller then we wouldn't have structure in the Universe, and if much larger then stars would not be stable;
6. the spatial dimension of the Universe (3): if 2 or 4 then life couldn't exist.

In addition, one should note that some very small changes in the values of some constants would also destroy any possibility of life: for instance, a 0.1% change in the value of the nuclear forces would allow protons to bond with themselves, and a few percent downward change in the binding energy of the neutron would prevent nuclei from forming.
1.2 Precision, Resolution and Accuracy

Quality | Description
Precision (or Resolution) | Describes the repeatability of a measurement in the presence of random errors (noise)
Accuracy | Describes how well a sequence of measurements converges to "truth"

Table 2: Definition of accuracy and precision
Convincing ourselves of the correctness of a measurement, verifying a theory, gaining an economic benefit, or discovering new physics all require measurement that is accurate, or at least precise - what do I mean by this (see Table 2)?

To achieve precision, the measurement scale (and reference) must be:
• LINEAR, REPRODUCIBLE, STABLE

To achieve accuracy, the measurement scale must have a:
• KNOWN ZERO POINT, KNOWN SCALE FACTOR, and be COMPARABLE to the REFERENCE STANDARD
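To make the distinction in Table 2 concrete, here is a minimal simulation sketch (entirely made-up numbers, not from any real instrument) contrasting a precise-but-inaccurate measurement (small scatter, uncalibrated offset) with an imprecise-but-accurate one (large scatter, no offset):

    import numpy as np

    rng = np.random.default_rng(0)
    true_value = 10.0            # the "truth" we are trying to measure (arbitrary units)

    # Precise but inaccurate: small random noise, but an uncalibrated zero offset
    precise = true_value + 0.5 + rng.normal(0.0, 0.01, 1000)

    # Accurate but imprecise: large random noise, no systematic offset
    accurate = true_value + rng.normal(0.0, 0.5, 1000)

    for name, data in [("precise/inaccurate", precise), ("imprecise/accurate", accurate)]:
        print(f"{name:20s} mean = {data.mean():6.3f}  scatter (std) = {data.std():.3f}")
    # The first converges tightly to the wrong value; the second scatters widely about the truth.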
Figure 1: Figure 6 from Millikan's paper, showing the photoelectric effect
Figure 2: Showing the 5 rules Millikan derived from Einstein’s prediction in Eq. 1
Let's consider Millikan's experiment (excerpt attached to these notes) for a better understanding of these points (see the data shown in Fig. 1), and also consider the theoretical form expected from a (in his opinion) completely untenable theory:

    (1/2) m v² = V e = hν − p        (1)
where hν is the energy absorbed by the electron from the light wave, p is the work necessary to remove the electron from the metal, and (1/2)mv² is the kinetic energy of the electron. To measure the kinetic energy of the expelled electrons he essentially uses a charged object to repel the electrons and measures the voltage V necessary to stop them reaching the detector (Ve is the potential energy that they need to overcome to get to the detector). Millikan stated that one could derive 5 independent tests from Eq. 1 (see Figure 2), and he tests all of these elements in this paper. The functional form of Eq. 1 is intended to be universal, while the value of p depends on the type of metal involved. The value of h can be found from the experiment but can also be determined from other experiments involving Brownian motion or black-body radiation. Errors in the measurement of V or ν propagate in different ways into errors in ν₀, p and h. For example, if one is interested in determining the value of h then one only needs precision and a well defined scale factor (but not overall accuracy) in ν, i.e. only the ability to accurately measure changes in frequency. An error in the origin of the frequency scale affects the value obtained for p. It is clear that if the measurements of frequency or kinetic energy were not linear then you could not obtain the law in accordance with theory; if they were not reproducible or stable then measurements between different samples of the same material, or comparisons between different materials, would not be useful. On the other hand, the benefit of accuracy is shown in Millikan's interest in comparing his results with those obtainable from other measurements using different techniques and equipment. It is only because of the accuracy of his measurements that this is possible, and only because of this that he is eventually convinced of the correctness of Einstein's equation (even though it was against his expectations/beliefs).
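The way these errors propagate can be sketched numerically (illustrative numbers only, not Millikan's data): fitting the stopping voltage V against frequency ν gives a slope h/e and an intercept −p/e, so a scale error in ν shifts the value obtained for h, while an offset in the origin of the ν scale leaves h untouched but shifts p:

    import numpy as np

    e = 1.602e-19                        # elementary charge (C)
    h_true, p_true = 6.63e-34, 3.5e-19   # illustrative Planck constant and work function (J)

    nu = np.linspace(6e14, 12e14, 8)     # illustrative frequencies (Hz)
    V = (h_true * nu - p_true) / e       # stopping voltages from Eq. (1)

    def fit_h_and_p(freq, volts):
        """Straight-line fit V = (h/e)*nu - p/e; returns (h, p)."""
        slope, intercept = np.polyfit(freq, volts, 1)
        return slope * e, -intercept * e

    print(fit_h_and_p(nu, V))            # recovers h_true and p_true
    print(fit_h_and_p(nu * 1.01, V))     # 1% scale error in nu -> ~1% error in h
    print(fit_h_and_p(nu + 1e13, V))     # offset in the nu origin -> h unchanged, p shifted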
1.3 Units
For any accurate work it is sensible to define an agreed system of units, together with a set of references (or etalons). It is preferable if these standards are based on some fundamental feature of nature (so that they are the same everywhere and hopefully unchanging), and if they are simple to realise (i.e. reproduce somewhere else). The set of base units in the SI unit system (which are considered to be dimensionally separate) is given in Table 3, while the definitions are given in Section 1.4. The particular choice of base units is arbitrary, as is the number of base units (c could be used to relate length and time, h can be used to relate energy and time, k can be used to relate energy and temperature, etc.; so by using fundamental constants we could get rid of all dimensions). All other quantities in SI are called derived quantities, and are defined via the basic equations of physics (see Table 4). In addition to these units, definitions and standards one automatically obtains a set of fundamental constants.

Base Quantity | Symbol for Quantity | Base unit | Symbol for unit
Length | l, x | metre | m
Mass | m | kilogram | kg
Time | t | second | s
Electric current | I | ampere | A
Thermodynamic temperature | T | kelvin | K
Amount of Substance | n | mole | mol
Luminous Intensity | I_v | candela | cd
Table 3: The 7 Base Quantities in SI

Often one thinks of these units, and especially the constants, as absolute. However, this is not the case. Consider for a moment that we have two separate ways of relating force to mass, for example:

    F = K₁ m a              (2)
    F = K₂ m₁ m₂ / r²       (3)

We could choose to use the second equation to define force, and choose the arbitrary constant K₂ as dimensionless and equal to 1. In this case force would have units of mass²/length². We would then find
the value of K₁ as having units (mass · length⁻³ · time²). In fact, of course, we take K₁ to have a value of 1 and to be dimensionless, but it doesn't have to be that way. Although my example might appear silly, it is still the case that in working with electromagnetic/electrostatic units one can get quite lost. Even today there is an awful mixture of SI units, cgs-esu units, cgs-emu units, and natural (Heaviside-Lorentz) units in play. The values of the relating constants in these various systems vary, and so do the dimensions of those units. A table of the usual set of fundamental constants in the SI system is given in Table 5. It might be the case that some of the constants are derived from some deeper set of constants, and that their values will be given by physics that is only just being discovered today. Although no-one yet knows how to go about deriving the values of these constants, some examples of these deeper constants (involved in string theory, quantum chromodynamics, grand-unification theories and high energy physics) would be the Fermi coupling constant, the Weinberg angle, the masses of the quarks and muons, and the masses of the various flavours of neutrinos. String theory hopes to predict the values of all constants on the basis of a single fundamental parameter.

Derived quantity | SI unit name | SI unit symbol
Velocity | - | m s⁻¹
Frequency | hertz | Hz = s⁻¹
Electrical Resistance | ohm | Ω = V/A = m² kg s⁻³ A⁻²
Energy | joule | J = N m = m² kg s⁻²
Refractive Index | - | 1
Angle | radian | rad = 1
Table 4: Some example derived units in SI

Fundamental quantity | Symbol
Speed of light | c
Elementary charge (of proton) | e
Mass of electron (at rest) | m_e
Mass of proton (at rest) | m_p
Planck constant | h
Avogadro constant | N_A
Gravitational constant | G
Boltzmann constant | k
Permeability and permittivity of free space | µ₀, ε₀
Table 5: The fundamental constants in SI

Others might argue that there are also other constants which are more meaningful as they are not unit dependent. Examples of these are the fine structure constant,

    α = µ₀ c e² / (2h)   in SI units,

or the ratio of the proton to electron mass, or the Rydberg constant,

    R∞ = α² m_e c / (2h) = α² / (2 λ_C)
where λC is the Compton wavelength of the electron.
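As a quick numerical check of these relations (a sketch using the scipy.constants values; not part of the original notes):

    from scipy.constants import mu_0, c, e, h, m_e, alpha, Rydberg

    alpha_si = mu_0 * c * e**2 / (2 * h)             # fine structure constant from SI constants
    rydberg_si = alpha_si**2 * m_e * c / (2 * h)     # Rydberg constant R_infinity (1/m)
    lambda_C = h / (m_e * c)                         # Compton wavelength of the electron (m)

    print(alpha_si, 1 / alpha_si)                    # ~7.297e-3, ~137.0
    print(rydberg_si, alpha_si**2 / (2 * lambda_C))  # both ~1.097e7 1/m
    print(alpha, Rydberg)                            # reference values from scipy for comparison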
1.4 Definition of SI base units
Just for fun, here are the present definitions of the SI base units:

metre: The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second (1983)
kilogram: The kilogram is the unit of mass; it is equal to the mass of the international prototype of the kilogram (1901)

second: The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom (1967)

ampere: The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed 1 metre apart in vacuum, would produce between these two conductors a force equal to 2 × 10⁻⁷ newton per unit length (1948)

kelvin: The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water (1967)

mole: The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilograms of carbon-12. When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles (1971)

candela: The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10¹² hertz and that has a radiant intensity in that direction of (1/683) watt per steradian (1979)
1.5 Practicalities
The schematic view of most measurement processes is given in Fig. 3 below. We use a transducer coupled to an amplifier to convert some physical quantity into an electrical signal and then into a numerical value.

• Transducer: any device that converts a non-electrical parameter into an electrical signal. The output electrical signal will have a well defined relationship with the input parameter.
• Amplifier: a device that can reproduce an electrical signal but with an increased intensity. Once again, this increase should be a well defined and stable amount.

In order to have accuracy one must be able to trace the measurement back to the standard given by the SI definition (see Sect. 1.4) for the quantity one is interested in. Unfortunately, and frequently, the standards as given by these definitions are extremely inconvenient. This is usually because the scale of the standard can be so far from the human scale or the scale of interest in the experiment (or because the standard itself is so expensive, or inaccessible). In these cases one usually uses a secondary standard combined with some electrical measurement and instrument (i.e. that buried inside the CRO, DVM, etc.). In addition, one should take into account a division of physical quantities into two types: intensive quantities (temperature, density, hardness, pressure, etc.) and extensive quantities (length, mass, current, etc.). In order to measure an intensive quantity it is usually necessary to convert it to an extensive quantity, e.g. measure temperature by the length of mercury in the column of a thermometer, or by the electrical resistance of a resistor; one can measure density by measuring both volume and mass (see Petley, Recent Advances in Metrology and Fundamental Constants, Ed. T. Quinn et al., pp. 93-).

Figure 3: The measurement process (Physical Observable → Transducer → Electrical Observable → Instrument)
1.6 Measurement errors
Random Errors: are manifested by variations in a repeated measurement of the same unchanging thing (usually thought of as occurring over 'short' time scales). However, sometimes the errors only appear to be random;
they arise because the experiment is affected by some other parameter which is not measured or controlled. In principle, these systematic errors can be found, controlled or eliminated if the observer is smart enough to find them. For example, the length of a rod being measured might appear to fluctuate randomly because of thermal-expansion-driven changes or vibrational changes. Temperature effects could be controlled by thermal isolation or temperature control, or the temperature could be measured and its effects subtracted. Vibration can be removed by vibration isolation of the rod, but it would be difficult to correct for by measuring accelerations and applying a post-measurement correction. Other effects (e.g. tilt, atmospheric pressure) could be compensated for. Once again these effects occur on quite different time scales. Some systematic errors (say arising from vibration) might as well be random; other examples include electrical interference (solar flares, mobile phones, mains, TV, radio, etc.). The complexity and unpredictability of these effects normally mean that they are too difficult to model; the best one can do is to try to shield against them. Systematic errors that do not appear as random changes are much more difficult to deal with, e.g. if the length of an object under study is changing slowly but consistently because of plastic flow under its own weight. Over short times this is very difficult to detect, but it results in inaccuracy.
1.6.1 Fundamental Limits to Measurement
Some random errors are "fundamental". These fundamental sources are correctly called noise. However, one often also lumps in all the fast and difficult-to-predict systematic sources of fluctuations and calls them noise as well - although this is not correct. For macroscopic systems it is usually thermal noise that is dominant and we will need methods to deal with it. In a few lectures' time we will develop expressions for Brownian noise. The four important classes of noise that are always present are:

1.6.1.1 Brownian Motion Noise
The position of an object, and of its surfaces, is continuously fluctuating because of the finite number of entities contained within it. There is no way to eliminate, predict or correct such errors. All one can do is average them away and describe the fluctuations in a statistical way. This noise has a white spectrum.

1.6.1.2 Johnson or Thermal Noise
If an object is coupled to a heat bath at non-zero temperature then the electrons within the object are subject to continual random fluctuations. These cause random fluctuations in the voltage detected across the object. Johnson noise is the electrical analogue of the Brownian motion mentioned above. The open-circuit root-mean-square (rms) voltage noise of a resistor of resistance R at temperature T can be shown to be:

    Vn = √(4 k T R ∆f)        (4)

where k is the Boltzmann constant and ∆f is the bandwidth of the measurement. This noise clearly has a white spectrum. If this resistor is connected to another device of the same impedance then the amplitude of the rms voltage fluctuations is halved, so in this case Eq. 4 becomes Vn = √(k T R ∆f).
Remembering that electrical power Pe is given by V²/R, we see that the deliverable power is:

    Pe = k T ∆f

saying simply that the electrical power and thermal power are comparable (I will give you Nyquist's original paper in a later lecture). In order to avoid an explicit inclusion of the noise measurement bandwidth we can also talk about an rms noise density (units of volts per root hertz):

    vn = √(4 k T R)

As a ballpark estimate, for a 300 K resistor this voltage density is around 4√(R in kΩ) nV/√Hz, where the resistor's impedance is expressed in kΩ. This noise is obviously white, as you can see from the absence of any dependence on frequency in the equation above. The distribution of voltages obtained in a sequence of measurements is Gaussian:

    P(V, V + dV) = (1 / (Vn √(2π))) exp(−V² / (2 Vn²)) dV
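A minimal numerical sketch of these Johnson-noise formulas (with my own illustrative resistor values):

    import numpy as np

    k = 1.380649e-23        # Boltzmann constant (J/K)

    def johnson_noise_density(R, T=300.0):
        """Open-circuit voltage noise density sqrt(4kTR), in V/sqrt(Hz)."""
        return np.sqrt(4 * k * T * R)

    def johnson_noise_rms(R, bandwidth, T=300.0):
        """Open-circuit rms voltage noise sqrt(4kTR*df), in volts."""
        return np.sqrt(4 * k * T * R * bandwidth)

    for R in (1e3, 10e3, 100e3):
        vn = johnson_noise_density(R)
        print(f"R = {R/1e3:6.0f} kOhm : {vn*1e9:5.1f} nV/rtHz, "
              f"{johnson_noise_rms(R, 10e3)*1e6:.2f} uV rms in 10 kHz")
    # ~4 nV/rtHz for 1 kOhm at 300 K, in line with the ballpark estimate above.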
1.6.1.3 Shot noise
The fact that electrical current, and light, are carried by discrete particles (electrons, holes, photons) leads to a phenomenon called shot noise. The origin of shot noise is as follows. A current I consists of a flow of electrons, I = qN, where N is the average number of electrons passing a plane perpendicular to the electron flow per unit time. The fluctuation about N is √N, since each of the carriers is generated independently. The resulting rms current fluctuation is:

    In(rms) = √(2 q Idc ∆f)        (5)

where q is the electronic charge, Idc is the average current and ∆f is the detection bandwidth. The fractional noise in a current decreases as the average current increases (see the plot below). This noise is spectrally flat, i.e. it is a white noise.
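A small sketch of Eq. 5 (illustrative currents, 1 Hz bandwidth) showing how the fractional shot noise falls as the current grows, as in the plot below:

    import numpy as np

    q = 1.602e-19           # electronic charge (C)

    def shot_noise_rms(I_dc, bandwidth):
        """Shot noise current sqrt(2 q I_dc df), in amps rms."""
        return np.sqrt(2 * q * I_dc * bandwidth)

    bandwidth = 1.0          # 1 Hz bandwidth, so the numbers are noise densities
    for I_dc in (1e-12, 1e-9, 1e-6, 1e-3):
        In = shot_noise_rms(I_dc, bandwidth)
        print(f"I = {I_dc:8.0e} A : In = {In:.2e} A/rtHz, fractional = {In/I_dc:.1e}")
    # The fractional noise In/I falls as 1/sqrt(I).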
[Figure: fractional shot noise as a function of current, for currents from 10⁻¹⁵ A to 10⁻¹ A]

1.6.1.4 Flicker or 1/f noise
When a signal flows through any device an additional noise is generated over and above the sources mentioned above (which are present whether a signal is flowing or not) - this is termed flicker noise. It is just as fundamental and ubiquitous as the others. This noise is also called 1/f noise because of its spectral signature, which is approximately proportional to 1/f, where f is the Fourier frequency. This means there is equal noise in each decade or octave of frequency space, in contrast to all the sources of noise above, which are essentially white for all practical measurements (white noise has equal noise per frequency interval). For example, when a current flows through a resistor one notes additional
fluctuations in the output spectrum at low frequencies which were previously absent - generally speaking this is larger than the effect of all other noise sources at low frequencies. The origin of this noise is not very well understood, but it has something to do with the fact that the device is far from equilibrium when the current is flowing. The level of this noise varies between devices constructed along different lines, but it is always present in every device of whatever type. The noise magnitude is proportional to the current/signal flowing through the device. The typical noise level ranges from around 10 ppb in a frequency decade for a good wire-wound resistor up to 5 ppm per decade for a cheap resistor. Flicker noise is compared with white noise in Figure 4.
Figure 4: Comparing noise types: (a) time domain view of white noise; (b) time domain view of 1/f noise
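To see the character of Figure 4, one can synthesise the two noise types; the sketch below uses simple spectral shaping of white noise to make an (approximately) 1/f record. This is only one common way to do it, and is not how the original figure was produced:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4096

    # White noise: flat spectrum, equal power per hertz
    white = rng.normal(0.0, 1.0, n)

    # 1/f (flicker) noise: shape a white spectrum by 1/sqrt(f) and transform back
    spectrum = np.fft.rfft(rng.normal(0.0, 1.0, n))
    f = np.fft.rfftfreq(n, d=1.0)
    f[0] = f[1]                              # avoid dividing by zero at DC
    flicker = np.fft.irfft(spectrum / np.sqrt(f), n)

    for name, x in [("white", white), ("1/f", flicker)]:
        print(f"{name:5s}: rms = {np.std(x):.2f}")
    # The 1/f record wanders slowly (most of its power is at low frequency),
    # while the white record looks the same on any time scale.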
1.6.2 Practical Noise Problems

1.6.2.1 Interference
This is a very common issue in sensitive measurements. It is crucial to examine the noise in terms of its spectrum, as this can give a strong clue as to its origin. One should also note that the type of interference that is important is related to the nature of the measurement you are making. If you are trying to make sensitive measurements of the effect of magnetic fields on laser-cooled atoms then you clearly need to worry about the local direction of the Earth's magnetic field, and fluctuations in that field; however, you are not really interested in the vibrations caused by passing cars. If, on the other hand, you are trying to build a Michelson-Morley experiment to detect variations in the speed of light then you are probably going to be very annoyed by the cars, but are really quite uninterested in changes in the local magnetic field. Electrical interference is usually the most problematic, as almost all measurements end up being converted to some electrical quantity and are thus sensitive to it. Shielding (by putting everything inside a wire-mesh shielded room, or putting low-level signals inside co-ax cables) and then filtering can sometimes be effective. We will examine these approaches in a later lecture.
1.7 Useful measures of the magnitude of noise
The noise sources listed above are only infrequently the limits to making precise measurements. More common are such things as additional amplifier noise, electrical and magnetic interference, pick-up from the mains, spurious signals from poor grounding and shielding, ground loops, vibration, etc. In order to characterise all these various effects I have included a couple of useful definitions below.
1.7.1 Signal to Noise Ratio
Definition 1.1 The signal to noise ratio is defined as:

    SNR = 10 log₁₀( Vs² / (vn² ∆f) )

where Vs² is the mean square measure of the signal (units: V²), vn² is the mean square noise density (units: V²/Hz), and ∆f is the bandwidth, which needs to be specified (i.e. usually one says, for example, that the SNR is 40 dB in a 10 kHz bandwidth - see Figure 5).
Figure 5: A demonstration of signal to noise ratio: (a) a noisy sinewave in the time domain; (b) the spectrum of the noisy sinewave, showing a signal to noise ratio of ∼ 40 dB
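A small numerical sketch of Definition 1.1, with signal and noise levels chosen (arbitrarily) to land near the 40 dB example quoted above:

    import numpy as np

    def snr_db(Vs_rms, vn_density, bandwidth):
        """SNR = 10 log10( Vs^2 / (vn^2 * df) ), with Vs in volts rms,
        vn in V/sqrt(Hz) and bandwidth in Hz."""
        return 10 * np.log10(Vs_rms**2 / (vn_density**2 * bandwidth))

    Vs = 1.0           # 1 V rms signal
    vn = 100e-6        # 100 uV/rtHz noise density
    df = 10e3          # 10 kHz measurement bandwidth

    print(f"SNR = {snr_db(Vs, vn, df):.0f} dB in a {df/1e3:.0f} kHz bandwidth")   # ~40 dB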
1.7.2 Noise Figures
An ideal amplifier would amplify whatever signal and noise is present at the input without adding any noise of its own. When connected to a resistor held at absolute zero (0 K), an ideal amplifier would give no output signal at all. Real amplifiers add noise because of losses within the amplifier (if the lossy components are not kept at 0 K), and the amplifying process itself normally adds some noise too. The noise figure of an amplifier is the amount of noise at the output divided by the amount of noise one would have had from an ideal amplifier of the same gain connected to the same source. Noise figures are usually expressed in dB and they depend on what the source is. If nothing else is explicitly stated, the amplifier input is connected to a resistor at room temperature, 290 K.

Definition 1.2 The noise figure of an amplifier is the ratio (usually expressed in dB) of the noise at the output of that amplifier to the noise that would be measured if one used an ideal amplifier. In the example below we assume that a resistor of value Rs is connected to the input terminal of the amplifier:

    NF = 10 log₁₀( (4kT Rs + vA²) / (4kT Rs) ) = 10 log₁₀( 1 + vA² / (4kT Rs) ) = SNRin − SNRout
where vA² is the effective input voltage noise density of the amplifier. The output noise of the amplifier would equal G²vA², if G is the voltage gain of the amplifier and a resistor at 0 K is on its input. The noise figure is equal to the increase in noise power from the input to the output, over and above the amplifier gain. It is also numerically equal to the decrease of the signal-to-noise ratio in passing through the amplifier. Note for later: we have assumed in all equations appearing in this section that amplifier noise can all be contained within an effective voltage noise density - in fact
amplifiers possess various types of input noise and we will return to this later in the course for a more complete description of amplifier noise; in that case vA² will be the sum of all these other noise sources. A table of good operational amplifiers from Analog Devices is shown below in Figure 6.
Name | Bandwidth | Slew Rate | V_OS | I_b | Noise Density | Amplifiers | Temperature Range
OP27 | 8 MHz | 2.8 V/µs | 30 µV | 15 nA | 3.2 nV/√Hz | single | -55 to +125 °C
AD8671 | 10 MHz | 4 V/µs | 20 µV | 3 nA | 2.8 nV/√Hz | single | -40 to +125 °C
OP37 | 12 MHz | 17 V/µs | 30 µV | 15 nA | 3.2 nV/√Hz | single | -40 to +85 °C
AD8672 | 10 MHz | 4 V/µs | 20 µV | 3 nA | 2.8 nV/√Hz | dual | -40 to +125 °C
AD8605 | 10 MHz | 5 V/µs | 20 µV | 200 fA | 6.5 nV/√Hz | single | -40 to +125 °C
OP42 | 10 MHz | 50 V/µs | 1.5 mV | 130 pA | 12 nV/√Hz | single | -55 to +125 °C
AD810 | 80 MHz | 1 kV/µs | 1.5 mV | 2 µA | 2.9 nV/√Hz | single | -55 to +125 °C
AD811 | 140 MHz | 2.5 kV/µs | 500 µV | 2 µA | 1.9 nV/√Hz | single | -55 to +125 °C
AD797 | 8 MHz | 20 V/µs | 25 µV | 250 nA | 900 pV/√Hz | single | -40 to +85 °C
AD8674 | 10 MHz | 4 V/µs | 20 µV | 3 nA | 2.8 nV/√Hz | quad | -40 to +125 °C
AD743 | 4.5 MHz | 2.8 V/µs | 250 µV | 150 pA | 2.9 nV/√Hz | single | 0 to +70 °C

Figure 6: Some potentially useful operational amplifiers (data from Analog Devices)
The best of these amplifiers have voltage noise densities at the level of a few nV/√Hz, about equal to the voltage noise of resistors of a few kΩ. If the noise figure is 3 dB then the amplifier noise is equal to the noise of the input resistor; if the noise figure is 10 dB then the amplifier voltage noise is 3 times higher than that of the resistor.
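A short sketch of the noise-figure formula using a 290 K, 1 kΩ source and the noise densities listed in Figure 6 (the amplifier names are taken from that table; everything else is illustrative):

    import numpy as np

    k, T = 1.380649e-23, 290.0           # Boltzmann constant, reference temperature (K)

    def noise_figure_db(vA, Rs):
        """NF = 10 log10(1 + vA^2/(4kT Rs)) for amplifier noise density vA (V/rtHz)."""
        return 10 * np.log10(1 + vA**2 / (4 * k * T * Rs))

    Rs = 1e3                             # 1 kOhm source resistor
    for name, vA in [("AD797", 0.9e-9), ("OP27", 3.2e-9), ("OP42", 12e-9)]:
        print(f"{name:6s}: NF = {noise_figure_db(vA, Rs):5.2f} dB")
    # An NF of 3 dB means vA equals the resistor's own noise; ~10 dB means vA is 3x larger.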
1.7.3 Noise Temperature
Noise temperature is often easier to work with, as it does not depend on the input source noise properties as the noise figure does (it depends only on the noise power of the amplifier). The noise temperature of an amplifier is equal to the temperature you need to have on a source resistor feeding an ideal amplifier so that it yields the same output noise as you would get from the real amplifier if it were connected to a resistor kept at 0 K.

Definition 1.3

    N_T = vA² / (4 k Rs) = T (10^(NF/10) − 1)
where vA² is the effective input voltage noise density of the amplifier. Noise temperatures correspond to power levels: when the temperature of a resistor is doubled, the noise power output from it is doubled (the voltage is proportional to the square root of the temperature). Given that powers from uncorrelated sources are additive, it is also the case that noise temperatures are additive. The noise temperature at a certain point in a cascade of amplifiers is simply another way of expressing the power level of random noise. If an amplifier has a noise figure of 3 dB then it has a noise temperature of 290 K and gives twice as much noise as an ideal amplifier with the same gain when connected to a room temperature resistor. The noise temperature at the output of an amplifier is the sum of the noise temperature of the source and the noise temperature of the amplifier itself, multiplied by the power gain of the amplifier.
The noise temperature of a simple resistor is the actual temperature of that resistor. The noise temperature of a diode can be many times the actual temperature of the diode, or much less; the noise temperature of a good amplifier can be many times higher or lower than its actual temperature. For example, looking back at the table of amplifiers shown earlier (Figure 6), the AD797 has a noise temperature of 15 K and an NF of 0.21 dB, while the OP42 has a noise temperature of 2600 K and an NF of 10 dB (both with a 1 kΩ input resistor).
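A sketch verifying these noise-temperature figures from the definition above (assuming the Figure 6 noise densities and a 1 kΩ source):

    import numpy as np

    k = 1.380649e-23                      # Boltzmann constant (J/K)

    def noise_temperature(vA, Rs):
        """Noise temperature N_T = vA^2 / (4 k Rs), in kelvin."""
        return vA**2 / (4 * k * Rs)

    def nf_from_noise_temperature(T_N, T_source=290.0):
        """Noise figure in dB for a source at T_source: NF = 10 log10(1 + T_N/T_source)."""
        return 10 * np.log10(1 + T_N / T_source)

    Rs = 1e3
    for name, vA in [("AD797", 0.9e-9), ("OP42", 12e-9)]:
        T_N = noise_temperature(vA, Rs)
        print(f"{name:6s}: T_N = {T_N:6.0f} K, NF = {nf_from_noise_temperature(T_N):.2f} dB")
    # Reproduces roughly 15 K / 0.21 dB for the AD797 and ~2600 K / 10 dB for the OP42.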
Attached paper: PRL 94, 030402 (2005), Physical Review Letters, week ending 28 January 2005
Evidence for Quantized Displacement in Macroscopic Nanomechanical Oscillators

Alexei Gaidarzhy,¹,² Guiti Zolfagharkhani,¹ Robert L. Badzey,¹ and Pritiraj Mohanty¹
¹
Department of Physics, Boston University, 590 Commonwealth Avenue, Boston, Massachusetts 02215, USA 2 Aerospace and Mechanical Engineering, Boston University, 110 Cummington Street, Boston, Massachusetts 02215, USA (Received 7 July 2004; revised manuscript received 18 October 2004; published 25 January 2005) We report the observation of discrete displacement of nanomechanical oscillators with gigahertz-range resonance frequencies at millikelvin temperatures. The oscillators are nanomachined single-crystal structures of silicon, designed to provide two distinct sets of coupled elements with very low and very high frequencies. With this novel design, femtometer-level displacement of the frequency-determining element is amplified into collective motion of the entire micron-sized structure. The observed discrete response possibly results from energy quantization at the onset of the quantum regime in these macroscopic nanomechanical oscillators. DOI: 10.1103/PhysRevLett.94.030402
PACS numbers: 03.65.Ta, 62.25.+g, 62.30.+d, 62.40.+i
The quantum mechanical harmonic oscillator is a fundamental example of textbook quantum mechanics [1]. Its direct experimental realization in truly macroscopic mechanical systems is of interest to a wide range of fields [2-4], which include quantum measurement [5-8], quantum computation [9], atomic and quantum optics [10], condensed matter physics [11-13], and gravitational wave detection [7,8]. However, signatures of quantum behavior in a macroscopic mechanical oscillator are yet to be observed [7,8] despite intense experimental efforts [4,14,15] - most recently with nanomechanical structures [16-18]. The essential problem in achieving quantized behavior in mechanical structures [19] has been the access to the quantum regime. Two characteristic time scales, decoherence time and dissipation time, define quantum-to-classical crossover. Although decoherence imposes a much stricter condition, a necessary requirement for observing quantum behavior is given by dissipation or energy relaxation (1/Q, inverse quality factor): for a system with Q ≈ 1, the quantum of oscillator energy hf is larger than or comparable to k_B T. Realization of this criterion requires both millikelvin temperatures and gigahertz-range frequencies. For example, a nanomechanical beam with a resonance frequency of 1 GHz will enter the quantum regime at T ≈ 48 mK. For a doubly clamped beam, the fundamental frequency scales as √(E/ρ)(t/L²), where E is the Young modulus, ρ is the mass density, and t and L are thickness and length, respectively. In typical materials like silicon, all dimensions must be in the submicron range to achieve gigahertz resonance frequencies. However, if structure dimensions are reduced to increase the resonance frequency, it naturally increases the spring constant k. As the structure becomes stiffer, the displacement on resonance, x ≈ FQ/k, decreases for a given amplitude of force F. For a gigahertz-range beam, the typical displacement is on the order of a femtometer. Detecting femtometer displacements is further impeded because the quality factor is known to decrease with decreasing system dimensions [20-22].
Propelled by the recent advances in nanomechanics, there have been numerous attempts to approach the quantum regime in nanomechanical oscillators [14,15] with low thermal occupation number N_th ≈ k_B T/hf. The central thrust of this effort has been the development of ultrasensitive displacement detection techniques. These include the coupling of the nanomechanical beam to a single-electron transistor sensor [23-25], a Cooper-pair box device [26], a SQUID sensor [27], and a piezoelectric sensor [28], as well as optical interferometric techniques [29]. Recently, in a 116-MHz beam oscillator measured down to a cryostat temperature of 30 mK, Knobel and Cleland [30] have reported displacement sensitivity 100 times the standard quantum limit, which is the limiting factor despite the very low occupation numbers they achieve. At a lower oscillator frequency of 20 MHz measured down to the oscillator temperature of 56 mK, LaHaye et al. [31] have achieved greater resolution, 4.3 times the standard quantum limit, at the cost of larger N_th ≈ 58. Our approach is inherently different in that instead of attempting to further improve the detection sensitivity, we focus on a novel design of the nanomechanical structures, which display gigahertz-range frequencies with a corresponding displacement in the picometer range. In this Letter we report the first observation of discrete displacement of possible quantum origin in a set of nanomechanical oscillators, which resonate at a frequency as high as 1.49 GHz. At a measured temperature of 110 mK, which corresponds to a thermal occupation of N_th ≈ 1, the oscillators demonstrate transitions between two discrete positions, with consistent amplitude in both magnetic field and time domain sweeps. We argue that the wave functions of the two low-lying energy levels of the oscillator at N_th ≈ 1 result in the observed quantized displacement. Furthermore, the nanomechanical structure truly represents a macroscopic quantum system as the quantized displacement involves roughly 50 × 10⁹ silicon atoms. Our device is an antennalike structure designed to have coupled but distinct components, pictured in Fig. 1. Two