MEASURE AND INTEGRATION: LECTURE 12
Approximation of measurable functions by continuous functions. Recall Lusin's theorem: Let $f : X \to \mathbb{C}$ be measurable, $A \subset X$, $\mu(A) < \infty$, and $f(x) = 0$ if $x \notin A$. Given $\epsilon > 0$, there exists $g \in C_c(X)$ such that $\mu(\{x \mid f(x) \neq g(x)\}) < \epsilon$ and
$$\sup_{x \in X} |g(x)| \leq \sup_{x \in X} |f(x)|.$$
A corollary with the same assumptions and $f$ bounded (i.e., $|f(x)| < M$) is that there exists a sequence $g_n \in C_c(X)$ with $|g_n| \leq M$ such that $\lim g_n(x) = f(x)$ almost everywhere.
Convergence almost everywhere. Lebesgue's dominated convergence theorem (LDCT) in the almost-everywhere case.

Theorem 0.1. Let $f_1, f_2, \ldots : X \to \mathbb{C}$ be a sequence of measurable functions defined a.e. Let $g : X \to \mathbb{C}$ be defined almost everywhere with $g \in L^1(\mu)$. Assume $\lim_{k \to \infty} f_k(x)$ exists for a.e. $x \in X$ and $|f_k(x)| \leq |g(x)|$ for a.e. $x \in X$. Then
$$\lim_{k \to \infty} \int_X f_k \, d\mu = \int_X \lim_{k \to \infty} f_k \, d\mu.$$

Proof. Let $E_k = \{x \mid |f_k(x)| > |g(x)|\}$. Then $\mu(E_k) = 0$. Let $E = \cup_{k=1}^{\infty} E_k$. Then $\mu(E) = 0$. Redefine $f_k = 0$ on $E$; this does not change the integrals. Now $|f_k| \leq |g|$ everywhere, and we can apply the regular LDCT. $\square$
�
Date: October 14, 2003.

Theorem 0.2. Let $f_1, f_2, \ldots : X \to \mathbb{C}$ with each $f_k \in L^1(\mu)$ and assume that $\sum_{k=1}^{\infty} \int_X |f_k| \, d\mu < \infty$. Then $\sum_{k=1}^{\infty} f_k$ exists a.e. and
$$\int_X \sum_{k=1}^{\infty} f_k \, d\mu = \sum_{k=1}^{\infty} \int_X f_k \, d\mu.$$

Proof. Let $g = \sum_{k=1}^{\infty} |f_k|$. Monotone convergence implies that
$$\int g = \sum_{k=1}^{\infty} \int |f_k| < \infty.$$
Thus $g \in L^1(\mu)$ and so $g < \infty$ a.e. Thus $\sum_{k=1}^{\infty} |f_k(x)| < \infty$ a.e. This implies that the series $\sum_{k=1}^{\infty} f_k(x)$
converges absolutely a.e. Let $F_j = \sum_{k=1}^{j} f_k$. Then $F_j$ is dominated by $g$ for all $j$, and we can apply LDCT. We have
$$\int_X \sum_{k=1}^{\infty} f_k \, d\mu = \int_X \lim_{j \to \infty} F_j \, d\mu = \lim_{j \to \infty} \int_X F_j \, d\mu = \lim_{j \to \infty} \int_X \sum_{k=1}^{j} f_k \, d\mu = \lim_{j \to \infty} \sum_{k=1}^{j} \int_X f_k \, d\mu = \sum_{k=1}^{\infty} \int_X f_k \, d\mu. \qquad \square$$
Countable additivity of the integral. Let $E_1, E_2, \ldots$ be a countable sequence of disjoint measurable sets. Let $E = \cup_{k=1}^{\infty} E_k$ and $f : X \to \mathbb{C}$ be measurable. Assume either $f \geq 0$ or $f \in L^1(E)$ (i.e., $\int_X |f| \chi_E \, d\mu < \infty$). Then
$$\int_E f \, d\mu = \sum_{k=1}^{\infty} \int_{E_k} f \, d\mu.$$

Proof. First let $f \geq 0$. Then
$$\int_E f \, d\mu = \int_X f \chi_E \, d\mu = \int_X \sum_{k=1}^{\infty} f \chi_{E_k} \, d\mu = \sum_{k=1}^{\infty} \int_X f \chi_{E_k} \, d\mu = \sum_{k=1}^{\infty} \int_{E_k} f \, d\mu.$$
Now let $f \in L^1(E)$ and $f_k = f \chi_{E_k}$. By the previous theorem, we need only check the convergence of the series of integrals of $|f \chi_{E_k}|$. We have
$$\sum_{k=1}^{\infty} \int_X |f_k| \, d\mu = \sum_{k=1}^{\infty} \int_X |f| \chi_{E_k} \, d\mu = \sum_{k=1}^{\infty} \int_{E_k} |f| \, d\mu = \int_E |f| \, d\mu < \infty,$$
because of the case when $f \geq 0$. $\square$

Source: https://ocw.mit.edu/courses/18-125-measure-and-integration-fall-2003/0bb49f0e3bf706941074216dcee6e3ab_18125_lec12.pdf
20.110/5.60 Fall 2005, Lecture #7

Entropy, Reversible and Irreversible Processes and Disorder
Examples of spontaneous processes

Connect two metal blocks thermally in an isolated system ($\Delta U = 0$). Initially $T_1 \neq T_2$.
$$dS = dS_1 + dS_2 = \frac{đq_1}{T_1} + \frac{đq_2}{T_2} = đq_1 \left( \frac{1}{T_1} - \frac{1}{T_2} \right) = đq_1 \, \frac{T_2 - T_1}{T_1 T_2} \qquad (đq_1 = -đq_2)$$
$dS > 0$ for a spontaneous process
$$\Rightarrow \quad T_2 > T_1 \Rightarrow đq_1 > 0, \qquad T_2 < T_1 \Rightarrow đq_1 < 0:$$
in both cases heat flows from hot to cold, as expected.
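The sign argument can be checked mechanically; the temperatures and the trial heat $đq_1$ below are arbitrary illustrative values, not from the notes:

```python
# Sign check for dS = dq1 (T2 - T1) / (T1 T2): a spontaneous transfer
# (dS > 0) forces heat to flow into whichever block is colder.
def dS(dq1, T1, T2):
    return dq1 * (T2 - T1) / (T1 * T2)

# Block 2 hotter: heat must flow INTO block 1 (dq1 > 0)
print(dS(+1.0, 300.0, 400.0) > 0)   # True
# Block 1 hotter: heat must flow OUT of block 1 (dq1 < 0)
print(dS(-1.0, 400.0, 300.0) > 0)   # True
```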
[Figure: gas in volume V beside an evacuated volume V.]

Joule expansion with an ideal gas:

1 mol gas (V, T) = [adiabatic] = 1 mol gas (2V, T), with $\Delta U = 0$, $q = 0$, $w = 0$.

Need a reversible path to compute $\Delta S$ from $q$! Close the cycle and go back to the initial state reversibly and isothermally:
$$\Delta S = -\Delta S_{\text{backwards}}$$
20.110J / 2.772J / 5.601J Thermodynamics of Biomolecular Systems. Instructors: Linda G. Griffith, Kimberly Hamad-Schifferli, Moungi G. Bawendi, Robert W. Field
$q_{\text{rev}} \neq 0$:

1 mol gas (2V, T) = 1 mol gas (V, T)
$$\Delta S_{\text{backwards}} = \int \frac{đq_{\text{rev}}}{T} = -\int \frac{đw}{T} = \int_{2V}^{V} \frac{R \, dV}{V} = R \ln \frac{1}{2}$$
$$\therefore \Delta S = R \ln 2 > 0 \quad \text{spontaneous}$$
IMPORTANT!! To calculate $\Delta S$ for the irreversible process, we needed to find a reversible path so we could determine $đq_{\text{rev}}$ and $\int \frac{đq_{\text{rev}}}{T}$.
• Mixing of ideal gases at constant T and p
$$n_A \, \text{A} \, (g, V_A, T) + n_B \, \text{B} \, (g, V_B, T) = n \, (\text{A} + \text{B}) \, (g, V, T)$$
with $n = n_A + n_B$ and $V = V_A + V_B$; the mixing is spontaneous.

To calculate $\Delta S_{\text{mix}}$, we need to find a reversible path between the two states: at constant T, demix the gases back to the initial state using one piston permeable to A only and one piston permeable to B only. Since entropy is a function of state,
$$\Delta S_{\text{demix}} = -\Delta S_{\text{mix}}.$$
For the demixing process:
$$\Delta U = 0 \;\Rightarrow\; q_{\text{rev}} = -w_{\text{rev}} = p_A \, dV_A + p_B \, dV_B \quad \text{(work of compression of each gas)}$$
$$\therefore \Delta S_{\text{demix}} = \int \frac{đq_{\text{rev}}}{T} = \int_{V}^{V_A} \frac{p_A \, dV_A}{T} + \int_{V}^{V_B} \frac{p_B \, dV_B}{T} = n_A R \ln \frac{V_A}{V} + n_B R \ln \frac{V_B}{V}$$
Put in terms of mole fractions:
$$X_A = \frac{n_A}{n}, \qquad X_B = \frac{n_B}{n}.$$
Ideal gas $\Rightarrow$
$$X_A = \frac{V_A}{V}, \qquad X_B = \frac{V_B}{V}.$$
$$\therefore \Delta S_{\text{demix}} = nR \left[ X_A \ln X_A + X_B \ln X_B \right]$$
$$\Rightarrow \Delta S_{\text{mix}} = -nR \left[ X_A \ln X_A + X_B \ln X_B \right]$$
Since $X_A, X_B < 1$, we get $\Delta S_{\text{mix}} > 0$: mixing is always spontaneous.
The mixed state is more “disordered” or “random” than the demixed state: $S_{\text{mixed}} > S_{\text{demixed}}$.

This is a general result: entropy is a measure of the disorder of a system.
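A quick numerical check of the mixing formula; the total amount $n = 1$ mol is an assumed example value:

```python
import math

R = 8.314  # J/(mol K)

# dS_mix = -nR [X_A ln X_A + X_B ln X_B], here for n = 1 mol total.
def dS_mix(X_A, n=1.0):
    X_B = 1.0 - X_A
    return -n * R * (X_A * math.log(X_A) + X_B * math.log(X_B))

print(dS_mix(0.5))        # R ln 2 ≈ 5.76 J/K, the maximum
print(dS_mix(0.1) > 0)    # True: positive for any 0 < X_A < 1
```

The 50/50 mixture gives the largest entropy of mixing, consistent with it being the most "disordered" composition.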
For an isolated system (or the universe):

∆S > 0: spontaneous, increased randomness
∆S = 0: reversible, no change in disorder
∆S < 0: impossible, order cannot “happen” in isolation

There is an inexorable drive for the universe to go to a maximally disordered state.
Microscopic understanding: the Boltzmann equation of entropy,
$$S = k \ln \Omega$$
where $k$ is Boltzmann’s constant ($k = R/N_A$) and $\Omega$ is the number of equally probable microscopic arrangements for the system.
This can also be used to calculate $\Delta S$. In the case of the Joule expansion of an ideal gas in volume $V$ expanding to a volume $2V$ (as on the first page of these notes): if we divide the initial volume $V$ into $m$ small cubes, each with volume $v$, so that $mv = V$, the number of ways of placing the $N$ molecules of ideal gas into these small cubes initially is $m^N$. After the expansion, the number of ways of placing the $N$ molecules of ideal gas into the now $2m$ small cubes is $(2m)^N$.

The number of probable microscopic arrangements initially is:
$$\Omega = C \, m^N \quad (C \text{ is a constant})$$
After the expansion it is:
$$\Omega = C \, (2m)^N$$
So using Boltzmann’s equation to calculate $\Delta S$ for the expansion:
$$\Delta S = k \ln \left[ (2m)^N \right] - k \ln \left[ m^N \right] = kN \ln 2 = R \ln 2$$
(for 1 mol, $N = N_A$ and $k N_A = R$), as we had found above!
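The equality $kN \ln 2 = R \ln 2$ for one mole can be verified directly from the SI values of the constants:

```python
import math

# k N ln 2 for N = N_A molecules equals R ln 2, since R = k N_A.
k_B = 1.380649e-23    # J/K   (Boltzmann constant)
N_A = 6.02214076e23   # 1/mol (Avogadro constant)
R = k_B * N_A         # ≈ 8.314 J/(mol K)

dS_micro = k_B * N_A * math.log(2)   # from counting arrangements
dS_thermo = R * math.log(2)          # from the reversible path
print(dS_micro, dS_thermo)           # both ≈ 5.76 J/K
```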
More examples of $\Delta S$ calculations. In all cases, we must find a reversible path to calculate $\int \frac{đq_{\text{rev}}}{T}$.
(a) Mixing of ideal gases at constant T and p
$$n_A \, \text{A} \, (g, V_A, T) + n_B \, \text{B} \, (g, V_B, T) = n \, (\text{A} + \text{B}) \, (g, V = V_A + V_B, T)$$
$$\Delta S_{\text{mix}} = -nR \left[ X_A \ln X_A + X_B \ln X_B \right]$$
(b) Heating (or cooling) at constant V

A (T₁, V) = A (T₂, V)
$$\Delta S = \int \frac{đq_{\text{rev}}}{T} = \int_{T_1}^{T_2} \frac{C_V \, dT}{T} = C_V \ln \frac{T_2}{T_1} \quad \text{if } C_V \text{ is } T\text{-independent}$$
[Note $\Delta S > 0$ if $T_2 > T_1$.]
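The closed form $C_V \ln(T_2/T_1)$ can be checked against a direct numerical integration of $đq_{\text{rev}}/T$; the monatomic-ideal-gas value $C_V = \tfrac{3}{2}R$ and the temperatures are assumed example inputs:

```python
import math

R = 8.314
C_V = 1.5 * R          # assumed: monatomic ideal gas, T-independent
T1, T2 = 300.0, 600.0

# midpoint-rule integral of C_V dT / T from T1 to T2
N = 100_000
dT = (T2 - T1) / N
numeric = sum(C_V / (T1 + (i + 0.5) * dT) for i in range(N)) * dT

closed = C_V * math.log(T2 / T1)
print(numeric, closed)   # both ≈ 8.64 J/(mol K)
```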
(c) Reversible phase change at constant T and p

e.g. H₂O (l, 100 °C, 1 bar) = H₂O (g, 100 °C, 1 bar)
$$q_p = \Delta H_{\text{vap}}$$
$$\Delta S_{\text{vap}}(100\ °\text{C}) = \frac{q_p}{T_b} = \frac{\Delta H_{\text{vap}}}{T_b} \qquad (T_b = \text{boiling temperature at 1 bar})$$
(d) Irreversible phase change at constant T and p

e.g. H₂O (l, −10 °C, 1 bar) = H₂O (s, −10 °C, 1 bar)

This is spontaneous and irreversible, so we need to find a reversible path between the two states to calculate $\Delta S$:

H₂O (l, −10 °C, 1 bar) = [irreversible] = H₂O (s, −10 °C, 1 bar)

heating the liquid: $đq_{\text{rev}} = C_p(\ell) \, dT$; cooling the solid: $đq_{\text{rev}} = C_p(s) \, dT$

H₂O (l, 0 °C, 1 bar) = [reversible] = H₂O (s, 0 °C, 1 bar), with $q_p^{\text{rev}} = -\Delta H_{\text{fus}}$.

Note: $\Delta H_{\text{fus}}$ is for the process going from the solid state to the liquid state, the opposite of what we have above; the same holds for $\Delta S_{\text{fus}}$.
$$\Delta S = \Delta S_{\text{heating}} - \Delta S_{\text{fus}} + \Delta S_{\text{cooling}} = \int_{T_1}^{T_{\text{fus}}} \frac{C_p(\ell) \, dT}{T} + \frac{-\Delta H_{\text{fus}}}{T_{\text{fus}}} + \int_{T_{\text{fus}}}^{T_1} \frac{C_p(s) \, dT}{T}$$
$$\therefore \Delta S = \frac{-\Delta H_{\text{fus}}}{T_{\text{fus}}} + \int_{T_1}^{T_{\text{fus}}} \frac{\left[ C_p(\ell) - C_p(s) \right] dT}{T}$$
$$\Delta S = \frac{-\Delta H_{\text{fus}}}{T_{\text{fus}}} + \left[ C_p(\ell) - C_p(s) \right] \ln \frac{T_{\text{fus}}}{T_1} \quad \text{if } C_p \text{ values are } T\text{-independent}$$
Source: https://ocw.mit.edu/courses/20-110j-thermodynamics-of-biomolecular-systems-fall-2005/0bea388d6ba3e32ddda64a22f6574964_l07.pdf
Lecture 4
B. Zwiebach
February 18, 2016

Contents
1 de Broglie wavelength and Galilean transformations
2 Phase and Group Velocities
3 Choosing the wavefunction for a free particle

1 de Broglie wavelength and Galilean transformations
We have seen that to any free particle with momentum p, we can associate a plane wave, or a “matter wave”, with de Broglie wavelength $\lambda = h/p$, with $p = |\mathbf{p}|$. The question is, waves of what? Well, this wave is eventually recognized as an example of what one calls the wavefunction. The wavefunction, as we will see, is governed by the Schrödinger equation. As we have hinted, the wavefunction gives us information about probabilities, and we will develop this idea in detail.

Does the wave have directional or polarization properties like electric and magnetic fields in an electromagnetic wave? Yes, there is an analog of this, although we will not delve into it now. The analog of polarization corresponds to spin! The effects of spin are negligible in many cases (small velocities, no magnetic fields, for example) and for this reason, we just use a scalar wave, a complex number
$$\Psi(x, t) \in \mathbb{C} \tag{1.1}$$
that depends on space and time. A couple of obvious questions come to mind. Is the wavefunction measurable? What kind of object is it? What does it describe? In trying to get intuition about this, let’s consider how different observers perceive the de Broglie wavelength of a particle, which should help us understand what kind of waves we are talking about. Recall that
$$p = \frac{h}{\lambda} = \frac{h}{2\pi} \, \frac{2\pi}{\lambda} = \hbar k, \tag{1.2}$$
where $k$ is the wavenumber. How would this wave behave under a change of frame?
We therefore consider two frames S and S′ with the x and x′ axes aligned and with S′ moving to the right along the +x direction of S with constant velocity v. At time equal zero, the origins of the two reference frames coincide. The time and spatial coordinates of the two frames are related by a Galilean transformation, which states that
$$x' = x - vt, \qquad t' = t. \tag{1.3}$$
Indeed time runs at the same speed in all Galilean frames, and the relation between x and x′ is manifest from the arrangement shown in Fig. 1.
(cid:48)
Now assume both observers focus on a particle of mass m moving with nonrelativistic speed. Call
the speed and momentum in the S frame v(cid:101) and p = mv(cid:101), respectively. It follows by differentiation with
1
Figure 1: The S0 frame moves at speed v along the x-direction of the S frame. A particle of mass m
,
moves with speed v
(cid:101)
(cid:101), and thus momentum p = mv in the S frame.
(cid:48)
respect to t = t0 o the first equation in (1.3) tha
f
(cid:48)
t
(cid:48)
dx0
dt0 =
(cid:48)
dx
dt
− v ,
which means that the particle velocity v(cid:101)
(cid:48)
0 in the S0 frame is given by
(cid:48)
(cid:48)
v 0 = v − v .
(cid:101)
(cid:101)
Multiplying by the mass m we find the relati
on be
tween the momenta in the two frames
(cid:48)
p0 = p − mv.
(1.4)
(1.5)
(1.6)
The momentum p′ in the S′ frame can be appreciably different from the momentum p in the S frame. Thus the observers in S′ and in S will obtain rather different de Broglie wavelengths λ′ and λ! Indeed,
$$\lambda' = \frac{h}{p'} = \frac{h}{p - mv} \neq \lambda. \tag{1.7}$$
This is very strange! As we review now, for ordinary waves that propagate in the rest frame of a medium (like sound waves or water waves) Galilean observers will find frequency changes but no change in wavelength. This is intuitively clear: to find the wavelength one need only take a picture of the wave at some given time, and both observers looking at the picture will agree on the value of the wavelength. On the other hand, to measure frequency, each observer must wait some time to see a full period of the wave go through them. This will take a different time for the different observers.
Let us demonstrate these claims quantitatively. We begin with the statement that the phase $\phi = kx - \omega t$ of such a wave is a Galilean invariant. The wave itself may be $\cos\phi$ or $\sin\phi$ or some combination, but the fact is that the physical value of the wave at any point and time must be agreed upon by the two observers. The wave is an observable. Since all the features of the wave (peaks, zeroes, etc.) are controlled by the phase, the two observers must agree on the value of the phase.

In the S frame the phase can be written as follows:
$$\phi = kx - \omega t = k \left( x - \frac{\omega}{k} t \right) = \frac{2\pi}{\lambda} (x - V t) = \frac{2\pi x}{\lambda} - \frac{2\pi V}{\lambda} t, \tag{1.8}$$
where $V = \omega/k$ is the wave velocity. Note that the wavelength is read from the coefficient of x and ω is minus the coefficient of t. The two observers should agree on the value of φ. That is, we should have
$$\phi'(x', t') = \phi(x, t) \tag{1.9}$$
where the coordinates and times are related by a Galilean transformation. Therefore
$$\phi'(x', t') = \frac{2\pi}{\lambda} (x - V t) = \frac{2\pi}{\lambda} (x' + v t' - V t') = \frac{2\pi}{\lambda} x' - \frac{2\pi (V - v)}{\lambda} t'. \tag{1.10}$$
Since the right-hand side is expressed in terms of primed variables, we can read λ′ from the coefficient of x′ and ω′ as minus the coefficient of t′:
$$\lambda' = \lambda \tag{1.11}$$
$$\omega' = \frac{2\pi}{\lambda} (V - v) = \frac{2\pi V}{\lambda} \left( 1 - \frac{v}{V} \right) = \omega \left( 1 - \frac{v}{V} \right). \tag{1.12}$$
This confirms that, as we claimed, for a physical wave propagating in a medium, the wavelength is a Galilean invariant and the frequency transforms.

So what does it mean that the wavelength of matter waves changes under a Galilean transformation? It means that the Ψ waves are not directly measurable! Their value does not correspond to a measurable quantity on which all Galilean observers must agree. Thus, the wavefunction need not be invariant under Galilean transformations:
$$\Psi(x, t) \neq \Psi'(x', t'), \tag{1.13}$$
where $(x, t)$ and $(x', t')$ are related by Galilean transformations and thus represent the same point and time. You will figure out in Homework the correct relation between $\Psi(x, t)$ and $\Psi'(x', t')$.
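For concreteness, here is eq. (1.7) evaluated numerically; the electron mass and the two speeds are illustrative values chosen for this sketch, not from the notes:

```python
# lambda' = h / (p - m v): two Galilean observers assign different
# de Broglie wavelengths to the same particle. Illustrative values.
h = 6.626e-34        # J s
m = 9.109e-31        # kg (electron)
v_particle = 1.0e6   # m/s, particle speed in S
v_frame = 3.0e5      # m/s, speed of S' relative to S

p = m * v_particle
p_prime = p - m * v_frame
lam = h / p
lam_prime = h / p_prime
print(lam, lam_prime)  # ~0.73 nm vs ~1.04 nm: no agreement
```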
What is the frequency ω of the de Broglie wave for a particle with momentum p? We had
$$p = \hbar k \tag{1.14}$$
which fixes the wavelength in terms of the momentum. The frequency ω of the wave is determined by the relation
$$E = \hbar \omega, \tag{1.15}$$
which was also postulated by de Broglie and fixes ω in terms of the energy E of the particle. Note that for our focus on non-relativistic particles the energy E is determined by the momentum through the relation
$$E = \frac{p^2}{2m}. \tag{1.16}$$
We can give three pieces of evidence that (1.15) is a reasonable relation.
1. If we superpose matter waves to form a wave-packet that represents the particle, the packet will move with the so-called group velocity $v_g$, which in fact coincides with the velocity of the particle. The group velocity is found by differentiation of ω with respect to k, as we will review soon:
$$v_g = \frac{d\omega}{dk} = \frac{dE}{dp} = \frac{d}{dp} \left( \frac{p^2}{2m} \right) = \frac{p}{m} = v. \tag{1.17}$$
2. The relation is also suggested by special relativity. The energy and the momentum components of a particle form a four-vector:
$$\left( \frac{E}{c}, \, \mathbf{p} \right) \tag{1.18}$$
Similarly, for waves whose phases are relativistic invariants we have another four-vector
$$\left( \frac{\omega}{c}, \, \mathbf{k} \right) \tag{1.19}$$
Setting two four-vectors equal to each other is a consistent choice: it would be valid in all Lorentz frames. As you can see, both de Broglie relations follow from
$$\left( \frac{E}{c}, \, \mathbf{p} \right) = \hbar \left( \frac{\omega}{c}, \, \mathbf{k} \right). \tag{1.20}$$

3. For photons, (1.15) is consistent with Einstein’s quanta of energy, because $E = h\nu = \hbar\omega$.

In summary we have
$$p = \hbar k, \qquad E = \hbar \omega. \tag{1.21}$$
These are called the de Broglie relations, and they are valid for all particles.
2 Phase and Group Velocities

To understand group velocity we form wave packets and investigate how fast they move. For this we will simply assume that ω(k) is some arbitrary function of k. Consider a superposition of plane waves $e^{i(kx - \omega(k)t)}$ given by
$$\psi(x, t) = \int dk \, \Phi(k) \, e^{i(kx - \omega(k) t)}. \tag{2.22}$$
We assume that the function Φ(k) is peaked around some wavenumber $k = k_0$, as shown in Fig. 2.

[Figure 2: The function Φ(k) is assumed to peak around $k = k_0$.]

In order to motivate the following discussion, consider the case when Φ(k) not only peaks around $k_0$ but also is real (we will drop this assumption later). In this case the phase ϕ of the integrand comes only from the exponential:
$$\varphi(k) = kx - \omega(k) t. \tag{2.23}$$
We wish to understand what are the values of x and t for which the packet ψ(x, t) takes large values. We use the stationary phase principle: since only for $k \sim k_0$ does the integral over k have a chance to give a non-zero contribution, the phase factor must be stationary at $k = k_0$. The idea is simple: if a function is multiplied by a rapidly varying phase, the integral washes out. Thus the phase must have zero derivative at $k_0$. Applying this idea to our phase, we find the derivative and set it equal to zero at $k_0$:
$$\frac{d\varphi}{dk} \Big|_{k_0} = x - \frac{d\omega}{dk} \Big|_{k_0} t = 0. \tag{2.24}$$
This means that ψ(x, t) is appreciable when x and t are related by
$$x = \frac{d\omega}{dk} \Big|_{k_0} t, \tag{2.25}$$
showing that the packet moves with group velocity
$$v_g = \frac{d\omega}{dk} \Big|_{k_0}. \tag{2.26}$$

Exercise. If Φ(k) is not real, write $\Phi(k) = |\Phi(k)| e^{i\phi(k)}$. Find the new version of (2.25) and show that the velocity of the wave is not changed.
Let us now do a more detailed calculation that confirms the above analysis and gives some extra insight. Notice first that
$$\psi(x, 0) = \int dk \, \Phi(k) \, e^{ikx}. \tag{2.27}$$
We expand ω(k) in a Taylor expansion around $k = k_0$:
$$\omega(k) = \omega(k_0) + (k - k_0) \frac{d\omega}{dk} \Big|_{k_0} + \mathcal{O}\big( (k - k_0)^2 \big). \tag{2.28}$$
Then we find, neglecting the $\mathcal{O}((k - k_0)^2)$ terms,
$$\psi(x, t) = \int dk \, \Phi(k) \, e^{ikx} \, e^{-i\omega(k_0) t} \, e^{-i(k - k_0) \frac{d\omega}{dk}|_{k_0} t}. \tag{2.29}$$
It is convenient to take out of the integral all the factors that do not depend on k:
$$\psi(x, t) = e^{-i\omega(k_0) t + i k_0 \frac{d\omega}{dk}|_{k_0} t} \int dk \, \Phi(k) \, e^{ik \left( x - \frac{d\omega}{dk}|_{k_0} t \right)}. \tag{2.30}$$
Comparing with (2.27) we realize that the integral in the above expression can be written in terms of the wavefunction at zero time:
$$\psi(x, t) = e^{-i\omega(k_0) t + i k_0 \frac{d\omega}{dk}|_{k_0} t} \, \psi \left( x - \frac{d\omega}{dk} \Big|_{k_0} t, \, 0 \right). \tag{2.31}$$
The phase factors in front of the expression are not important in tracking where the wave packet is. In particular, we can take the norm of both sides of the equation to find
$$|\psi(x, t)| = \left| \psi \left( x - \frac{d\omega}{dk} \Big|_{k_0} t, \, 0 \right) \right|. \tag{2.32}$$
If ψ(x, 0) peaks at some value $x_0$, it is clear from the above equation that |ψ(x, t)| peaks for
$$x - \frac{d\omega}{dk} \Big|_{k_0} t = x_0 \quad \to \quad x = x_0 + \frac{d\omega}{dk} \Big|_{k_0} t, \tag{2.33}$$
showing that the peak of the packet moves with velocity $v_g = \frac{d\omega}{dk}$, evaluated at $k_0$.
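The stationary-phase result can be checked numerically; the choice of units ($\hbar = m = 1$, so $\omega(k) = k^2/2$ and $v_g = k_0$), the Gaussian Φ(k), and the grids are all arbitrary choices for this sketch:

```python
import numpy as np

# Build psi(x,t) = ∫ dk Phi(k) exp(i(kx - omega(k) t)) with a real
# Phi(k) peaked at k0 and omega(k) = k^2/2, and verify that the peak
# of |psi| moves at the group velocity v_g = d(omega)/dk|_k0 = k0.
k0 = 5.0
k = np.linspace(k0 - 3.0, k0 + 3.0, 601)
dk = k[1] - k[0]
Phi = np.exp(-((k - k0) ** 2))               # peaked and real
x = np.linspace(-10.0, 40.0, 2001)

def psi(t):
    # discretized version of (2.22)
    return np.exp(1j * (np.outer(x, k) - 0.5 * k**2 * t)) @ Phi * dk

t = 4.0
x0 = x[np.argmax(np.abs(psi(0.0)))]
xt = x[np.argmax(np.abs(psi(t)))]
v_measured = (xt - x0) / t
print(v_measured)   # ≈ 5.0 = k0
```

The packet also spreads over time (from the neglected quadratic term in (2.28)), but its peak tracks $v_g$ as eq. (2.33) predicts.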
3 Choosing the wavefunction for a free particle

What is the mathematical form of the wave associated with a particle with energy E and momentum p? We know that ω and k are determined from $E = \hbar\omega$ and $p = \hbar k$. Let’s suppose that we want our wave to be propagating in the $+\hat{x}$ direction. All the following are examples of waves that could be candidates for the particle wavefunction.

1. $\sin(kx - \omega t)$
2. $\cos(kx - \omega t)$
3. $e^{i(kx - \omega t)} = e^{ikx} e^{-i\omega t}$, time dependence $\propto e^{-i\omega t}$
4. $e^{-i(kx - \omega t)} = e^{-ikx} e^{i\omega t}$, time dependence $\propto e^{+i\omega t}$

In the third and fourth options we have indicated that the time dependence could come with either sign. We will use superposition to decide which is the right one! We are looking for a wavefunction which is non-zero for all values of x.

Let’s take them one by one:
1. Starting from (1), we build a superposition in which the particle has equal probability to be found moving in the +x and the −x directions:
$$\Psi(x, t) = \sin(kx - \omega t) + \sin(kx + \omega t) \tag{3.1}$$
Expanding the trigonometric functions, this can be simplified to
$$\Psi(x, t) = 2 \sin(kx) \cos(\omega t). \tag{3.2}$$
But this result is not sensible. The wave function vanishes identically for all x at some special times
$$\omega t = \frac{\pi}{2}, \frac{3\pi}{2}, \frac{5\pi}{2}, \ldots \tag{3.3}$$
A wavefunction that is zero cannot represent a particle.
2. Constructing a wave function from (2) with a superposition of left- and right-going cosine waves,
$$\Psi(x, t) = \cos(kx - \omega t) + \cos(kx + \omega t) = 2 \cos(kx) \cos(\omega t). \tag{3.4}$$
This choice is no good; it also vanishes identically when $\omega t = \frac{\pi}{2}, \frac{3\pi}{2}, \ldots$
3. Let’s try a similar superposition of exponentials from (3), with both having the same time dependence:
$$\Psi(x, t) = e^{i(kx - \omega t)} + e^{i(-kx - \omega t)} = \left( e^{ikx} + e^{-ikx} \right) e^{-i\omega t} = 2 \cos(kx) \, e^{-i\omega t}. \tag{3.5--3.7}$$
This wavefunction meets our criteria! It is never zero for all values of x, because $e^{-i\omega t}$ is never zero.

4. A superposition of exponentials from (4) also meets our criteria:
$$\Psi(x, t) = e^{-i(kx - \omega t)} + e^{-i(-kx - \omega t)} = \left( e^{ikx} + e^{-ikx} \right) e^{i\omega t} = 2 \cos(kx) \, e^{i\omega t}. \tag{3.8--3.10}$$
This is never zero for all values of x.
Since both options (3) and (4) seem to work, we ask: Can we use both (3) and (4) to represent a particle moving to the right (in the $+\hat{x}$ direction)? Let’s assume that we can. Then, since adding a state to itself should not change the state, we could represent the right-moving particle by using the sum of (3) and (4):
$$\Psi(x, t) = e^{i(kx - \omega t)} + e^{-i(kx - \omega t)} = 2 \cos(kx - \omega t). \tag{3.11}$$
This, however, is the same as (2), which we already showed leads to difficulties. Therefore we must choose between (3) and (4).

The choice is a matter of convention, and all physicists use the same convention. We take the free-particle wavefunction to be
$$\text{Free particle wavefunction:} \quad \Psi(x, t) = e^{i(kx - \omega t)}, \tag{3.12}$$
representing a particle with
$$p = \hbar k, \quad \text{and} \quad E = \hbar \omega. \tag{3.13}$$
In three dimensions the corresponding wavefunction would be
$$\text{Free particle wavefunction:} \quad \Psi(\mathbf{x}, t) = e^{i(\mathbf{k} \cdot \mathbf{x} - \omega t)}, \tag{3.14}$$
representing a particle with
$$\mathbf{p} = \hbar \mathbf{k}, \quad \text{and} \quad E = \hbar \omega. \tag{3.15}$$
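A short numerical confirmation of the argument above; the values $k = \omega = 1$ are chosen arbitrarily. The real superposition $2\sin(kx)\cos(\omega t)$ dies everywhere at $\omega t = \pi/2$, while the complex exponential has unit modulus at all times:

```python
import numpy as np

k = w = 1.0
x = np.linspace(0.0, 10.0, 1001)
t = (np.pi / 2) / w                     # a time with wt = pi/2

# option (1): sin(kx - wt) + sin(kx + wt) = 2 sin(kx) cos(wt)
real_sup = np.sin(k*x - w*t) + np.sin(k*x + w*t)
# option (3): exp(i(kx - wt)) has |Psi| = 1 everywhere
complex_wave = np.exp(1j * (k*x - w*t))

print(np.max(np.abs(real_sup)))       # ≈ 0: vanishes for every x
print(np.min(np.abs(complex_wave)))   # 1.0: never zero
```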
Andrew Turner and Sarah Geller transcribed Zwiebach’s handwritten notes to create the first LaTeX version of this document.

MIT OpenCourseWare, https://ocw.mit.edu
8.04 Quantum Physics I, Spring 2016
For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.

Source: https://ocw.mit.edu/courses/8-04-quantum-physics-i-spring-2016/0c07cbdc9c352c39eb9539b31ded90d7_MIT8_04S16_LecNotes4.pdf
Compression in Bayes nets
• A Bayes net compresses the joint
probability distribution over a set of
variables in two ways:
– Dependency structure
– Parameterization
• Both kinds of compression derive from
causal structure:
– Causal locality
– Independent causal mechanisms
Dependency structure
$$P(B, E, A, J, M) = P(B) \, P(E) \, P(A \mid B, E) \, P(J \mid A) \, P(M \mid A)$$

[Graph: Burglary → Alarm ← Earthquake; Alarm → JohnCalls, Alarm → MaryCalls]

Graphical model asserts:
$$P(V_1, \ldots, V_n) = \prod_{i=1}^{n} P(V_i \mid \text{parents}[V_i])$$
Dependency structure
$$P(B, E, A, J, M) = P(B) \, P(E \mid B) \, P(A \mid B, E) \, P(J \mid B, E, A) \, P(M \mid B, E, A, J)$$

[Graph: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls]

For any distribution:
$$P(V_1, \ldots, V_n) = \prod_{i=1}^{n} P(V_i \mid V_1, \ldots, V_{i-1})$$
Parameterization

P(B) = 0.001, P(E) = 0.002

Full CPT:

B E | P(A|B,E)
0 0 | 0.001
0 1 | 0.29
1 0 | 0.94
1 1 | 0.95

A | P(J|A)        A | P(M|A)
0 | 0.05          0 | 0.01
1 | 0.90          1 | 0.70

[Graph: Burglary, Earthquake → Alarm → JohnCalls, MaryCalls]
Parameterization

P(B) = 0.001, P(E) = 0.002

Noisy-OR:

B E | P(A|B,E)
0 0 | 0
0 1 | w_E = 0.29
1 0 | w_B = 0.94
1 1 | w_B + (1 − w_B) w_E

A | P(J|A)        A | P(M|A)
0 | 0.05          0 | 0.01
1 | 0.90          1 | 0.70

[Graph: Burglary, Earthquake → Alarm → JohnCalls, MaryCalls]
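The noisy-OR table can be generated from the two weights alone: each present cause independently fails to trigger the alarm with probability $1 - w_i$. Here $w_B = 0.94$ and $w_E = 0.29$, matching the full-CPT rows $P(A \mid B, \neg E) = 0.94$ and $P(A \mid \neg B, E) = 0.29$; a sketch:

```python
# Noisy-OR: P(A=1 | present causes) = 1 - prod_i (1 - w_i).
w = {"B": 0.94, "E": 0.29}

def p_alarm(present):
    q = 1.0                      # prob. that every present cause fails
    for cause in present:
        q *= 1.0 - w[cause]
    return 1.0 - q

print(p_alarm([]))           # 0.0
print(p_alarm(["E"]))        # 0.29
print(p_alarm(["B"]))        # 0.94
print(p_alarm(["B", "E"]))   # 0.9574 = w_B + (1 - w_B) w_E
```

This is the compression from independent causal mechanisms: two parameters instead of four CPT rows, and only one new parameter per additional cause.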
Outline
• The semantics of Bayes nets
– role of causality in structural compression
• Explaining away revisited
– role of causality in probabilistic inference
• Sampling algorithms for approximate
inference in graphical models
Global semantics

Joint probability distribution factorizes into product of local conditional probabilities:
$$P(V_1, \ldots, V_n) = \prod_{i=1}^{n} P(V_i \mid \text{parents}[V_i])$$

[Graph: Burglary, Earthquake → Alarm → JohnCalls, MaryCalls]
$$P(B, E, A, J, M) = P(B) \, P(E) \, P(A \mid B, E) \, P(J \mid A) \, P(M \mid A)$$
Global semantics

Joint probability distribution factorizes into product of local conditional probabilities:
$$P(V_1, \ldots, V_n) = \prod_{i=1}^{n} P(V_i \mid \text{parents}[V_i])$$

[Graph: Burglary, Earthquake → Alarm → JohnCalls, MaryCalls]

Necessary to assign a probability to any possible world, e.g.:
$$P(j, m, a, \neg b, \neg e) = P(j \mid a) \, P(m \mid a) \, P(a \mid \neg b, \neg e) \, P(\neg b) \, P(\neg e)$$
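Evaluating one possible world with the CPT numbers from the Parameterization slide; the 1/0 encoding of true/false is my own convention for this sketch:

```python
# Probability of one full assignment in the alarm network, via
# P(B,E,A,J,M) = P(B) P(E) P(A|B,E) P(J|A) P(M|A).
P_B, P_E = 0.001, 0.002
P_A = {(0, 0): 0.001, (0, 1): 0.29, (1, 0): 0.94, (1, 1): 0.95}
P_J = {0: 0.05, 1: 0.90}
P_M = {0: 0.01, 1: 0.70}

def joint(b, e, a, j, m):
    pb = P_B if b else 1 - P_B
    pe = P_E if e else 1 - P_E
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pj = P_J[a] if j else 1 - P_J[a]
    pm = P_M[a] if m else 1 - P_M[a]
    return pb * pe * pa * pj * pm

# alarm with no burglary and no earthquake, and both neighbors call:
print(joint(b=0, e=0, a=1, j=1, m=1))   # ≈ 6.28e-4
```

Summing `joint` over all 32 assignments returns 1, confirming the factorization defines a proper distribution.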
Local semantics

Global factorization is equivalent to a set of constraints on pairwise relationships between variables.

“Markov property”: Each node is conditionally independent of its non-descendants given its parents.

[Figure: node X with parents U₁…U_m, children Y₁…Y_n, and the children’s other parents Z₁ⱼ…Z_nⱼ. Figure by MIT OCW.]
Local semantics

Global factorization is equivalent to a set of constraints on pairwise relationships between variables.

“Markov property”: Each node is conditionally independent of its non-descendants given its parents.

Also: Each node is marginally (a priori) independent of any non-descendant unless they share a common ancestor.

[Figure: node X with parents U₁…U_m, children Y₁…Y_n, and the children’s other parents Z₁ⱼ…Z_nⱼ. Figure by MIT OCW.]
Local semantics

Global factorization is equivalent to a set of constraints on pairwise relationships between variables.

Each node is conditionally independent of all others given its “Markov blanket”: its parents, its children, and its children’s parents.

[Figure: node X with parents U₁…U_m, children Y₁…Y_n, and the children’s other parents Z₁ⱼ…Z_nⱼ. Figure by MIT OCW.]
Example
Burglary
Earthquake
Alarm
JohnCalls
MaryCalls
JohnCalls and MaryCalls are marginally (a priori) dependent, but
conditionally independent given Alarm. [“Common cause”]
Burglary and Earthquake are marginally (a priori) independent,
but conditionally dependent given Alarm. [“Common effect”]
Constructing a Bayes net
• Model reduces all pairwise dependence and
independence relations down to a basic set
of pairwise dependencies: graph edges.
• An analogy to learning kinship relations
– Many possible bases, some better than others
– A basis corresponding to direct causal
mechanisms seems to compress best.
An alternative basis
Suppose we get the direction of causality wrong...
Burglary
Earthquake
Alarm
JohnCalls
MaryCalls
• Does not capture the dependence between callers:
falsely believes P(JohnCalls, MaryCalls) =
P(JohnCalls) P(MaryCalls).
An alternative basis
Suppose we get the direction of causality wrong...
Burglary
Earthquake
Alarm
JohnCalls
MaryCalls
• Inserting a new arrow captures this correlation.
• This model is too complex: does not believe that
P(JohnCalls, MaryCalls|Alarm) =
P(JohnCalls|Alarm) P(MaryCalls|Alarm)
An alternative basis
Suppose we get the direction of causality wrong...
Burglary
Earthquake
Alarm
JohnCalls
MaryCalls
• Does not capture conditional dependence of causes | https://ocw.mit.edu/courses/9-66j-computational-cognitive-science-fall-2004/0c18b39d66e8cc6e2125099193dc722e_oct_12_2004_fin.pdf |
(“explaining away”): falsely believes that
P(Burglary, Earthquake|Alarm) =
P(Burglary|Alarm) P(Earthquake|Alarm)
An alternative basis
Suppose we get the direction of causality wrong...
[Figure: the alarm network with the causal arrows reversed]
• Another new arrow captures this dependence.
• But again too complex: does not believe that
P(Burglary, Earthquake) =
P(Burglary)P(Earthquake)
Suppose we get the direction of causality wrong...
[Figure: the reversed network extended with PowerSurge and BillsCalls]
• Adding more causes or effects requires a
combinatorial proliferation of extra arrows. Too
general, not modular, too many parameters….
Constructing a Bayes net
• Model reduces all pairwise dependence and
independence relations down to a basic set of
pairwise dependencies: graph edges.
• An analogy to learning kinship relations
– Many possible bases, some better than others
– A basis corresponding to direct causal
mechanisms seems to compress best.
• Finding the minimal dependence structure
suggests a basis for learning causal models.
Outline
• The semantics of Bayes nets
– role of causality in structural compression
• Explaining away revisited
– role of causality in probabilistic inference
• Sampling algorithms for approximate
inference in graphical models
Explaining away
• Logical OR: Independent deterministic causes
[Figure: Burglary → Alarm ← Earthquake]

B E P(A|B,E)
0 0 0
0 1 1
1 0 1
1 1 1
Explaining away
• Logical OR: Independent deterministic causes
[Figure: Burglary → Alarm ← Earthquake]

A priori, no correlation between B and E:

P(b, e) = P(b) P(e)

B E P(A|B,E)
0 0 0
0 1 1
1 0 1
1 1 1
Explaining away
• Logical OR: Independent deterministic causes | https://ocw.mit.edu/courses/9-66j-computational-cognitive-science-fall-2004/0c18b39d66e8cc6e2125099193dc722e_oct_12_2004_fin.pdf |
[Figure: Burglary → Alarm ← Earthquake]

B E P(A|B,E)
0 0 0
0 1 1
1 0 1
1 1 1

After observing A = a …

P(b | a) = P(a | b) P(b) / P(a),  with P(a | b) = 1.
Explaining away
• Logical OR: Independent deterministic causes
B E P(A|B,E)
0 0 0
0 1 1
1 0 1
1 1 1

[Figure: Burglary → Alarm ← Earthquake]

After observing A = a …

P(b | a) = P(b) / P(a) > P(b)

May be a big increase if P(a) is small.
Explaining away
• Logical OR: Independent deterministic causes
[Figure: Burglary → Alarm ← Earthquake]

After observing A = a …

P(b | a) = P(b) / [P(b) + P(e) − P(b) P(e)] > P(b)

B E P(A|B,E)
0 0 0
0 1 1
1 0 1
1 1 1

May be a big increase if P(b), P(e) are small.
Explaining away
• Logical OR: Independent deterministic causes
[Figure: Burglary → Alarm ← Earthquake]

After observing A = a, E = e …

B E P(A|B,E)
0 0 0
0 1 1
1 0 1
1 1 1

P(b | a, e) = P(a | b, e) P(b | e) / P(a | e)

Both terms P(a | b, e) and P(a | e) equal 1.
Explaining away
• Logical OR: Independent deterministic causes
[Figure: Burglary → Alarm ← Earthquake]

After observing A = a, E = e …

B E P(A|B,E)
0 0 0
0 1 1
1 0 1
1 1 1

P(b | a, e) = P(a | b, e) P(b | e) / P(a | e) = P(b | e) = P(b)

“Explaining away” or “causal discounting”
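A minimal numeric sketch of this calculation, with a deterministic-OR alarm and hypothetical priors P(b) = 0.01 and P(e) = 0.02:

```python
from itertools import product

# Deterministic-OR alarm: A = B or E, with hypothetical priors.
p_b, p_e = 0.01, 0.02

def joint(b, e, a):
    if a != (b | e):                  # alarm is exactly the OR of its causes
        return 0.0
    return (p_b if b else 1 - p_b) * (p_e if e else 1 - p_e)

def prob(query, given):
    num = den = 0.0
    for b, e, a in product([0, 1], repeat=3):
        world = dict(B=b, E=e, A=a)
        p = joint(b, e, a)
        if all(world[k] == v for k, v in given.items()):
            den += p
            if all(world[k] == v for k, v in query.items()):
                num += p
    return num / den

p_a = p_b + p_e - p_b * p_e            # P(a) for an OR of independent causes
print(prob({'B': 1}, {'A': 1}), p_b / p_a)   # equal, and larger than p_b
print(prob({'B': 1}, {'A': 1, 'E': 1}))      # back down to p_b: explaining away
```

Observing the alarm raises P(b) from 0.01 to roughly 0.34; additionally observing the earthquake drops it back to the prior.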
Explaining away
• Depends on the functional form (the
parameterization) of the CPT
– OR or Noisy-OR: Discounting
– AND: No Discounting
– Logistic: Discounting from parents with
positive weight; augmenting from parents with
negative weight.
– Generic CPT: Parents become dependent when
conditioning on a common child.
Parameterizing the CPT
• Logistic: Independent probabilistic causes
with varying strengths wi and a threshold θ
[Figure: Child 1 upset (C1) and Child 2 upset (C2) → Parent upset, with threshold θ]

C1 C2 P(Pa|C1,C2)
0  0  1/[1 + exp(−(0 − θ))]
0  1  1/[1 + exp(−(w2 − θ))]
1  0  1/[1 + exp(−(w1 − θ))]
1  1  1/[1 + exp(−(w1 + w2 − θ))]

Annoyance = C1·w1 + C2·w2
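The table above is one function of the weights and threshold. A one-line version of this parameterization (the strengths and threshold below are hypothetical):

```python
import math

def logistic_cpt(causes, weights, theta):
    """P(parent upset = 1 | causes): logistic of (annoyance - threshold)."""
    annoyance = sum(w * c for w, c in zip(weights, causes))
    return 1.0 / (1.0 + math.exp(-(annoyance - theta)))

w1, w2, theta = 2.0, 1.0, 1.5      # hypothetical strengths and threshold
for c1, c2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(c1, c2, round(logistic_cpt((c1, c2), (w1, w2), theta), 3))
```

With a positive weight each active cause raises the probability; a negative weight would lower it, which is the “augmenting” case mentioned above.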
Contrast w/ conditional reasoning
[Figure: Rain → Grass Wet ← Sprinkler]

• Formulate IF-THEN rules:
– IF Rain THEN Wet
– IF Wet THEN Rain
– IF Wet AND NOT Sprinkler THEN Rain
• Rules do not distinguish directions of inference
• Requires combinatorial explosion of rules
Spreading activation or recurrent
neural networks
[Figure: Burglary → Alarm ← Earthquake]

• Excitatory links: Burglary → Alarm, Earthquake → Alarm
• Observing earthquake, Alarm becomes more active.
• Observing alarm, Burglary and Earthquake become
more active.
• Observing alarm and earthquake, Burglary cannot
become less active. No explaining away!
Spreading activation or recurrent | https://ocw.mit.edu/courses/9-66j-computational-cognitive-science-fall-2004/0c18b39d66e8cc6e2125099193dc722e_oct_12_2004_fin.pdf |
more active.
• Observing alarm and earthquake, Burglary cannot
become less active. No explaining away!
Spreading activation or recurrent
neural networks
[Figure: Burglary → Alarm ← Earthquake]

• Excitatory links: Burglary → Alarm, Earthquake → Alarm
• Inhibitory link: Burglary ↔ Earthquake
• Observing alarm, Burglary and Earthquake become more active.
• Observing alarm and earthquake, Burglary becomes
less active: explaining away.
Spreading activation or recurrent
neural networks
[Figure: Burglary, Power surge, Earthquake → Alarm]
• Each new variable requires more inhibitory
connections.
• Interactions between variables are not causal.
• Not modular.
– Whether a connection exists depends on what other
connections exist, in non-transparent ways.
– Combinatorial explosion of connections
The relation between PDP and
Bayes nets
• To what extent does Bayes net inference
capture insights of the PDP approach?
• To what extent do PDP networks capture or
approximate Bayes nets?
Summary
Bayes nets, or directed graphical models, offer
a powerful representation for large
probability distributions:
– Ensure tractable storage, inference, and
learning
– Capture causal structure in the world and
canonical patterns of causal reasoning.
– This combination is not a coincidence.
Still to come
• Applications to models of categorization
• More on the relation between causality and
probability:
Causal structure
Statistical dependencies
• Learning causal graph structures.
• Learning causal abstractions (“diseases
cause symptoms”)
• What’s missing from graphical models
Outline
• The semantics of Bayes nets
– role of causality in structural compression
• Explaining away revisited
– role of causality in probabilistic inference
• Sampling algorithms for approximate
inference in graphical models
Motivation
• What is the problem of inference?
– Reasoning from observed variables to
unobserved variables
• Effects to causes (diagnosis):
P(Burglary = 1|JohnCalls = 1, MaryCalls = 0)
• Causes to effects (prediction):
P(JohnCalls = 1|Burglary = 1)
P(JohnCalls = 0, MaryCalls = 0|Burglary = 1)
Motivation
• What is the problem of inference?
– Reasoning from observed variables to
unobserved variables.
– Learning, where hypotheses are
represented by unobserved variables.
• e.g., Parameter estimation in coin flipping:
[Figure: parameter θ with P(H) = θ, generating observed flips d1, d2, d3, d4]
Motivation
• What is the problem of inference?
– Reasoning from observed variables to
unobserved variables.
– Learning, where hypotheses are
represented by unobserved variables.
• Why is it hard?
– In principle, must consider all possible
states of all variables connecting input
and output variables.
A more complex system
[Figure: Battery → Radio; Battery → Ignition; Ignition, Gas → Starts; Starts → On time to work]
• Joint distribution sufficient for any inference:
P(B,R,I,G,S,O) = P(B) P(R|B) P(I|B) P(G) P(S|I,G) P(O|S)

P(O|G) = P(O,G) / P(G) = [ Σ_{B,R,I,S} P(B,R,I,G,S,O) ] / P(G)

P(A) = Σ_B P(A,B)    (“marginalization”)
A more complex system
[Figure: Battery → Radio; Battery → Ignition; Ignition, Gas → Starts; Starts → On time to work]
• Joint distribution sufficient for any inference:
P(B,R,I,G,S,O) = P(B) P(R|B) P(I|B) P(G) P(S|I,G) P(O|S)

P(O|G) = P(O,G) / P(G) = [ Σ_{B,R,I,S} P(B) P(R|B) P(I|B) P(G) P(S|I,G) P(O|S) ] / P(G)
A more complex system
[Figure: Battery → Radio; Battery → Ignition; Ignition, Gas → Starts; Starts → On time to work]
• Joint distribution sufficient for any inference:
P(B,R,I,G,S,O) = P(B) P(R|B) P(I|B) P(G) P(S|I,G) P(O|S)

P(O|G) = P(O,G) / P(G) = Σ_{B,I} ( Σ_S P(B) P(I|B) P(S|I,G) P(O|S) )
A more complex system
[Figure: Battery → Radio; Battery → Ignition; Ignition, Gas → Starts; Starts → On time to work]
• Joint distribution sufficient for any inference:
P(B,R,I,G,S,O) = P(B) P(R|B) P(I|B) P(G) P(S|I,G) P(O|S)
• Exact inference algorithms via local computations
– for graphs without loops: belief propagation
– in general: variable elimination or junction tree, but these
will still take exponential time for complex graphs.
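The saving from local computation shows up even in this small network. Below, P(O|G) is computed once by brute-force enumeration over all variables, and once with the sums pushed inward so that Radio drops out and P(G) cancels; all CPT numbers are hypothetical:

```python
from itertools import product

# Hypothetical CPTs for the car network.
pB, pG = 0.9, 0.95                      # P(Battery), P(Gas)
pR = {1: 0.95, 0: 0.0}                  # P(Radio | Battery)
pI = {1: 0.97, 0: 0.0}                  # P(Ignition | Battery)
pS = {(1, 1): 0.99, (1, 0): 0.0, (0, 1): 0.0, (0, 0): 0.0}  # P(Starts | I, G)
pO = {1: 0.9, 0: 0.1}                   # P(On time | Starts)

def p(v, q):                            # P(var = v) given P(var = 1) = q
    return q if v else 1 - q

def joint(b, r, i, g, s, o):
    return (p(b, pB) * p(r, pR[b]) * p(i, pI[b]) * p(g, pG)
            * p(s, pS[(i, g)]) * p(o, pO[s]))

# Brute force: P(O=1 | G=1) by summing the full joint.
num = sum(joint(b, r, i, 1, s, 1) for b, r, i, s in product([0, 1], repeat=4))
den = sum(joint(b, r, i, 1, s, o) for b, r, i, s, o in product([0, 1], repeat=5))
brute = num / den

# Sums pushed inward: Radio and P(G) drop out, far fewer operations.
elim = sum(p(b, pB) * p(i, pI[b])
           * sum(p(s, pS[(i, 1)]) * pO[s] for s in [0, 1])
           for b, i in product([0, 1], repeat=2))
print(abs(brute - elim) < 1e-12)        # True: same answer either way
```

In larger graphs the brute-force sum grows exponentially while a good elimination ordering often stays tractable, which is exactly the point made above.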
Sampling possible worlds
<cloudy, ¬sprinkler, rain, wet>
<¬cloudy, sprinkler, ¬rain, wet>
<¬cloudy, sprinkler, ¬rain, ¬wet>
<¬cloudy, sprinkler, rain, wet>
<cloudy, ¬sprinkler, ¬rain, wet>
<cloudy, ¬sprinkler, rain, wet>
<¬cloudy, ¬sprinkler, ¬rain, ¬wet>
. . .
As the sample gets larger,
the frequency of each
possible world approaches
its true prior probability
under the model.
How do we use these
samples for inference?
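One answer is rejection sampling: keep only the sampled worlds consistent with the evidence and read off frequencies among the kept worlds. A sketch on a small sprinkler network (CPT values hypothetical):

```python
import random

random.seed(0)

# Minimal sprinkler network with hypothetical CPTs.
def sample_world():
    c = random.random() < 0.5                     # Cloudy
    s = random.random() < (0.1 if c else 0.5)     # Sprinkler
    r = random.random() < (0.8 if c else 0.2)     # Rain
    w = random.random() < (0.99 if (s or r) else 0.0)   # Wet
    return c, s, r, w

# Rejection sampling: estimate P(rain | wet) by discarding every sampled
# world that disagrees with the evidence.
kept = rain = 0
for _ in range(200_000):
    c, s, r, w = sample_world()
    if w:                                         # evidence: grass is wet
        kept += 1
        rain += r
print(rain / kept)      # close to the exact posterior P(rain = 1 | wet = 1)
```

The obvious weakness, taken up by likelihood weighting and Gibbs sampling, is that almost all samples are rejected when the evidence is improbable.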
Summary
• Exact inference methods do not scale well to
large, complex networks
• Sampling-based approximation algorithms can
solve inference and learning problems in arbitrary
networks, and may have some cognitive reality.
– Rejection sampling, Likelihood weighting
• Cognitive correlate: imagining possible worlds
– Gibbs sampling
• Neural correlate: Parallel local message-passing
dynamical system
• Cognitive correlate: “Two steps forward, one step
back” model of cognitive development | https://ocw.mit.edu/courses/9-66j-computational-cognitive-science-fall-2004/0c18b39d66e8cc6e2125099193dc722e_oct_12_2004_fin.pdf |
8.06 Spring 2016 Lecture Notes
2. Time-dependent approximation methods
Aram Harrow
Last updated: March 12, 2016
Contents

1 Time-dependent perturbation theory
  1.1 Rotating frame
  1.2 Perturbation expansion
  1.3 NMR
  1.4 Periodic perturbations

2 Light and atoms
  2.1 Incoherent light
  2.2 Spontaneous emission
  2.3 The photoelectric effect

3 Adiabatic evolution
  3.1 The adiabatic approximation
  3.2 Berry phase
  3.3 Neutrino oscillations and the MSW effect
  3.4 Born-Oppenheimer approximation

4 Scattering
  4.1 Preliminaries
  4.2 Born Approximation
  4.3 Partial Waves
1 Time-dependent perturbation theory
Perturbation theory can also be used to analyze the case when we have a large static Hamiltonian
H0 and a small, possibly time-dependent, perturbation δH(t). In other words
H(t) = H0 + δH(t).
(1)
However, the more important difference from time-independent perturbation theory is in our goals:
we will seek to analyze the dynamics of the wavefunction (i.e. find |ψ(t)⟩ as a function of t) rather
than computing the spectrum of H. In fact, when we use a basis, we will work in the eigenbasis of
H0. For example, one common situation that we will analyze is that we start in an eigenstate of
H0, temporarily turn on a perturbation δH(t) and then measure in the eigenbasis of H0. This is a
bit abstract, so here is a more concrete version of the example. H0 is the natural Hamiltonian of
the hydrogen atom and δH(t) comes from electric and/or magnetic fields that we temporarily turn
on. If we start in the 1s state, then what is the probability that after some time we will be in the
2p state? (Note that the very definition of the states depends on H0 and not the perturbation.)
Time-dependent perturbation theory will equip us to answer these questions.
1.1 Rotating frame
We want to solve the time-dependent Schrödinger equation iℏ∂t|ψ(t)⟩ = H(t)|ψ⟩. We will assume
that the dynamics of H0 are simple to compute and that the computational difficulty comes from
δH(t). At the same time, if H0 is much larger than δH(t) then most of the change in the state will
come from H0. In classical dynamics when an object is undergoing two different types of motion,
it is often useful to perform a change of coordinates to eliminate one of them. We will do the same
thing here. Define the state
  |ψ̃(t)⟩ = e^{i H0 t/ℏ} |ψ(t)⟩.    (2)

We say that |ψ̃(t)⟩ is in the rotating frame or alternatively the interaction picture. Multiplying by e^{i H0 t/ℏ} cancels out the natural time evolution of H0. In particular, if δH(t) = 0 then we would have |ψ̃(t)⟩ = |ψ̃(0)⟩ = |ψ(0)⟩. Thus, any change in |ψ̃(t)⟩ must come from δH(t).
Aside: comparison to Schrödinger and Heisenberg pictures. In 8.05 we saw the Schrödinger picture and the Heisenberg picture. In the former, states evolve according to H and operators remain the same; in the latter, states stay the same and operators evolve according to H. The
interaction picture can be thought of as intermediate between these two. We pick a frame rotating
with H0, which means that the operators evolve according to H0 and the states evolve with the
remaining piece of the Hamiltonian | https://ocw.mit.edu/courses/8-06-quantum-physics-iii-spring-2016/0c27511c09675d8d385577023328248b_MIT8_06S16_chap2.pdf |
remaining piece of the Hamiltonian, namely δH. As we will see below, to calculate this evolution
correctly we need δH to rotate with H0, just like all other operators. This is a little vague but
below we will perform an exact calculation to demonstrate what happens.
Now let's compute the time evolution of |ψ̃(t)⟩:

  iℏ (d/dt)|ψ̃(t)⟩ = iℏ (d/dt)( e^{i H0 t/ℏ} |ψ(t)⟩ )
                  = −H0 e^{i H0 t/ℏ} |ψ(t)⟩ + e^{i H0 t/ℏ} (H0 + δH(t)) |ψ(t)⟩
                  = e^{i H0 t/ℏ} δH(t) |ψ(t)⟩        (since H0 and e^{i H0 t/ℏ} commute)
                  = e^{i H0 t/ℏ} δH(t) e^{−i H0 t/ℏ} |ψ̃(t)⟩

Thus we obtain an effective Schrödinger equation in the rotating frame,

  iℏ (d/dt)|ψ̃(t)⟩ = δH̃(t) |ψ̃(t)⟩,    (3)

where we have defined

  δH̃(t) = e^{i H0 t/ℏ} δH(t) e^{−i H0 t/ℏ}.
This has a simple interpretation as a matrix. Suppose that the eigenvalues and eigenvectors of H0 (reminder: we work with the eigenbasis of H0 and not H(t)) are given by

  H0 |n⟩ = En |n⟩.

Define δHmn(t) = ⟨m| δH(t) |n⟩. Then

  δH̃mn(t) = ⟨m| e^{i H0 t/ℏ} δH(t) e^{−i H0 t/ℏ} |n⟩ = e^{i(Em−En)t/ℏ} δHmn(t) ≡ e^{i ωmn t} δHmn(t),

where we have defined ωmn = (Em − En)/ℏ. If we define cn(t) according to

  |ψ̃(t)⟩ = Σ_n cn(t) |n⟩    ⟹    |ψ(t)⟩ = Σ_n e^{−i En t/ℏ} cn(t) |n⟩,

then we obtain the following coupled differential equations for the {cn}:

  iℏ ċm(t) = Σ_n δH̃mn(t) cn(t) = Σ_n e^{i ωmn t} δHmn(t) cn(t).
1.2 Perturbation expansion
So far everything has been exact, although sometimes this is already enough to solve interesting
problems. But often we will need approximate solutions. So assume that δH(t) = O(λ) and expand
the wavefunction in powers of λ, i.e.
  cm(t) = cm^(0)(t) + cm^(1)(t) + cm^(2)(t) + . . .
            O(1)       O(λ)       O(λ²)

  |ψ̃(t)⟩ = |ψ̃^(0)(t)⟩ + |ψ̃^(1)(t)⟩ + |ψ̃^(2)(t)⟩ + . . .

We can solve these order by order. Applying (3) we obtain
  iℏ ∂t|ψ̃^(0)(t)⟩ + iℏ ∂t|ψ̃^(1)(t)⟩ + iℏ ∂t|ψ̃^(2)(t)⟩ + . . . = δH̃(t)|ψ̃^(0)(t)⟩ + δH̃(t)|ψ̃^(1)(t)⟩ + . . .    (4)
       O(1)             O(λ)             O(λ²)                       O(λ)               O(λ²)
The solution is much simpler than in the time-dependent case. There is no zeroth order term on the RHS, so the zeroth order approximation is simply that nothing happens:

  |ψ̃^(0)(t)⟩ = |ψ̃^(0)(0)⟩ = |ψ(0)⟩.    (5)

The first-order terms yield

  iℏ ∂t|ψ̃^(1)(t)⟩ = δH̃(t)|ψ̃^(0)(t)⟩ = δH̃(t)|ψ(0)⟩.    (6)

Integrating, we find

  |ψ̃^(1)(t)⟩ = ∫_0^t dt′ δH̃(t′)/(iℏ) |ψ(0)⟩.    (7)

This leads to one very useful formula. If we start in state |n⟩, turn on H(t) for times 0 ≤ t ≤ T and then measure in the energy eigenbasis of H0, then we find that the probability of ending in state |m⟩ is

  Pn→m = | ∫_0^T dt′ e^{i ωmn t′} δHmn(t′)/(iℏ) |².

We can also continue to higher orders. The second-order solution is

  |ψ̃^(2)(t)⟩ = ∫_0^t dt′ ∫_0^{t′} dt″ δH̃(t′)/(iℏ) · δH̃(t″)/(iℏ) |ψ(0)⟩.    (8)
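As a sanity check of the first-order formula, we can compare it against exact evolution for a two-level system with a constant perturbation δH = λσx, for which first order predicts P₀→₁(t) = 4λ² sin²(ω0 t/2)/ω0² (here ℏ = 1 and the numerical values are hypothetical):

```python
import numpy as np

# Two-level check of first-order time-dependent PT (hbar = 1).
# H0 = (w0/2) sigma_z, constant perturbation dH = lam * sigma_x, lam << w0.
w0, lam, t = 1.0, 0.01, 7.3          # hypothetical values
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H = (w0 / 2) * sz + lam * sx

# Exact evolution by diagonalizing the full (Hermitian) Hamiltonian.
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
psi = U @ np.array([1.0, 0.0])       # start in the upper H0 eigenstate
p_exact = abs(psi[1]) ** 2           # probability of ending in the other state

# First-order prediction.
p_pert = 4 * lam**2 * np.sin(w0 * t / 2) ** 2 / w0**2
print(p_exact, p_pert)               # close agreement when lam << w0
```

The two numbers agree at the percent level here; the residual difference is the higher-order (Rabi) correction √(ω0²/4 + λ²) to the precession frequency.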
1.3 NMR
In some cases the rotating frame already helps us solve nontrivial problems exactly without going
to perturbation theory. Suppose we have a single spin-1/2 particle in a magnetic field pointing in
the ẑ direction. This field corresponds to a Hamiltonian

  H0 = ω0 Sz = (ℏω0/2) σz.
If the particle is a proton (i.e. hydrogen nucleus) and the field is typical for NMR, then ω0 might
be around 500 MHz.
Static magnetic field  Now let's add a perturbation consisting of a magnetic field in the x̂ direction. First we will consider the static perturbation

  δH(t) = Ω Sx,

where we will assume Ω ≪ ω0, e.g. Ω might be on the order of 20 kHz. (Why are we considering a
time-independent Hamiltonian with time-dependent perturbation theory? Because really it is the
time-dependence of the state and not the Hamiltonian that we are after.)
We can solve this problem exactly without using any tools of perturbation theory, but it will be
instructive to compare the exact answer with the approximate one. The exact evolution is given
by precession about the axis

  (ω0 ẑ + Ω x̂) / √(ω0² + Ω²)

at an angular frequency of √(ω0² + Ω²). If Ω ≪ ω0 then this is very close to precession around the ẑ axis.
Now let’s look at this problem using first-order perturbation theory.
  δH̃(t) = e^{i ω0 t σz/2} Ω Sx e^{−i ω0 t σz/2} = Ω (cos(ω0 t) Sx − sin(ω0 t) Sy)

  |ψ̃^(1)(t)⟩ = ∫_0^t dt′ δH̃(t′)/(iℏ) |ψ(0)⟩
             = (Ω/iℏ) ∫_0^t dt′ ( cos(ω0 t′) Sx − sin(ω0 t′) Sy ) |ψ(0)⟩
             = (1/iℏ)(Ω/ω0) ( sin(ω0 t) Sx + (cos(ω0 t) − 1) Sy ) |ψ(0)⟩

We see that the total change is proportional to Ω/ω0, which is ≪ 1. Since this is the difference from pure rotation around the ẑ axis, this is consistent with the exact answer we obtained.
The result of this calculation is that if we have a strong ẑ field, then adding a weak static x̂
field doesn’t do very much. If we want to have a significant effect on the state, we will need to do
something else. The rotating-frame | https://ocw.mit.edu/courses/8-06-quantum-physics-iii-spring-2016/0c27511c09675d8d385577023328248b_MIT8_06S16_chap2.pdf |
If we want to have a significant effect on the state, we will need to do
something else. The rotating-frame picture suggests the answer: the perturbation should rotate
along with the frame, so that in the rotating frame it appears to be static.
Rotating magnetic field Now suppose we apply the perturbation
δH(t) = Ω (cos(ω0t)Sx + sin(ω0t)Sy) .
We have already computed S̃x above. In the rotating frame we have

  S̃x = cos(ω0 t) Sx − sin(ω0 t) Sy
  S̃y = cos(ω0 t) Sy + sin(ω0 t) Sx

Thus

  δH̃(t) = Ω Sx.

The rotating-frame solution is now very simple:

  |ψ̃(t)⟩ = e^{−i Ω t σx/2} |ψ(0)⟩.

This can be easily translated back into the stationary frame to obtain

  |ψ(t)⟩ = e^{−i ω0 t σz/2} e^{−i Ω t σx/2} |ψ(0)⟩.
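This closed form can be verified numerically against a brute-force integration of the time-dependent Schrödinger equation (ℏ = 1; the field strengths below are hypothetical):

```python
import numpy as np

# Verify (hbar = 1, S = sigma/2): with the co-rotating drive
#   dH(t) = Omega * (cos(w0 t) Sx + sin(w0 t) Sy),
# the state psi(t) = exp(-i w0 t sz/2) exp(-i Om t sx/2) psi(0)
# solves i d(psi)/dt = [ (w0/2) sz + dH(t) ] psi.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
w0, Om, T = 2.0, 0.3, 3.0                      # hypothetical values

def H(t):
    return w0 / 2 * sz + Om / 2 * (np.cos(w0 * t) * sx + np.sin(w0 * t) * sy)

def U(M):
    """exp(-i M) for Hermitian M, via eigendecomposition."""
    evals, evecs = np.linalg.eigh(M)
    return evecs @ np.diag(np.exp(-1j * evals)) @ evecs.conj().T

# Brute-force midpoint integration of the Schrodinger equation.
n = 10_000
dt = T / n
psi = np.array([1.0, 0.0], dtype=complex)
for k in range(n):
    psi = U(H((k + 0.5) * dt) * dt) @ psi

closed = U(w0 * T / 2 * sz) @ U(Om * T / 2 * sx) @ np.array([1.0, 0.0])
print(np.abs(psi - closed).max())              # tiny: the closed form is exact
```

The residual is just the O(dt²) integration error, confirming that in the rotating frame the drive looks static and simply rotates the spin about x̂ at rate Ω.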
1.4 Periodic perturbations
The NMR example suggests that transitions between eigenstates of H0 happens most effectively
when the perturbation rotates at the frequency ωmn. We will show that this holds more generally
in first-order perturbation theory. Suppose that
δH(t) = V cos(ωt),
for some time-independent operator V. If our system starts in state |n⟩ then at time t we can calculate

  cm^(1)(t) = ⟨m|ψ̃^(1)(t)⟩ = ∫_0^t dt′ δH̃mn(t′)/(iℏ)
            = ∫_0^t dt′ e^{i ωmn t′} δHmn(t′)/(iℏ)
            = (Vmn/iℏ) ∫_0^t dt′ cos(ωt′) e^{i ωmn t′}
            = (Vmn/2iℏ) ∫_0^t dt′ ( e^{i(ωmn+ω)t′} + e^{i(ωmn−ω)t′} )
            = (Vmn/2iℏ) [ (e^{i(ωmn+ω)t} − 1)/(ωmn + ω) + (e^{i(ωmn−ω)t} − 1)/(ωmn − ω) ]
The ωmn ± ω terms in the denominator mean that we will get the largest contribution when
ω ≈ |ωmn|. (A word about signs. By convention we have ω > 0, but ωmn is a difference of energies
and so can have either sign.) For concreteness, let’s suppose that ω ≈ ωmn; the ω ≈ −ωmn case is
similar. Then we have
  cm^(1)(t) ≈ (Vmn/2iℏ) · (e^{i(ωmn−ω)t} − 1)/(ωmn − ω).
If we now measure, then the probability of obtaining outcome m is

  Pn→m(t) ≈ |cm^(1)(t)|² = (|Vmn|²/ℏ²) · sin²((ωmn − ω)t/2)/(ωmn − ω)² = (|Vmn|²/ℏ²) · sin²(αt/2)/α²,
where we have defined the detuning α ≡ ωmn − ω. The t, α dependence is rather subtle, so we
examine it separately. Define
  f(t, α) = sin²(αt/2)/α².    (9)
For fixed α, f (t, α) is periodic in t.
It is more interesting to consider the case of fixed t, for which f has a sinc-like appearance (see
Fig. 1).
Figure 1: The function f(α, t) from (9), representing the amount of amplitude transferred at a fixed
time t as we vary the detuning α ≡ ωmn − ω.
It has zeroes at α = 2πn/t for integers n ≠ 0. Since the closest zeros to the origin are at ±2π/t, we call the region α ∈ [−2π/t, 2π/t] the “peak” and the rest of the real line (i.e. |α| ≥ 2π/t) the “tails.” For α → ∞, f(t, α) ≤ 1/α². Thus, the tail has total area bounded by 2∫_{2π/t}^{∞} dα/α² = O(t). For the peak, as α → 0, f(t, α) → t²/4. On the other hand, sin is concave, so for 0 ≤ θ ≤ π/2 we have sin(θ) ≥ [sin(π/2)/(π/2)] θ = (2/π)θ. Thus for |α| ≤ π/t we have f(α, t) ≥ t²/π². While these crude bounds do not determine the precise multiplicative constants, this does show that there is a region of width ∼ 1/t and height ∼ t², and so the peak also has area O(t).
We conclude that ∫_{−∞}^{∞} dα f(t, α) ∼ t. Dividing by t, we obtain

  ∫_{−∞}^{∞} dα f(t, α)/t ∼ 1.
On the other hand, f(t, α)/t → 0 as t → ∞ for all α ≠ 0. So as t → ∞ we see that f(t, α)/t is always nonnegative, always has total mass roughly independent of t, but approaches zero for all nonzero α. This means that it approaches a delta function. A more detailed calculation (omitted, but it uses complex analysis) shows that

  lim_{t→∞} f(t, α)/t = (π/2) δ(α).
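Both claims, that the total mass grows like (π/2)t and that it concentrates near α = 0, are easy to check numerically:

```python
import numpy as np

# Numerical check that f(t, a) = sin^2(a t/2)/a^2 has total mass (pi/2) t.
def f(t, a):
    return np.sin(a * t / 2) ** 2 / a ** 2

results = []
for t in [10.0, 100.0]:
    a = np.linspace(1e-6, 200.0, 1_000_001)   # alpha > 0 half-line, truncated
    y = f(t, a)
    da = a[1] - a[0]
    mass = 2 * da * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoid rule, x2 (even)
    results.append(mass / t)
print(results)    # both close to pi/2 ~ 1.5708
```

As t grows, the same total mass is supplied by an ever-narrower peak of height t²/4 around α = 0, which is the delta-function behavior claimed above.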
The reason to divide by t is that this identifies the rate of transitions per unit time. Define Rn→m = Pn→m/t. Then the above arguments imply that

  Rn→m ≈ (π/2) (|Vmn|²/ℏ²) δ(|ωmn| − ω)   for large t.    (10)
Linewidth In practice the frequency dependence is not a delta function. The term “linewidth”
refers to the width of the region of ω that drives a transition; more concretely, FWHM stands
for “full-width half-maximum” and denotes the width of the region that achieves ≥ 1/2 the peak
transition rate. The above derivation already suggests some reasons for nonzero linewidth.
1. Finite lifetime. If we apply the perturbation for a limited amount of time, or if the state we
are driving to/from has finite lifetime, then this will contribute linewidth on the order of 1/t.
2. Power broadening. If |Vmn| is large, then we will still see transitions for larger values of |α|.
For this to prevent us from seeing the precise location of a peak, we need also the phenomenon
of saturation in which transition rates all look the same above some threshold. (For example,
we might observe the fraction of a beam that is absorbed by some sample, and by definition
this cannot go above 1.)
There are many other sources of linewidth. In general we can think of both the driving frequency
ω and the gap frequency ωmn as being distributions rather than δ functions. The driving frequency
might come from a thermal distribution or a laser, both of which output a distribution of frequencies. The linewidth of a laser is much lower but still nonzero. The energy difference ℏωmn seems like a universal constant, but can also be replaced by a distribution by phenomena such as Doppler
broadening, in which the thermal motion of an atom will redshift or blueshift the incident light.
This is just one example of a more general phenomenon in which interactions with other degrees of
freedom can add to the linewidth; e.g. consider the hyperfine splitting, which measures the small
shifts in an electron’s energy from its interaction with the nuclear spin. This can be thought of as
adding to linewidth in two different, roughly equivalent, ways. We might think of the nuclear spins
as random and thus the interaction adds a random term to the electron's Hamiltonian. Alternatively, we might view the interaction with the nuclear spin as a source of decoherence and thus as
contributing to the finite lifetime of the electron’s excited state. We will not explore those issues
further here.
The other contribution to the rate is the matrix element |Vmn|. This depends not only on the
strength of the perturbation, but also expresses the important point that we only see transitions
from n → m if Vmn ≠ 0. This is called a selection rule. In Griffiths it is proved that transitions from
electric fields (see the next section) from Hydrogen state |n, l, m⟩ to |n′, l′, m′⟩ are only possible
when |l − l(cid:48)| = 1 and |m − m(cid:48)| ≤ 1 (among other restrictions). Technically these constraints hold
only for first-order perturbation theory, but still selection rules are important, since they tell us
when we need to go to higher-order perturbation theory to see transitions (known as “forbidden
transitions”). In those cases transition rates are much lower. One dramatic example is that 2p → 1s
transition in hydrogen takes 1.6ns because it occurs at first order while the 2s → 1s transition takes
0.12 seconds. For this reason states such as the 2s states are called “metastable.”
We now consider the most important special case, which gets its own top-level section, despite
being an example of a periodic perturbation, which itself is an example of first-order perturbation
theory.
2 Light and atoms
Light consists of oscillating E⃗ and B⃗ fields. The effects of the B⃗ fields are weaker by a factor O(v/c) ∼ α, so we will focus on the E⃗ fields. Let

  E⃗(r⃗) = E0 ẑ cos(ωt − kx).
However, optical wavelengths are 4000-8000˚A, while the Bohr radius is ≈ 0.5˚A, so to leading order
we can neglect the x dependence. Thus we approximate
δH(t) = eE0 z cos(ωt).
(11)
We now can apply the results on transition rates from the last section with Vmn = eE0⟨m|z|n⟩.
(This term is responsible for selection rules and for the role of polarization.) Thus the rate of
transitions is
Rn m =
→
0 |(cid:104)m|z n
| (cid:105)|2δ(|ωmn| − ω).
π e2E2
2 (cid:126)2
(12)
We get contributions at ωmn = ±ω corresponding to both absorption and stimulated emission.
Aside: quantizing light What about spontaneous emission? This does not appear in the
semiclassical treatment we’ve described here. Nor do the photons. “Absorption” means jumping
from a low-energy state to a higher-energy state, and “stimulated emission” means jumping from
high energy to low energy. In the former case, we reason from energy conservation that a photon
must have been absorbed, and in the latter, a photon must have been emitted. However, these
arguments are rather indirect. A much more direct explanation of what happens to the photon
comes from a more fully quantum treatment. This also yields the phenomenon of spontaneous
emission. Recall from 8.05 that oscillating electromagnetic fields can be quantized as follows:
$$\hat{E}_0 = \mathcal{E}_0(\hat{a} + \hat{a}^\dagger), \qquad
\mathcal{E}_0 = \sqrt{\frac{2\pi\hbar\omega}{V}} \ \text{(Gaussian units)}
= \sqrt{\frac{\hbar\omega}{\epsilon_0 V}} \ \text{(SI units)}.$$
Using $\delta H = e\hat{E}_0 z$, we obtain
$$\delta H = e\mathcal{E}_0\, z \otimes (\hat{a} + \hat{a}^\dagger).$$
If we look at the action of $z$ in the $\{1s, 2p_z\}$ basis, then it has the form
$$\begin{pmatrix} 0 & \alpha \\ \alpha & 0 \end{pmatrix}
\qquad \text{with } \alpha = \langle 1s|z|2p_z\rangle.$$
We then obtain the form of the Hamiltonian examined on pset 3.
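The matrix element $\alpha$ can be checked numerically. The sketch below works in units $a_0 = 1$ and assumes the standard hydrogen radial functions $R_{10}(r) = 2e^{-r}$ and $R_{21}(r) = re^{-r/2}/\sqrt{24}$ (not written out in these notes); the angular integral of $Y_{00}\cos\theta\, Y_{10}$ contributes a factor $1/\sqrt{3}$:

```python
import numpy as np
from scipy.integrate import quad

# Numerical evaluation of alpha = <1s| z |2p_z> for hydrogen, in units a0 = 1.
# Radial part: Int_0^inf R_10(r) R_21(r) r^3 dr; angular part contributes 1/sqrt(3).
radial, _ = quad(lambda r: (2*np.exp(-r)) * (r*np.exp(-r/2)/np.sqrt(24)) * r**3,
                 0, np.inf)
alpha = radial / np.sqrt(3)

print(alpha)                    # ~0.7449 (in units of a0)
print(128*np.sqrt(2)/243)       # known closed form 2^7 sqrt(2) / 3^5
```

This reproduces the textbook value $\alpha = \frac{128\sqrt{2}}{243}\,a_0 \approx 0.745\,a_0$.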
This perspective also can be used to give a partial derivation of the Lamb shift, which can be
thought of as the interaction of the electron with fluctuations in the electric field of the vacuum.
In the vacuum (i.e. the ground state of the photon field) we have
$\langle \hat{E}_0 \rangle \sim \langle \hat{a} + \hat{a}^\dagger \rangle = 0$ but
$\langle \hat{E}_0^2 \rangle \sim \langle (\hat{a} + \hat{a}^\dagger)^2 \rangle > 0$.
These vacuum fluctuations lead to a small separation in energy between the 2s
and 2p levels of hydrogen.
Dipole moment In the Stark effect we looked at the interaction of the hydrogen atom with an
electric field. This was a special case of the interaction between an $\vec{E}$ field and the dipole moment
of a collection of particles. Here we discuss the more general case.
Suppose that we have charges $q_1, \ldots, q_N$ at positions $\vec{x}^{(1)}, \ldots, \vec{x}^{(N)}$, and we apply an electric
field $\vec{E}(\vec{x})$. The energy is determined by the scalar potential $\phi(\vec{x})$, which is related to the electric
field by $\vec{E} = -\vec{\nabla}\phi$. If $\vec{E}(\vec{x}) = \vec{E}$ (i.e. independent of position $\vec{x}$) then one possible solution is
$\phi(\vec{x}) = -\vec{x}\cdot\vec{E}$. In this case the Hamiltonian will be
$$H = \sum_{i=1}^N q_i \phi\big(\vec{x}^{(i)}\big) = -\Big(\sum_{i=1}^N q_i \vec{x}^{(i)}\Big)\cdot\vec{E} = -\vec{d}\cdot\vec{E}$$
where we have defined the dipole moment $\vec{d} = \sum_{i=1}^N q_i \vec{x}^{(i)}$. Our choice of $\phi$ was not unique, and
we could have chosen $\phi(\vec{x}) = C - \vec{x}\cdot\vec{E}$ for any constant $C$. However, this would only have added
an overall constant to the Hamiltonian, which would have no physical effect.
What if the electric field is spatially varying? If this spatial variation is small and we are near
the origin, we use the first few terms of the Taylor expansion to approximate the field:
$$\vec{E}(\vec{x}) = \vec{E}(0) + \sum_{i,j=1}^3 \frac{\partial E_i}{\partial x_j}\,\hat{e}_i x_j + \ldots.$$
This corresponds to a scalar potential of the form
$$\phi(\vec{x}) = -\sum_{i=1}^3 x_i E_i(0) - \frac{1}{2}\sum_{i,j=1}^3 x_i x_j \frac{\partial E_i}{\partial x_j} + \ldots$$
For the quadratic terms we see that the field couples not to the dipole moment, but to the quadrupole
moment, defined to be $\sum_{i=1}^N q_i\, \vec{x}^{(i)} \otimes \vec{x}^{(i)}$. This is related to emission lines such as $1s \to 3d$ in which
$\ell$ may change by up to $\pm 2$. Of course higher moments such as octupole moments can also be
considered. We will not explore these topics further in 8.06.
2.1 Incoherent light
While we have so far discussed monochromatic light with a definite polarization, it is easier to
produce light with a wide range of frequencies and with random polarization. To analyze the rate
of transitions this causes we will average (12) over frequencies and polarizations.
Begin with polarizations. Instead of the field being $E_0\hat{z}$, let the electric field be $E_0\hat{P}$ for some
random unit vector $\hat{P}$. We then replace $V$ with $-E_0\hat{P}\cdot\vec{d}$. The only operator here is the dipole
moment $\vec{d} = (d_1, d_2, d_3)$, so the matrix elements of $V$ are given by
$$V_{mn} = -E_0\,\hat{P}\cdot\vec{d}_{mn} = -E_0\sum_{i=1}^3 P_i\langle m|d_i|n\rangle.$$
Since the transition rate depends on $|V_{mn}|^2$, we will average this quantity over the choice of
polarization. Denote the average over all unit vectors $\hat{P}$ by $\langle\cdot\rangle_{\hat{P}}$.
$$\begin{aligned}
\big\langle |V_{mn}|^2 \big\rangle_{\hat{P}}
&= E_0^2\, \big\langle |\hat{P}\cdot\vec{d}_{mn}|^2 \big\rangle_{\hat{P}} \\
&= E_0^2 \sum_{i,j=1}^3 \big\langle \langle m|P_i d_i|n\rangle\langle n|P_j d_j|m\rangle \big\rangle_{\hat{P}} \\
&= E_0^2 \sum_{i,j=1}^3 \langle P_i P_j\rangle_{\hat{P}}\,\langle m|d_i|n\rangle\langle n|d_j|m\rangle \\
&= E_0^2 \sum_{i,j=1}^3 \frac{\delta_{ij}}{3}\,\langle m|d_i|n\rangle\langle n|d_j|m\rangle \qquad\text{(explained below)} \\
&= \frac{E_0^2}{3} \sum_{i=1}^3 |\langle m|d_i|n\rangle|^2 \equiv \frac{E_0^2}{3}\,|\vec{d}_{mn}|^2.
\end{aligned}$$
How did we calculate $\langle P_iP_j\rangle_{\hat{P}}$? This can be done by explicit calculation, but it is easier to use symmetry.
First, observe that the uniform distribution over unit vectors is invariant under reflection.
Thus, if $i \neq j$, then $\langle P_iP_j\rangle = \langle(-P_i)P_j\rangle = 0$. On the other hand, rotation symmetry means that
$\langle P_i^2\rangle$ should be independent of $i$. Since $P_1^2 + P_2^2 + P_3^2 = 1$, we also have $\langle P_1^2 + P_2^2 + P_3^2\rangle = 1$ and
thus $\langle P_i^2\rangle = 1/3$. Putting this together we obtain
$$\langle P_i P_j \rangle_{\hat{P}} = \frac{\delta_{ij}}{3}. \tag{13}$$
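Equation (13) is easy to confirm by Monte Carlo; the sketch below (sample size arbitrary) draws uniformly random unit vectors and checks that the matrix of averages $\langle P_iP_j\rangle$ is close to $\delta_{ij}/3$:

```python
import numpy as np

# Monte Carlo check of <P_i P_j> = delta_ij / 3 for uniformly random unit vectors.
# Normalizing Gaussian samples gives the uniform distribution on the sphere.
rng = np.random.default_rng(0)
v = rng.normal(size=(200_000, 3))
P = v / np.linalg.norm(v, axis=1, keepdims=True)
M = (P[:, :, None] * P[:, None, :]).mean(axis=0)  # 3x3 matrix of <P_i P_j>
print(np.round(M, 3))  # ~ diag(1/3, 1/3, 1/3)
```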
Next, we would like to average over different frequencies. The energy density of an electric field
is $U = \frac{E_0^2}{8\pi}$ (using Gaussian units). Define $U(\omega)$ to be the energy density at frequency $\omega$, so that
$U = \int U(\omega)\,d\omega$. If we consider light with this power spectrum, then we should integrate the rate
times this distribution over $U(\omega)$ to obtain
$$R_{n\to m} = \int d\omega\, U(\omega)\,\frac{4\pi^2}{3\hbar^2}\,|\vec{d}_{mn}|^2\,\delta(\omega - |\omega_{mn}|)
= \frac{4\pi^2}{3\hbar^2}\,|\vec{d}_{mn}|^2\, U(|\omega_{mn}|).$$
This last expression is known as Fermi's Golden Rule. (It was discovered by Dirac, but Fermi called
it "Golden Rule #2".)
2.2 Spontaneous emission
The modern description of spontaneous emission requires QED, but the first derivation of it predates
even modern quantum mechanics! In a simple and elegant argument, Einstein:
(a) derived an exact relation between rates of spontaneous emission, stimulated emission and
absorption; and
(b) proposed the phenomenon of stimulated emission, which was not observed until 1960.
He did this in 1917, more than a decade before even the Schr¨odinger equation!
Here we will reproduce that argument. It assumes a collection of atoms that can be in either
state a or state b. Suppose that there are Na atoms in state a and Nb atoms in state b, and that the
states have energies $E_a, E_b$ with $E_b > E_a$. Define $\omega_{ba} = (E_b - E_a)/\hbar$ and $\beta = 1/k_BT$. Assume further
that the atoms are in contact with a bath of photons and that the entire system is in thermal
equilibrium with temperature $T$. From this we can deduce three facts:

Fact 1. Equilibrium means no change: $\dot{N}_a = \dot{N}_b = 0$.

Fact 2. At thermal equilibrium we have $\dfrac{N_b}{N_a} = \dfrac{e^{-\beta E_b}}{e^{-\beta E_a}} = e^{-\beta\hbar\omega_{ba}}$.

Fact 3. At thermal equilibrium the black-body radiation spectrum is
$$U(\omega) = \frac{\hbar}{\pi^2 c^3}\,\frac{\omega^3}{e^{\beta\hbar\omega} - 1}. \tag{14}$$
We would like to understand the following processes:

- Absorption: a photon of frequency $\omega_{ba}$ is absorbed and an atom changes from state $a$ to state $b$. Rate: $B_{ab}N_a U(\omega_{ba})$.
- Spontaneous emission: a photon of frequency $\omega_{ba}$ is emitted and an atom changes from state $b$ to state $a$. Rate: $AN_b$.
- Stimulated emission: a photon of frequency $\omega_{ba}$ is emitted and an atom changes from state $b$ to state $a$. Rate: $B_{ba}N_b U(\omega_{ba})$.
These processes depend on the Einstein coefficients A, Bab and Bba for spontaneous emission,
absorption and stimulated emission respectively. They also depend on the populations of atoms
and/or photons that they involve; e.g. absorption requires an atom in state a and a photon of
frequency ωba, so its rate is proportional to NaU (ωba). Here it is safe to posit the existence of
stimulated emission because we have not assumed that Bba is nonzero.
Having set up the problem, we are now almost done! Adding these processes up, we get
$$\dot{N}_b = -N_b A - N_b B_{ba} U(\omega_{ba}) + N_a B_{ab} U(\omega_{ba}). \tag{15}$$
From Fact 1, $\dot{N}_b = 0$, and so we can rearrange (15) to obtain
$$U(\omega_{ba}) = \frac{A}{B_{ab}\frac{N_a}{N_b} - B_{ba}}
\overset{\text{Fact 2}}{=} \frac{A}{B_{ab}\,e^{\beta\hbar\omega_{ba}} - B_{ba}}
\overset{\text{Fact 3}}{=} \frac{\hbar\omega_{ba}^3}{\pi^2c^3}\,\frac{1}{e^{\beta\hbar\omega_{ba}} - 1}.$$
Since this relation should hold for all values of $\beta$, we can equate coefficients and find
$$B_{ab} = B_{ba} \tag{16a}$$
$$A = \frac{\hbar\omega_{ba}^3}{\pi^2c^3}\,B_{ab} \tag{16b}$$
We see that these three processes are tightly related! All from a simple thought experiment, and
not even the one that Einstein is most famous for.
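One can verify numerically that the Einstein relations (16) are exactly what is needed for the rate equation (15) to balance at thermal equilibrium. The sketch below uses units $\hbar = c = 1$; the values of $\beta$, $\omega$, $B$, $N_a$ are arbitrary:

```python
import numpy as np

# Check that with B_ab = B_ba = B and A = w^3 B / pi^2 (relation (16), hbar = c = 1),
# the rate equation (15) gives N_b_dot = 0 for thermal populations and Planck U(w).
beta, w, B, Na = 0.7, 2.3, 1.9, 1.0              # arbitrary parameters
A = w**3 / np.pi**2 * B                          # relation (16b)
U = w**3 / (np.pi**2 * (np.exp(beta*w) - 1))     # Planck spectrum, Fact 3
Nb = Na * np.exp(-beta*w)                        # Boltzmann ratio, Fact 2
Nb_dot = -Nb*A - Nb*B*U + Na*B*U                 # equation (15)
print(Nb_dot)  # ~ 0
```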
Today we can understand this as the fact that the electric field enters into the Hamiltonian as
a Hermitian operator proportional to $\hat{a} + \hat{a}^\dagger$, and so the photon-destroying processes containing $\hat{a}$
are inevitably accompanied by photon-creating processes containing $\hat{a}^\dagger$. Additionally, the relation
between spontaneous and stimulated emission can be seen in the fact that both involve an $\hat{a}^\dagger$
operator acting on the photon field. If there are no photons, then the field is in state $|0\rangle$ and we
get the term $\hat{a}^\dagger|0\rangle = |1\rangle$, corresponding to spontaneous emission. If there are already $n$ photons in
the mode, then we get the term $\hat{a}^\dagger|n\rangle = \sqrt{n+1}\,|n+1\rangle$. Since the probabilities are the squares of
the amplitudes, this means that we see photons emitted at $n+1$ times the spontaneous emission
rate. In Einstein's terminology, the $n$ here is from stimulated emission and the $+1$ is from the
spontaneous emission, which always occurs independent of the number of photons present.
Returning to (16), we plug in Fermi's Golden Rule and obtain the rates
$$B_{ab} = B_{ba} = \frac{4\pi^2}{3\hbar^2}|\vec{d}_{ab}|^2
\qquad\text{and}\qquad
A = \frac{4\omega_{ba}^3}{3\hbar c^3}|\vec{d}_{ab}|^2.$$
2.3 The photoelectric effect
So far we have considered transitions between individual pairs of states. Ionization (aka the photo-
electric effect) involves a transition from a bound state of some atom to one in which the electron is
in a free state. This presents a few new challenges. First, we are used to treating unbound states as
unnormalized, e.g. $\psi(\vec{x}) = e^{i\vec{k}\cdot\vec{x}}$. Second, to calculate the ionization rate, we need to sum over final
states, since the quantity of physical interest is the total rate of electrons being dislodged from an
atom, and not the rate at which they transition to any specific final state. (A more refined analysis
might look at the angular distribution of the final state.)
Suppose our initial state is our old friend, the ground state of the hydrogen atom, and the
final state is a plane wave. If the final state is unnormalized, then matrix elements such as
$\langle\psi_{\rm final}|V|\psi_{\rm initial}\rangle$ become less meaningful. One way to fix this is to put the system in a box of
size $L \times L \times L$ with $L \gg a_0 \equiv \frac{\hbar^2}{me^2}$ and to impose periodic boundary conditions. The resulting
plane-wave states are now
$$\psi_{\vec{k}}(\vec{x}) = \frac{\exp(i\vec{k}\cdot\vec{x})}{L^{3/2}},$$
where $\vec{k} = \frac{2\pi}{L}\vec{n}$ and $\vec{n} = (n_1, n_2, n_3)$ is a vector of integers. (We use $\vec{k}$ instead of $\vec{p} = \hbar\vec{k}$ to keep
the notation more compact.) We will assume that $L \gg a_0$ and also that the final energy of the
electron is $\gg 13.6$ eV. This means that we can approximate the final state as a free electron and
can ignore the interaction with the proton.
Apply an oscillating electric field to obtain the time-dependent potential
$$\delta H = eE_0 x_3 \cos(\omega t) \equiv V\cos(\omega t).$$
The rate of the transitions will be governed by the matrix element
$$\langle\vec{k}|V|1,0,0\rangle = \frac{eE_0}{\sqrt{\pi a_0^3 L^3}}
\underbrace{\int d^3x\; x_3 \exp\Big({-\frac{r}{a_0}} - i\vec{k}\cdot\vec{x}\Big)}_{A}
\equiv \frac{eE_0}{\sqrt{\pi a_0^3 L^3}}\, A.$$
We have defined $A$ to equal the difficult-to-evaluate integral. The factor of $x_3$ can be removed by
writing $A = i\frac{\partial}{\partial k_3}B$, where
$$B = \int d^3x\, \exp\Big({-\frac{r}{a_0}} - i\vec{k}\cdot\vec{x}\Big).$$
Defining $\mu = \frac{\vec{x}\cdot\vec{k}}{kr}$, $k = |\vec{k}|$, $r = |\vec{x}|$, and using $\int_0^\infty r^n e^{-\alpha r}\,dr = \frac{n!}{\alpha^{n+1}}$, we compute
$$\begin{aligned}
B &= 2\pi \int_0^\infty dr\, r^2 \int_{-1}^1 d\mu\, \exp\Big({-\frac{r}{a_0}} - ikr\mu\Big) \\
&= 4\pi \int_{-1}^1 \frac{d\mu}{\big(\frac{1}{a_0} + ik\mu\big)^3} \\
&= \frac{4\pi i}{k^3} \int_{-1}^1 \frac{d\mu}{\big(\mu - \frac{i}{ka_0}\big)^3} \\
&= -\frac{2\pi i}{k^3}\left[\frac{1}{\big(1 - \frac{i}{ka_0}\big)^2} - \frac{1}{\big(1 + \frac{i}{ka_0}\big)^2}\right] \\
&= \frac{8\pi}{a_0 k^4}\,\frac{1}{\big(1 + \frac{1}{k^2a_0^2}\big)^2} \\
&= \frac{8\pi}{a_0\big(k^2 + a_0^{-2}\big)^2}.
\end{aligned}$$
To compute $A$, we use the fact that $\frac{\partial}{\partial k_3}k^2 = 2k_3$. Thus
$$A = i\frac{\partial}{\partial k_3}B = \frac{-32\pi i\, k_3}{a_0\big(k^2 + a_0^{-2}\big)^3}.$$
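The closed form for $B$ can be checked numerically. Doing the angular integral first reduces $B$ to a one-dimensional radial integral, $B = \frac{4\pi}{k}\int_0^\infty dr\, r\, e^{-r/a_0}\sin(kr)$; the values of $a_0$ and $k$ below are arbitrary test choices:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of B = 8*pi / (a0 * (k^2 + a0^-2)^2) via the reduced
# radial integral (4*pi/k) * Int_0^inf dr r e^{-r/a0} sin(k r).
a0, k = 1.0, 3.0
radial, _ = quad(lambda r: r*np.exp(-r/a0)*np.sin(k*r), 0, 60, limit=200)
B_num = 4*np.pi/k * radial
B_closed = 8*np.pi / (a0 * (k**2 + a0**-2)**2)
print(B_num, B_closed)  # agree
```

(The upper limit 60 truncates a tail of size $\sim e^{-60}$, which is negligible.)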
We can simplify this expression using our assumption that the photon energy (and therefore also
the final state energy) is much larger than the binding energy. The final energy is $\frac{\hbar^2k^2}{2m}$ and the
binding energy is $\frac{\hbar^2}{2ma_0^2}$. Thus
$$\frac{\hbar^2k^2}{2m} \gg \frac{\hbar^2}{2ma_0^2} \implies ka_0 \gg 1.$$
We can use this to simplify $A$ by dropping the $a_0^{-2}$ term in the denominator:
$$A \approx \frac{-32\pi i\, k_3}{a_0 k^6} = \frac{-32\pi i\cos\theta}{a_0 k^5},$$
where $\theta$ is the angle between $\vec{k}$ and the $z$-axis.
(cid:126)
We can now compute the squared matrix element to be (canceling a factor of π from numerator
and denominator)
|(cid:104)(cid:126)k|V |1, 0, 0(cid:105)|2 =
e2E2 1024π cos2(θ)
0
a3L3
0
a2 10
0k
.
To compute the average rate over some time $t$, we will multiply this by $\frac{1}{\hbar^2}\frac{f(t,\alpha)}{t}$, where
$f(t,\alpha) \equiv \frac{\sin^2(\alpha t/2)}{\alpha^2}$ and $\hbar\alpha$ is the difference in energy between the initial and final states.
If we neglect the energy of the initial state, we obtain
$$R_{1,0,0\to\vec{k}} = \frac{1024\pi\, e^2E_0^2\cos^2\theta}{\hbar^2 a_0^5 k^{10} L^3}\;
\frac{f\big(t, \tfrac{\hbar k^2}{2m} - \omega\big)}{t}.$$
We can simplify this a bit by averaging over all the polarizations of the light. (In fact, the angular
dependence of the free electron can often carry useful information, but here it will help simplify
some calculations.) The average of $\cos^2\theta$ over the sphere is $1/3$ (by the same arguments we used
in the derivation of Fermi's golden rule), so we obtain
$$\langle R_{1,0,0\to\vec{k}}\rangle = \frac{1024\pi\, e^2E_0^2}{3\hbar^2 a_0^5 k^{10} L^3}\;
\frac{f\big(t, \tfrac{\hbar k^2}{2m} - \omega\big)}{t}.$$
Let’s pause for a minute to look at what we’ve derived. One strange feature is the 1/L3 term,
because the rate of ionization should not depend on how much empty space surrounds the atom.
Another strange thing appears to happen when we take t large, so that f (t, α)/t will approach
π δ(α). This would cause the transition rates to be nonzero only when 2mω/(cid:126) exactly equals k2 for
2
some valid vector k (i.e. of the form 2π (cid:126)n). We do not generally expect physical systems to have
such sensitive dependence on their parameters.
(cid:126)
L
As often happens when two things look wrong, these difficulties can be made to "cancel each
other out." Let us take $t$ to be large but finite. It will turn out that $t$ needs to be large only relative
to a timescale built from $L$, $m$, and $\hbar$, which is not very demanding when $L$ is large. In this case,
we can approximate $f(t,\alpha)$ with a step function:
$$f(t,\alpha) \approx \tilde{f}(t,\alpha) \equiv
\begin{cases} \frac{\pi t^2}{2} & \text{if } 0 \le \alpha \le \frac{1}{t} \\ 0 & \text{otherwise} \end{cases}$$
In what sense is this a good approximation? We argue that for large $t$, $\tilde{f}(t,\alpha)/t \approx \frac{\pi}{2}\delta(\alpha)$, just like
$f(t,\alpha)/t$. Suppose that $g(\alpha)$ is a function satisfying $|g'(\alpha)| \le C$ for all $\alpha$. Then
$$\begin{aligned}
\left|\int_{-\infty}^\infty d\alpha \left(\frac{\tilde{f}(t,\alpha)}{t} - \frac{\pi}{2}\delta(\alpha)\right) g(\alpha)\right|
&= \frac{\pi}{2}\left| t\int_0^{1/t} d\alpha\, g(\alpha) - g(0)\right| \\
&= \frac{\pi t}{2}\left|\int_0^{1/t} d\alpha\, \big(g(\alpha) - g(0)\big)\right| \\
&= \frac{\pi t}{2}\left|\int_0^{1/t} d\alpha \int_0^\alpha d\beta\, g'(\beta)\right| \\
&\le \frac{\pi t}{2}\int_0^{1/t} d\alpha \int_0^\alpha d\beta\, |g'(\beta)| \qquad\text{triangle inequality} \\
&\le \frac{\pi t}{2}\int_0^{1/t} d\alpha\, C\alpha = \frac{\pi t}{2}\cdot\frac{C}{2t^2} = \frac{\pi C}{4t}.
\end{aligned}$$
This tends to 0 as $t \to \infty$. (This is an example of a more general principle that the "shape" of a $\delta$
function doesn't matter. For example, the limit of a Gaussian distribution with $\sigma^2 \to 0$ would also
work.)
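The same delta-function behavior holds for $f$ itself and can be seen numerically. Note that $f(t,\alpha)/t = \frac{t}{4}\,\mathrm{sinc}^2\!\big(\frac{\alpha t}{2\pi}\big)$ with numpy's normalized sinc; the Gaussian test function below is an arbitrary smooth choice:

```python
import numpy as np

# Check that f(t, a)/t, with f(t, a) = sin^2(a t/2)/a^2, acts like (pi/2) delta(a):
# integrated against a smooth test function g it approaches (pi/2) g(0).
t = 200.0
a = np.linspace(-3, 3, 1_500_001)      # dense grid resolving the oscillations
da = a[1] - a[0]
g = np.exp(-a**2)                      # smooth test function with g(0) = 1
weight = (t/4) * np.sinc(a*t/(2*np.pi))**2   # equals f(t, a)/t
val = np.sum(weight * g) * da
print(val, np.pi/2)  # val approaches pi/2 as t grows
```

The residual discrepancy shrinks like $1/t$, consistent with the bound derived above.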
Now using $\tilde{f}(t,\alpha)$, we get a nonzero contribution for $k$ satisfying
$$\begin{aligned}
0 \le \frac{\hbar k^2}{2m} - \omega \le \frac{1}{t}
&\;\Leftrightarrow\; \frac{2m\omega}{\hbar} \le k^2 \le \frac{2m\omega}{\hbar}\Big(1 + \frac{1}{t\omega}\Big) \\
&\;\Leftrightarrow\; \sqrt{\frac{2m\omega}{\hbar}} \le k \le \sqrt{\frac{2m\omega}{\hbar}}\sqrt{1 + \frac{1}{t\omega}}
\approx \sqrt{\frac{2m\omega}{\hbar}}\Big(1 + \frac{1}{2t\omega}\Big).
\end{aligned} \tag{17}$$
How many $\vec{k}$ satisfy (17)? Valid $\vec{k}$ live on a cubic lattice with spacing $2\pi/L$, and thus have density
$(L/2\pi)^3$. Thus we can estimate the number of $\vec{k}$ satisfying (17) by $(L/2\pi)^3$ times the volume of
$\vec{k}$-space satisfying (17). This in turn corresponds to a spherical shell of inner radius $\sqrt{\frac{2m\omega}{\hbar}}$ and
thickness $\sqrt{\frac{2m\omega}{\hbar}}\frac{1}{2t\omega}$. Thus we have
$$\#\text{ valid }\vec{k} = \Big(\frac{L}{2\pi}\Big)^3 4\pi\Big(\frac{2m\omega}{\hbar}\Big)^{3/2}\frac{1}{2t\omega}
= \frac{L^3m}{2\pi^2\hbar t}\sqrt{\frac{2m\omega}{\hbar}} = \frac{L^3mk}{2\pi^2\hbar t}.$$
In the last step we use the fact that the spherical shell is thin to approximate $k \approx \sqrt{\frac{2m\omega}{\hbar}}$. Thus, when
we sum $\tilde{f}(t,\alpha)/t$ over $\vec{k}$ we obtain
$$\sum_{\vec{k}} \frac{\tilde{f}(t,\alpha)}{t} = \frac{\pi t}{2}\cdot\#\text{ valid }\vec{k} = \frac{L^3mk}{4\pi\hbar}.$$
We have obtained our factor of $L^3$ that removes the unphysical dependence on the boundary
conditions. Putting everything together we get
$$R_{1,0,0\to\text{all }\vec{k}} = \frac{256\, me^2E_0^2}{3\hbar^3 a_0^5 k^9}.$$
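The lattice-counting estimate above can be checked directly by enumerating the integer vectors $\vec{n}$ whose momenta $\vec{k} = \frac{2\pi}{L}\vec{n}$ satisfy (17). The sketch below uses units $\hbar = m = 1$, with arbitrary values of $L$, $t$, $\omega$ chosen so the shell contains many points:

```python
import numpy as np

# Count lattice momenta k = (2 pi / L) n in the shell (17) and compare with the
# estimate L^3 m k / (2 pi^2 hbar t).  Units: hbar = m = 1.
L, t, omega = 60.0, 5.0, 4.0
k2_lo, k2_hi = 2*omega, 2*omega*(1 + 1/(t*omega))   # bounds on k^2 from (17)
nmax = int(np.sqrt(k2_hi)*L/(2*np.pi)) + 2
n = np.arange(-nmax, nmax + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing='ij')
k2 = (2*np.pi/L)**2 * (nx**2 + ny**2 + nz**2)
count = np.count_nonzero((k2 >= k2_lo) & (k2 <= k2_hi))
estimate = L**3 * np.sqrt(k2_lo) / (2*np.pi**2 * t)
print(count, estimate)  # agree up to lattice fluctuations
```

The two numbers agree to within a few percent here; for thinner shells (larger $t\omega$) the number-theoretic fluctuations become relatively larger, which is why $t$ cannot be taken too large at fixed $L$.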
3 Adiabatic evolution
3.1 The adiabatic approximation
We now turn to a different kind of approximation, in which we consider slowly varying Hamiltonians.
We will consider a time-dependent Hamiltonian $H(t)$. Let $|\psi_n(t)\rangle$ and $E_n(t)$ be the "instantaneous"
eigenbases and eigenenergies, defined by
$$H(t)|\psi_n(t)\rangle = E_n(t)|\psi_n(t)\rangle, \qquad E_1(t) \le E_2(t) \le \ldots \tag{18}$$
We also define $|\Psi(t)\rangle$ to be the solution of Schrödinger's equation, i.e.
$$i\hbar\frac{\partial}{\partial t}|\Psi(t)\rangle = H(t)|\Psi(t)\rangle. \tag{19}$$
Beware that (18) and (19) are not the same. You might think of (18) as a naive attempt to solve
(19). If the system starts in $|\psi_n(0)\rangle$ at time 0, there is of course no reason in general to expect that
$|\psi_n(t)\rangle$ will be the correct solution for later $t$. And yet, the adiabatic theorem states that in some
cases this is exactly what happens.

Theorem 1 (Adiabatic theorem). Suppose at $t = 0$, $|\Psi(0)\rangle = |\psi_n(0)\rangle$ for some $n$. Then if $H$ is
changed slowly for $0 \le t \le T$, then at time $T$ we will have $|\Psi(T)\rangle \approx |\psi_n(T)\rangle$.
˙
This theorem is stated in somewhat vague terms, e.g. what does “changed slowly” mean? H
should be small, but relative to what? One clue is the reference to the nth eigenstate |ψn(t)(cid:105).
This is only well defined if En(t) is unique, so clearly the theorem fails in the case of degenerate
eigenvalues. And since the theorem should not behave discontinuously with respect to H(t), it
should also fail for “nearly” degenerate eigenvalues. This gives us another energy scale to compare
with H (which has units of energy/time, or energy squared once we multiply by (cid:126)). We will see
later the sense in which this can be shown to be the right comparison.
˙
15
(cid:126)
(cid:126)
S
(cid:126)
Example. Suppose we have a spin-1/2 particle in a magnetic field $\vec{B}(t)$. Then the Hamiltonian
is $H(t) = g_e\mu_B\, \vec{S}\cdot\vec{B}(t)$. The adiabatic theorem says that if we start with the spin and $\vec{B}$ both
pointing in the $+\hat{z}$ direction and gradually rotate $\vec{B}$ to point in the $\hat{x}$ direction, then the spin will
follow the magnetic field and also point in the $\hat{x}$ direction. Given that the Schrödinger equation
prescribes instead that the spin precess around the magnetic field, this behavior appears at first
somewhat strange.
Derivation We will not rigorously prove the adiabatic theorem, but will describe most of the
derivation. Begin by writing
$$|\Psi(t)\rangle = \sum_n c_n(t)|\psi_n(t)\rangle.$$
Taking derivatives of both sides we obtain
$$i\hbar\frac{d}{dt}|\Psi(t)\rangle = i\hbar\sum_n \Big(\dot{c}_n(t)|\psi_n(t)\rangle + c_n(t)|\dot\psi_n(t)\rangle\Big)
= \sum_n c_n(t)E_n(t)|\psi_n(t)\rangle.$$
Multiply both sides by $\langle\psi_k(t)|$ and we obtain
$$i\hbar\dot{c}_k = E_k c_k - i\hbar\sum_n \langle\psi_k|\dot\psi_n\rangle c_n. \tag{20}$$
Now we need a way to evaluate $\langle\psi_k|\dot\psi_n\rangle$ in terms of more familiar quantities. Start with
$$H|\psi_n\rangle = E_n|\psi_n\rangle.$$
Applying $\frac{d}{dt}$:
$$\dot{H}|\psi_n\rangle + H|\dot\psi_n\rangle = \dot{E}_n|\psi_n\rangle + E_n|\dot\psi_n\rangle.$$
Applying $\langle\psi_k|$:
$$\langle\psi_k|\dot{H}|\psi_n\rangle + E_k\langle\psi_k|\dot\psi_n\rangle = \dot{E}_n\delta_{kn} + E_n\langle\psi_k|\dot\psi_n\rangle.$$
This equation has two interesting cases: $k = n$ and $k \neq n$. The former will not be helpful in
estimating $\langle\psi_k|\dot\psi_n\rangle$, but does give us a useful result, called the Hellmann-Feynman theorem.
$$\begin{aligned}
k = n&: \quad \dot{E}_n = \langle\psi_n|\dot{H}|\psi_n\rangle \\
k \neq n&: \quad \langle\psi_k|\dot\psi_n\rangle = \frac{\langle\psi_k|\dot{H}|\psi_n\rangle}{E_n - E_k}
\equiv \frac{\dot{H}_{kn}}{E_n - E_k}
\end{aligned}$$
In the last step, we used $\dot{H}_{kn}$ to refer to the matrix elements of $\dot{H}$ in the $\{|\psi_n\rangle\}$ basis.
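The Hellmann-Feynman relation $\dot{E}_n = \langle\psi_n|\dot{H}|\psi_n\rangle$ is easy to test numerically on a toy Hamiltonian. The sketch below uses an arbitrary random symmetric $H(t) = H_0 + tV$ and compares a finite-difference derivative of the $n$th eigenvalue against the matrix element of $\dot{H} = V$:

```python
import numpy as np

# Finite-difference check of dE_n/dt = <psi_n| Hdot |psi_n> for H(t) = H0 + t*V
# with random symmetric H0, V (so Hdot = V).
rng = np.random.default_rng(1)
H0 = rng.normal(size=(5, 5)); H0 = H0 + H0.T
V = rng.normal(size=(5, 5)); V = V + V.T
t, dt, n = 0.3, 1e-6, 2                      # look at the n-th eigenvalue
E = lambda s: np.linalg.eigvalsh(H0 + s*V)[n]
dE_fd = (E(t + dt) - E(t - dt)) / (2*dt)     # numerical dE_n/dt
w, U = np.linalg.eigh(H0 + t*V)
dE_hf = U[:, n] @ V @ U[:, n]                # <psi_n| V |psi_n>
print(dE_fd, dE_hf)  # agree
```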
Plugging this into (20) we find
$$i\hbar\dot{c}_k = \underbrace{\big(E_k - i\hbar\langle\psi_k|\dot\psi_k\rangle\big)c_k}_{\text{adiabatic approximation}}
- \underbrace{i\hbar\sum_{n\neq k}\frac{\dot{H}_{kn}}{E_n - E_k}\,c_n}_{\text{error term}}. \tag{21}$$
If the part of the equation denoted "error term" did not exist, then $|c_k|$ would be independent of
time, which would confirm the adiabatic theorem. Furthermore, the error term is suppressed by a
factor of $1/\Delta_{nk}$, where $\Delta_{nk} \equiv E_n - E_k$ is the energy gap. So naively it seems that if $\dot{H}$ is small
relative to $\Delta_{nk}$ then the error term should be small. On the other hand, these two quantities do
not even have the same units, so we will have to be careful.
Phases Before we analyze the error term, let's look at the phases we get if the error term were
not there, i.e. suppose that $i\hbar\dot{c}_k = (E_k - i\hbar\langle\psi_k|\dot\psi_k\rangle)c_k$. The solution of this differential equation is
$$c_k(t) = c_k(0)\,e^{i\theta_k(t)}\,e^{i\gamma_k(t)}$$
$$\theta_k(t) \equiv -\frac{1}{\hbar}\int_0^t E_k(t')\,dt' \tag{22a}$$
$$\gamma_k(t) \equiv \int_0^t \nu_k(t')\,dt', \qquad \nu_k(t) \equiv i\langle\psi_k|\dot\psi_k\rangle \tag{22b}$$
The $\theta_k(t)$ term is called the "dynamical" phase and corresponds to exactly what you'd expect from
a Hamiltonian that's always on; namely the phase of state $k$ rotates at rate $-E_k/\hbar$. The $\gamma_k(t)$ is
called the "geometric phase" or "Berry phase" and will be discussed further in the next lecture.
At this point, observe only that it is independent of $\hbar$ and that $\nu_k(t)$ can be seen to be real by
applying $d/dt$ to the equation $\langle\psi_k|\psi_k\rangle = 1$.
Validity of the adiabatic approximation Let's estimate the magnitude of the error term in a
toy model. Suppose that $H(t) = H_0 + \frac{t}{T}V$, where $H_0, V$ are time-independent and $T$ is a constant
that sets the timescale on which $V$ is turned on. Then $\dot{H} = V/T$. An important prediction of
the adiabatic theorem is that the more slowly $H$ changes from $H_0$ to $H_0 + V$, the lower the
probability of transition should be; i.e. increasing $T$ should reduce the error term, even if we
integrate over time from $0$ to $T$.

Let's see how this works. If the gap is always $\gtrsim \Delta$, then we can upper-bound the transition
rate by some matrix element of $\frac{V}{T\Delta}$. This decreases as $T$ and $\Delta$ increase, which is good. But if we
add up this rate of transitions over time $T$, then the total transition amplitude can be as large as
$\sim V/\Delta$. Thus, going more slowly appears not to reduce the total probability of transition!
What went wrong? Well, we assumed that amplitude from state $n$ simply added up in state
$k$. But if the states have different energies, then over time the terms we add will have different
phases, and may cancel out. This can be understood in terms of time-dependent perturbation
theory. Define $\tilde{c}_k(t) = e^{-i\theta_k(t)}c_k(t)$. Observe that
$$\begin{aligned}
i\hbar\frac{d}{dt}\tilde{c}_k(t)
&= \hbar\dot\theta_k(t)e^{-i\theta_k(t)}c_k(t) + i\hbar e^{-i\theta_k(t)}\dot{c}_k(t) \\
&= -E_k(t)e^{-i\theta_k(t)}c_k(t)
+ e^{-i\theta_k(t)}\Big(\big(E_k(t) - \hbar\nu_k(t)\big)c_k(t)
- i\hbar\sum_{n\neq k}\frac{\dot{H}_{kn}}{E_n - E_k}\,c_n(t)\Big) \\
&= -\hbar\nu_k(t)\tilde{c}_k(t)
- i\hbar\sum_{n\neq k}\frac{\dot{H}_{kn}}{E_n - E_k}\,e^{i(\theta_n(t)-\theta_k(t))}\,\tilde{c}_n(t)
\end{aligned}$$
In the last step we have used $c_n(t) = e^{i\theta_n(t)}\tilde{c}_n(t)$. Let's ignore the $\nu_k(t)$ geometric phase term (since
our analysis here is somewhat heuristic). We see that the error term is the same as in (21) but
with an extra phase of $e^{i(\theta_n(t)-\theta_k(t))}$. Analyzing this in general is tricky, but let's suppose that the
energy levels are roughly constant, so we can replace it with $e^{-i\omega_{nk}t}$, where $\omega_{nk} = (E_n - E_k)/\hbar$.
Now when we integrate the contribution of this term from $t = 0$ to $t = T$ we get
$$\int_0^T \frac{\dot{H}_{kn}\,e^{-i\omega_{nk}t}}{E_n - E_k}\,dt
\sim \frac{V}{T}\cdot\frac{e^{-i\omega_{nk}T} - 1}{\hbar\omega_{nk}^2}
\sim \frac{V}{T\hbar\omega_{nk}^2}
\sim \frac{\hbar V}{\Delta^2 T}$$
Finally we obtain that the probability of transition decreases with $T$. This can be thought of as a
rough justification of the adiabatic theorem, but it of course made many simplifying assumptions
and in general it will be only qualitatively correct.
This was focused on a specific transition. In general, adiabatic transitions between levels $m$ and
$n$ are suppressed if
$$\hbar|\dot{H}_{mn}| \ll \Delta^2 = \min_t\big(E_m(t) - E_n(t)\big)^2. \tag{23}$$
Landau-Zener transitions One example that can be solved exactly is a two-level system with
a linearly changing Hamiltonian. Suppose a spin-1/2 particle experiences a magnetic field resulting
in the Hamiltonian
$$H(t) = \Delta\sigma_x + \frac{vt}{T}\sigma_z,$$
for some constants $\Delta, v, T$. The eigenvalues are $\pm\sqrt{\Delta^2 + (vt/T)^2}$. Assuming $v > 0$, then when
$t = -\infty$ the top eigenstate is $|-\rangle$ and the bottom eigenstate is $|+\rangle$. When $t = \infty$ these are reversed;
$|+\rangle$ is the top eigenstate and $|-\rangle$ is the bottom eigenstate. When $t = 0$, the eigenstates are
$\frac{|+\rangle \pm |-\rangle}{\sqrt{2}}$. See diagram on blackboard for energy levels.
Suppose that $\Delta = 0$ and we start in the $|-\rangle$ state at $t = -\infty$. Then at $t = \infty$ we will still be in
the $|-\rangle$ state, with only the phase having changed. But if $\Delta > 0$ and we move slowly enough then
the adiabatic approximation says we will remain in the top eigenstate, which for $t = \infty$ will be
$|+\rangle$. Thus, the presence of a very small transverse field can completely change the state if we move
slowly enough through it.
In this case, the error term in the adiabatic approximation can be calculated rather precisely
and is given by the Landau-Zener formula (proof omitted):
$$\Pr[\text{transition}] \approx \exp\left(-\frac{2\pi^2\Delta^2 T}{\hbar v}\right).$$
Observe that it has all the qualitative features that we expect in terms of dependence on $\Delta, v, T$,
but that it corresponds to a rate of transitions exponentially smaller than our above estimate from
first-order perturbation theory. Note that here "transition" refers to transitions between energy
levels. Thus starting in $|-\rangle$ and ending in $|+\rangle$ corresponds to "no transition" while ending in $|-\rangle$
would correspond to "transition," since it means starting in the higher energy level and ending in
the lower energy level.
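The qualitative behavior (slower sweeps give fewer transitions) can be seen by integrating the Schrödinger equation for this Hamiltonian over a finite sweep. The sketch below uses $\hbar = 1$ and arbitrary values of $\Delta$ and $v$, with the field swept over $t \in [-T, T]$ so that the $\sigma_z$ term dominates at the endpoints:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Landau-Zener sweep for H(t) = Delta*sigma_x + (v*t/T)*sigma_z (hbar = 1).
# Start in the instantaneous top eigenstate at t = -T; "transition" means
# ending outside the top eigenstate at t = +T.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Delta, v = 0.2, 5.0

def p_transition(T):
    H = lambda t: Delta*sx + (v*t/T)*sz
    top = lambda t: np.linalg.eigh(H(t))[1][:, 1]   # top instantaneous eigenstate
    sol = solve_ivp(lambda t, y: -1j*(H(t) @ y), (-T, T),
                    top(-T), rtol=1e-8, atol=1e-10)
    return 1 - abs(np.vdot(top(T), sol.y[:, -1]))**2

print(p_transition(3.0), p_transition(30.0))  # slower sweep -> smaller probability
```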
3.2 Berry phase
Recall that the adiabatic theorem states that if we start in state $|\psi_n(0)\rangle$ and change the Hamiltonian
slowly, then we will end in approximately the state
$$e^{i\theta_n(t)}e^{i\gamma_n(t)}|\psi_n(t)\rangle$$
$$\theta_n(t) \equiv -\frac{1}{\hbar}\int_0^t E_n(t')\,dt' \tag{24a}$$
$$\gamma_n(t) \equiv \int_0^t \nu_n(t')\,dt', \qquad \nu_n(t) \equiv i\langle\psi_n|\dot\psi_n\rangle \tag{24b}$$
The phase γn(t) is called the geometric phase, or the Berry phase, after Michael Berry’s 1984
explanation of it.
Do the phases in the adiabatic approximation matter? This is a somewhat subtle question. Of
course an overall phase cannot be observed, but a relative phase can lead to observable interference
effects. The phases in (22) depend on the eigenstate label $n$, and so in principle interference is
possible. But solutions to the equation $H(t)|\psi_n(t)\rangle = E_n(t)|\psi_n(t)\rangle$ are not uniquely defined, and
we can in general redefine $|\psi_n(t)\rangle$ by multiplying by a phase that can depend on both $n$ and $t$.
To see how this works, let us consider the example of a spin-1/2 particle in a spatially varying
magnetic field. If the particle moves slowly, we can think of the position r⃗(t) as a classical variable
causing the spin to experience the Hamiltonian H(r⃗(t)). This suggests that we might write the
state as a function of r⃗(t), as |ψn(r⃗(t))⟩ or even |ψn(r⃗)⟩. If the particle’s position is a classical
function of time, then we need only consider interference between states with the same value of r⃗,
and so we can safely change |ψn(r⃗)⟩ by any phase that is a function of n and r⃗.
In fact, even if the particle were in a superposition of positions as in the two-slit experiment,
then we could still only see interference effects between branches of the wavefunction with the same
value of r⃗. Thus, again we can define an arbitrary (n, r⃗)-dependent phase.
More generally, suppose that H depends on some set of coordinates R⃗(t) = (R1(t), . . . , RN(t)).
The eigenvalue equation (18) becomes H(R⃗)|ψn(R⃗)⟩ = En(R⃗)|ψn(R⃗)⟩ where we leave the time-
dependence of R⃗ implicit. This allows us to compute even in situations where R⃗ is in a superposition
of coordinates at a given time t.
To express γn(t) in terms of |ψn(R⃗)⟩, we compute
\[ \frac{d}{dt} |\psi_n(\vec R)\rangle = \sum_{i=1}^{N} \frac{d|\psi_n(\vec R)\rangle}{dR_i} \frac{dR_i}{dt} = \left( \vec\nabla_{\vec R} |\psi_n(\vec R)\rangle \right) \cdot \frac{d\vec R}{dt} \]
\[ \gamma_n(t) = i \int_0^t \langle \psi_n | \vec\nabla_{\vec R} | \psi_n \rangle \cdot \frac{d\vec R}{dt}\, dt = \int_{\vec R(0)}^{\vec R(t)} i \langle \psi_n | \vec\nabla_{\vec R} | \psi_n \rangle \cdot d\vec R \]
The answer is in terms of a line integral, which depends only on the path and not on time (unlike
the dynamical phase).
How does this change if we reparameterize |ψn(R⃗)⟩? Suppose we replace |ψn(R⃗)⟩ with
|ψ̃n(R⃗)⟩ = e^{−iβ(R⃗)}|ψn(R⃗)⟩. Then the Berry phase becomes
\[ \tilde\gamma_n(t) = i \int_{\vec R(0)}^{\vec R(t)} \langle \tilde\psi_n(\vec R) | \vec\nabla_{\vec R} | \tilde\psi_n(\vec R) \rangle \cdot d\vec R = i \int_{\vec R(0)}^{\vec R(t)} \langle \psi_n(\vec R) | e^{i\beta(\vec R)}\, \vec\nabla_{\vec R}\, e^{-i\beta(\vec R)} | \psi_n(\vec R) \rangle \cdot d\vec R \]
\[ = \gamma_n(t) + \beta(\vec R(t)) - \beta(\vec R(0)) \]
Changing β only changes phases as a function of the endpoints of the path. Thus, we can eliminate
the phase for any fixed path with R⃗(t) ≠ R⃗(0), but not simultaneously for all paths. In particular,
if a particle takes two different paths to the same point, the difference in their phases cannot be
redefined away. More simply, suppose the path is a loop, so that R⃗(0) = R⃗(t). Then regardless of
β we will have γn = γ̃n. This suggests an important point about the Berry phase, which is that it
is uniquely defined on closed paths, but not necessarily open ones.
Suppose that R⃗(t) follows a closed curve C. Then we can write
\[ \gamma_n[C] = \oint \underbrace{i \langle \psi_n | \vec\nabla_{\vec R} | \psi_n \rangle}_{\vec A_n(\vec R)} \cdot d\vec R = \oint \vec A_n(\vec R) \cdot d\vec R, \]
where we have defined the Berry connection A⃗n(R⃗) = i⟨ψn|∇⃗_{R⃗}|ψn⟩. Note that it is real for the
same reason that νn(t) is real.
In some cases, we can simplify γn[C]. If N = 1 then the integral is always zero, since the line
integral of a closed curve in 1-d is always zero. In 2-d or 3-d we can use Green’s theorem or Stokes’s
theorem respectively to simplify the computation of γn[C]. Let’s focus on 3-d, because it contains
2-d as a special case. Then
if S denotes the surface enclosed by curve C, we have
\[ \oint_C \vec A_n(\vec R) \cdot d\vec R = \iint_S \left( \vec\nabla_{\vec R} \times \vec A_n \right) \cdot d\vec a \equiv \iint_S \vec D_n \cdot d\vec a. \]
Here we define the Berry curvature D⃗n = ∇⃗_{R⃗} × A⃗n and the infinitesimal unit of area d⃗a. We can
write D⃗n in a more symmetric way as follows:
\[ (\vec D_n)_i = i \sum_{j,k} \epsilon_{ijk}\, \frac{d}{dR_j} \langle \psi_n | \frac{d}{dR_k} | \psi_n \rangle = i \sum_{j,k} \epsilon_{ijk} \left( \left\langle \frac{d\psi_n}{dR_j} \,\Big|\, \frac{d\psi_n}{dR_k} \right\rangle + \langle \psi_n | \frac{d^2}{dR_j\, dR_k} | \psi_n \rangle \right). \]
Because ε_{ijk} is antisymmetric in j, k and d²/(dR_j dR_k) is symmetric, the second term vanishes and we
are left with
\[ \vec D_n = i \left( \vec\nabla_{\vec R} \langle \psi_n | \right) \times \left( \vec\nabla_{\vec R} | \psi_n \rangle \right). \qquad (25) \]
Example: electron spin in a magnetic field. The Hamiltonian is
\[ H = \mu\, \vec\sigma \cdot \vec B, \qquad \mu = \frac{e\hbar}{mc}. \]
Suppose that B⃗ = B r⃗ where B is fixed and we slowly trace out a closed path in the unit sphere
with r⃗. Suppose that we start in the state
\[ |\vec r\,; +\rangle = |\vec r\,\rangle = \begin{pmatrix} \cos(\theta/2) \\ e^{i\phi} \sin(\theta/2) \end{pmatrix} \qquad\text{with}\qquad \vec r = \begin{pmatrix} \sin\theta \cos\phi \\ \sin\theta \sin\phi \\ \cos\theta \end{pmatrix}. \]
Then the adiabatic theorem states that we will remain in the state |r⃗⟩ at later points, up to an
overall phase. To compute the geometric phase observe that
\[ \vec\nabla = \hat r\, \frac{d}{dr} + \hat\theta\, \frac{1}{r} \frac{d}{d\theta} + \hat\phi\, \frac{1}{r \sin\theta} \frac{d}{d\phi}. \]
Since \(\frac{d}{dr}|\vec r\,\rangle = 0\) we have
\[ \vec\nabla |\vec r\,\rangle = \frac{1}{2r} \underbrace{\begin{pmatrix} -\sin(\theta/2) \\ e^{i\phi} \cos(\theta/2) \end{pmatrix}}_{|-\vec r\,\rangle} \hat\theta + \frac{1}{r \sin\theta} \begin{pmatrix} 0 \\ i e^{i\phi} \sin(\theta/2) \end{pmatrix} \hat\phi. \]
This first term will not contribute to the Berry connection, and so we obtain
\[ \vec A_+(\vec r) = i \langle \vec r\, | \vec\nabla | \vec r\, \rangle = -\frac{1}{r}\, \frac{\sin^2(\theta/2)}{\sin\theta}\, \hat\phi. \]
Finally the Berry curvature is
\[ \vec D_+ = \vec\nabla \times \vec A_+ = \frac{1}{r \sin\theta} \frac{d}{d\theta} \left( \sin\theta\, A_{+,\phi} \right) \hat r = -\frac{1}{r^2 \sin\theta} \frac{d}{d\theta} \sin^2(\theta/2)\, \hat r = -\frac{1}{2r^2}\, \hat r. \]
For this last computation, observe that \(\frac{d}{d\theta} \sin^2(\theta/2) = \frac{d}{d\theta} \frac{1 - \cos\theta}{2} = \frac{\sin\theta}{2}\). We can now
compute the Berry phase as
\[ \gamma_+[C] = \iint_S \vec D_+ \cdot \underbrace{d\vec a}_{r^2\, d\Omega\, \hat r} = -\frac{1}{2}\, \Omega. \]
Here dΩ is a unit of solid angle, and Ω is the solid angle contained by C.
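The result γ₊[C] = −Ω/2 is easy to verify numerically for a loop of constant polar angle θ₀ on the unit sphere: the line integral of A⃗₊ around the loop, the surface integral of D⃗₊ over the enclosed cap, and −Ω/2 with Ω = 2π(1 − cos θ₀) should all agree (a small sketch; θ₀ is an arbitrary test value):

```python
import math

theta0 = 1.0  # arbitrary polar angle of the loop, in radians (r = 1)

# Line integral of A_+ = -(sin^2(theta/2)/sin(theta)) phi-hat around the loop;
# the integrand is constant, with line element sin(theta0) dphi.
line = 2 * math.pi * (-math.sin(theta0 / 2) ** 2 / math.sin(theta0)) * math.sin(theta0)

# Surface integral of D_+ = -(1/2r^2) r-hat over the enclosed cap,
# with da = r^2 sin(theta) dtheta dphi r-hat (midpoint Riemann sum).
n = 100_000
dtheta = theta0 / n
surface = 2 * math.pi * sum(-0.5 * math.sin((i + 0.5) * dtheta) * dtheta
                            for i in range(n))

omega = 2 * math.pi * (1 - math.cos(theta0))  # solid angle of the cap
print(line, surface, -omega / 2)  # all three agree (up to discretization)
```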
What if we used a different parameterization for |r⃗⟩? An equally valid choice is
\[ |\vec r\,\rangle = \begin{pmatrix} e^{-i\phi} \cos(\theta/2) \\ \sin(\theta/2) \end{pmatrix}. \qquad (26) \]
If we carry through the same computation we find that now
\[ \vec A_+ = \frac{1}{r}\, \frac{\cos^2(\theta/2)}{\sin\theta}\, \hat\phi \qquad\text{and}\qquad \vec D_+ = -\frac{1}{2r^2}\, \hat r. \]
We see that the \(\frac{d}{d\theta} \sin^2(\theta/2)\) was replaced by a \(\frac{d}{d\theta} \left( -\cos^2(\theta/2) \right)\), which gives the same answer. This
is an example of the general principle that the Berry connection is sensitive to our choice of phase
convention but the Berry curvature is not. Accordingly the Berry curvature can be observed in
experiments.
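This gauge insensitivity can be checked numerically: writing a(θ) = r A₊,φ for each parameterization, the radial curvature at r = 1 is (1/sin θ) d(sin θ · a)/dθ, and a central-difference derivative gives −1/2 for both choices (a quick sketch):

```python
import math

def radial_curvature(a_phi, theta, h=1e-6):
    # (1/sin(theta)) d/dtheta [ sin(theta) * a_phi(theta) ] at r = 1,
    # evaluated by a central difference.
    g = lambda t: math.sin(t) * a_phi(t)
    return (g(theta + h) - g(theta - h)) / (2 * h) / math.sin(theta)

a_first = lambda t: -math.sin(t / 2) ** 2 / math.sin(t)  # first parameterization
a_second = lambda t: math.cos(t / 2) ** 2 / math.sin(t)  # second parameterization

for theta in (0.4, 1.0, 2.0):
    print(radial_curvature(a_first, theta), radial_curvature(a_second, theta))  # both ~ -0.5
```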
What if we started instead with the state |r⃗; −⟩ = |−r⃗⟩? Then a similar calculation would find
that
\[ \gamma_-[C] = \frac{1}{2}\, \Omega. \]
Since the two states pick up different phases, this can be seen experimentally if we start in a
superposition of |r⃗; +⟩ and |r⃗; −⟩.
More generally, if we have a spin-s particle, then its z component of angular momentum can be
anything in the range −s ≤ m ≤ s and one can show that
γm[C] = −mΩ.
There is much more that can be said about Berry’s phase. An excellent treatment is found in the
1989 book Geometric phases in physics by Wilczek and Shapere. There is a classical analogue called
Hannay’s phase. Berry’s phase also has applications to molecular dynamics and to understanding
electrical and magnetic properties of Bloch states. We will see Berry’s phase again when we discuss
the Aharonov-Bohm effect in a few weeks.
3.3 Neutrino oscillations and the MSW effect
In this section we will discuss the application of the adiabatic theorem to a phenomenon involving
solar neutrinos. The name neutrino means “little neutral one” and neutrinos are spin-1/2, electri-
cally neutral, almost massless and very weakly interacting particles. Neutrinos were first proposed
by Pauli in 1930 to explain the apparent violation of energy, momentum and angular momentum
conservation in beta decay. (Since beta decay involves the decay of a neutron into a proton, an
electron and an electron antineutrino, but only the proton and electron could be readily detected,
there was an apparent anomaly.)
Their almost complete lack of interaction with matter (it takes 100 lightyears of lead to ab-
sorb 50% of a beam of neutrinos) has made many properties of neutrinos remain mysterious.
Corresponding to the charged leptons e−/e+ (electron/positron), µ−/µ+ (muon/antimuon) and
τ−/τ+ (tau/antitau), neutrinos (aka neutral leptons) also exist in three flavors: νe, νµ, ντ, with
antineutrinos denoted ν̄e, ν̄µ, ν̄τ. Most, but not all, interactions preserve lepton number, defined
to be the number of leptons minus the number of antileptons. Indeed, most interactions preserve
#e− + #νe − #e+ − #ν̄e (electronic number) and similarly for muons and taus. However, these
quantities are not conserved by neutrino oscillations. Even the total lepton number is violated by
a phenomenon known as the chiral anomaly.
Solar neutrinos Solar neutrinos are produced via the p-p chain reaction, which converts (via a
series of reactions)
\[ 4\,{}^1\mathrm{H} = 4p^+ + 4e^- \;\mapsto\; \underbrace{2p^+ + 2n + 2e^-}_{{}^4\mathrm{He}} + 2\nu_e. \]
The resulting neutrinos are produced with energies in the range 0.5–20 MeV. Almost all of the
neutrinos produced in the sun are electron neutrinos.
Detection Neutrinos can be detected via inverse beta decay, corresponding to the reaction
\[ A + \nu_e \mapsto A' + e^-, \]
where A, A′ are different atomic nuclei. For solar neutrinos this will only happen for electron
neutrinos because the reaction A + νµ ↦ A′ + µ− will only happen for mu neutrinos carrying
at least 108 MeV of kinetic energy. So it is easiest to observe electron neutrinos. However other
flavors of neutrinos can also be detected via more complicated processes, such as neutrino-mediated
dissociation of deuterium.
Observations of solar neutrinos The first experiment to detect cosmic neutrinos was the 1968
Homestake experiment, led by Ray Davis, which used 100,000 gallons of dry-cleaning fluid (C2Cl4)
to detect neutrinos via the process ³⁷Cl + νe ↦ ³⁷Ar + e−. However, this only found about 1/3 as
many neutrinos as standard solar models predicted.
In 2002, the Sudbury Neutrino Observatory (SNO) measured the total neutrino flux and found
that once mu- and tau-neutrinos were accounted for, the total number of neutrinos was correct.
Thus, somehow electron neutrinos in the sun had become mu and tau neutrinos by the time they
reached the Earth.
Neutrino oscillations The first high-confidence observations of neutrino oscillations were by
the Super Kamiokande experiment in 1998, which could distinguish electron neutrinos from muon
neutrinos. Since neutrinos oscillate, they must have energy, which means they must have mass
(if we wish to exclude more speculative theories, such as violations of the principle of relativity).
This means that a neutrino, in its rest frame, has a Hamiltonian with eigenstates |ν1⟩, |ν2⟩, |ν3⟩
that in general will be different from the flavor eigenstates |νe⟩, |νµ⟩, |ντ⟩ that participate in weak-
interaction processes such as beta decay.
We will treat this in a simplified way by neglecting |ν3⟩ and |ντ⟩. So the Hamiltonian can be
modeled as (in the |νe⟩, |νµ⟩ basis)
\[ H = \begin{pmatrix} E_e & \Delta \\ \Delta & E_\mu \end{pmatrix}, \qquad (27) \]
where Ee, Eµ are the energies (possibly equal) of the electron and muon neutrinos and ∆ represents
a mixing term. Unfortunately, plugging in known parameter estimates for the terms in (27) would
predict that roughly a 0.57 fraction of solar neutrinos would end up in the |νe⟩ state, so this still
cannot fully explain our observations.
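One way to see the prediction of (27) concretely is to evolve an initial |νe⟩ under a constant Hamiltonian of this form and time-average the survival probability; it comes out to 1 − ½ sin²(2θ), where tan(2θ) = 2∆/(Eµ − Ee) anticipates the mixing angle introduced below. A numerical sketch (with made-up parameter values and ℏ = 1, not real neutrino data):

```python
import numpy as np

# Hamiltonian of the form (27) with illustrative (made-up) parameters; hbar = 1.
Ee, Emu, Delta = 1.0, 1.3, 0.25
H = np.array([[Ee, Delta], [Delta, Emu]])

evals, evecs = np.linalg.eigh(H)
w = np.abs(evecs[0, :]) ** 2          # |<nu_e|nu_j>|^2 for the two mass eigenstates

# Survival amplitude <nu_e|psi(t)> = sum_j w_j exp(-i E_j t); average |.|^2 over time.
ts = np.linspace(0.0, 5000.0, 200_001)
amp = (w[:, None] * np.exp(-1j * np.outer(evals, ts))).sum(axis=0)
P_avg = np.mean(np.abs(amp) ** 2)

two_theta = np.arctan2(2 * Delta, Emu - Ee)     # mixing angle defined by (27)
print(P_avg, 1 - 0.5 * np.sin(two_theta) ** 2)  # these agree
```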
The MSW effect It turns out that this puzzle can be resolved by a clever use of the adiabatic
theorem. Electron neutrinos scatter off of electrons and thus the Hamiltonian in (27) should be
modified to add a term proportional to the local density of electrons. Thus after some additional
rearranging, we obtain
\[ H = E_0 + \begin{pmatrix} -\Delta_0 \cos(2\theta) & \Delta_0 \sin(2\theta) \\ \Delta_0 \sin(2\theta) & \Delta_0 \cos(2\theta) \end{pmatrix} + \begin{pmatrix} C N_e & 0 \\ 0 & 0 \end{pmatrix}, \qquad (28) \]
where ∆0, θ come from (27) (θ ≈ π/6 is the “mixing angle” that measures how far the flavor states
are from being eigenstates), C is a constant and Ne = Ne(r⃗) is the local electron density. If the
neutrino is traveling at speed ≈ c in direction x̂, then r⃗ ≈ ct x̂. Thus we can think of Ne as
time-dependent. We then can rewrite H as
\[ \text{const} \cdot I + \left( \frac{C N_e(t)}{2} - \Delta_0 \cos(2\theta) \right) \sigma_z + \Delta_0 \sin(2\theta)\, \sigma_x. \qquad (29) \]
This looks like the adiabatic Landau-Zener transition we studied in the last lecture, although here
the σz term is no longer being swept from −∞ to +∞. Instead, near the center of the sun, Ne(0)
is large and the eigenstates are roughly |νe⟩, |νµ⟩. For large t, the neutrinos are in vacuum, where
their eigenstates are |ν1⟩, |ν2⟩.
If the conditions of the adiabatic theorem are met, then neutrinos that start in state |νe⟩ (in
the center of the sun) will emerge in state |ν2⟩ (at the surface of the sun). They will then remain
in this state as they propagate to the Earth. It turns out that this holds for neutrinos of energies
≳ 2 MeV. In this case, the probability of observing the neutrino on Earth in the |νe⟩ state (thinking
of neutrino detectors as making measurements in the flavor basis) is sin²(θ), which gives more or
less the observed value of 0.31.
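This adiabatic conversion can be sketched in a toy simulation of (29): start in |νe⟩, let Ne(t) decay slowly to zero, and step the state with the exponential of the midpoint Hamiltonian; the final probability of remaining |νe⟩ comes out ≈ sin²θ (the parameter values below are made up for illustration, not a solar model; ℏ = 1):

```python
import numpy as np

theta, Delta0 = np.pi / 6, 1.0             # toy mixing angle and coupling; hbar = 1
tau, CNe0 = 30.0, 400.0                    # made-up electron-density decay profile

def h_vec(t):
    # H(t) = hx sigma_x + hz(t) sigma_z, as in (29) (constant term dropped).
    return np.array([Delta0 * np.sin(2 * theta), 0.0,
                     0.5 * CNe0 * np.exp(-t / tau) - Delta0 * np.cos(2 * theta)])

sig = np.array([[[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]])

psi = np.array([1.0, 0.0], dtype=complex)  # |nu_e> deep inside the sun
t, dt = 0.0, 0.005
while t < 300.0:
    h = h_vec(t + dt / 2)                  # midpoint Hamiltonian for this step
    hn = np.linalg.norm(h)
    # exp(-i h.sigma dt) = cos(|h|dt) I - i sin(|h|dt) (h/|h|).sigma
    U = np.cos(hn * dt) * np.eye(2) - 1j * np.sin(hn * dt) * np.tensordot(h / hn, sig, 1)
    psi = U @ psi
    t += dt

print(abs(psi[0]) ** 2)  # ~ sin^2(theta) = 0.25: the adiabatic survival probability
```

Making the decay faster (smaller tau) breaks adiabaticity and spoils the agreement, which is exactly the energy dependence mentioned above.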
3.4 Born-Oppenheimer approximation
Consider a system with N nuclei and n electrons. Write the Hamiltonian as
\[ H = -\sum_{j=1}^{N} \frac{\hbar^2}{2M_j} \vec\nabla_{\vec R_j}^2 + H_{el}(\vec R). \qquad (30) \]
Here R⃗ = (R⃗1, . . . , R⃗N) denotes the positions of the N nuclei and H_el(R⃗) includes all the other
terms, i.e. kinetic energy of the electrons as well as the potential energy terms which include
electron-electron, nuclei-nuclei and electron-nuclei interactions. Let r denote all of the coordinates
of the electrons. While (30) may be too hard to solve exactly, we can use a version of the adiabatic
theorem to derive an approximate solution.
We will consider a product ansatz:
\[ \Psi(R, r) = \gamma(R)\, \Phi_R(r), \qquad (31) \]
where the many-electron wavefunction is an eigenstate of the reduced Hamiltonian:
\[ H_{el}(R)\, \Phi_R(r) = E_{el}(R)\, \Phi_R(r). \qquad (32) \]
(Typically this eigenstate will be simply the ground state.) This is plausible because of the adiabatic
theorem. If the nuclei move slowly then as this happens the electrons can rapidly adjust to remain
in their ground states. Then once we have solved (32) we might imagine that we can substitute
back to solve for the nuclear eigenstates. We might guess that they are solutions to the following
eigenvalue equation
\[ \left( -\sum_{j=1}^{N} \frac{\hbar^2}{2M_j} \vec\nabla_{\vec R_j}^2 + E_{el}(\vec R) \right) \gamma(\vec R) = E\, \gamma(\vec R). \qquad (33) \]
However, this is not quite right. If we apply ∇⃗_{R⃗j} to (31) we obtain
\[ \vec\nabla_{\vec R_j} \Psi(\vec R, r) = \left( \vec\nabla_{\vec R_j} \gamma(\vec R) \right) \Phi_R(r) + \gamma(\vec R)\, \vec\nabla_{\vec R_j} \Phi_R(r). \]
Using the adiabatic approximation we neglect the overlap of ∇⃗_{R⃗j} Ψ(R⃗, r) with all states orthogonal
to |Φ_R⟩. Equivalently we can multiply on the left by ⟨Φ_R|. This results in
\[ \int d^{3n}r\, \Phi_R(r)^*\, \vec\nabla_{\vec R_j} \Psi(\vec R, r) = \vec\nabla_{\vec R_j} \gamma(\vec R) + \gamma(\vec R) \int d^{3n}r\, \Phi_R(r)^*\, \vec\nabla_{\vec R_j} \Phi_R(r) = \left( \vec\nabla_{\vec R_j} - i \vec A_j \right) \gamma(\vec R), \qquad (34) \]
where A⃗j is the familiar Berry connection
\[ \vec A_j = i \langle \Phi_R | \vec\nabla_{\vec R_j} | \Phi_R \rangle. \qquad (35) \]
We conclude that the effective Hamiltonian actually experienced by the nuclei should be
\[ H_{\text{eff}} = -\sum_{j=1}^{N} \frac{\hbar^2}{2M_j} \left( \vec\nabla_{\vec R_j} - i \vec A_j \right)^2 + E_{el}(\vec R). \qquad (36) \]
We will see these A⃗j terms again when we discuss electromagnetism later in the semester. In
systems of nuclei and atoms we need at least three nuclei before the A⃗j terms can have an effect,
for the same reason that we do not see a Berry phase unless we trace out a loop in a parameter
space of dimension ≥ 2.
The Born-Oppenheimer approximation applies not just to nuclei and electrons but whenever we can divide a
system into fast and slow-moving degrees of freedom; e.g. we can treat a proton as a single particle
and ignore (or “integrate out”) the motion of the quarks within the proton. This is an important
principle that we often take for granted. Some more general versions of Born-Oppenheimer are
called “effective field theory” or the renormalization group.
4 Scattering
4.1 Preliminaries
One of the most important types of experiments in quantum mechanics is scattering. A beam of
particles is sent into a potential and scatters off it in various directions. The angular distribution
of scattered particles is then measured.
In 8.04 we studied scattering in 1-d, and here we will
study scattering in 3-d. This is an enormous field, and we will barely scratch the surface of it. In
particular, we will focus on the following special case:
• Elastic scattering. The outgoing particle has the same energy as the incoming particle. This
means we can model the particles being scattered off semi-classically, as a static potential
V ((cid:126)r). The other types of scattering are inelastic scattering, which can involve transformation
of the particles involved or creation of new particles, and absorption, in which there is no
outgoing particle.
24
• Non-relativistic scattering. This is by contrast with modern accelerators such as the LHC.
However, non-relativistic scattering is still relevant to many cutting-edge experiments, such
as modern search for cosmic dark matter (which is believed to be traveling at non-relativistic
speeds).
Even this special case can teach us a lot of interesting physics. For example, Rutherford scattering1
showed that atoms have nuclei, thereby refuting the earlier “plum pudding” model of atoms. This
led to a model of atoms in which electrons orbit nuclei like planets, and resolving the problems of
this model in turn was one of the early successes of quantum mechanics.
Scattering cross section: In scattering problems it is important to think about which physical
quantities can be observed. The incoming particles have a flux that is measured in terms of number
of particles per unit area per unit time, i.e. d²N_in/(dA dt). If we just count the total number of scattered
particles, then this is measured in terms of particles per time: dN_scat/dt. The ratio of these quantities
has units of area and is called the scattering cross section:
\[ \sigma = \frac{dN_{\text{scat}}/dt}{d^2 N_{\text{in}}/(dA\, dt)}. \qquad (37) \]
To get a sense of why these are the right units, consider scattering of classical particles off of a
classical hard sphere of radius a. If a particle hits the sphere it will scatter, and if it does not hit
the sphere it will not scatter. Assume that the beam of particles is much wider than the target,
i.e. each particle has trajectory r⃗ = (x₀, y₀, z₀ + vt) with √(x₀² + y₀²) given by a distribution with
standard deviation that is ≫ a. The particles that scatter will be the ones with √(x₀² + y₀²) ≤ a,
which corresponds to a region with area πa², which is precisely the cross-sectional area of the
sphere. Since we have dN_scat/dt = (d²N_in/(dA dt)) πa², it follows that σ = πa². This simple example is good to
keep in mind to have intuition about the meaning of scattering cross sections.
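The hard-sphere count can also be mimicked with a tiny Monte Carlo, forming exactly the ratio in (37): sample impact points uniformly over a wide beam, count hits, and divide by the beam flux (a sketch; the beam radius R ≫ a is an arbitrary choice):

```python
import math, random

random.seed(0)
a, R, N = 1.0, 10.0, 200_000      # sphere radius, beam radius (>> a), particles

hits = 0
for _ in range(N):
    # Sample an impact point (x0, y0) uniformly over the beam disk of radius R.
    while True:
        x0, y0 = random.uniform(-R, R), random.uniform(-R, R)
        if x0 * x0 + y0 * y0 <= R * R:
            break
    if x0 * x0 + y0 * y0 <= a * a:  # the particle hits the sphere and scatters
        hits += 1

flux = N / (math.pi * R * R)        # particles per unit area (per unit time)
sigma = hits / flux                 # the ratio (37)
print(sigma)                        # ~ pi * a^2
```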
Differential cross-section: We can get more information out of an experiment by measuring
the angular dependence of the scattered particles. The number of scattered particles can then be
measured in terms of a rate per solid angle, i.e. d²N_scat/(dΩ dt). The resulting differential cross-section dσ/dΩ
is defined to be
\[ \frac{d\sigma}{d\Omega}(\theta, \phi) \equiv \frac{d^2 N_{\text{scat}}/(d\Omega\, dt)}{d^2 N_{\text{in}}/(dA\, dt)}. \qquad (38) \]
Here the spherical coordinates (θ, φ) denote the direction of the outgoing particles. It is conventional
to define the axes so that the incoming particles have momentum in the ẑ direction, so θ is the angle
between the scattered particle and the incoming beam (i.e. θ = 0 means no change in direction
and θ = π means backwards scattering) while φ is the azimuthal angle. Integrating over all angles
gives us the full cross-section, i.e.
\[ \sigma = \int \frac{d\sigma}{d\Omega}\, d\Omega. \qquad (39) \]
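As a quick instance of (39): for the classical hard sphere the differential cross-section is the standard isotropic result dσ/dΩ = a²/4 (quoted here without derivation), and a direct quadrature over solid angle recovers σ = πa² (a sketch):

```python
import math

a = 1.0
dsigma_dOmega = lambda theta, phi: a * a / 4  # classical hard sphere: isotropic

# sigma = int dOmega (dsigma/dOmega), with dOmega = sin(theta) dtheta dphi.
n = 400
dtheta, dphi = math.pi / n, 2 * math.pi / n
sigma = sum(dsigma_dOmega((i + 0.5) * dtheta, (j + 0.5) * dphi)
            * math.sin((i + 0.5) * dtheta) * dtheta * dphi
            for i in range(n) for j in range(n))
print(sigma, math.pi * a * a)  # agree
```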
Quantum mechanical scattering: Assume that the incoming particle states are wavepackets
that are large relative to the target. This allows us to approximate the incoming particles as plane
wave, i.e.
\[ \psi_{\text{in}} \propto e^{ikz - iEt/\hbar}, \qquad (40) \]
¹Rutherford scattering is named after Ernest Rutherford for his 1911 explanation of the 1909 experiment which
was carried out by Geiger and Marsden.
where E = ℏ²k²/(2m). Here we need to assume that the potential V(r⃗) → 0 as r → ∞ so that plane
waves are solutions to the Schrödinger equation for large r. For the scattered wave, we should seek
solutions satisfying, as r → ∞,
\[ -\frac{\hbar^2}{2m} \vec\nabla^2 \psi_{\text{scat}} = E\, \psi_{\text{scat}}, \qquad (41) \]
or, in spherical coordinates,
\[ \left( -\frac{1}{r} \frac{d^2}{dr^2}\, r + \frac{\hat L^2}{r^2} \right) \psi_{\text{scat}} = k^2\, \psi_{\text{scat}}. \qquad (42) \]
A general solution can be written as a superposition of separable solutions. Separable solutions to
(42) can in turn be written as
\[ r\, \psi(r, \theta, \phi) = u(r)\, f(\theta, \phi), \qquad (43) \]
in terms of some functions u(r), f(θ, φ). In terms of these (42) becomes
\[ u'' f - \underbrace{\frac{1}{r^2}\, u\, \hat L^2 f}_{\to\, 0 \text{ as } r \to \infty} + k^2 u f = 0. \qquad (44) \]
Thus, for large r, we can cancel the f from each side and simply have u'' = −k²u, which has
solutions e^{±ikr}. The e^{ikr} solution corresponds to outgoing waves and the e^{−ikr} solution to incoming
waves. A scattered wave should be entirely outgoing, and so we obtain
\[ \psi_{\text{scat}} \xrightarrow{\; r \to \infty \;} \frac{f(\theta, \phi)}{r}\, e^{ikr - iEt/\hbar}, \qquad (45) \]
or more precisely
\[ \psi_{\text{scat}} = \frac{f(\theta, \phi)}{r}\, e^{ikr - iEt/\hbar} + O\!\left( \frac{1}{r^2} \right). \qquad (46) \]
Because the scattering is elastic, the k and E here are the same as for the incoming wave.
Time-independent formulation: As with 1-d scattering problems, the true scattering process
is of course time-dependent, but the quantities of interest (transmission/reflection in 1-d, differential
cross section in 3-d) can be extracted by solving the time-independent Schrödinger equation with
suitable boundary conditions. In the true process, the incoming wave should really be a wavepacket
with well-defined momentum ≈ (0, 0, k) and therefore delocalized position. The outgoing wave will
be a combination of an un-scattered part, which looks like the original wave packet continuing
forward in the ẑ direction, and a scattered part, which is a spherical outgoing wavepacket with a
f(θ, φ) angular dependence. However, we can treat the incoming wave instead as the static plane
wave e^{ikz} and the scattered wave instead as the static outgoing wave (f(θ, φ)/r) e^{ikr}. (Both of these
are when r → ∞.) Thus we can formulate the entire scattering problem as a time-independent
boundary-value problem. The high-level strategy is then to solve the Schrödinger equation subject
to the boundary conditions
\[ \psi(\vec r\,) \xrightarrow{\; r \to \infty \;} e^{ikz} + \frac{f(\theta, \phi)}{r}\, e^{ikr}. \qquad (47) \]
This is analogous to what we did in 1-D scattering, where the boundary conditions were that ψ(x)
should approach eikx + Re−ikx for x → −∞ and should approach T eikx for x → ∞. As in the 1-D
case, we have to remember that this equation is an approximation for a time-dependent problem.
As a result when calculating observable quantities we have to remember not to include interference
terms between the incoming and reflected waves, since these never both exist at the same point in
time.
Relation to observables. In the 1-D case, the probabilities of transmission and reflection are
|T|² and |R|² respectively. In the 3-d case, the observable quantities are the differential cross
sections dσ/dΩ. To compute these, first evaluate the incoming flux
\[ \vec S_{\text{in}} = \frac{\hbar}{m} \operatorname{Im}\left( \psi_{\text{in}}^*\, \vec\nabla \psi_{\text{in}} \right) = \frac{\hbar}{m} \operatorname{Im}\left( e^{-ikz}\, \vec\nabla\, e^{ikz} \right) = \frac{\hbar k}{m}\, \hat z = v\, \hat z. \qquad (48) \]
In the last step we have used v = ℏk/m. The units here are off because technically the wavefunction
should be not e^{ikz} but something more like e^{ikz}/√V, where V has units of volume. But neglecting
this factor in both the numerator and denominator of (38) will cause this to cancel out. Keeping
that in mind, we calculate the denominator to be
\[ \frac{d^2 N_{\text{in}}}{dA\, dt} = |\vec S_{\text{in}}| = v. \qquad (49) \]
Similarly the outgoing flux is (using ∇⃗ = (∂/∂r) r̂ + (1/r)(∂/∂θ) θ̂ + (1/(r sin θ))(∂/∂φ) φ̂)
\[ \vec S_{\text{scat}} = \frac{\hbar}{m} \operatorname{Im}\left( \frac{e^{-ikr}}{r}\, f^*\, ik\hat r\, \frac{e^{ikr}}{r}\, f + O\!\left( \frac{1}{r^3} \right) \right) = v\, \frac{\hat r}{r^2}\, |f|^2 + O(1/r^3). \qquad (50) \]
To relate this to the flux per solid angle we use d⃗a = r² dΩ r̂ to obtain
\[ \frac{d^2 N_{\text{scat}}}{d\Omega\, dt} = \vec S_{\text{scat}} \cdot d\vec a = \left( v\, \frac{\hat r}{r^2}\, |f|^2 + O(r^{-3}) \right) \cdot r^2\, d\Omega\, \hat r = |f|^2\, v\, d\Omega + O(1/r). \qquad (51) \]
We can neglect the O(1/r) term as r → ∞ (and on a side note, we see now why the leading-order
term in S⃗_scat was O(1/r²)) and obtain the simple formula
\[ \frac{d\sigma}{d\Omega} = |f(\theta, \phi)|^2. \qquad (52) \]
The Optical Theorem. (52) is valid everywhere except at θ = 0. There we have to also
consider interference between the scattered and the unscattered wave. (Unlike the incoming wave,
the outgoing unscattered wave does coexist with the scattered wave.) The resulting flux is
\[ \vec S_{\text{out}} = \frac{\hbar}{m} \operatorname{Im}\left[ \left( e^{-ikz} + \frac{f^*}{r}\, e^{-ikr} \right) \left( ik\, \hat z\, e^{ikz} + \frac{ik\, \hat r}{r}\, f\, e^{ikr} + O(1/r^2) \right) \right] \qquad (53a) \]
\[ = \underbrace{v\, \hat z}_{\vec S_{\text{unscat}}} + \underbrace{v\, \frac{\hat r}{r^2}\, |f|^2}_{\vec S_{\text{scat}}} + \underbrace{v \operatorname{Re}\left( \frac{\hat z\, f^*}{r}\, e^{ik(z - r)} + \frac{\hat r\, f}{r}\, e^{ik(r - z)} \right)}_{\vec S_{\text{interference}}} \qquad (53b) \]
This last term can be thought of as the effects of interference. We will evaluate it for large r and
for θ ≈ 0. Here r̂ ≈ ẑ and we define ρ = √(x² + y²) so that (to leading order) r = z + ρ²/2z. Then
\[ \int \vec S_{\text{inter}} \cdot d\vec a = v \int_0^{2\pi} d\phi \int_0^{\infty} \rho\, d\rho\, \frac{1}{z} \left( f^* e^{-ik\rho^2/2z} + f\, e^{ik\rho^2/2z} \right) \qquad (54a) \]
\[ = \frac{4\pi v}{z} \operatorname{Re} \int_0^{\infty} \frac{d\rho^2}{2}\, f\, e^{ik\rho^2/2z} \qquad (54b) \]
\[ = \frac{2\pi v}{z} \operatorname{Re} \int_0^{\infty} dy\, e^{iky/2z}\, f(0) = -\frac{4\pi v}{k} \operatorname{Im} f(0). \qquad (54c) \]
Since the outgoing flux should equal the incoming flux, we can define A to be the beam area and
find
\[ A v = A v + v \int d\Omega\, |f(\theta, \phi)|^2 - \frac{4\pi v}{k} \operatorname{Im} f(0). \qquad (55) \]
Thus we obtain the following identity, known as the optical theorem:
\[ \int d\Omega\, |f(\theta, \phi)|^2 = \frac{4\pi}{k} \operatorname{Im} f(0). \qquad (56) \]
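As a consistency check of (56), consider a pure s-wave amplitude f(θ, φ) = e^{iδ} sin(δ)/k, the standard single-phase-shift form that reappears in the partial-wave analysis (δ and k below are arbitrary illustrative numbers). Both sides of the optical theorem then equal 4π sin²δ/k²:

```python
import cmath, math

k, delta = 1.3, 0.7                               # arbitrary illustrative values
f0 = cmath.exp(1j * delta) * math.sin(delta) / k  # s-wave (angle-independent) amplitude

lhs = 4 * math.pi * abs(f0) ** 2                  # int dOmega |f|^2, f constant over angles
rhs = (4 * math.pi / k) * f0.imag                 # (4 pi / k) Im f(0)
print(lhs, rhs)                                   # equal: both 4 pi sin^2(delta) / k^2
```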
All this is well and good, but we have made no progress at all in computing f(θ, φ). We will
discuss two approximation methods: the partial wave method (which is exact, but yields nice
approximations when k is very small) and the Born approximation, which is a good approximation
when we are scattering off a weak potential. This is analogous to approximations we have seen
before (see Table 1). We will discuss the Born approximation in Section 4.2 and the partial-wave
technique in Section 4.3.

perturbation type     small    slow
time-independent      TIPT     WKB
time-dependent        TDPT     adiabatic
scattering            Born     partial wave

Table 1: Summary of approximation techniques for scattering problems.
4.2 Born Approximation
Zooming out, we want to solve the following eigenvalue equation:
\[ \left( \vec\nabla^2 + k^2 \right) |\psi\rangle = U |\psi\rangle \qquad\text{where}\qquad U \equiv \frac{2m}{\hbar^2}\, V. \qquad (57) \]
This looks like a basic linear algebra question. Can we solve it by inverting (∇⃗² + k²) to obtain
\[ |\psi\rangle = \left( \vec\nabla^2 + k^2 \right)^{-1} U |\psi\rangle\,? \qquad (58) \]
To answer this, we first review some basic linear algebra. Suppose we want to solve the equation
\[ A \vec x = \vec b \qquad (59) \]
for some normal matrix A. We can write x⃗ = A⁻¹b⃗ only if A is invertible. Otherwise the solution
will not be uniquely defined. More generally, suppose that our vectors live on a space V. Then we
can divide up V as
\[ V = \operatorname{Im} A \oplus \ker A \qquad\text{where}\qquad \ker A = \{ \vec x_0 : A \vec x_0 = 0 \}. \qquad (60) \]
If we restrict A to the subspace Im A then it is indeed invertible. The solutions to (59) are then
given by
\[ \vec x = \left( A \big|_{\operatorname{Im} A} \right)^{-1} \vec b + \vec x_0 \qquad\text{where}\qquad \vec x_0 \in \ker A. \qquad (61) \]
Returning now to the quantum case, the operator (∇² + k²) is certainly not invertible. States satisfying (∇² + k²)|ψ₀⟩ = 0 exist, and are plane waves with momentum ℏk. But if we restrict (∇² + k²) to the subspace of states with momentum ≠ ℏk then it is invertible. Define the Green's operator G to be (∇² + k²)|⁻¹_{p≠ℏk}. The calculation of G is rather subtle and the details can be found in Griffiths. However, on general principles we can make a fair amount of progress. Since (∇² + k²) is diagonal in the momentum basis, G should be as well. Thus G should be written as an integral over |p⃗⟩⟨p⃗| times some function of p⃗. By Fourier transforming this function we can equivalently write G in terms of translation operators as

    G = ∫ d³r⃗ G(r⃗) T_r⃗   where   T_r⃗ ≡ e^{−i r⃗·p⃗/ℏ}.      (62)
Let's go through this more concretely. In the momentum basis we have the completeness relation ∫ d³p⃗ |p⃗⟩⟨p⃗| = I, which implies

    ∇² + k² = ∫ d³p⃗ (−p²/ℏ² + k²) |p⃗⟩⟨p⃗|.      (63)

To invert this we might naively write G = ∫′ d³p⃗ (−p²/ℏ² + k²)⁻¹ |p⃗⟩⟨p⃗|, where ∫′ denotes the integral over all p⃗ with p ≠ ℏk. To handle the diverging denominator, one method is to write

    G = lim_{ε→0} ∫ d³p⃗ (−p²/ℏ² + k² + iε)⁻¹ |p⃗⟩⟨p⃗|.      (64)

Finally we can write this in the position basis according to (62) and obtain the position-space Green's function G(r⃗) by Fourier transforming (−p²/ℏ² + k² + iε)⁻¹. In Griffiths this integral is carried out, obtaining the answer

    G(r⃗) = − e^{ikr}/(4πr).      (65)

This function G(r⃗) is called a Green's function. We can thus write

    G = ∫ d³r⃗ ( −e^{ikr}/(4πr) ) T_r⃗.      (66)
Having computed G, we can now solve (57) and obtain

    |ψ⟩ = |ψ₀⟩ + GU|ψ⟩      (67)

for some free-particle solution |ψ₀⟩. Indeed, for a scattering problem we should have ψ₀(r⃗) = e^{ikz}. (67) is exact, but not very useful because |ψ⟩ appears on both the LHS and RHS. However, it will let us expand |ψ⟩ in powers of U.

The first Born approximation consists of replacing the |ψ⟩ on the RHS of (67) by |ψ₀⟩, thus yielding

    |ψ⟩ = |ψ₀⟩ + GU|ψ₀⟩.      (68)

The second Born approximation consists of using (68) to approximate |ψ⟩ in the RHS of (67), which yields

    |ψ⟩ = |ψ₀⟩ + GU(|ψ₀⟩ + GU|ψ₀⟩) = |ψ₀⟩ + GU|ψ₀⟩ + GUGU|ψ₀⟩.      (69)

Of course we could also rewrite (67) as |ψ⟩ = (I − GU)⁻¹|ψ₀⟩ = Σ_{n≥0} (GU)ⁿ|ψ₀⟩ (since I − GU is formally invertible) and truncate this sum at some finite value of n.
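The geometric-series structure of this expansion can be illustrated with a scalar caricature (a toy example, not from the notes): replace G and U by complex numbers with |GU| < 1, so that the exact solution and the truncated Born series can be compared directly.

```python
# Scalar caricature of the Born series: solve psi = psi0 + g*u*psi exactly
# and compare with truncations of psi = sum_n (g*u)^n psi0.
# g, u, psi0 are arbitrary complex numbers with |g*u| < 1 (not from the notes).
g, u, psi0 = 0.3 - 0.1j, 0.8, 1.0 + 0j

exact = psi0 / (1 - g * u)            # the analogue of (I - GU)^(-1) |psi0>

def born(n_terms):
    """Truncated Born series: sum over n < n_terms of (g*u)^n * psi0."""
    return sum((g * u) ** n * psi0 for n in range(n_terms))

err1 = abs(born(2) - exact)   # analogue of the first Born approximation
err2 = abs(born(3) - exact)   # analogue of the second Born approximation
assert err2 < err1            # each additional order improves the estimate
assert abs(born(50) - exact) < 1e-12
```

The error of the n-term truncation shrinks like |GU|ⁿ, which is why the approximation is good precisely when the potential (and hence U) is weak.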
These results so far have been rather abstract. Plugging in (65) and ψ₀(r⃗) = e^{ikz} we find that the first Born approximation is

    ψ(r⃗) = ψ₀(r⃗) + ∫ d³r⃗′ ψ₀(r⃗′) G(r⃗ − r⃗′) U(r⃗′)      (70a)
         = e^{ikz} − ∫ d³r⃗′ e^{ikz′} ( e^{ik|r⃗−r⃗′|} / (4π|r⃗ − r⃗′|) ) U(r⃗′).      (70b)
If we assume that the potential is short range and we evaluate this quantity for r far outside the range of U, then we will have r ≫ r′ for all the points where the integral has a nonzero contribution. In this case |r⃗ − r⃗′| ≈ r − r̂ · r⃗′. Let us further define

    k⃗ = k r̂   and   k⃗′ = k ẑ,      (71)

corresponding to the outgoing and incoming wavevectors respectively.
Then we have (still in the first Born approximation)

    ψ_scat(r⃗) = − ∫ d³r⃗′ e^{ikz′} ( e^{ik|r⃗−r⃗′|} / (4π|r⃗ − r⃗′|) ) U(r⃗′)      (72a)
              ≈ − ∫ d³r⃗′ e^{ikz′} ( e^{ikr} e^{−ik r̂·r⃗′} / (4πr) ) U(r⃗′)      (72b)
              = ( −(1/4π) ∫ d³r⃗′ e^{i(k⃗′−k⃗)·r⃗′} U(r⃗′) ) e^{ikr}/r.      (72c)
The quantity in parentheses is then f(θ, φ). If we define Ṽ(q⃗) to be the Fourier transform of V(r⃗) then we obtain

    f₁(θ, φ) = − (m/(2πℏ²)) Ṽ(k⃗′ − k⃗),      (73)

where f₁ refers to the first Born approximation. One very simple example is when V(r⃗) = V₀ δ(r⃗). Then we simply have f₁ = −mV₀/(2πℏ²).
A further simplification occurs in the case when V(r⃗) is centrally symmetric. Then

    Ṽ(q⃗) ≡ ∫ d³r⃗ V(r) e^{i q⃗·r⃗} = 2π ∫_0^∞ dr ∫_{−1}^{1} dμ r² V(r) e^{iqrμ} = (4π/q) ∫_0^∞ dr r V(r) sin(qr).      (74)
Finally, the momentum transfer q⃗ = k⃗′ − k⃗ satisfies q = 2k sin(θ/2).

One application of this (see Griffiths for details) is the Yukawa potential V(r) = β e^{−μr}/r. The first Born approximation yields

    f₁^Yukawa(θ) = − 2mβ / (ℏ²(μ² + q²)).

Taking β = −eQ and μ = 0 recovers Rutherford scattering, with

    f₁^Rutherford(θ) = 2meQ/(ℏ²q²) = meQ / (2ℏ²k² sin²(θ/2)).
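As a sanity check on (74) (a sketch with arbitrary parameter values, not part of the notes), one can evaluate the radial integral numerically for the Yukawa potential and compare with the closed form Ṽ(q) = 4πβ/(μ² + q²), which via (73) reproduces the Born amplitude quoted above.

```python
from math import exp, sin, pi

# Numerically evaluate V~(q) = (4*pi/q) * Integral_0^inf dr r V(r) sin(q r)
# for the Yukawa potential V(r) = beta*exp(-mu*r)/r, and compare with the
# closed form 4*pi*beta/(mu^2 + q^2).  All parameter values are arbitrary.
beta, mu, q = 2.0, 1.5, 0.7

def vtilde_numeric(beta, mu, q, rmax=60.0, n=200000):
    # midpoint rule on [0, rmax]; the integrand r*V(r)*sin(qr) simplifies
    # to beta*exp(-mu*r)*sin(qr), so the r = 0 endpoint is harmless
    h = rmax / n
    total = sum(beta * exp(-mu * (i + 0.5) * h) * sin(q * (i + 0.5) * h)
                for i in range(n))
    return 4 * pi / q * total * h

closed_form = 4 * pi * beta / (mu ** 2 + q ** 2)
assert abs(vtilde_numeric(beta, mu, q) - closed_form) < 1e-5 * closed_form

# First Born amplitude (73), in units with m = hbar = 1:
m = hbar = 1.0
f1 = -m / (2 * pi * hbar ** 2) * closed_form
assert abs(f1 - (-2 * m * beta / (hbar ** 2 * (mu ** 2 + q ** 2)))) < 1e-12
```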
A good exercise is to rederive the Born approximation by using Fermi's Golden Rule and counting the number of outgoing states in the vicinity of a given ℏk⃗. See section 7.11 of Sakurai for details. Another version is in Merzbacher, section 20.1.
Rigorous derivation of Green's functions. The above derivation of G was somewhat informal. However, once the form of G(r⃗) is derived informally or even guessed, it can be verified that (∇² + k²)G acts as the identity on all states with no component in the null space of (∇² + k²). This is the content of Griffiths problem 11.8. Implicit is that we are working in a Schwartz space, which rules out exponentially growing wavefunctions, and in turn implies that ∇² has only real eigenvalues and is in fact Hermitian. A more rigorous derivation of the Born approximation can also be obtained by using time-dependent perturbation theory, as described in Sakurai section 7.11 and Merzbacher section 20.1. I particularly recommend the discussion in Merzbacher.
4.3 Partial Waves

In this section assume we have a central potential, i.e. V(r⃗) = V(r). Since our initial conditions are invariant under rotation about the ẑ axis, our scattering solutions will also be independent of φ (but not θ).

Assume further that

    lim_{r→∞} r² V(r) = 0.      (75)

We will see the relevance of this condition shortly.

A wavefunction with no φ dependence can be written in terms of Legendre polynomials² (corresponding to the m = 0 spherical harmonics) as

    ψ(r, θ) = Σ_{l=0}^{∞} R_l(r) P_l(cos θ).      (76)

If we define u_l(r) = r R_l(r), then the eigenvalue equation becomes (for each l)

    −u_l″ + V_eff u_l = k² u_l   where   V_eff ≡ 2mV/ℏ² + l(l+1)/r².      (77)

If (75) holds, then for sufficiently large r, we can approximate V_eff ≈ l(l+1)/r². In this region the solutions to (77) are given by the spherical Bessel functions. Redefining x ≡ kr and using the assumption V_eff ≈ l(l+1)/r², (77) becomes

    u_l″ − ( l(l+1)/x² ) u_l = −u_l.      (78)
This has two linearly independent solutions: x j_l(x) and x n_l(x), where j_l(x), n_l(x) are the spherical Bessel functions of the first and second kind respectively, and are defined as

    j_l(x) = (−x)^l ( (1/x)(d/dx) )^l ( sin(x)/x )   and   n_l(x) = −(−x)^l ( (1/x)(d/dx) )^l ( cos(x)/x ).      (79)

These can be thought of as "sin-like" and "cos-like" respectively. For l = 0, 1, (79) becomes

    j₀(x) = sin(x)/x,            n₀(x) = −cos(x)/x,      (80a)
    j₁(x) = sin(x)/x² − cos(x)/x,            n₁(x) = −cos(x)/x² − sin(x)/x.      (80b)
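The closed forms (80) extend to higher l via the standard recurrence f_{l+1}(x) = ((2l+1)/x) f_l(x) − f_{l−1}(x), satisfied by both j_l and n_l (the recurrence is standard but not stated in the notes). A short sketch:

```python
from math import sin, cos

def spherical_jn_yn(lmax, x):
    """Return lists [j_0..j_lmax], [n_0..n_lmax] at x > 0, built from the
    l = 0, 1 closed forms (80) and the standard upward recurrence
    f_{l+1} = ((2l+1)/x) f_l - f_{l-1} (fine for the small l used here)."""
    j = [sin(x) / x, sin(x) / x**2 - cos(x) / x]
    n = [-cos(x) / x, -cos(x) / x**2 - sin(x) / x]
    for l in range(1, lmax):
        j.append((2 * l + 1) / x * j[l] - j[l - 1])
        n.append((2 * l + 1) / x * n[l] - n[l - 1])
    return j[:lmax + 1], n[:lmax + 1]

x = 2.0
j, n = spherical_jn_yn(3, x)
# Check against the l = 0, 1 closed forms (80a)-(80b)
assert abs(j[0] - sin(x) / x) < 1e-12
assert abs(n[1] - (-cos(x) / x**2 - sin(x) / x)) < 1e-12
# Standard cross-product identity: j_l n_{l-1} - j_{l-1} n_l = 1/x^2,
# the analogue of a Wronskian for this pair of solutions
for l in range(1, 4):
    assert abs(j[l] * n[l - 1] - j[l - 1] * n[l] - 1 / x**2) < 1e-10
```

The cross-product check is a useful guard: it fails immediately if either the seed formulas or the recurrence coefficients are mistyped.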
² What are Legendre polynomials? One definition starts with the orthogonality condition ∫_{−1}^{1} P_m(x) P_n(x) dx = (2/(2n+1)) δ_mn. (The δ_mn is the important term here; the 2/(2n+1) is a somewhat arbitrary convention.) Then if we apply the Gram-Schmidt procedure to 1, x, x², . . . , we obtain the Legendre polynomials P₀ = 1, P₁ = x, P₂ = ½(3x² − 1), . . . . Thus any degree-n polynomial can be written as a linear combination of P₀, . . . , P_n and vice-versa.

The reason for (76) is that if ψ(r, θ) is independent of φ then we can write it as a power series in r and z, or equivalently in r and cos(θ). These power series can always be written in the form of (76).
One can check (by evaluating the derivatives in (79) repeatedly and keeping only the lowest power of 1/x) that as x → ∞ we have the asymptotic behavior

    j_l(x) → (1/x) sin(x − lπ/2)   and   n_l(x) → −(1/x) cos(x − lπ/2).      (81)

On the other hand, in the x → 0 limit we can keep track of only the lowest power of x to find that as x → 0 we have

    j_l(x) → x^l / (2l+1)!!   and   n_l(x) → − (2l−1)!! / x^{l+1},      (82)

where (2l + 1)!! ≡ (2l + 1)(2l − 1) · · · 3 · 1.
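The limits (81) and (82) can be checked numerically from the closed forms (80); the sketch below does this for l = 1, with loose tolerances since the statements are asymptotic.

```python
from math import sin, cos, pi

# Numeric check of the limits (81) and (82) for l = 1, using (80b).
def j1(x): return sin(x) / x**2 - cos(x) / x
def n1(x): return -cos(x) / x**2 - sin(x) / x

# Large-x behavior (81): j_1(x) ~ sin(x - pi/2)/x, n_1(x) ~ -cos(x - pi/2)/x
x = 500.0
assert abs(j1(x) - sin(x - pi / 2) / x) < 1e-5
assert abs(n1(x) + cos(x - pi / 2) / x) < 1e-5

# Small-x behavior (82): j_1(x) ~ x/3!! = x/3, n_1(x) ~ -1!!/x^2 = -1/x^2
x = 1e-3
assert abs(j1(x) / (x / 3) - 1) < 1e-3
assert abs(n1(x) / (-1 / x**2) - 1) < 1e-3
```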
If j_l and n_l are sin-like and cos-like, it will be convenient to define functions that resemble incoming and outgoing waves. These are the spherical Hankel functions of the first and second kind:

    h_l^{(1)} = j_l + i n_l   and   h_l^{(2)} = j_l − i n_l.      (83)

For large r, h_l^{(1)}(kr) → (−i)^{l+1} e^{ikr}/(kr), and so our scattered wave should be proportional to h_l^{(1)}. More precisely, R_l(r) (i.e. the angular-momentum-l component) should be proportional to h_l^{(1)}(kr). Putting this together we get

    ψ(r, θ) = e^{ikz} + Σ_{l≥0} c_l h_l^{(1)}(kr) P_l(cos θ)   (when V ≈ 0),      (84)

for some coefficients c_l; the sum is the scattered wave ψ_scat.
It will be convenient to write the c_l in terms of new coefficients a_l as c_l = k i^{l+1}(2l+1) a_l, so that we have

    ψ_scat(r, θ) = k Σ_{l≥0} i^{l+1}(2l+1) a_l h_l^{(1)}(kr) P_l(cos θ)   (when V ≈ 0)      (85)
                 → ( Σ_{l≥0} (2l+1) a_l P_l(cos θ) ) e^{ikr}/r   (as r → ∞),      (86)

where the bracketed sum is f(θ).
We can then compute the differential cross-section in terms of the a_l:

    dσ/dΩ = |f(θ)|² = | Σ_l (2l+1) a_l P_l(cos θ) |².      (87)

Recall that the Legendre polynomials satisfy the orthogonality relation

    ∫_{−1}^{1} dz P_l(z) P_{l′}(z) = (2/(2l+1)) δ_{l,l′}.      (88)

Thus we can calculate

    σ = ∫ dΩ (dσ/dΩ) = 4π Σ_l (2l+1) |a_l|².      (89)
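The orthogonality relation (88) is easy to verify numerically; the sketch below builds P_l from Bonnet's recurrence (l+1) P_{l+1}(z) = (2l+1) z P_l(z) − l P_{l−1}(z), which is standard but not stated in the notes.

```python
# Numeric check of the orthogonality relation (88).
def legendre(l, z):
    """P_l(z) via Bonnet's recurrence, seeded with P_0 = 1 and P_1 = z."""
    p_prev, p = 1.0, z
    if l == 0:
        return p_prev
    for m in range(1, l):
        p_prev, p = p, ((2 * m + 1) * z * p - m * p_prev) / (m + 1)
    return p

def overlap(l, lp, n=20001):
    """Simpson's rule for the integral of P_l(z) P_lp(z) over [-1, 1]."""
    h = 2.0 / (n - 1)
    s = 0.0
    for i in range(n):
        z = -1.0 + i * h
        w = 1 if i in (0, n - 1) else (4 if i % 2 == 1 else 2)
        s += w * legendre(l, z) * legendre(lp, z)
    return s * h / 3

assert abs(overlap(0, 0) - 2.0) < 1e-8    # 2/(2l+1) with l = 0
assert abs(overlap(2, 2) - 2 / 5) < 1e-8  # 2/(2l+1) with l = 2
assert abs(overlap(2, 3)) < 1e-8          # vanishes for l != l'
```

Squaring the sum in (87) and integrating term by term against (88) is exactly what collapses the double sum to the single incoherent sum in (89).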
While the differential cross section involves interference between different values of l, the total cross
section is simply given by an incoherent sum over l. Intuitively this is because we can think of l and
θ as conjugate observables, analogous to momentum and position. If we measure the probability
of observing something at a particular position, we will see interference effects between different
momenta, but if we integrate over all positions, these will go away.
The beauty of the partial-wave approach is that it reduces to a series of 1-d scattering problems, one for each value of l. We have written down the form of the scattered wave. For the incoming wave, we need to express e^{ikz} in the form of (76). From the arguments above we can see that e^{ikz} should have the form Σ_l (A_l j_l(kr) + B_l n_l(kr)) P_l(cos θ). The specific solution is given by Rayleigh's formula, which we state without proof:

    e^{ikz} = e^{ikr cos θ} = Σ_{l≥0} i^l (2l+1) j_l(kr) P_l(cos θ).      (90)
Plugging this in, we find that in the V ≈ 0 region we have

    ψ(r, θ) = Σ_{l≥0} i^l (2l+1) P_l(cos θ) ( j_l(kr) + ik a_l h_l^{(1)}(kr) )      (91)
            = ½ Σ_{l≥0} i^l (2l+1) P_l(cos θ) ( (1 + 2ik a_l) h_l^{(1)}(kr) + h_l^{(2)}(kr) ),      (92)

where in the first line j_l(kr) is the plane-wave piece and ik a_l h_l^{(1)}(kr) the scattered piece, and in the second line h_l^{(1)} is the outgoing wave and h_l^{(2)} the incoming wave.
Now we have really expressed this as a series of 1-d scattering problems. Here comes the crucial simplifying move. Because we are assuming that the collision is elastic, probability and angular-momentum conservation mean that "what goes in must come out"; i.e. the outgoing wave's amplitude must have the same absolute value as the incoming wave's amplitude:

    |1 + 2ik a_l| = 1.      (93)

We can rewrite (93) by introducing the phase shift δ_l defined by

    1 + 2ik a_l = e^{2iδ_l}.      (94)

The factor of 2 is conventional. We can rewrite (94) to solve for a_l as

    a_l = (e^{2iδ_l} − 1)/(2ik) = e^{iδ_l} sin(δ_l)/k = 1/( k(cot(δ_l) − i) ).      (95)

Many equivalent expressions are also possible. In terms of the phase shifts, we can write

    f(θ) = (1/k) Σ_l (2l+1) P_l(cos θ) e^{iδ_l} sin(δ_l),      (96)

    σ = (4π/k²) Σ_l (2l+1) sin²(δ_l).      (97)
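The equivalent forms of a_l in (95), and the unitarity condition (93), can be checked directly (the values of k and δ_l below are arbitrary):

```python
import cmath

# Check the three equivalent expressions for a_l in (95).
k, delta = 2.3, 0.9
e2 = cmath.exp(2j * delta)

a1 = (e2 - 1) / (2j * k)
a2 = cmath.exp(1j * delta) * cmath.sin(delta) / k
a3 = 1 / (k * (cmath.cos(delta) / cmath.sin(delta) - 1j))   # 1/(k(cot - i))

assert abs(a1 - a2) < 1e-12 and abs(a2 - a3) < 1e-12
# And the unitarity statement (93)-(94): |1 + 2ik a_l| = 1
assert abs(abs(1 + 2j * k * a1) - 1) < 1e-12
```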
As an application we can verify that this satisfies the optical theorem. Observe that P_l(1) = 1 for all l. Thus

    (4π/k) Im f(0) = (4π/k)(1/k) Σ_l (2l+1) P_l(1) sin²(δ_l) = σ.      (98)

Another easy application is partial-wave unitarity, which bounds the total amount of scattered wave with angular momentum l. Define σ_l = (4π/k²)(2l+1) sin²(δ_l), so that σ = Σ_l σ_l. Then using sin²(δ_l) ≤ 1 we have

    σ_l ≤ (4π/k²)(2l+1).      (99)

This bound is called "partial-wave unitarity."
How to compute phase shifts. Let us look again at the r → ∞ solution. We can use the fact that overall phase and normalization don't matter to obtain

    R_l(r) = h_l^{(1)}(kr) e^{2iδ_l} + h_l^{(2)}(kr)      (100a)
           = (1 + e^{2iδ_l}) j_l(kr) + i(e^{2iδ_l} − 1) n_l(kr)      (100b)
           ∝ cos(δ_l) j_l(kr) − sin(δ_l) n_l(kr)      (100c)
           ∝ j_l(kr) − tan(δ_l) n_l(kr).      (100d)

Suppose that we know the interior solution R_l(r) for r ≤ b and that V(r) ≈ 0 for r > b. Then we can compute the phase shift by matching R_l′(r)/R_l(r) at r = b ± ε.

Here is a simple example. Consider a hard sphere of radius b. Then R_l(r) = 0 for r ≤ b and we have

    j_l(kb) − tan(δ_l) n_l(kb) = 0   ⟹   δ_l = tan⁻¹( j_l(kb)/n_l(kb) ).      (101)
One particularly simple case is when l = 0. Then

    δ₀ = tan⁻¹( j₀(kb)/n₀(kb) ) = tan⁻¹( (sin(kb)/kb) / (−cos(kb)/kb) ) = −tan⁻¹(tan(kb)) = −kb.      (102)

It turns out that in general repulsive potentials yield negative phase shifts.
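A one-line numerical check of (101)-(102) (with kb chosen below π/2 so that tan⁻¹ inverts tan with no branch shift; the values are arbitrary):

```python
from math import sin, cos, atan

# Hard-sphere s-wave phase shift from (101)-(102).
k, b = 1.0, 0.3
delta0 = atan((sin(k * b) / (k * b)) / (-cos(k * b) / (k * b)))  # atan(j0/n0)
assert abs(delta0 - (-k * b)) < 1e-12   # eq. (102): delta_0 = -kb
```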
What about larger values of l? Suppose that kb ≪ 1. Then using the x → 0 approximations for j_l(x), n_l(x), we obtain (as k → 0)

    δ_l → tan⁻¹( −(kb)^{2l+1} / ((2l+1)!!(2l−1)!!) ) ≈ − (kb)^{2l+1} / ((2l+1)!!(2l−1)!!),   so |δ_l| ∼ (kb)^{2l+1}.      (103)

Thus the l = 0 scattering dominates. In terms of cross-sections, we have

    σ_l = (4π/k²)(2l+1) sin²(δ_l) ≈ ( 4π / ((2l+1)((2l−1)!!)⁴) ) (kb)^{4l} b².      (104)

For l = 0 this is 4πb², which is four times the classical value of πb², and for higher values of l this drops exponentially (assuming kb ≪ 1). Even if kb > 1 this drops exponentially once l ≫ kb, which confirms our intuition that the angular momentum should be on the order of ℏkb.
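The scaling (103) can be confirmed numerically for l = 1, where it gives δ₁ ≈ −(kb)³/((3!!)(1!!)) = −(kb)³/3 (a sketch with an arbitrary small kb):

```python
from math import sin, cos, atan

# Check the small-kb scaling (103) for l = 1: delta_1 ~ -(kb)^3/3.
def j1(x): return sin(x) / x**2 - cos(x) / x
def n1(x): return -cos(x) / x**2 - sin(x) / x

kb = 0.05
delta1 = atan(j1(kb) / n1(kb))          # eq. (101) for l = 1
assert abs(delta1 / (-(kb) ** 3 / 3) - 1) < 1e-2
# l = 0 gives delta_0 = -kb, so |delta_1| is smaller by a factor ~(kb)^2/3
assert abs(delta1) < kb * 1e-2
```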
Another way to think about the preference for low values of l is that the l(l + 1)/r² term in V_eff forms an angular momentum "barrier" that prevents low-energy incoming waves from penetrating to small enough r to see the potential V(r).
The high-energy limit. The partial wave approximation is easiest to use in the low-k limit because then we can restrict our attention to a few values of l, or even just l = 0. But for the hard sphere we can also evaluate the kb ≫ 1 limit. In this case we expect to find angular momenta up to l_max ≡ kb. Thus we approximate the total cross section by

    σ ≈ (4π/k²) Σ_{l=0}^{l_max} (2l+1) sin²(δ_l).      (105)

The phases δ_l will vary over the entire range from 0 to 2π, so we simply approximate sin²(δ_l) by its average value of 1/2. Thus we obtain

    σ ≈ (2π/k²) Σ_{l=0}^{kb} (2l+1) = 2πb².      (106)

This is now twice the classical result. Even though the particles are moving quickly they still diffract like waves. One surprising consequence is that even though a hard sphere leaves a shadow, there is a small bright spot at the center of the shadow. Indeed the optical theorem predicts that Im f(0) = (k/4π)σ ≈ kb²/2. Thus |f(0)|² ≥ (kb)²b²/4. For a further discussion of this bright spot and the role it played in early 19th-century debates about whether light is a particle and/or a wave, look up "Arago spot" on wikipedia.
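As a sketch (not in the notes), one can sum the exact hard-sphere phase shifts (101) numerically at kb ≫ 1 and compare with (106); the approach to 2πb² is slow in kb, so only a loose check is made. The l values well past kb contribute negligibly, which also keeps the simple upward recurrence for j_l, n_l out of trouble.

```python
from math import sin, cos, atan, pi

# Hard-sphere cross section at kb >> 1 via a partial-wave sum.
k, b = 1.0, 30.0
x = k * b
j = [sin(x) / x, sin(x) / x**2 - cos(x) / x]
n = [-cos(x) / x, -cos(x) / x**2 - sin(x) / x]
lmax = int(x) + 20                      # terms beyond ~kb are negligible
for l in range(1, lmax):
    j.append((2 * l + 1) / x * j[l] - j[l - 1])
    n.append((2 * l + 1) / x * n[l] - n[l - 1])

sigma = 0.0
for l in range(lmax + 1):
    delta_l = atan(j[l] / n[l])         # eq. (101); sin^2 is branch-insensitive
    sigma += 4 * pi / k**2 * (2 * l + 1) * sin(delta_l) ** 2

ratio = sigma / (pi * b**2)
# Roughly twice the classical cross section, as in (106)
assert 1.6 < ratio < 2.8
```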
Phase shifts. As we have seen, scattering can be understood in terms of phase shifts. Now we describe a simple physical way of seeing this. If V = 0 then a plane wave has u₀(r) = sin(kr), due to the u₀(0) = 0 boundary condition. When there is scattering, the phase shift δ₀ will become nonzero and we will have

    u₀(r) = sin(kr + δ₀).

If the potential is attractive then the phase will oscillate more rapidly in the scattering region and so we will have δ₀ > 0, while if it is repulsive then the phase will oscillate more slowly and we will have δ₀ < 0. See Fig. 2 for an illustration.

Figure 2: The phase shift δ₀ is positive for attractive potentials and negative for repulsive potentials.
Scattering length. In the regime of low k it turns out that many potentials behave qualitatively like what we have seen with the hard sphere, with a characteristic length scale called a "scattering length." To derive this, suppose there is some b such that V(r) ≈ 0 for r ≥ b. In this region we have u₀(r) ≈ sin(kr + δ₀) (neglecting normalization). In the vicinity of b we have

    u₀(r) ≈ u₀(b) + u₀′(b)(r − b).

If we extrapolate to smaller values of r, our approximation hits 0 at r = a where

    a = b − u₀(b)/u₀′(b) = b − tan(kb + δ₀)/k.

Using the tan addition formula and taking the limit k → 0 we find

    a = b − (1/k) ( tan(kb) + tan(δ₀) ) / ( 1 − tan(kb) tan(δ₀) ) ≈ b − ( kb + tan(δ₀) )/k = −tan(δ₀)/k.      (107)

Rearranging we have tan(δ₀) = −ka, and in the ka ≪ 1 limit this yields

    σ₀ = (4π/k²) sin²(δ₀) ≈ 4πa²,

which is again the hard sphere result. Similar results hold for larger values of l. Thus the scattering length can be thought of as an effective size of a scattering target.
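A quick numerical illustration of (107) for the hard sphere, where δ₀ = −kb gives scattering length a → b as k → 0 (parameter values arbitrary):

```python
from math import tan, sin, pi

# Extract the scattering length from the hard-sphere phase shift
# delta_0 = -kb:  (107) gives a = -tan(delta_0)/k = tan(kb)/k -> b as k -> 0.
b = 1.0
for k in (0.1, 0.01, 0.001):
    delta0 = -k * b
    a = -tan(delta0) / k
    assert abs(a - b) < 2 * k**2   # a = b*(1 + (kb)^2/3 + ...)

# Low-energy cross section: sigma_0 = (4*pi/k^2) sin^2(delta_0) -> 4*pi*a^2
k = 0.001
sigma0 = 4 * pi / k**2 * sin(-k * b) ** 2
assert abs(sigma0 - 4 * pi * b**2) < 1e-4
```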
MIT OpenCourseWare
http://ocw.mit.edu
8.06 Quantum Physics III
Spring 2016
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/8-06-quantum-physics-iii-spring-2016/0c27511c09675d8d385577023328248b_MIT8_06S16_chap2.pdf |
18.725 Algebraic Geometry I, Lecture 9

Lecture 9: Chow's Lemma, Blowups

Last time we showed that projective varieties are complete. The following result from Wei-Liang Chow gives a partial converse. Recall that a birational morphism between two varieties is an isomorphism on some pair of open subsets.

Lemma 1 (Chow's Lemma). If X is a complete, irreducible variety, then there exists a projective variety X̃ that is birational to X.
Proof. This proof is a standard one. Here we follow the proof presented by [SH77]. Choose an affine covering X = U₁ ∪ . . . ∪ Uₙ, and let Yᵢ ⊇ Uᵢ be projective varieties containing the Uᵢ as open subsets. Let U = ∩ᵢ Uᵢ and Y = ∏ᵢ Yᵢ, let ∆ : U → Uⁿ → ∏ᵢ Yᵢ = Y, and let φ : U → X × Y be induced by the standard inclusion U → X and ∆. Let X̃ be the closure of φ(U); then π₁ gives a map f : X̃ → X. This map is birational because f⁻¹(U) = φ(U), and on U the map π₁ ∘ φ is just the identity. (To see the first claim, note that it means (U × Y) ∩ X̃ = φ(U), i.e. φ(U) is closed in U × Y, which is true because φ(U) in U × Y is just the graph of ∆, which is closed as Y is separated.)
So it remains to check that X̃ is projective. We show this by showing that the restriction of π₂ : X × Y → Y to X̃, which we write as g : X̃ → Y, is a closed embedding. Let Vᵢ = pᵢ⁻¹(Uᵢ), where pᵢ is the projection map from Y to Yᵢ. First we claim that the π₂⁻¹(Vᵢ) cover X̃, which easily follows from the statement that π₂⁻¹(Vᵢ) = f⁻¹(Uᵢ), since the Uᵢ cover X. Consider W = f⁻¹(U) = φ(U) as an open subset of f⁻¹(Uᵢ): on W we have f = pᵢ ∘ g, so the same holds on f⁻¹(Uᵢ) and the covering property follows.
It remains to show that the maps X̃ ∩ π₂⁻¹(Vᵢ) → Vᵢ are closed embeddings. Noting that Vᵢ = Y₁ × . . . × Yᵢ₋₁ × Uᵢ × Yᵢ₊₁ × . . . × Yₙ, we write Zᵢ to denote the graph of Vᵢ → Uᵢ ↪ X (the first map being pᵢ), and note that it is closed and isomorphic to Vᵢ via projection. Noting that φ(U) ⊆ Zᵢ and that Zᵢ is closed, taking closure we see that X̃ ∩ (X × Vᵢ) is closed in Zᵢ, and the claim follows.
Blowing up of a point in Aⁿ. The blow-up of the affine n-space at the origin is defined as Âⁿ = Bl₀(Aⁿ) ⊆ Aⁿ × Pⁿ⁻¹ = {(x, L) : x ∈ Aⁿ, L ∈ Pⁿ⁻¹, x ∈ L}. It is a variety defined by the equations xᵢtⱼ = xⱼtᵢ. We have a projection π : Âⁿ → Aⁿ. Atop 0 there is an entire Pⁿ⁻¹, and on the remaining open set the projection is an isomorphism.

Now consider X a closed subset of Aⁿ such that {0} is not a component. The proper transform of X (a.k.a. the blowup of X at 0), denoted X̃, is the closure of the preimage of X \ 0 under π. Suppose X contains 0; then π⁻¹(X) = X̃ ∪ Pⁿ⁻¹. If X ⊊ Aⁿ, then Pⁿ⁻¹ ⊄ X̃ because dim(Pⁿ⁻¹) ≥ dim(X). If X is irreducible, then X̃ is the irreducible component of π⁻¹(X) other than Pⁿ⁻¹. The preimage of 0 within X̃ is called the exceptional locus.
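As a concrete illustration (an added example, not from the lecture), take the nodal cubic X = {y² = x²(x + 1)} ⊆ A². On the chart of Bl₀(A²) where the line L has coordinates (1 : t), i.e. y = xt, the total transform factors as y² − x²(x + 1) = x²(t² − (x + 1)), so the proper transform is {t² = x + 1}:

```python
from math import sqrt

# Illustrative example: the nodal cubic X = {y^2 = x^2 (x+1)} in A^2.
# On the blowup chart y = x*t, points of X away from 0 satisfy t^2 = x + 1.
for x in (0.5, 1.0, 2.0, 3.0):
    y = x * sqrt(x + 1)                 # a point of X away from the origin
    t = y / x                           # its slope coordinate on the chart
    assert abs(t**2 - (x + 1)) < 1e-12  # lies on the proper transform

# Over the origin the proper transform meets the exceptional P^1 in the two
# points t = 1 and t = -1, one for each branch of the node:
for t0 in (1.0, -1.0):
    assert abs(t0**2 - (0 + 1)) < 1e-12
```

The exceptional fiber thus separates the two branches of the node, which is the usual first example of a blowup resolving a singularity.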