Lecture 16
April 13th, 2004
Elliptic regularity
Hitherto we have always assumed our solutions already lie in the appropriate Ck,α space and
then showed estimates on their norms in those spaces. Now we will avoid this a priori assumption
and show that they do hold a posteriori. This is important for the consistency of our discussion.
Precisely what we would like to show is —
A priori regularity.
Let u ∈ C2(Ω) be a solution of Lu = f and assume 0 < α < 1. We
do not assume c(x) ≤ 0 but we do assume all the other assumptions on L in the previous Theorem
hold. If f ∈ Cα(Ω) then u ∈ C2,α(Ω)
• Here we mean the Cα norm is locally bounded, i.e. for every point there exists a neighborhood where the Cα-norm is bounded. Had we written Cα( ¯Ω) we would mean a global bound on sup_{x,y} |f(x) − f(y)|/|x − y|^α (as in the footnote of Lecture 14).
• This result will allow us to assume in previous theorems only C2 regularity on (candidate)
solutions instead of assuming C2,α regularity.
Proof. Let u be a solution as above. Since the Theorem is local in nature we take any point in Ω
and look at a ball B centered there contained in Ω. We then consider the Dirichlet problem
L0v = f′ on B,
v = u on ∂B,
where L0 := L − c(x) and f′(x) := f(x) − c(x)·u(x). This Dirichlet problem is on a ball, with "c ≤ 0", uniformly elliptic, and with coefficients in Cα. Therefore we have uniqueness and existence
of a solution v in C2,α(B) ∩ C0( ¯B). But u satisfies Lu = f or equivalently L0u = f ′ on all of Ω so
in particular on ¯B. By uniqueness on B we therefore have u| ¯B = v, and so u is C2,α smooth there.
As this is for any point and all balls we have u ∈ C2,α(Ω).
It is insightful to note at this point that these results are optimal under the above assumptions. Indeed, one needs C2 smoothness (or at least C1,1) in order to define the 2nd derivatives appearing in L! If one takes u in a larger function space, i.e. weaker regularity of u, and defines Lu = f in a weak sense, then one needs more regularity on the coefficients of L! Under the assumption of Cα continuity of the coefficients we are indeed in an optimal situation.
Higher a priori regularity.
Let u ∈ C2(Ω) be a solution of Lu = f and 0 < α < 1. We
do not assume c(x) ≤ 0 but we assume uniformly elliptic and that all coefficients are in Ck,α. If
f ∈ Ck,α then u ∈ Ck+2,α. If f ∈ C∞ then u ∈ C∞.
Proof. k = 0 was the previous Theorem.
The case k = 1. The proof relies in an elegant way on our previous results, combined with the new idea of using difference quotients. We would like to differentiate u three times and prove we get a Cα function. Differentiating the equation Lu = f once would serve our purpose, but it cannot be done naively as it would involve 3 derivatives of u and we only know that u has two.
To circumvent this hurdle we will take two derivatives of the difference quotients of u, which we define by (let e1, . . . , en denote the unit vectors in Rn)
∆hu := (u(x + h · el) − u(x))/h =: (uh(x) − u(x))/h.
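As a quick numerical aside (a one-dimensional Python sketch, not part of the lecture), the difference quotient of a smooth function converges to its derivative as h → 0:

```python
import math

def diff_quotient(f, x, h):
    """Forward difference quotient (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# For the smooth test function f = sin, the quotient approaches f' = cos.
x = 1.0
errs = [abs(diff_quotient(math.sin, x, h) - math.cos(x))
        for h in (1e-1, 1e-2, 1e-3)]
assert errs[0] > errs[1] > errs[2]  # error shrinks roughly linearly in h
```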
Namely we look at
∆hLu = (Lu(x + h · el) − Lu(x))/h = (f(x + h · el) − f(x))/h = ∆hf.
Note ∆hv(x) → Dlv(x) as h → 0 if v ∈ C1 (which we don't know a priori in our case yet).
Expanding our equation in full gives
∆haij · Dijuh + aij · Dij∆hu + ∆hbi · Diuh + bi · Di∆hu + ∆hc · uh + c · ∆hu = ∆hf,
or succinctly
L∆hu = f′ := ∆hf − ∆haij · Dijuh − ∆hbi · Diuh − ∆hc · uh,
where uh := u(x + h · el).
We now analyse the regularity of the terms. f ∈ C1,α(Ω), so ∆hf is too, but not uniformly wrt h (i.e. the C1,α norm of ∆hf may go to ∞ as h decreases). On the other hand, ∆hf ∈ Cα(Ω) uniformly wrt h (∀h > 0): by the mean value theorem ∆hf = (f(x + h · el) − f(x))/h = Dlf(¯x) for some ¯x on the segment, and the right-hand side has a uniform Cα bound as f ∈ C1,α on all of Ω (needed as ¯x can be arbitrary).
For the same reason ∆haij, ∆hbi, ∆hc ∈ Cα(Ω). By the k = 0 case we know u ∈ C2,α(Ω) and not just in C2(Ω), i.e. Dijuh ∈ Cα(Ω) uniformly.
Remark. We take a moment to describe | https://ocw.mit.edu/courses/18-156-differential-analysis-spring-2004/001210200bdd9ab37d9c20bf897d605f_da5.pdf |
) and not
just in C2(Ω). ⇔ Dijuh ∈ Cα(Ω) uniformly.
Remark. We take a moment to describe what we mean by uniformity. We say a function gh = g(h, ·) : Ω → R is uniformly bounded in Cα wrt h when for every Ω′ ⊂⊂ Ω there exists c(Ω′) such that |gh|Cα(Ω′) ≤ c(Ω′). Note this definition goes along with our local definition of a function being in Cα(Ω) (and not in Cα( ¯Ω)!).
Putting the above facts together we now see that both sides of the equation L∆hu = f ′ are in
Cα(Ω). And they are also in Cα(Ω′) with rhs uniformly so with constant c(Ω′).
By the interior Schauder estimate, ∀Ω′′ ⊂⊂ Ω′ and for each h,
||∆hu||C2,α(Ω′′) ≤ c(γ, Λ, Ω′′) · ( ||∆hu||C0(Ω′) + ||f′||Cα(Ω′) ) ≤ c̃(γ, Λ, Ω′′, Ω′, Ω, ||u||C1(Ω), . . .),
which is independent of h! If we assume the Claim below, taking the limit h → 0 we get Dlu ∈ C2,α(Ω′′) for all l = 1, . . . , n, i.e. u ∈ C3,α(Ω′′). As this holds ∀Ω′′ ⊂⊂ Ω′ ⊂⊂ Ω, we get u ∈ C3,α(Ω).
Claim. ||∆hg||Cα(A) ≤ c independently of h ⇔ Dlg ∈ Cα(A).
First we show g ∈ C0,1(A). This is tantamount to the existence of limh→0 ∆hg(x) (since if it exists it equals Dlg(x); that is how we define the first l-directional derivative at x). Now {∆hg}h>0 is a family of uniformly bounded (in C0(A)) and equicontinuous functions (from the uniform Hölder constant). So by the Arzelà-Ascoli Theorem there exists a sequence {∆hi g}∞i=1 converging to some w̃ ∈ Cα(A) in the Cβ(A) norm for any β < α. But as we remarked above, w̃ necessarily equals Dlg by definition.
Second, we show g ∈ C1(A) (i.e. the derivative is continuous, not just bounded), and actually g ∈ C1,α(A):
c ≥ ||∆hg||Cα(A) ≥ lim_{h→0} (∆hg(x) − ∆hg(y))/|x − y|^α = (Dlg(x) − Dlg(y))/|x − y|^α,
and taking the supremum over x ≠ y gives |Dlg|Cα(A) ≤ c, where we used that c is independent of h.
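The Claim can be illustrated numerically; the sketch below (in Python, with the hypothetical choices g(x) = x^{3/2} and α = 1/2, so that Dg(x) = (3/2)x^{1/2} is Hölder-1/2 but not Lipschitz) checks that the Cα seminorms of the difference quotients stay bounded uniformly in h and that the limit Dg obeys the same bound:

```python
g = lambda x: x ** 1.5              # g in C^{1,1/2} on [0,1]
dg = lambda x: 1.5 * x ** 0.5       # D g, Holder-1/2 but not Lipschitz at 0
alpha = 0.5
xs = [i / 100 for i in range(101)]  # grid on [0, 1]

def holder_seminorm(f):
    """Discrete C^alpha seminorm: sup |f(x)-f(y)| / |x-y|^alpha over the grid."""
    return max(abs(f(x) - f(y)) / abs(x - y) ** alpha
               for x in xs for y in xs if x != y)

# Seminorms of Delta_h g for shrinking h: bounded uniformly in h, as the Claim needs.
sems = [holder_seminorm(lambda x, h=h: (g(x + h) - g(x)) / h)
        for h in (1e-1, 1e-2, 1e-3)]
assert all(s < 3.0 for s in sems)
assert holder_seminorm(dg) < 3.0   # the bound passes to the limit D g
```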
The case k ≥ 2. Let k = 2. By the k = 1 case we can legitimately take 3 derivatives as
u ∈ C3,α(Ω). One has
L(Dlu) = f′ := Dlf − Dlaij · Diju − Dlbi · Diu − Dlc · u
with Dlu, f ′ ∈ C1,α(Ω). So again by the k = 1 case we have now Dlu ∈ C3,α(Ω), hence u ∈ C4,α(Ω).
The instances k ≥ 3 are in the same spirit.
Boundary regularity
Let Ω be a C2,α domain, i.e whose boundary is locally the graph of a C2,α function. Let L be
uniformly elliptic with Cα coefficients and c ≤ 0.
Theorem. Let f ∈ Cα(Ω), ϕ ∈ C2,α(∂Ω), and u ∈ C2(Ω) ∩ C0( ¯Ω) satisfy
Lu = f on Ω,
u = ϕ on ∂Ω,
with 0 < α < 1. Then u ∈ C2,α( ¯Ω).
Proof. Our previous results give u ∈ C2,α(Ω) and we seek to extend it to those points in ∂Ω. Note
that even though u = ϕ on ∂Ω and ϕ is C2,α there this does not give the same property for u. It
just gives that u is C2,α in directions tangent to ∂Ω, but not in directions leading to the boundary.
The question is local: restrict attention to B(x0, R) ∩ ¯Ω for each x0 ∈ ∂Ω. We choose a C2,α
homeomorphism Ψ1 : Rn → Rn sending B(x0, R) ∩ ∂Ω to a portion of a (flat) hyperplane and
∂B(x0, R) ∩ Ω to the boundary of half a disc. We then choose another C2,α homeomorphism
Ψ2 : Rn → Rn sending the whole half disc into a disc (= a ball). Therefore Ψ2 ◦ Ψ1 maps our
original boundary portion into a portion of the boundary of a ball.
Similarly to previous computations of this sort we define the induced operator ˜L on the induced
domain Ψ2 ◦ Ψ1(B(x0, R) ∩ Ω) and define the induced functions ˜u, ˜ϕ, ˜f and we get a new Dirichlet
problem with all norms of our original objects equivalent to those of our induced ones. Note that
still c̃ := c ◦ Ψ1⁻¹ ◦ Ψ2⁻¹ ≤ 0, therefore by our theory there exists a unique solution v ∈ C2,α(Ψ2 ◦ Ψ1(B(x0, R) ∩ Ω) ∪ Ψ2 ◦ Ψ1(B(x0, R) ∩ ∂Ω)) ∩ C0(Ψ2 ◦ Ψ1(B(x0, R) ∩ ¯Ω)) for the induced Dirichlet problem. Now our ũ also solves it. So by uniqueness ũ = v, and ũ has C2,α regularity up to the induced boundary portion; by pulling back through the C2,α diffeomorphisms we get that so does u.
Remark. The assumption c ≤ 0 is not necessary, although modifying the proof is non-trivial without this assumption (exercise). We needed it in order to be able to use our existence result. But since we already assume a solution exists, we may use some of our previous results which do not need c ≤ 0 and which secure C2,α regularity up to the boundary.
15.083J/6.859J Integer Optimization
Lecture 8: Duality I
1 Outline
• Duality from lift and project
• Lagrangean duality
2 Duality from lift and project
• ZIP = max c′x
  s.t. Ax = b,
  xi ∈ {0, 1}.
• {x ∈ Rn | Ax = b, x ≥ 0} is bounded for all b.
• Without loss of generality xi + xi+n = 1 are included in Ax = b.
2.1 LP1
ZLP1 = max Σ_{S⊆N} (Σ_{j∈S} cj) wS
s.t. (Σ_{j∈S} Aj − b) wS = 0 ∀ S ⊆ N,
Σ_{S⊆N} wS = 1,
wS ≥ 0.
Theorem: ZIP = ZLP1.
2.2 LP2
yS = Σ_{T : S⊆T} wT.
ZLP2 = max Σ_{j∈N} cj y{j}
s.t. (Σ_{j∈N} Aj − b) yN = 0,
(Σ_{j∈S} Aj − b) yS + Σ_{j∉S} Aj yS∪{j} = 0 ∀ S ⊆ N,
yS ≥ 0, y∅ = 1.
Theorem: ZLP1 = ZLP2.
2.3 Lift-Project
• Inequality form: Σ_{j∈N} Aj xj ≤ b.
• Multiply the constraints by Π_{i∈S} xi for all S ⊆ N to obtain, using xi² = xi,
(Σ_{j∈S} Aj) Π_{i∈S} xi + Σ_{j∉S} Aj Π_{i∈S∪{j}} xi ≤ b Π_{i∈S} xi.
• Define yS = Π_{i∈S} xi, noting that yS ≥ 0, and set y∅ = 1:
(Σ_{j∈S} Aj − b) yS + Σ_{j∉S} Aj yS∪{j} ≤ 0.
2.4 The dual problem
min u′∅ b
s.t. u′{j}(Aj − b) + u′∅ Aj ≥ cj ∀ j ∈ N,
u′S(Σ_{j∈S} Aj − b) + Σ_{j∈S} u′S\{j} Aj ≥ 0 ∀ S ⊆ N, |S| ≥ 2.
2.5 Strong Duality
Suppose that the only feasible solution to Ax = 0, x ≥ 0 is the vector 0.
• (Weak duality) If x is a feasible solution to the primal problem and u is a feasible solution to the dual problem, then c′x ≤ u′∅ b.
• (Strong duality) If the primal problem has an optimal solution, so does its dual problem, and the respective optimal costs are equal.
2.6 Complementary slackness
Let x and u be feasible solutions for the primal and dual. Then x and u are optimal solutions if and only if
( u′{j}(Aj − b) + u′∅ Aj − cj ) xj = 0 ∀ j ∈ N,
( u′S(Σ_{j∈S} Aj − b) + Σ_{j∈S} u′S\{j} Aj ) Π_{j∈S} xj = 0 ∀ S ⊆ N, |S| ≥ 2.
2.7 Example
Primal:
maximize x1 + 2x2 + 3x3 + 5x4
subject to 3x1 + 5x2 + 7x3 + 9x4 = 12,
xi ∈ {0, 1}, i = 1, 2, 3, 4.
Dual:
minimize 12u∅
subject to −9u1 + 3u∅ ≥ 1,
−7u2 + 5u∅ ≥ 2,
−5u3 + 7u∅ ≥ 3,
−3u4 + 9u∅ ≥ 5,
−4u1,2 + 5u1 + 3u2 ≥ 0,
−2u1,3 + 7u1 + 3u3 ≥ 0,
0·u1,4 + 9u1 + 3u4 ≥ 0,
and at the optimum these constraints are all satisfied with equality.
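Since the example has only four binary variables, the primal value can be verified by brute force (a Python sketch, not from the slides):

```python
from itertools import product

c, a, b = [1, 2, 3, 5], [3, 5, 7, 9], 12

# Enumerate all x in {0,1}^4 satisfying the equality constraint a'x = b.
feasible = [x for x in product((0, 1), repeat=4)
            if sum(ai * xi for ai, xi in zip(a, x)) == b]
z_ip = max(sum(ci * xi for ci, xi in zip(c, x)) for x in feasible)
# Only (0,1,1,0) and (1,0,0,1) are feasible; the best objective is 1 + 5 = 6.
assert z_ip == 6
```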
3 Lagrangean duality
ZIP = min c′x
s.t. Ax ≥ b,
Dx ≥ d,
x ∈ Zn,    (∗)
X = {x ∈ Zn | Dx ≥ d}.
Let λ ≥ 0.
Z(λ) = min c′x + λ′(b − Ax)
s.t. x ∈ X.
3.1 Weak duality
• If problem (∗) has an optimal solution, then Z(λ) ≤ ZIP for λ ≥ 0.
• The function Z(λ) is concave.
• Lagrangean dual:
ZD = max Z(λ)
s.t. λ ≥ 0.
• ZD ≤ ZIP.
3.2 Characterization
ZD = min c′x
s.t. Ax ≥ b,
x ∈ conv(X).
3.3 Proof outline
• Z(λ) = min_{x∈X} ( c′x + λ′(b − Ax) ).
• Z(λ) = min_{x∈conv(X)} ( c′x + λ′(b − Ax) ).
• ZD = max_{λ≥0} min_{x∈conv(X)} ( c′x + λ′(b − Ax) ).
• Let xk, k ∈ K, and wj, j ∈ J, be the extreme points and extreme rays of conv(X). Then
Z(λ) = −∞ if (c′ − λ′A)wj < 0 for some j ∈ J,
Z(λ) = min_{k∈K} ( c′xk + λ′(b − Axk) ) otherwise.
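The shape of Z(λ) and weak duality can be seen on a tiny made-up instance (all data below are hypothetical, and λ is a scalar because a single constraint is relaxed):

```python
from itertools import product

# min c'x  s.t.  a'x >= b,  x in X = {0,1}^2  (toy data for illustration only).
c, a, b = [2, 3], [1, 1], 1
X = list(product((0, 1), repeat=2))

def Z(lam):
    # Minimum of finitely many affine functions of lam: concave, piecewise linear.
    return min(sum(ci * xi for ci, xi in zip(c, x))
               + lam * (b - sum(ai * xi for ai, xi in zip(a, x))) for x in X)

z_ip = min(sum(ci * xi for ci, xi in zip(c, x)) for x in X
           if sum(ai * xi for ai, xi in zip(a, x)) >= b)
grid = [i / 10 for i in range(101)]
z_d = max(Z(l) for l in grid)
assert all(Z(l) <= z_ip for l in grid)  # weak duality: Z(lambda) <= Z_IP
assert z_d == z_ip == 2                 # on this instance the dual is tight
```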
• ZD = maximize min_{k∈K} ( c′xk + λ′(b − Axk) )
subject to (c′ − λ′A)wj ≥ 0, j ∈ J,
λ ≥ 0,
which is equivalent to the linear program
maximize y
subject to y + λ′(Axk − b) ≤ c′xk, k ∈ K,
λ′Awj ≤ c′wj, j ∈ J,
λ ≥ 0.
• Dual:
minimize c′( Σ_{k∈K} αk xk + Σ_{j∈J} βj wj )
subject to Σ_{k∈K} αk = 1,
A( Σ_{k∈K} αk xk + Σ_{j∈J} βj wj ) ≥ b,
αk, βj ≥ 0, k ∈ K, j ∈ J.
[Figure: the concave piecewise-linear function Z(λ), and the sets conv(X) and {Ax ≥ b} in the (x1, x2) plane with the points xIP, xD, xLP and the cost vector c marked.]
MIT OpenCourseWare
http://ocw.mit.edu
15.083J / 6.859J Integer Programming and Combinatorial Optimization
Fall 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Linear Circuits Analysis. Superposition, Thevenin /Norton Equivalent circuits
So far we have explored time-independent (resistive) elements that are also linear.
A time-independent element is one for which we can plot an i-v curve: the current is only a function of the voltage; it does not depend on the rate of change of the voltage. We will see later that capacitors and inductors are not time-independent elements. Time-independent elements are often called resistive elements.
Note that we often have a time-dependent signal applied to time-independent elements. This is fine; we only need to analyze the circuit characteristics at each instance in time. We will explore this further in a few classes from now.
Linearity
A function f is linear if for any two inputs x1 and x2
f(x1 + x2) = f(x1) + f(x2)
Resistive circuits are linear. That is if we take the set {xi} as the inputs to a circuit and
f({xi}) as the response of the circuit, then the above linear relationship holds. The
response may be for example the voltage at any node of the circuit or the current through
any element.
Let's explore the following example: two voltage sources Vs1 and Vs2 in series with a resistor R, carrying current i. KVL for this circuit gives
Vs1 + Vs2 − iR = 0   (1.1)
or
i = (Vs1 + Vs2)/R   (1.2)
6.071/22.071 Spring 2006. Chaniotakis and Cory
And as we see the response of the circuit depends linearly on the voltages Vs1 and Vs2.
A useful way of viewing linearity is to consider suppressing sources. A voltage source is suppressed by setting the voltage to zero: that is, by short-circuiting the voltage source.
Consider again the simple circuit above. We could view it as the linear superposition of two circuits, each of which has only one voltage source: Vs1 driving R with current i1, and Vs2 driving R with current i2. The total current is the sum of the currents in each circuit:
i = i1 + i2 = Vs1/R + Vs2/R = (Vs1 + Vs2)/R   (1.3)
which is the same result obtained by the application of KVL around the original circuit.
If the circuit we are interested in is linear, then we can use superposition to simplify the
analysis. For a linear circuit with multiple sources, suppress all but one source and
analyze the circuit. Repeat for all sources and add the results to find the total response
for the full circuit.
Independent sources may be suppressed as follows:
Voltage sources: a source with v = Vs is suppressed by replacing it with a short circuit, forcing v = 0.
Current sources: a source with i = Is is suppressed by replacing it with an open circuit, forcing i = 0.
An example:
Consider the following example of a linear circuit with two sources, a voltage source Vs and a current source Is, together with resistors R1 and R2. Let's analyze the circuit using superposition.
First let's suppress the current source and analyze the circuit with the voltage source acting alone. Based on just the voltage source, the currents through the resistors are
i1v = 0   (1.4)
i2v = Vs/R2   (1.5)
Next we calculate the contribution of the current source acting alone.
Notice that R2 is shorted out (there is no voltage across R2), and therefore there is no current through it. The current through R1 is Is, and so the voltage drop across R1 is
v1 = Is R1   (1.6)
And so, adding the two contributions, the total currents are
i1 = Is   (1.7)
i2 = Vs/R2   (1.8)
How much current is going through the voltage source Vs?
Another example:
For the following circuit let's calculate the node voltage v, where Vs connects through R1 to the node, and R2 and the current source Is connect the node to ground.
Nodal analysis gives
(Vs − v)/R1 + Is − v/R2 = 0   (1.9)
or
v = R2/(R1 + R2) · Vs + R1R2/(R1 + R2) · Is   (1.10)
We notice that the answer given by Eq. (1.10) is the sum of two terms: one due to the
voltage and the other due to the current.
Now we will solve the same problem using superposition
The voltage v will have a contribution v1 from the voltage source Vs and a contribution
v2 from the current source Is.
The contribution v1 is obtained with Is suppressed (open-circuited), and v2 with Vs suppressed (short-circuited):
v1 = Vs · R2/(R1 + R2)   (1.11)
v2 = Is · R1R2/(R1 + R2)   (1.12)
Adding voltages v1 and v2 we obtain the result given by Eq. (1.10).
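A quick numeric check of Eqs. (1.10)-(1.12) in Python (the component values below are arbitrary):

```python
Vs, Is, R1, R2 = 10.0, 2.0, 100.0, 400.0  # arbitrary sample values

v_direct = R2 / (R1 + R2) * Vs + (R1 * R2) / (R1 + R2) * Is  # Eq. (1.10)
v1 = Vs * R2 / (R1 + R2)                                     # voltage source alone
v2 = Is * R1 * R2 / (R1 + R2)                                # current source alone
assert abs(v_direct - (v1 + v2)) < 1e-12  # superposition reproduces the nodal result
```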
More on the i-v characteristics of circuits.
As discussed during the last lecture, the i-v characteristic curve is a very good way to
represent a given circuit.
A circuit may contain a large number of elements and in many cases knowing the i-v
characteristics of the circuit is sufficient in order to understand its behavior and be able to
interconnect it with other circuits.
The following figure illustrates the general concept where a circuit is represented by the
box as indicated. Our communication with the circuit is via the port A-B. This is a single
port network regardless of its internal complexity.
[Figure: a box representing the circuit, containing sources Vn, In and resistors R3, R4, with port terminals A-B carrying current i and voltage v.]
If we apply a voltage v across the terminals A-B as indicated we can in turn measure the
resulting current i . If we do this for a number of different voltages and then plot them on
the i-v space we obtain the i-v characteristic curve of the circuit.
For a general linear network the i-v characteristic curve is a linear function
i = m v + b   (1.13)
Here are some examples of i-v characteristics.
For a single resistor R the characteristic is the straight line i = v/R through the origin.
In general the i-v characteristic does not pass through the origin. This is shown by the next circuit, a voltage source Vs in series with R, for which the current i and the voltage v are related by
iR + Vs − v = 0   (1.14)
or
i = (v − Vs)/R   (1.15)
The line crosses the v-axis at Vs and the i-axis at −Vs/R.
Similarly, when a current source is connected in parallel with a resistor the i-v relationship is
i = −Is + v/R   (1.16)
Here the i-axis intercept −Is is the short circuit current and the v-axis intercept R·Is is the open circuit voltage.
Thevenin Equivalent Circuits.
For linear systems the i-v curve is a straight line. In order to define it we need to identify only two points on it. Any two points would do, but perhaps the simplest are where the line crosses the i and v axes.
These two points may be obtained by performing two simple measurements (or by making two simple calculations). With these two measurements we are able to replace the complex
network by a simple equivalent circuit.
This circuit is known as the Thevenin Equivalent Circuit.
Since we are dealing with linear circuits, application of the principle of superposition
results in the following expression for the current i and voltage v relation.
i = m0 v + Σj mj Vj + Σj bj Ij   (1.17)
where Vj and Ij are voltage and current sources in the circuit under investigation, and the coefficients mj and bj are functions of other circuit parameters such as resistances.
And so for a general network we can write
i = m v + b   (1.18)
where
m = m0   (1.19)
and
b = Σj mj Vj + Σj bj Ij   (1.20)
Thevenin's Theorem is stated as follows:
A linear one-port network can be replaced by an equivalent circuit consisting of a voltage source VTh in series with a resistor RTh. The voltage VTh is equal to the open circuit voltage across the terminals of the port, and the resistance RTh is equal to the open circuit voltage VTh divided by the short circuit current Isc.
The procedure to calculate the Thevenin Equivalent Circuit is as follows:
1. Calculate the equivalent resistance of the circuit (RTh) by setting all voltage and current sources to zero.
2. Calculate the open circuit voltage Voc, also called the Thevenin voltage VTh.
The equivalent circuit is then the open circuit voltage Voc in series with RTh, presented at the port A-B in place of the original circuit. If we short terminals A-B, the short circuit current Isc is
Isc = VTh/RTh   (1.21)
Example:
Find vo using Thevenin's theorem in the circuit where a 12 V source in series with a 6kΩ resistor feeds a 6kΩ resistor to ground, followed by a 2kΩ resistor in series with the 1kΩ load across which vo is taken.
The 1kΩ resistor is the load. Remove it and compute the open circuit voltage Voc, or VTh.
Voc is 6V. Do you see why? (With the load removed no current flows through the 2kΩ, and the two 6kΩ resistors divide the 12 V in half.)
Now let's find the Thevenin equivalent resistance RTh by suppressing the 12 V source:
RTh = 6kΩ // 6kΩ + 2kΩ = 5kΩ
And the Thevenin circuit is the 6 V source in series with 5kΩ. Reconnecting the 1kΩ load,
vo = 1 Volt.
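The numbers in this example are easy to verify in Python:

```python
def parallel(a, b):
    """Resistance of a and b in parallel."""
    return a * b / (a + b)

V_th = 12.0 * 6e3 / (6e3 + 6e3)   # divider: open-circuit voltage is 6 V
R_th = parallel(6e3, 6e3) + 2e3   # 6k // 6k + 2k = 5k
v_o = V_th * 1e3 / (R_th + 1e3)   # reconnect the 1k load
assert abs(v_o - 1.0) < 1e-12     # vo = 1 Volt, as stated
```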
Another example:
Determine the Thevenin equivalent circuit seen by the resistor RL in the bridge circuit where Vs drives the branch R1-R3 in parallel with the branch R2-R4, and RL connects the two branch midpoints.
Resistor RL is the load resistor, and the balance of the system interfaces with it. Therefore in order to characterize the network we must look at the network characteristics in the absence of RL.
First let's calculate the equivalent resistance RTh. To do this we short the voltage source; the resistance seen by looking into port A-B is then the parallel combination
R13 = R1R3/(R1 + R3)   (1.22)
in series with the parallel combination
R24 = R2R4/(R2 + R4)   (1.23)
so that
RTh = R13 + R24   (1.24)
The open circuit voltage across terminals A-B is equal to
VTh = vA − vB = Vs( R3/(R1 + R3) − R4/(R2 + R4) )   (1.25)
And we have obtained the equivalent circuit with the Thevenin resistance given by Eq. (1.24) and the Thevenin voltage given by Eq. (1.25).
The Wheatstone Bridge Circuit as a measuring instrument.
Measuring small changes in large quantities is one of the most common challenges in measurement. If the quantity you are measuring has a maximum value, Vmax, and the measurement device is set to have a dynamic range that covers 0 - Vmax, then the errors will be a fraction of Vmax. However, many measurable quantities only vary slightly, and so it would be advantageous to make a difference measurement over the limited range Vmax − Vmin. The Wheatstone bridge circuit accomplishes this.
The Wheatstone bridge is composed of three known resistors and one unknown, Ru; by measuring either the voltage or the current across the center of the bridge the unknown resistor can be determined. We will focus on the measurement of the voltage vu between the bridge midpoints A and B.
The analysis can proceed by considering the two voltage dividers formed by the resistor pairs R1, R3 and R2, Ru.
The voltage vu is given by
vu = vA − vB   (1.26)
where
vA = Vs · R3/(R1 + R3)   (1.27)
and
vB = Vs · Ru/(R2 + Ru)   (1.28)
And vu becomes:
vu = Vs( R3/(R1 + R3) − Ru/(R2 + Ru) )   (1.29)
A typical use of the Wheatstone bridge is to have R1 = R2 and R3 ≈ Ru. So let's take
Ru = R3 + ε   (1.30)
Under these simplifications,
vu = Vs( R3/(R1 + R3) − Ru/(R2 + Ru) ) = Vs( R3/(R1 + R3) − (R3 + ε)/(R1 + R3 + ε) )   (1.31)
As discussed above we are interested in the case where the variation in Ru is small, that is ε ≪ R1 + R3. Then the above expression may be approximated, to first order in ε, as
vu ≈ −Vs ε R1/(R1 + R3)²   (1.32)
The Norton equivalent circuit
A linear one-port network can be replaced by an equivalent circuit consisting of a current source In in parallel with a resistor Rn. The current In is equal to the short circuit current through the terminals of the port, and the resistance Rn is equal to the open circuit voltage Voc divided by the short circuit current In.
The Norton equivalent circuit model is the source In in parallel with Rn, presenting current i and voltage v at the port. By using KCL we derive the i-v relationship for this circuit:
i + In − v/Rn = 0   (1.33)
or
i = v/Rn − In   (1.34)
For i = 0 (open circuit) the open circuit voltage is
Voc = In Rn   (1.35)
and the short circuit current is
Isc = In   (1.36)
If we choose Rn = RTh and In = Voc/RTh, the Thevenin and Norton circuits are equivalent.
[Figure: the Thevenin circuit (Voc in series with RTh across terminals A-B) and the Norton circuit (In in parallel with RTh), both presenting the same i-v relation at the port.]
We may use this equivalence to analyze circuits by performing so-called source transformations (voltage to current or current to voltage).
For example, let's consider the following circuit, for which we would like to calculate the indicated current i by using the source transformation method: a 3 V source and a 2 A source drive a network of 3 Ω and 6 Ω resistors.
By performing the source transformations we will be able to obtain the solution by simplifying the circuit.
First, let's transform the part of the circuit contained within the dotted rectangle: the 3 V source in series with its 6 Ω resistor becomes a 0.5 A Norton source in parallel with 6 Ω.
Next consider the right side: the 2 A source in parallel with 3 Ω becomes a 6 V Thevenin source in series with 3 Ω, which combines with the adjacent 3 Ω into 6 V in series with 6 Ω; transforming back gives a 1 A Norton source in parallel with 6 Ω.
We are left with the 0.5 A and 1 A sources feeding three 6 Ω resistors in parallel, and so from current division we obtain
i = (1/3)(3/2) = 1/2 A   (1.37)
17
Another example: Find the Norton equivalent circuit at terminals X-Y.
Is
R3
Vs
R1
R4
R2
X
Y
First we calculate the equivalent resistance across terminals X-Y by setting all sources to
zero. The corresponding circuit is
R3
R1
R4
R2
X
Y
Rn
And Rn is
Rn
=
R R
2( 1
R
3
R
4)
+
+
R
R
R
R
3
2
1
4
+
+
+
(1.38)
Next we calculate the short circuit current
Is
R3
Vs
R1
R4
Isc
R2
X
Y
6.071/22.071 Spring 2006. Chaniotakis and Cory
18
Resistor R2 does not affect the calculation and so the corresponding circuit is
Is
R3
Vs
R1
R4
Isc
X
Y
By applying the mesh method we have
Isc
=
Vs
R
1
+
−
R
IsR
3
+
3
R | https://ocw.mit.edu/courses/6-071j-introduction-to-electronics-signals-and-measurement-spring-2006/004de24dd40e3938b0393a6a2bd91788_linear_crct_ana.pdf |
applying the mesh method we have
Isc
=
Vs
R
1
+
−
R
IsR
3
+
3
R
4
=
In
(1.39)
With the values for Rn and Isc given by Equations (1.38) and (1.39) the Norton
equivalent circuit is defined
In
Rn
X
Y
6.071/22.071 Spring 2006. Chaniotakis and Cory
19
Power Transfer.
In many cases an electronic system is designed to provide power to a load. The general
problem is depicted on Figure 1 where the load is represented by resistor RL.
linear
electronic
system
RL
Figure 1.
By considering the Thevenin equivalent circuit of the system seen by the load resistor we
can represent the problem by the circuit shown on Figure 2.
RTh
i
VTh
+
vL
-
RL
The power delivered to the load resistor RL is
Figure 2
The current i is given by
And the power becomes
2
P i RL
=
i
=
VTh
RTh RL
+
P
⎛
= ⎜
⎝
VTh
RTh RL
+
2
⎞
⎟
⎠
RL
(1.40)
(1.41)
(1.42)
For our electronic system, the voltage VTh and resistance RTh are known. Therefore if we
vary RL and plot the power delivered to it as a function of RL we obtain the general
behavior shown on the plot of Figure 3.
The curve has a maximum which occurs at RL=RTh.
[Figure 3: power P delivered to the load as a function of RL, with a maximum at RL = RTh.]
In order to show that the maximum occurs at RL=RTh we differentiate Eq. (1.42) with
respect to RL and then set the result equal to zero.
    dP/dRL = VTh² [ (RTh + RL)² − 2 RL (RTh + RL) ] / (RTh + RL)⁴        (1.43)

and

    dP/dRL = 0  →  RTh − RL = 0        (1.44)
and so the maximum power occurs when the load resistance RL is equal to the Thevenin equivalent resistance RTh.¹

Condition for maximum power transfer:

    RL = RTh        (1.45)

The maximum power transferred from the source to the load is

    Pmax = VTh² / (4 RTh)        (1.46)
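A numerical spot-check of the maximum-power-transfer result: sweep RL for illustrative values of VTh and RTh and confirm that the power of Eq. (1.42) peaks at RL = RTh with Pmax = VTh²/(4 RTh):

```python
# Brute-force sweep of the load resistance (VTh and RTh are illustrative).
VTh, RTh = 10.0, 50.0

def power(RL):
    # Eq. (1.42): power delivered to the load.
    return (VTh / (RTh + RL))**2 * RL

loads = [k / 10 for k in range(1, 2001)]   # RL from 0.1 to 200 ohms
best_RL = max(loads, key=power)
print(best_RL)           # 50.0, i.e. RL = RTh
print(power(best_RL))    # ~0.5 W = VTh**2 / (4 * RTh)
```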
¹ By taking the second derivative d²P/dRL² and setting RL = RTh we can easily show that d²P/dRL² < 0, so the point RL = RTh corresponds to a maximum.
Example:
For the Wheatstone bridge circuit below, calculate the maximum power delivered to
resistor RL.
[Figure: Wheatstone bridge driven by Vs, with R1 and R2 in the upper arms, R3 and R4 in the lower arms, and RL bridging the midpoints.]
Previously we calculated the Thevenin equivalent circuit seen by resistor RL. The
Thevenin resistance is given by Equation (1.24) and the Thevenin voltage is given by
Equation (1.25). Therefore the system reduces to the following equivalent circuit
connected to resistor RL.
[Figure: Thevenin equivalent VTh, RTh connected to RL, with load current i and load voltage vL.]
For convenience we repeat here the values for RTh and VTh.
    VTh = Vs ( R3/(R1 + R3) − R4/(R2 + R4) )        (1.47)

    RTh = R1 R3/(R1 + R3) + R2 R4/(R2 + R4)        (1.48)

The maximum power delivered to RL is
    Pmax = VTh² / (4 RTh) = Vs² ( R3/(R1 + R3) − R4/(R2 + R4) )² / [ 4 ( R1 R3/(R1 + R3) + R2 R4/(R2 + R4) ) ]        (1.49)
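For concreteness, the bridge formulas can be evaluated numerically; the component values below are made up for illustration:

```python
# Thevenin equivalent of the Wheatstone bridge and the resulting Pmax
# (Eqs. 1.47-1.49). All component values are illustrative.
Vs = 10.0
R1, R2, R3, R4 = 100.0, 200.0, 300.0, 150.0  # ohms (assumed)

VTh = Vs * (R3 / (R1 + R3) - R4 / (R2 + R4))   # Eq. (1.47)
RTh = R1 * R3 / (R1 + R3) + R2 * R4 / (R2 + R4)  # Eq. (1.48)
Pmax = VTh**2 / (4 * RTh)                        # Eq. (1.49)
print(VTh, RTh, Pmax)
```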
In various applications we are interested in decreasing the voltage across a load resistor without changing the output resistance of the circuit seen by the load. In such a situation the power delivered to the load continues to have a maximum at the same load resistance. Such a circuit is called an attenuator, and we will investigate a simple example to illustrate the principle.
Consider the circuit shown in the following Figure.
[Figure: Thevenin source VTh, RTh feeding an attenuator (series resistor Rs and shunt resistor Rp, enclosed in a dotted rectangle) whose output port a-b, with voltage vo, drives RL.]
The network contained in the dotted rectangle is the attenuator circuit.
The constraints are as follows:
1. The equivalent resistance seen through the port a-b is RTh.
2. The voltage vo = k VTh.
Determine the requirements on resistors Rs and Rp.
First let’s calculate the expression of the equivalent resistance seen across terminals a-b.
By shorting the voltage source the circuit for the calculation of the equivalent resistance
is
[Figure: RTh and Rs in series, in parallel with Rp, seen as Reff across terminals a-b.]
The effective resistance is the parallel combination of Rp with Rs+RTh.
    Reff = (RTh + Rs) // Rp = (RTh + Rs) Rp / (RTh + Rs + Rp)        (1.50)
Which is constrained to be equal to RTh:

    RTh = (RTh + Rs) Rp / (RTh + Rs + Rp)        (1.51)

The second constraint gives

    k VTh = VTh Rp / (Rp + RTh + Rs)        (1.52)

And so the constant k becomes:

    k = Rp / (Rp + RTh + Rs)        (1.53)

By combining Equations (1.51) and (1.53) we obtain

    Rs = ((1 − k)/k) RTh        (1.54)

and

    Rp = RTh / (1 − k)        (1.55)

The maximum power delivered to the load occurs at RL = RTh and is equal to

    Pmax = k² VTh² / (4 RTh)        (1.56)
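The attenuator design can be verified numerically: pick RTh and k, compute Rs and Rp from Eqs. (1.54)-(1.55), and check that both constraints hold. The values below are illustrative:

```python
# Attenuator design check (RTh and k are illustrative choices).
RTh = 50.0   # ohms
k = 0.25     # desired vo = k * VTh

Rs = (1 - k) / k * RTh   # Eq. (1.54)
Rp = RTh / (1 - k)       # Eq. (1.55)

# Constraint 1 (Eq. 1.51): resistance seen at a-b must equal RTh.
Reff = Rp * (RTh + Rs) / (RTh + Rs + Rp)
# Constraint 2 (Eq. 1.53): the divider ratio must equal k.
ratio = Rp / (Rp + RTh + Rs)

print(Reff, ratio)   # ~50.0 and ~0.25
```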
Representative Problems:
P1.
Find the voltage vo using superposition.
(Ans. 4.44 Volts)
[Figure: circuit with 6 V and 2 V sources and resistors of 1 Ω, 2 Ω, 3 Ω, and 4 Ω; find vo.]
P2.
Calculate io and vo for the circuit below using superposition
(Ans. io=1.6 A, vo=3.3 V)
[Figure: circuit with a 12 V source, 2 A and 1 A current sources, and resistors of 1 Ω, 2 Ω, 3 Ω (×2), and 4 Ω (×2); find io and vo.]
P3.
Using superposition calculate vo and io as indicated in the circuit below.
(Ans. io=1.35 A, vo=10 V)
[Figure: circuit with 24 V and 12 V sources, a 2 A current source, and resistors of 1 Ω, 3 Ω (×3), and 4 Ω.]
P4.
Find the Norton and the Thevenin equivalent circuit across terminals A-B of the circuit. (Ans. Rn = 1.7 Ω, VTh = 2.12 V, In = 1.25 A)
[Figure: circuit with a 5 A current source and resistors of 4 Ω, 3 Ω (×2), and 1 Ω, with terminals A-B.]
P5.
Calculate the value of the resistor R so that the maximum power is transferred to the 5 Ω resistor. (Ans. 10 Ω)
[Figure: circuit with 24 V and 12 V sources and resistors R, 5 Ω, and 10 Ω.]
P6. Determine the value of resistor R so that maximum power is delivered to it from
the circuit connected to it.
[Figure: source Vs with resistors R1-R4 and load resistor R.]
P7.
The box in the following circuit represents a general electronic element. Determine the relationship between the voltage v across the element and the current i flowing through it, as indicated.
[Figure: source Vs with resistors R1, R2, R3 connected to the element, which carries current i and voltage v.]
UNCERTAINTY PRINCIPLE AND COMPATIBLE OBSERVABLES
B. Zwiebach
October 21, 2013
Contents
1 Uncertainty defined  1
2 The Uncertainty Principle  3
3 The Energy-Time uncertainty  6
4 Lower bounds for ground state energies  9
5 Diagonalization of Operators  11
6 The Spectral Theorem  12
7 Simultaneous Diagonalization of Hermitian Operators  16
8 Complete Set of Commuting Observables  18

1  Uncertainty defined
As we know, observables are associated to Hermitian operators. Given one such operator A we can
use it to measure some property of the physical system, as represented by a state Ψ.
If the state is an eigenstate of the operator A, we have no uncertainty in the value of the observable, which
coincides with the eigenvalue corresponding to the eigenstate. We only have uncertainty in the value
of the observable if the physical state is not an eigenstate of A, but rather a superposition of various
eigenstates with different eigenvalues.
We want to define the uncertainty ΔA(Ψ) of the Hermitian operator A on the state Ψ. This
uncertainty should vanish if and only if the state is an eigenstate of A. The uncertainty, moreover,
should be a real number. In order to define such uncertainty we first recall | https://ocw.mit.edu/courses/8-05-quantum-physics-ii-fall-2013/005979fa741c3ea2e0430456b70caf93_MIT8_05F13_Chap_05.pdf |
should be a real number. In order to define such uncertainty we first recall that the expectation value
of A on the state Ψ, assumed to be normalized, is given by
    ⟨A⟩ = ⟨Ψ|A|Ψ⟩ = (Ψ, AΨ) .        (1.1)
The expectation ⟨A⟩ is guaranteed to be real since A is Hermitian. We then define the uncertainty as the norm of the vector obtained by acting with (A − ⟨A⟩ I) on the physical state (I is the identity operator):

    ΔA(Ψ) ≡ ‖ (A − ⟨A⟩ I) Ψ ‖ .        (1.2)
The uncertainty, so defined, is manifestly non-negative. If the uncertainty is zero, the vector inside the norm is zero and therefore:

    ΔA(Ψ) = 0  →  (A − ⟨A⟩ I) Ψ = 0  →  A Ψ = ⟨A⟩ Ψ ,        (1.3)

and the last equation confirms that the state is indeed an eigenstate of A (note that ⟨A⟩ is a number).
You should also note that ⟨A⟩ is indeed the eigenvalue, since taking the eigenvalue equation AΨ = λΨ and forming the inner product with another Ψ we get

    (Ψ, AΨ) = λ (Ψ, Ψ) = λ  →  λ = ⟨A⟩ .        (1.4)
Alternatively, if the state Ψ is an eigenstate, we now know that the eigenvalue is ⟨A⟩ and therefore the state (A − ⟨A⟩ I) Ψ vanishes and its norm is zero. We have therefore shown that

    The uncertainty ΔA(Ψ) vanishes if and only if Ψ is an eigenstate of A .        (1.5)
To compute the uncertainty one usually squares the expression in (1.2) so that

    (ΔA(Ψ))² = ( (A − ⟨A⟩ I) Ψ , (A − ⟨A⟩ I) Ψ ) .        (1.6)
Since the operator A is assumed to be Hermitian and ⟨A⟩ is real, we have (A − ⟨A⟩ I)† = A − ⟨A⟩ I, and therefore we can move the operator on the first entry onto the second one to find

    (ΔA(Ψ))² = ( Ψ , (A − ⟨A⟩ I)² Ψ ) .        (1.7)

While this is a reasonable form, we can simplify it further by expansion

    (ΔA(Ψ))² = ( Ψ , (A² − 2⟨A⟩ A + ⟨A⟩² I) Ψ ) .        (1.8)

The last two terms combine and we find

    (ΔA(Ψ))² = ⟨A²⟩ − ⟨A⟩² .        (1.9)
Since the left-hand side is greater than or equal to zero, this incidentally shows that the expectation value of A² is larger than the expectation value of A, squared:

    ⟨A²⟩ ≥ ⟨A⟩² .        (1.10)
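Equation (1.9) is easy to check numerically. The sketch below (operator and states chosen purely for illustration) computes ΔA for σx acting on a non-eigenstate and on an eigenstate:

```python
# Numerical illustration of (DeltaA)^2 = <A^2> - <A>^2, Eq. (1.9).
import numpy as np

A = np.array([[0, 1], [1, 0]], dtype=complex)   # Hermitian operator (sigma_x)
psi = np.array([1, 0], dtype=complex)           # normalized state, not an eigenstate

expA = np.vdot(psi, A @ psi).real               # <A> = (Psi, A Psi)
expA2 = np.vdot(psi, A @ A @ psi).real          # <A^2>
deltaA = np.sqrt(expA2 - expA**2)               # uncertainty via Eq. (1.9)
print(deltaA)                                   # 1.0

# For an eigenstate of A the uncertainty vanishes, consistent with Eq. (1.5):
eig = np.array([1, 1], dtype=complex) / np.sqrt(2)
delta_eig = np.sqrt(np.vdot(eig, A @ A @ eig).real
                    - np.vdot(eig, A @ eig).real**2)
print(delta_eig)   # ~0 up to roundoff
```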
An interesting geometrical interpretation of the uncertainty goes as follows. Consider the one-dimensional vector subspace UΨ generated by Ψ. Take the state AΨ and project it to the subspace UΨ. The projection, we claim, is ⟨A⟩ Ψ, and the part of AΨ in the orthogonal subspace U⊥ is a vector of norm equal to the uncertainty ΔA. Indeed the orthogonal projector P_UΨ is

    P_UΨ = |Ψ⟩⟨Ψ| ,        (1.11)

Figure 1: A state Ψ and the one-dimensional subspace UΨ generated by it. The projection of AΨ to UΨ is ⟨A⟩ Ψ. The orthogonal complement Ψ⊥ is a vector whose norm is the uncertainty ΔA(Ψ).
so that

    P_UΨ A |Ψ⟩ = |Ψ⟩⟨Ψ|A|Ψ⟩ = ⟨A⟩ |Ψ⟩ .        (1.12)

Moreover, the vector A|Ψ⟩ minus its projection must be a vector |Ψ⊥⟩ orthogonal to |Ψ⟩:

    A |Ψ⟩ − ⟨A⟩ |Ψ⟩ = |Ψ⊥⟩ ,        (1.13)

as is easily confirmed by taking the overlap with the bra ⟨Ψ|. Since the norm of the above left-hand side is the uncertainty, we confirm that ΔA = ‖Ψ⊥‖, as claimed. These results are illustrated in Figure 1.
2  The Uncertainty Principle
The uncertainty principle is an inequality that is satisfied by the product of the uncertainties of two
Hermitian operators that fail to commute. Since the uncertainty of an operator on any given physical
state is a number greater than or equal to zero, the product of uncertainties is also a real number
greater than or equal to zero. The uncertainty inequality often gives us a lower bound for this product.
When the two operators in question commute, the uncertainty inequality gives no information.
Let us state the uncertainty inequality. Consider two Hermitian operators A and B and a physical
state Ψ of the quantum system. Let ΔA and ΔB denote the uncertainties of A and B, respectively,
in the state Ψ. Then we have
    (ΔA)² (ΔB)² ≥ ( Ψ , (1/2i) [A, B] Ψ )² .        (2.14)
The left-hand side is a real, non-negative number. For this to be a consistent inequality, the right-hand side must also be a real number that is not negative. Since the right-hand side appears squared, the object inside the parentheses must be real. This can only happen for all Ψ if the operator

    (1/2i) [A, B]        (2.15)
is Hermitian. For this first note that the commutator of two Hermitian operators is anti-Hermitian:

    [A, B]† = (AB)† − (BA)† = B† A† − A† B† = BA − AB = −[A, B] .        (2.16)
The presence of the i then makes the operator in (2.15) Hermitian. Note that the uncertainty inequality can also be written as

    ΔA ΔB ≥ | ( Ψ , (1/2i) [A, B] Ψ ) | ,        (2.17)

where the bars on the right-hand side denote absolute value.
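As a sanity check of inequality (2.17), the sketch below evaluates both sides for A = σx, B = σy on a randomly chosen state (all choices here are illustrative):

```python
# Spot-check of DeltaA * DeltaB >= |<(1/2i)[A,B]>| for Pauli matrices.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1], [1, 0]], dtype=complex)        # sigma_x
B = np.array([[0, -1j], [1j, 0]], dtype=complex)     # sigma_y

psi = rng.normal(size=2) + 1j * rng.normal(size=2)   # random state
psi /= np.linalg.norm(psi)                           # normalize

def uncertainty(Op, state):
    exp = np.vdot(state, Op @ state).real
    exp2 = np.vdot(state, Op @ Op @ state).real
    return np.sqrt(max(exp2 - exp**2, 0.0))

lhs = uncertainty(A, psi) * uncertainty(B, psi)
comm = A @ B - B @ A                                 # [A, B] = 2i sigma_z
rhs = abs(np.vdot(psi, (comm / 2j) @ psi))
print(lhs >= rhs - 1e-12)   # True: the bound holds for this state
```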
Before we prove the theorem, let's do the canonical example! Substituting x̂ for A and p̂ for B results in the position-momentum uncertainty relation you have certainly worked with:

    (Δx)² (Δp)² ≥ ( Ψ , (1/2i) [x̂, p̂] Ψ )² .        (2.18)

Since [x̂, p̂]/(2i) = ℏ/2 we get

    (Δx)² (Δp)² ≥ ℏ²/4  →  Δx Δp ≥ ℏ/2 .        (2.19)
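A Gaussian wavefunction saturates this bound, which can be verified on a grid (ℏ = 1 here; the grid parameters are illustrative):

```python
# Numerical check (hbar = 1) that a Gaussian gives Delta x * Delta p ~ 1/2.
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalize on the grid

def expval(arr):
    # (Psi, arr) with the grid measure dx
    return np.sum(np.conj(psi) * arr * dx).real

mean_x = expval(x * psi)
dx_unc = np.sqrt(expval(x**2 * psi) - mean_x**2)

dpsi = np.gradient(psi, dx)                      # p psi = -i d(psi)/dx
mean_p = expval(-1j * dpsi)
d2psi = np.gradient(dpsi, dx)
dp_unc = np.sqrt(expval(-d2psi) - mean_p**2)     # <p^2> = (Psi, -psi'')

print(dx_unc * dp_unc)   # close to 0.5, the minimum allowed by Eq. (2.19)
```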
We are interested in the proof of the uncertainty inequality for it gives the information that is needed to find the conditions that lead to saturation.

Proof. We define the following two states:

    |f⟩ ≡ (A − ⟨A⟩ I) |Ψ⟩ ,
    |g⟩ ≡ (B − ⟨B⟩ I) |Ψ⟩ .        (2.20)

Note that by the definition (1.2) of uncertainty,

    ⟨f|f⟩ = (ΔA)² ,
    ⟨g|g⟩ = (ΔB)² .        (2.21)
The Schwarz inequality immediately furnishes us an inequality involving precisely the uncertainties

    ⟨f|f⟩ ⟨g|g⟩ ≥ |⟨f|g⟩|² ,        (2.22)

and therefore we have

    (ΔA)² (ΔB)² ≥ |⟨f|g⟩|² = (Re ⟨f|g⟩)² + (Im ⟨f|g⟩)² .        (2.23)

Writing Ǎ = A − ⟨A⟩ I and B̌ = B − ⟨B⟩ I, we now begin to compute the right-hand side:
(2.23)
f
(
g
|
)
=
Ψ
(
AˇBˇ
|
Ψ
|
)
= | https://ocw.mit.edu/courses/8-05-quantum-physics-ii-fall-2013/005979fa741c3ea2e0430456b70caf93_MIT8_05F13_Chap_05.pdf |
)
(2.23)
f
(
g
|
)
=
Ψ
(
AˇBˇ
|
Ψ
|
)
=
Ψ
(
(A
|
I)(B
A
)
− (
B
− (
Ψ
I)
|
)
)
=
Ψ
(
AB
|
Ψ
|
and since
and
f
|
)
g
|
)
go into each other as we exchange A and B,
g
(
f
|
)
=
Ψ
(
AˇBˇ
|
Ψ
|
)
=
Ψ
(
Ψ
BA
|
|
B
) − (
.
A
)
)(
4
A
B
) − (
)(
,
)
(2.24)
(2.25)
From the two equations above we find a nice expression for the imaginary part of
f
(
g
|
:
)
f
|
For the real part the expression is not that simple, so it is best to leave it as the anticommutator of
Ψ
[A, B]
|
|
g
) − (
) =
)
f
Im
(
(2.26)
Ψ
(
g
|
g
|
)
)
.
1
f
= (
2i
(
1
2i
the checked operators:
    Re ⟨f|g⟩ = (1/2) ( ⟨f|g⟩ + ⟨g|f⟩ ) = (1/2) ⟨Ψ| {Ǎ, B̌} |Ψ⟩ .        (2.27)

Back in (2.23) we get

    (ΔA)² (ΔB)² ≥ ( ⟨Ψ| (1/2i) [A, B] |Ψ⟩ )² + ( ⟨Ψ| (1/2) {Ǎ, B̌} |Ψ⟩ )² .        (2.28)
This can be viewed as the most complete form of the uncertainty inequality. It turns out, however,
that the second term on the right hand side is seldom simple enough to be of use, and many times
it can be made equal to zero for certain states. At any rate, the term is positive or zero so it can be
dropped while preserving the inequality. This is often done, thus giving the celebrated form (2.14)
that we have now established.
Now that we have proven the uncertainty inequality, we can ask: What are the conditions for this
inequality to be saturated? If the goal is to minimize uncertainties, under what conditions can we
achieve the minimum possible product of uncertainties? As the proof shows, saturation is achieved
under two conditions:
1. The Schwarz inequality is saturated. For this we need |g⟩ = β |f⟩ where β ∈ ℂ.

2. Re ⟨f|g⟩ = 0, so that the last term in (2.28) vanishes. This means that ⟨f|g⟩ + ⟨g|f⟩ = 0.

Using |g⟩ = β |f⟩ in Condition 2, we get

    ⟨f|g⟩ + ⟨g|f⟩ = β ⟨f|f⟩ + β* ⟨f|f⟩ = (β + β*) ⟨f|f⟩ = 0 ,        (2.29)

which requires β + β* = 0, or that the real part of β vanish. It follows that β must be purely imaginary. So β = iλ, with λ real, and therefore the uncertainty inequality will be saturated if and only if

    |g⟩ = iλ |f⟩ ,  λ ∈ ℝ .        (2.30)

More explicitly this requires

    Saturation Condition:  (B − ⟨B⟩ I) |Ψ⟩ = iλ (A − ⟨A⟩ I) |Ψ⟩ .        (2.31)
This must be viewed as a condition for Ψ, given any two operators A and B. Moreover, note that ⟨A⟩ and ⟨B⟩ are Ψ dependent. What is λ, physically? Well, the norm of λ is actually fixed by the equation. Taking the norm of both sides we get

    ΔB = |λ| ΔA  →  |λ| = ΔB / ΔA .        (2.32)
The classic illustration of this saturation condition is worked out for the x, p uncertainty inequality Δx Δp ≥ ℏ/2. You will find that Gaussian wavefunctions satisfy the saturation condition.
3  The Energy-Time uncertainty

A more subtle form of the uncertainty relation deals with energy and time. The inequality is sometimes
stated vaguely in the form ΔE Δt ≳ ℏ. In here there is no problem in defining ΔE precisely; after
all we have the Hamiltonian operator, and its uncertainty ΔH is a perfect candidate for the ‘energy
uncertainty’. The problem is time. Time is not an operator in quantum mechanics, it is a parameter,
a real number used to describe the way systems change. Unless we define Δt in a precise way we
cannot hope for a well-defined uncertainty relation.
We can try a rough, heuristic definition, in order to illustrate the spirit of the inequality. Consider
a photon that is detected at some point in space, as a passing oscillatory wave of exact duration
T . Even without quantum mechanical considerations we can ask the observer what was the angular
frequency ω of the pulse.
In order to answer our question the observer will attempt to count the
number N of complete oscillations of the waveform that went through. Of course, this number N is
given by T divided by the period 2π/ω of the wave:
N =
ω T
2π
.
(3.33)
The observer, however, will typically fail to count full waves, because as the pulse gets started from
zero and later | https://ocw.mit.edu/courses/8-05-quantum-physics-ii-fall-2013/005979fa741c3ea2e0430456b70caf93_MIT8_05F13_Chap_05.pdf |
typically fail to count full waves, because as the pulse gets started from
zero and later on dies off completely, the waveform will cease to follow the sinusoidal pattern. Thus
we expect an uncertainty ΔN ≳ 1. Given the above relation, this implies an uncertainty Δω in the
value of the angular frequency
    Δω T ≳ 2π .        (3.34)
(3.34)
This is all still classical, the above identity is something electrical engineers are well aware of.
It
represents a limit on the ability to ascertain accurately the frequency of a wave that is observed for
a limited amount of time. This becomes quantum mechanical if we speak of a single photon, whose
energy is E = ℏω. Then ΔE = ℏ Δω, so that multiplying the above inequality by ℏ we get

    ΔE T ≳ h .        (3.35)
(3.35)
In this uncertainty inequality T is the duration of the pulse. It is a reasonable relation but the presence
of ≳ betrays its lack of full precision.
We can find a precise energy/Q-ness uncertainty inequality by applying the general uncertainty
inequality to the Hamiltonian H and another Hermitian operator Q, as did the distinguished Russian
physicists L. Mandelstam and Tamm shortly after the formulation of the uncertainty principle. We
would then have

    ΔH ΔQ ≥ | ( Ψ , (1/2i) [H, Q] Ψ ) | .        (3.36)

This starting point is interesting because the commutator [H, Q] encodes something very physical about Q. Indeed, let us consider henceforth the case in which the operator Q has no time dependence. It could be, for example, some function of x̂ and p̂, or for a spin-1/2 particle, the operator |+⟩⟨−|. Such
operator Q can easily have time-dependent expectation values, but the time dependence originates
from the time dependence of the states, not from the operator Q itself.
To explore the meaning of [H, Q] we begin by computing the time-derivative of the expectation
value of Q:
    d⟨Q⟩/dt = d/dt ( Ψ , Q Ψ ) = ( ∂Ψ/∂t , Q Ψ ) + ( Ψ , Q ∂Ψ/∂t ) ,        (3.37)
where we did not have to differentiate Q as it is time-independent. At this point we can use the Schrödinger equation to find

    d⟨Q⟩/dt = (1/iℏ) [ ( Ψ , Q HΨ ) − ( HΨ , Q Ψ ) ] = −(1/iℏ) ( Ψ , (HQ − QH) Ψ ) = (i/ℏ) ( Ψ , [H, Q] Ψ ) ,        (3.38)

where we used the Hermiticity of the Hamiltonian. We have thus arrived at

    d⟨Q⟩/dt = (i/ℏ) ⟨ [H, Q] ⟩ ,  for time-independent Q .        (3.39)
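Equation (3.39) can be checked numerically on a small system. The sketch below (ℏ = 1; the Hamiltonian, observable, and initial state are all illustrative) compares a finite-difference derivative of ⟨Q⟩ against (i/ℏ)⟨[H, Q]⟩:

```python
# Check d<Q>/dt = (i/hbar) <[H, Q]> for a two-level system (hbar = 1).
import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)  # Hermitian Hamiltonian
Q = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)   # time-independent observable
psi0 = np.array([1.0, 0.0], dtype=complex)

# Exact time evolution via the spectral decomposition of H.
evals, evecs = np.linalg.eigh(H)
def evolve(t):
    return evecs @ (np.exp(-1j * evals * t / hbar) * (evecs.conj().T @ psi0))

def expQ(t):
    p = evolve(t)
    return np.vdot(p, Q @ p).real

t, eps = 0.7, 1e-5
numeric = (expQ(t + eps) - expQ(t - eps)) / (2 * eps)   # d<Q>/dt numerically
p = evolve(t)
comm = H @ Q - Q @ H
analytic = (1j / hbar * np.vdot(p, comm @ p)).real      # (i/hbar) <[H, Q]>
print(abs(numeric - analytic) < 1e-6)   # True
```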
This is a very important result. Each time you see [H, Q] you should think 'time derivative of ⟨Q⟩'. In classical mechanics one usually looks for conserved quantities, that is, functions of the dynamical variables that are time independent. In quantum mechanics a conserved operator is one whose expectation value is time independent. An operator Q is conserved if it commutes with the Hamiltonian!

With this result, the inequality (3.36) can be simplified. Indeed, using (3.39) we have
    | ⟨ (1/2i) [H, Q] ⟩ | = | (1/2i) (ℏ/i) d⟨Q⟩/dt | = (ℏ/2) | d⟨Q⟩/dt | ,        (3.40)

and therefore

    ΔH ΔQ ≥ (ℏ/2) | d⟨Q⟩/dt | ,  for time-independent Q .        (3.41)
This is a perfectly precise uncertainty inequality. The terms in it suggest a definition of a time ΔtQ:

    ΔtQ ≡ ΔQ / | d⟨Q⟩/dt | .        (3.42)

This quantity has units of time. It is the time it would take ⟨Q⟩ to change by ΔQ if both ΔQ and the velocity d⟨Q⟩/dt were time-independent. Since they are not necessarily so, we can view ΔtQ as the time for "appreciable" change in ⟨Q⟩. This is certainly so when ⟨Q⟩ and ΔQ are roughly of the same size.

In terms of ΔtQ the uncertainty inequality reads

    ΔH ΔtQ ≥ ℏ/2 .        (3.43)
This is still a precise inequality, given that ΔtQ has a concrete definition in (3.42).

As you will consider in the homework, (3.41) can be used to derive an inequality for the time Δt⊥ that it takes for a system to become orthogonal to itself. If we call the initial state Ψ(0), we call Δt⊥ the smallest time for which ( Ψ(0), Ψ(Δt⊥) ) = 0. You will be able to show that

    ΔH Δt⊥ ≥ h/4 .        (3.44)
The speed in which a state can turn orthogonal depends on the energy uncertainty, and in quantum
computation it plays a role in limiting the maximum possible speed of a computer for a fixed finite
energy.
The uncertainty relation involves ΔH.
It is natural to ask if this quantity is time dependent.
As we show now, it is not, if the Hamiltonian is a time-independent operator. Indeed, if H is time
independent, we can use H and H² for Q in (3.39) so that

    d⟨H⟩/dt = (i/ℏ) ⟨ [H, H] ⟩ = 0 ,
    d⟨H²⟩/dt = (i/ℏ) ⟨ [H, H²] ⟩ = 0 .        (3.45)

It then follows that

    d(ΔH)²/dt = d/dt ( ⟨H²⟩ − ⟨H⟩² ) = 0 ,        (3.46)

showing that ΔH is a constant. So we have shown that

    If H is time independent, the uncertainty ΔH is constant in time.        (3.47)
The concept of conservation of energy uncertainty can be used to understand some aspects of atomic decays. Consider, for illustration, the hyperfine transition in the hydrogen atom. Due to the existence of the proton spin and the electron spin, the ground state of hydrogen is fourfold degenerate, corresponding to the four possible combinations of spins (up-up, up-down, down-up, down-down). The magnetic interaction between the spins actually breaks this degeneracy and produces the so-called "hyperfine" splitting. This is a very tiny split: δE = 5.88 × 10⁻⁶ eV (compare with about 13.6 eV for the ground state energy). For a hyperfine atomic transition, the emitted photon carries the energy difference δE, resulting in a wavelength of 21.1 cm and a frequency ν = 1420.405751786(30) MHz. The eleven significant digits of this frequency attest to the sharpness of the emission line. The issue of uncertainty arises because the excited state of the hyperfine splitting has a lifetime τH for decay to the ground state and emission of a photon. This lifetime is extremely long, in fact τH ∼ 11 million years (= 3.4 × 10¹⁴ sec, recalling that a year is about π × 10⁷ sec, accurate to better than 1%). This lifetime can be viewed as the time that it takes some observable of the electron-proton system to change significantly (its total spin angular momentum, perhaps), so by the uncertainty principle it must be related to some energy uncertainty ΔE ∼ ℏ/τH ≃ 2 × 10⁻³⁰ eV of the original excited state of the hydrogen atom. Once the decay takes place the atom goes to the fully stable ground state, without any possible energy uncertainty. By the conservation of energy uncertainty, the photon must carry the uncertainty ΔE. But ΔE/δE ∼ 3 × 10⁻²⁵, an absolutely infinitesimal effect on the photon. There is no broadening of the 21 cm line! That's one reason it is so useful in astronomy. For decays with much shorter lifetimes there can be an observable broadening of an emission line due to the energy-time uncertainty principle.
4  Lower bounds for ground state energies

You may recall that the variational principle could be used to find upper bounds on ground state energies. The uncertainty principle can be used to find lower bounds for the ground state energy of certain systems. We use below the uncertainty principle in the form Δx Δp ≥ ℏ/2 to find rigorous lower bounds for the ground state energy of one-dimensional Hamiltonians. This is best illustrated by an example.
Consider a particle in a one-dimensional quartic potential considered earlier

    H = p²/(2m) + α x⁴ ,        (4.48)

where α > 0 is a constant with units of energy over length to the fourth power. Our goal is to find a lower bound for the ground state energy ⟨H⟩gs. Taking the ground state expectation value of the Hamiltonian we have

    ⟨H⟩gs = ⟨p²/(2m)⟩gs + α ⟨x⁴⟩gs .        (4.49)
Recalling that

    (Δp)² = ⟨p²⟩ − ⟨p⟩² ,        (4.50)

we see that

    ⟨p²⟩ ≥ (Δp)² ,        (4.51)
for any state of the system. We should note, however, that for the ground state (or any bound state) ⟨p⟩ = 0, so that in fact

    ⟨p²⟩gs = (Δp)²gs .        (4.52)

From the inequality ⟨A²⟩ ≥ ⟨A⟩² we have

    ⟨x⁴⟩ ≥ ⟨x²⟩² .        (4.53)

Moreover, just like for momentum above, (Δx)² = ⟨x²⟩ − ⟨x⟩² leads to

    ⟨x²⟩ ≥ (Δx)² ,        (4.54)

so that

    ⟨x⁴⟩ ≥ (Δx)⁴ ,        (4.55)
for the expectation value on arbitrary states. Therefore

    ⟨H⟩gs = ⟨p²/(2m)⟩gs + α ⟨x⁴⟩gs ≥ (Δp_gs)²/(2m) + α (Δx_gs)⁴ .        (4.56)

From the uncertainty principle

    Δx_gs Δp_gs ≥ ℏ/2  →  Δp_gs ≥ ℏ/(2 Δx_gs) .        (4.57)
Back to the value of ⟨H⟩gs we get

    ⟨H⟩gs ≥ ℏ²/(8m (Δx_gs)²) + α (Δx_gs)⁴ .        (4.58)

The quantity to the right of the inequality is a function of Δx_gs. This function has been plotted in Figure 2.
Figure 2: We have that ⟨H⟩gs ≥ f(Δx_gs) but we don't know the value of Δx_gs. As a result, we can only be certain that ⟨H⟩gs is greater than or equal to the lowest value the function f(Δx_gs) can take.
If we knew the value of Δx_gs we would immediately know that ⟨H⟩gs is bigger than the value taken by the right-hand side. This would be quite nice, since we want the highest possible lower bound. Since we don't know the value of Δx_gs, however, the only thing we can be sure of is that ⟨H⟩gs is bigger than the lowest value that can be taken by the expression to the right of the inequality as we vary Δx_gs:

    ⟨H⟩gs ≥ Min_Δx ( ℏ²/(8m (Δx)²) + α (Δx)⁴ ) .        (4.59)
The minimization problem is straightforward. In fact

    f(x) = A/x² + B x⁴  is minimized for  x² = ( A/(2B) )^(1/3) ,  yielding  f_min = (3/2) 2^(1/3) (A² B)^(1/3) .        (4.60)
Applied to (4.59) we obtain

    ⟨H⟩gs ≥ (3·2^(1/3)/8) ( ℏ² √α / m )^(2/3) ≃ 0.4724 ( ℏ² √α / m )^(2/3) .        (4.61)

This is the final lower bound for the ground state energy. It is actually not too bad: for the true ground state, instead of the prefactor 0.4724, one finds 0.668.
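The minimization behind (4.59)-(4.61) is easy to confirm numerically. Setting ℏ = m = α = 1 in this sketch, the closed-form minimum 3·2^(1/3)/8 ≈ 0.4724 should match a brute-force scan:

```python
# Brute-force minimization of f(dx) = 1/(8 dx^2) + dx^4 (hbar = m = alpha = 1)
# versus the closed-form prefactor 3 * 2**(1/3) / 8 from Eq. (4.61).
def f(dx):
    return 1.0 / (8.0 * dx**2) + dx**4

grid = [0.001 * k for k in range(1, 5000)]
numeric_min = min(f(dx) for dx in grid)
closed_form = 3 * 2**(1.0 / 3.0) / 8

print(closed_form)                            # ~0.4724...
print(abs(numeric_min - closed_form) < 1e-4)  # True
```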
5  Diagonalization of Operators

When we have operators we wish to understand, it can be useful to find a basis on the vector space for which the operators are represented by matrices that take a simple form. Diagonal matrices are matrices where all non-diagonal entries vanish. If we can find a set of basis vectors for which the matrix representing an operator is diagonal, we say that the operator is diagonalizable.
If an operator T is diagonal in some basis (u1, . . . un) of the vector space V , its matrix takes the
form diag (λ1, . . . λn), with constants λi, and we have
T u1 = λ1u1 ,
. . . , T un = λnun .
(5.62)
The basis vectors are recognized as eigenvectors with eigenvalues given by the diagonal elements. It follows that a matrix is diagonalizable if and only if it possesses a set of eigenvectors that span the vector space. Recall that all operators T on complex vector spaces have at least one eigenvalue and thus at least one eigenvector. But not even in complex vector spaces do all operators have enough eigenvectors to span the space. Those operators cannot be diagonalized. The simplest example of such an operator is provided by the two-by-two matrix

    ( 0  1 )
    ( 0  0 )        (5.63)

The only eigenvalue of this matrix is λ = 0 and the associated eigenvector is (1, 0)ᵀ. Since a two-dimensional vector space cannot be spanned with one eigenvector, this matrix cannot be diagonalized.
Having seen that the question of diagonalization of an operator is ultimately a question about its
eigenvectors, we want to emphasize that the question can be formulated without referring to any
basis. Bases, of course, are useful to express things concretely.
Suppose we have a vector space V and we have chosen a basis (v1, . . . , vn) such that a linear operator has a matrix representation Tij({v}) that is not diagonal. As we learned before, if we change basis to a new one (u1, . . . , un) using a linear operator A such that

    uk = A vk ,        (5.64)

the matrix representation Tij({u}) of the operator in the new basis takes the form

    T({u}) = A⁻¹ T({v}) A ,  or  Tij({u}) = (A⁻¹)ik Tkp({v}) Apj ,        (5.65)
where the matrix Aij is the representation of A in the original v-basis. The operator T is diagonalizable if there is an operator A such that Tij({u}) is diagonal.
There are two pictures of the diagonalization: One can consider the operator T and state that
its matrix representation is diagonal when referred to the u basis obtained by acting with A on the
original v basis. Alternatively, we can view the result as the existence of a related operator A−1T A
that is diagonal in the original v basis. Indeed, T ui = λi ui (i not summed) implies that T A vi = λi A vi, and acting with A−1 gives (A−1 T A) vi = λi vi, which confirms that A−1 T A is represented by a diagonal matrix in the original v basis. Both viewpoints are valuable.
It is useful to note that the columns of the matrix A are in fact the eigenvectors of T({v}). We see this as follows. Since the eigenvectors are the uk, we have

uk = A vk  →  uk = Σi Aik vi .   (5.66)

Using the original basis means vi is represented by a column vector of zeroes with a single unit entry at the i-th position. We thus find

uk = ( A1k
        :
       Ank ) ,   (5.67)

confirming that the k-th column of A is the k-th eigenvector of T .
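As a numerical check (a sketch in NumPy; the matrix T below is our own arbitrary example, not from the notes), one can build A from the eigenvectors and verify that A−1 T A is diagonal with the eigenvalues on the diagonal:

```python
import numpy as np

# Our own arbitrary diagonalizable example (not from the notes).
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])

evals, A = np.linalg.eig(T)    # columns of A are the eigenvectors u_k
D = np.linalg.inv(A) @ T @ A   # change of basis: T({u}) = A^-1 T({v}) A

assert np.allclose(D, np.diag(evals))   # diagonal, eigenvalues on diagonal
```

Note that `np.linalg.eig` returns the eigenvectors precisely as the columns of its second output, matching (5.67).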
While not all operators on complex vector spaces can be diagonalized, the situation is much
improved for Hermitian operators. Recall that T is Hermitian if T = T † . Hermitian operators can be
diagonalized, and so can unitary operators. But even more is true: the operators take diagonal form
in an orthonormal basis!
An operator M is said to be unitarily diagonalizable if there is an orthonormal basis in which its matrix representation is a diagonal matrix. That basis, therefore, is an orthonormal basis of eigenvectors. Starting with an arbitrary orthonormal basis (e1, . . . , en) in which the matrix representation of M is M({e}), a unitary transformation of this basis produces the orthonormal basis in which the operator takes diagonal form. More explicitly, there is a unitary matrix U (U† = U−1) and a diagonal matrix DM such that

U† M({e}) U = DM .   (5.68)
6 The Spectral Theorem
While we could prove, as most textbooks do, that Hermitian operators are unitarily diagonalizable,
this result holds for a more general class of operators, called normal operators. The proof is not harder than the one for Hermitian operators. An operator M is said to be normal if it commutes with its adjoint:

M is normal:  [M†, M] = 0 .   (6.69)
Hermitian operators are clearly normal. So are anti-Hermitian operators (M† = −M). Unitary operators U are normal because both U†U and UU† are equal to the identity matrix, and thus U and U† commute.

Exercise. If an operator M is normal, show that so is V†MV, where V is a unitary operator.
Lemma: Let w be an eigenvector of the normal operator M : M w = λw. Then w is also an eigenvector
of M † with complex conjugate eigenvalue:
M† w = λ* w .   (6.70)
Proof: Define u = (M† − λ*I)w. The result holds if u is the zero vector. To show this we compute the norm-squared of u:

|u|² = (u, u) = ( (M† − λ*I)w , (M† − λ*I)w ) .   (6.71)

Using the adjoint property to move the operator in the first entry to the second entry:

|u|² = ( w , (M − λI)(M† − λ*I)w ) .   (6.72)

Since M and M† commute, so do the two factors in parentheses, and therefore

|u|² = ( w , (M† − λ*I)(M − λI)w ) = 0 ,   (6.73)

since (M − λI) kills w. It follows that u = 0 and therefore (6.70) holds. □
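The Lemma is easy to verify numerically (a sketch; the rotation matrix below is our own example of a normal, non-Hermitian operator):

```python
import numpy as np

# A normal but non-Hermitian example: any unitary (here, rotation) matrix.
th = 0.3
M = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
assert np.allclose(M.conj().T @ M, M @ M.conj().T)   # [M†, M] = 0

lam, W = np.linalg.eig(M)
w = W[:, 0]                      # an eigenvector of M: M w = λ w
assert np.allclose(M @ w, lam[0] * w)
# The Lemma: w is also an eigenvector of M† with eigenvalue λ*.
assert np.allclose(M.conj().T @ w, np.conj(lam[0]) * w)
```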
We can now state our main theorem, called the spectral theorem. It states that a matrix is unitarily
diagonalizable if and only if it is normal. More to the point,
Spectral Theorem: Let M be an operator in a complex vector space. The vector space has an orthonormal basis comprised of eigenvectors of M if and only if M is normal.   (6.74)
Proof. It is easy to show that unitarily diagonalizable implies normality. Indeed, from (5.68) and dropping the reference to the e-basis,

M = U DM U† ,  and therefore  M† = U (DM)† U† .

We then get

M†M = U (DM)† DM U†  and  MM† = U DM (DM)† U† ,

so that

[M†, M] = U ( (DM)† DM − DM (DM)† ) U† = 0 ,

because any two diagonal matrices commute.
Now let us prove that M provides a basis of orthonormal eigenvectors. The proof is by induction. The result is clearly true for dim V = 1. We assume that it holds for (n−1)-dimensional vector spaces and consider the case of n-dimensional V. Let M be an n × n matrix referred to the orthonormal basis (|1), . . . , |n)) of V, so that Mij = (i|M|j). We know there is at least one eigenvalue λ1 with a non-zero eigenvector |x1) of unit norm:

M|x1) = λ1|x1)  and  M†|x1) = λ1*|x1) ,   (6.75)

in view of the Lemma. There is, we claim, a unitary matrix U1 such that

|x1) = U1|1)  →  U1†|x1) = |1) .   (6.76)

U1 is not unique and can be constructed as follows: extend |x1) to an orthonormal basis (|x1), . . . , |xn)) using Gram-Schmidt. Then write

U1 = Σi |xi)(i| .   (6.77)

Define now

M1 ≡ U1† M U1 .   (6.78)

M1 is also normal, and M1|1) = U1† M U1|1) = U1† M|x1) = λ1 U1†|x1) = λ1|1), so that M1|1) = λ1|1).

Let us now examine the explicit form of the matrix M1:

(j|M1|1) = λ1(j|1) = λ1 δj,1 ,   (6.79)

which says that the first column of M1 has zeroes in all entries except the first. Moreover,

(1|M1|j) = ( (j|M1†|1) )* = ( λ1*(j|1) )* = λ1 δ1,j ,   (6.80)

where we used M1†|1) = λ1*|1), which follows from the normality of M1. It follows from the two last equations that M1, in the original basis, takes the form

M1 = ( λ1  0 . . . 0
        0
        :       M′
        0 ) .

Since M1 is normal, one can see that M′ is a normal (n−1)-by-(n−1) matrix. By the induction hypothesis M′ can be unitarily diagonalized, so that U′† M′ U′ is diagonal for some (n−1)-by-(n−1) unitary matrix U′. The matrix U′ can be extended to an n-by-n unitary matrix Û as follows:

Û = ( 1  0 . . . 0
      0
      :       U′
      0 ) .   (6.81)

It follows that Û†M1Û = Û†U1†MU1Û = (U1Û)†M(U1Û) is diagonal, proving the desired result. □
Of course this theorem implies that Hermitian and unitary operators are unitarily diagonalizable.
In other words the eigenvectors form an orthonormal basis. This is true whether or not there are
degeneracies in the spectrum. The proof does not require discussion of this as a special case.
If
an eigenvalue of M is degenerate and appears k times, then there are k orthonormal eigenvectors
associated with the corresponding k-dimensional M -invariant subspace of the vector space.
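Numerically, this is what `np.linalg.eigh` delivers for a Hermitian matrix: an orthonormal eigenbasis even in the presence of degeneracies. A small sketch (the matrix below is our own example with a doubly degenerate eigenvalue):

```python
import numpy as np

# Hermitian example with a doubly degenerate eigenvalue (λ = 1 twice).
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

w, U = np.linalg.eigh(H)                    # eigenvalues ascending: 1, 1, 3
assert np.allclose(U.conj().T @ U, np.eye(3))       # orthonormal eigenbasis
assert np.allclose(U.conj().T @ H @ U, np.diag(w))  # U† H U is diagonal
```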
We conclude this section with a description of the general situation that we may encounter when diagonalizing a normal operator T. In general, we expect degeneracies in the eigenvalues, so that each eigenvalue λk is repeated dk ≥ 1 times. An eigenvalue λk is degenerate if dk > 1. V has T-invariant subspaces of different dimensionalities. Let Uk denote the T-invariant subspace of dimension dk ≥ 1 spanned by eigenvectors with eigenvalue λk:

Uk ≡ { v ∈ V | T v = λk v } ,  dim Uk = dk .   (6.82)

By the spectral theorem Uk has a basis comprised of dk orthonormal eigenvectors (u_1^{(k)}, . . . , u_{d_k}^{(k)}). Note that while the addition of eigenvectors with different eigenvalues does not give eigenvectors, in the subspace Uk all vectors are eigenvectors with the same eigenvalue, and that's why addition makes sense: Uk as defined is a vector space, and adding eigenvectors in Uk gives eigenvectors. The full space V is decomposed as the direct sum of the invariant subspaces of T:

V = U1 ⊕ U2 ⊕ . . . ⊕ Um ,  dim V = Σ_{i=1}^m di ,  m ≥ 1 .   (6.83)
All Ui subspaces are guaranteed to be orthogonal to each other. In fact the full list of eigenvectors is a list of orthonormal vectors that form a basis for V, conveniently ordered as follows:

( u_1^{(1)}, . . . , u_{d_1}^{(1)}, . . . , u_1^{(m)}, . . . , u_{d_m}^{(m)} ) .   (6.84)

The matrix T is manifestly diagonal in this basis because each vector above is an eigenvector of T and is orthogonal to all others. The matrix representation of T reads

T = diag( λ1, . . . , λ1, . . . , λm, . . . , λm ) ,   (6.85)

with λ1 appearing d1 times, and so on, through λm appearing dm times. This is clear because the first d1 vectors in the list are in U1, the second d2 vectors are in U2, and so on and so forth, until the last dm vectors are in Um.
If we had no degeneracies in the spectrum, the basis (6.84) (with di = 1 for all i) would be essentially unique if we require the matrix representation of T to be unchanged: each vector could only be multiplied by a phase. With degeneracies, on the other hand, the list (6.84) can be changed considerably without changing the matrix representation of T. Let Vk be a unitary operator on Uk, for each k = 1, . . . , m. We claim that the following basis of eigenvectors leads to the same matrix T:
( V1 u_1^{(1)}, . . . , V1 u_{d_1}^{(1)}, . . . , Vm u_1^{(m)}, . . . , Vm u_{d_m}^{(m)} ) .   (6.86)

Indeed, this is still a collection of eigenvectors of T, with each of them orthogonal to the rest. Moreover, the first d1 vectors are in U1, the second d2 vectors are in U2, and so on and so forth. More explicitly, for example, within Uk

( Vk u_i^{(k)} , T (Vk u_j^{(k)}) ) = λk ( Vk u_i^{(k)} , Vk u_j^{(k)} ) = λk ( u_i^{(k)} , u_j^{(k)} ) = λk δij ,   (6.87)

showing that in the Uk subspace the matrix for T is still diagonal, with all entries equal to λk.
7 Simultaneous Diagonalization of Hermitian Operators
We say that two operators S and T in a vector space V can be simultaneously diagonalized if there is some basis of V in which both the matrix representation of S and the matrix representation of T are diagonal. It then follows that each vector in this basis is an eigenvector of S and an eigenvector of T.
A necessary condition for simultaneous diagonalization is that the operators S and T commute.
Indeed, if they can be simultaneously diagonalized there is a basis where both are diagonal and
they manifestly commute. If the operators don't commute, which is a basis-independent statement, then a simultaneous diagonal representation cannot exist. Since arbitrary linear operators S
and T on a complex vector space cannot be diagonalized, the vanishing of [S, T ] does not guarantee
simultaneous diagonalization. But if the operators are Hermitian it does, as we show now.
Theorem. If S and T are commuting Hermitian operators they can be simultaneously diagonalized.
Proof. The main complication is that degeneracies in the spectrum require some discussion. Either both operators have degeneracies or one has no degeneracies. Without loss of generality we can assume that there are two cases to consider:
(i) There is no degeneracy in the spectrum of T, or
(ii) Both T and S have degeneracies in their spectrum.
Consider case (i) first. Since T is non-degenerate there is a basis (u1, . . . , un) of eigenvectors of T with different eigenvalues:

T ui = λi ui ,  i not summed ,  λi ≠ λj for i ≠ j .   (7.88)
We now want to understand what kind of vector S ui is. For this we act with T on it:

T(S ui) = S(T ui) = S(λi ui) = λi (S ui) .   (7.89)
It follows that Sui is also an eigenvector of T with eigenvalue λi, thus it must equal ui, up to scale,
S ui = ωi ui ,   (7.90)
showing that ui is also an eigenvector of S, this time with eigenvalue ωi. Thus any eigenvector of T is
also an eigenvector of S, showing that these operators are simultaneously diagonalizable.
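A sketch of case (i) in NumPy (the matrices and the random seed are our own arbitrary examples): diagonalizing the non-degenerate T alone already diagonalizes the commuting S.

```python
import numpy as np

# Commuting Hermitian matrices; T has a non-degenerate spectrum.
T = np.diag([1.0, 2.0, 3.0])
S = np.diag([5.0, 7.0, 4.0])
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
T, S = Q @ T @ Q.T, Q @ S @ Q.T   # scramble both with the same rotation
assert np.allclose(T @ S, S @ T)  # [S, T] = 0

_, U = np.linalg.eigh(T)                    # eigenvectors of T alone
D = U.conj().T @ S @ U                      # S in the T-eigenbasis
assert np.allclose(D, np.diag(np.diag(D)))  # already diagonal
```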
Now consider case (ii). Since T has degeneracies, as explained in the previous section, we have a decomposition of V into T-invariant subspaces Uk spanned by eigenvectors:

Uk ≡ { u | T u = λk u } ,  dim Uk = dk ,  V = U1 ⊕ . . . ⊕ Um ,   (7.91)

with an orthonormal basis for V:

( u_1^{(1)}, . . . , u_{d_1}^{(1)}, . . . , u_1^{(m)}, . . . , u_{d_m}^{(m)} ) ,

in which T = diag( λ1, . . . , λ1, . . . , λm, . . . , λm ), with λ1 appearing d1 times, and so on, through λm appearing dm times.
We also explained that the alternative orthonormal basis of V

( V1 u_1^{(1)}, . . . , V1 u_{d_1}^{(1)}, . . . , Vm u_1^{(m)}, . . . , Vm u_{d_m}^{(m)} )   (7.92)

leads to the same matrix for T when each Vk is a unitary operator on Uk.
We now claim that the Uk are also S-invariant subspaces! To show this, let u ∈ Uk and examine the vector Su. We have

T(Su) = S(T u) = λk Su  →  Su ∈ Uk .   (7.93)
We use the subspaces Uk and the basis (7.91) to organize the matrix representation of S in blocks. It follows that this matrix must have block-diagonal form, since each subspace is S-invariant and orthogonal to all other subspaces. We cannot guarantee, however, that S is diagonal within each square block, because S u_i^{(k)} ∈ Uk but we have no reason to believe that S u_i^{(k)} points along u_i^{(k)}.
Since S restricted to each S-invariant subspace Uk is Hermitian, we can find an orthonormal basis of Uk in which the matrix of S is diagonal. This new basis is unitarily related to the original basis ( u_1^{(k)}, . . . , u_{d_k}^{(k)} ) and thus takes the form ( Vk u_1^{(k)}, . . . , Vk u_{d_k}^{(k)} ), with Vk a unitary operator in Uk. Note that the eigenvalues of S in this block need not be degenerate. Doing this for each block, we find a basis of the form (7.92) in which S is diagonal. But T is still diagonal in this new basis, so both S and T have been simultaneously diagonalized. □
Remarks:
1. Note that the above proof gives an algorithmic way to produce the common list of eigenvectors.
One diagonalizes one of the matrices and constructs the second matrix in the basis of eigenvectors of the first. This second matrix is block diagonal, where the blocks are organized by the degeneracies in the spectrum of the first matrix. One must then diagonalize within the blocks and is guaranteed that the new basis that works for the second matrix also works for the first.
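The algorithm in Remark 1 can be sketched as follows (a minimal NumPy implementation; the helper name and the example matrices are ours, not from the notes):

```python
import numpy as np

def simultaneous_diagonalize(S1, S2, tol=1e-9):
    """Common eigenbasis of two commuting Hermitian matrices
    (hypothetical helper illustrating the remark above)."""
    assert np.allclose(S1 @ S2, S2 @ S1)      # they must commute
    w, V = np.linalg.eigh(S1)                 # step 1: diagonalize S1
    # In the S1-eigenbasis, S2 is block diagonal, with blocks set by
    # the degeneracies of S1; diagonalize within each block.
    i = 0
    while i < len(w):
        j = i
        while j < len(w) and abs(w[j] - w[i]) < tol:
            j += 1                            # [i, j) is one degenerate block
        block = V[:, i:j].conj().T @ S2 @ V[:, i:j]
        _, Q = np.linalg.eigh(block)
        V[:, i:j] = V[:, i:j] @ Q             # rotate basis inside the block
        i = j
    return V                                  # columns: common eigenvectors

# Two commuting Hermitian matrices, both with degenerate spectra:
S1 = np.diag([1.0, 1.0, 2.0])
S2 = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])
V = simultaneous_diagonalize(S1, S2)
for S in (S1, S2):
    D = V.conj().T @ S @ V
    assert np.allclose(D, np.diag(np.diag(D)))   # both are diagonal
```

Rotating within a degenerate block never disturbs the already-diagonal S1, which is exactly the point of the remark.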
2. If we had to simultaneously diagonalize three different commuting Hermitian operators S1, S2 and S3, all of which have degenerate spectra, we would proceed as follows. We diagonalize S1
and fix a basis in which S1 is diagonal. In this basis we must find that S2 and S3 have exactly the
same block structure. The corresponding block matrices are simply the matrix representations
of S2 and S3 in each of the invariant spaces Uk appearing in the diagonalization of S1. Since
S2 and S3 commute, their restrictions to Uk commute. These restrictions can be diagonalized
simultaneously, as guaranteed by our theorem which works for two matrices. The new basis in
Uk that makes the restriction of S2 and S3 diagonal, will not disturb the diagonal form of S1 in
this block. This is repeated for each block, until we get a common basis of eigenvectors.
3. An inductive algorithm is clear. If we know how to simultaneously diagonalize n commuting
Hermitian operators we can diagonalize n + 1 of them, call them S1, . . . Sn+1, as follows. We
diagonalize S1 and then consider the remaining n operators in the basis that makes S1 diagonal.
We are guaranteed a common block structure for the n operators. The problem becomes one of simultaneous diagonalization of n commuting Hermitian block matrices, which is assumed
known by the induction argument.
Corollary. If {S1, . . . , Sn} is a set of mutually commuting Hermitian operators, they can all be simultaneously diagonalized.
8 Complete Set of Commuting Observables
We have discussed the problem of finding eigenvectors and eigenvalues of a Hermitian operator S.
This Hermitian operator is thought of as a quantum mechanical observable. The eigenvectors of S are
physical states of the system in which the observable S can be measured without uncertainty. The
result of the measurement is the eigenvalue associated with the eigenvector.
If the Hermitian operator S has a non-degenerate spectrum, all eigenvalues are different and we
have a rather nice situation in which each eigenvector can be uniquely distinguished by labeling it with
the corresponding eigenvalue of S. The physical quantity associated with the observable can be used
to distinguish the various eigenstates. Moreover, these eigenstates provide an orthonormal basis for
the full vector space. In this case the operator S provides a “complete set of commuting observables”
or a CSCO, in short. The set here has just one observable, the operator S.
The situation is more nontrivial if the Hermitian operator S exhibits degeneracies in its spectrum.
This means that V has an S-invariant subspace of dimension d > 1, spanned by orthonormal eigenvectors (u1, . . . , ud), all of which have S eigenvalue λ. This time, the eigenvalue of S does not allow us
to distinguish or to label uniquely the basis eigenstates of the invariant subspace. Physically this is a
deficient situation, as we have explicitly different states – the various ui’s – that we can’t tell apart
by the measurement of S alone. This time S does not provide a CSCO. Labeling eigenstates by the S
eigenvalue does not suffice to distinguish them.
We are thus physically motivated to find another Hermitian operator T that is compatible with S.
Two Hermitian operators are said to be compatible observables if they commute, since then we can
find a basis of V comprised by simultaneous eigenvectors of the operators. These states can be labeled
by two observables, namely, the two eigenvalues. If we are lucky, the basis eigenstates in each of the
S-invariant subspaces of dimension higher than one can be organized into T eigenstates of different
eigenvalues. In this case T breaks the spectral degeneracy of S and using T eigenvalues as well as S
eigenvalues we can label uniquely a basis of orthonormal states of V. In this case we say that S and
T form a CSCO.
We have now given enough motivation for a definition of a complete set of commuting observables.
Consider a set of commuting observables, namely, a set {S1, . . . , Sk} of Hermitian operators acting on a complex vector space V that represents the physical state-space of some quantum system. By the theorem in the previous section, we can find an orthonormal basis of vectors in V such that each vector is an eigenstate of every operator in the set. Assume that each eigenstate in the basis is labeled by the eigenvalues of the Si operators. The set {S1, . . . , Sk} is said to be a complete set of commuting observables if no two states have the same labels.
It is a physically motivated assumption that for any physical quantum system there is a complete
set of commuting observables, for otherwise there is no physical way to distinguish the various states
that span the vector space. So in any physical problem we are urged to find such complete set, and
we must include operators in such set until all degeneracies are broken. A CSCO need not be unique.
Once we have a complete set of commuting observables, adding another observable causes no harm, although it is not necessary. Also, if (S1, S2) form a CSCO, so will (S1 + S2, S1 − S2). Ideally, we want the smallest set of operators.
The first operator that is usually included in a CSCO is the Hamiltonian H. For bound state
problems in one dimension, energy eigenstates are non-degenerate and thus the energy can be used
to label uniquely the H-eigenstates. A simple example is the infinite square well. Another example
is the one-dimensional harmonic oscillator. In such cases H forms the CSCO. If we have, however, a
two-dimensional isotropic harmonic oscillator in the (x, y) plane, the Hamiltonian has degeneracies.
At the first excited level we can have the first excited state of the x harmonic oscillator or, at the same
energy, the first excited state of the y harmonic oscillator. We thus need another observable that can
be used to distinguish these states. There are several options, as you will discuss in the homework.
MIT OpenCourseWare
http://ocw.mit.edu
8.05 Quantum Physics II
Fall 2013
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/8-05-quantum-physics-ii-fall-2013/005979fa741c3ea2e0430456b70caf93_MIT8_05F13_Chap_05.pdf |
Lecture 7
Acoustics of Speech & Hearing
6.551 - HST 714J
Lecture 7: Lumped Elements
I. What is a lumped element?
Lumped elements are physical structures that act and move as a unit when
subjected to controlled forces. Imagine a two-dimensional block of lead on a one-
dimensional frictionless surface.
[Figure: a block of mass M on a frictionless surface, accelerating along x under an applied force.]
When a force is imposed on the block, the block moves as a unit in a direction described by the difference in force acting on its two surfaces, or analytically:

dV/dt = Net Force / Mass   (5.1)
The key feature is that a gradient of a physical parameter produces a uniform physical response throughout the lump.
Another example of a lumped element is an electrical resistor, where a difference in the voltage (E) across a resistive element of value R produces a current (I) that is uniform throughout the resistor:

I = (E1 − E2) / R   (5.2)
30 Sept -2004
II. Lumped Acoustic Elements
A. Elements: A lumped element is a representation of a structure by one or two physical quantities that are homogeneous or varying linearly throughout the structure.
Standing Waves in P and
V in a long tube with a
rigid termination at x=0.
The spatial variation in the
sound pressure magnitude
and phase P(x) is defined
by a cosine function. The
spatial variation in particle
velocity magnitude and
phase V(x) is defined by a
sine function. The region
where the tube can act as a
lumped element is the
region where the pressure
amplitude is nearly
constant and the ‘volume velocity’ (v × tube cross-section) varies linearly with x.

B. An example of a lumped acoustic element is a short open tube of moderate diameter, where length l and radius a are < 0.1 λ.
[Figure: a short circular tube of radius a and length l, with volume velocity u(t) and sound pressures p1(t), p2(t) at its two ends.]
Under these circumstances particle velocity V and the sound pressures are simply related by:

dV/dt = (P1 − P2) / (ρ0 l)   (5.3)
where Eqn. 5.3 is the specific acoustic equivalent of Eqn. 5.1. (Hint: you can describe the forces acting on the lump by multiplying the pressures by the cross-sectional area of the tube, πa².)
C. Volume Velocity and Acoustic Impedance
In discussing lumped acoustic elements, it is convenient to think about velocity
in terms of a new variable Volume Velocity U where in the case of the tube above,
[Figure: the tube of length l and diameter 2a, with pressures P1 and P2 and volume velocity U.]
the volume velocity is defined by the product of the particle velocity and the cross-
sectional area of the tube, i.e. U = πa2 V = SV .
The relationship between volume velocity and the pressure difference in the open tube above can be obtained by multiplying both sides of Eqn. 5.3 by S = πa², i.e.

S dV/dt = dU/dt = (P1 − P2) S / (ρ0 l) = (P1 − P2) S² / (ρ0 S l) ,  where S l = Tube Volume.   (5.4)
The Acoustic Impedance of the tube is (P1 − P2)/U.
III. Separation into ‘Through’ and ‘Across’ Variables

In each analogy, power(t) = through(t) × across(t):

                                 ‘Across’ variable     ‘Through’ variable
  Electrics                      voltage e(t)          current i(t)
  Mechanics: Impedance analogy   force f(t)            velocity v(t)
  Mechanics: Mobility analogy    velocity v(t)         force f(t)
  Acoustics: Impedance analogy   sound pressure p(t)   volume velocity u(t)
  Acoustics: Mobility analogy    volume velocity u(t)  sound pressure p(t)

In all of the above analogies, power(t) = through(t) × across(t) has units of watts.
IV. Two Terminal Elements
A. Electrical Elements

  Resistor:  v(t) = R i(t)  or  i(t) = G v(t),  with R = 1/G.
             Units of R are ohms (Ω); units of G are siemens (S).
  Capacitor: i(t) = C dv(t)/dt  or  v(t) − v(0) = (1/C) ∫₀ᵗ i(τ) dτ.
             Units of C are farads (F).
  Inductor:  v(t) = L di(t)/dt  or  i(t) − i(0) = (1/L) ∫₀ᵗ v(τ) dτ.
             Units of L are henries (H).
  Ideal independent voltage source: v(t) = v0(t), independent of i(t).
  Ideal independent current source: i(t) = i0(t), independent of v(t).

Figure 5.1 Simple linear 2-terminal lumped electrical elements and their constitutive relations. The orientation of the arrow and the +/− signs identifies the positive reference direction for each element. In this figure the variable i is current and v is voltage. (From Siebert, “Circuits, Signals and Systems,” 1986.)

Note that R, C and L are the coefficients of the 0th- and 1st-order differential equations that relate v(t) (or e(t)) to i(t).
Figure 5.2 Electric elements and their mechanical and acoustic counterparts in the “Impedance analogy.” From Kinsler, Frey, Coppens, & Sanders, Fundamentals of Acoustics, 3rd Ed. (1982).

B. Analogous Elements

  Mechanical            Acoustical         Electrical
  Mass m                Inertance M        Inductance L
  Compliance Cm = 1/s   Compliance C       Capacitance C
  Resistance Rm         Resistance R       Resistance R

Fig. 10.3. Acoustic, electrical and mechanical analogues.
C. Analogous Constitutive Relationships

  Mechanical (V vs F)        Electrical (I vs E)        Acoustical (U vs P)
  Spring:                    Capacitor:                 Compliance:
    v(t) = CM df(t)/dt         i(t) = CE de(t)/dt         u(t) = CA dp(t)/dt
  Damper:                    Resistor:                  Resistor:
    v(t) = (1/RM) f(t)         i(t) = (1/RE) e(t)         u(t) = (1/RA) p(t)
  Mass:                      Inductor:                  Inertance:
    v(t) = (1/LM) ∫f(t)dt      i(t) = (1/LE) ∫e(t)dt      u(t) = (1/LA) ∫p(t)dt
In the Sinusoidal Steady State:

  Mechanical (V vs F)          Electrical (I vs E)          Acoustical (U vs P)
  Spring:                      Capacitor:                   Compliance:
    V(ω) = jωCM F(ω)             I(ω) = jωCE E(ω)             U(ω) = jωCA P(ω)
  Damper:                      Resistor:                    Resistor:
    V(ω) = (1/RM) F(ω)           I(ω) = (1/RE) E(ω)           U(ω) = (1/RA) P(ω)
  Mass:                        Inductor:                    Inertance:
    V(ω) = (1/(jωLM)) F(ω)       I(ω) = (1/(jωLE)) E(ω)       U(ω) = (1/(jωLA)) P(ω)

Recall that

p(t) = Real{ P e^{jωt} } = |P| cos(ωt + ∠P) ,
dp(t)/dt = Real{ jωP e^{jωt} } = −ω|P| sin(ωt + ∠P) = ω|P| cos(ωt + ∠P + π/2) .
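The phasor rule used above, that differentiation multiplies the phasor P by jω, can be checked numerically (a sketch; the frequency, amplitude and phase are arbitrary example values):

```python
import numpy as np

om = 2 * np.pi * 100.0        # ω: 100 Hz, arbitrary example
P = 2.0 * np.exp(1j * 0.4)    # phasor with |P| = 2, ∠P = 0.4 rad
t = np.linspace(0.0, 0.02, 20001)

p = np.real(P * np.exp(1j * om * t))               # p(t)
dp_phasor = np.real(1j * om * P * np.exp(1j * om * t))
dp_numeric = np.gradient(p, t)                     # numerical dp/dt

# Differentiation multiplies the phasor by jω:
assert np.allclose(dp_phasor, dp_numeric, rtol=1e-3, atol=1.0)
```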
V. Acoustic Element Values and Physics
Element constraints result from physical processes, and element values are determined by physical properties, including the dimensions of structures; e.g., the electrical resistance of a resistor depends on the dimensions and the resistivity of the material from which it is constructed.
A. Acoustic mass: units of kg/m4
An open-ended tube with linear dimensions l and a < 0.1 λ, and S = πa².

[Figure: a circular tube of length l carrying volume velocity u(t), with pressures p1 and p2 at its ends; p(t) = p1(t) − p2(t).]

p(t) = LA du(t)/dt ,  assuming only inertial forces, where

LA = ρ0 l / S = ρ0 · Volume / S² ,  ρ0 = equilibrium mass density of the medium.

The Electrical Analog: P1 − P2 = U · jωLA.
Note that the acoustic mass is equivalent to the mass of the air in the enclosed
element divided by the square of the cross-sectional area of the element. Also since
some small volume of the medium on either end of the tube is also entrained with the
media inside the tube, the “acoustic” length is usually somewhat larger than the
physical length of the tube. For a single open end, the difference between the physical
length and the acoustic length is ∆l ≈ 0.8a . This difference is called the end
correction.
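A quick evaluation of LA for a representative tube (a sketch; the dimensions and the air density ρ0 ≈ 1.18 kg/m³ are assumed example values, with the single-open-end correction Δl ≈ 0.8a from the text):

```python
import numpy as np

rho0 = 1.18          # kg/m^3, approximate density of air (assumed value)
l = 0.02             # m, physical tube length (example)
a = 0.005            # m, tube radius (example)
S = np.pi * a**2     # cross-sectional area

l_ac = l + 0.8 * a   # acoustic length with one end correction
LA = rho0 * l_ac / S # acoustic mass, ~3.6e2 kg/m^4 for these values
```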
B. Acoustic Compliance: units of m3/Pa
Volume displaced per unit pressure difference (2 examples, both of which assume
resistance and inertia are negligible).
1. A Diaphragm of a < 0.1 λ
[Figure: diaphragm at rest vs. diaphragm displaced, with pressures p1(t) and p2(t) on either side; the swept volume is the volume displacement, vol]

    vol = (p1(t) − p2(t)) C_A = p(t) C_A

    u(t) = d(vol)/dt = C_A dp(t)/dt

The Electrical Analog:

    U = jωC_A (P1 − P2)
For a round, flat, "simply mounted" plate,

    C_A = πa⁶ (7 + v)(1 − v) / (16 E t³),
where: a is the radius of the plate, v = 0.3 is Poisson’s ratio, E is the elastic constant
(Young’s modulus) of the material, and t is the thickness (Roark and Young, 1975, p.
362-3, Case 10a).
2. Enclosed volume of air with linear dimensions < 0.1λ

[Figure: an enclosed air volume driven by volume velocity u(t) at pressure p(t), with phasors U(jω) and P(jω); another structure that may be well approximated by an acoustic compliance]

    C_A = Volume / (Adiabatic bulk modulus)

    U = jωC_A P
The variations in sound pressure within an enclosed air volume generally occur about
the steady-state atmospheric pressure, the ground potential in acoustics. Therefore,
one terminal of an electrical-analog of a volume-determined acoustic compliance
should always be grounded.
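A numeric sketch of the volume compliance, taking the adiabatic bulk modulus of air as γP0 (the form used later for the Helmholtz resonator); the one-liter volume is a made-up illustrative value:

```python
import math

gamma = 1.4        # ratio of specific heats for air
P0 = 1.013e5       # atmospheric pressure, Pa
V = 1.0e-3         # enclosed volume, m^3 (one liter; illustrative)

# Adiabatic bulk modulus of air is gamma * P0, so:
CA = V / (gamma * P0)   # acoustic compliance, m^3/Pa

# Element constraint in the sinusoidal steady state: U = j*omega*CA*P
f = 100.0
omega = 2 * math.pi * f
P = 1.0                       # 1 Pa pressure phasor
U = 1j * omega * CA * P       # resulting volume-velocity phasor

print(CA)       # ~7.05e-9 m^3/Pa
print(abs(U))   # magnitude of U, which leads P by 90 degrees
```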
C. Acoustic Resistance: units of Acoustic Ohms (Pa-s/m3)
1. A narrow tube of radius a << 0.001λ

[Figure: a circular (radius a) rigid tube of length l, filled with the acoustic medium, carrying u(t) between p1(t) and p2(t)]

    p(t) = p1(t) − p2(t) = R_A u(t)    ← assumption: only viscous forces

    R_A = (p1(t) − p2(t)) / u(t) = 8ηl / (πa⁴),

where η is the viscosity of the medium.
Because of the viscous forces, relative motion of the fluid at one radial position with respect to an adjacent position exerts a force opposing the motion that is proportional to the spatial derivative of the velocity and the fluid's coefficient of shear viscosity η.
The action of these forces results in a proportionality of pressure difference and volume velocity that is analogous to an electric resistance. In the sinusoidal steady state, then:

[Circuit: volume velocity U1(jω) through resistance R_A between nodes P1(jω) and P2(jω)]

    P1 − P2 = U1 R_A.

The consequence of the viscosity is that the velocity at the stationary walls is zero and is maximum in the center of the tube (see Fig. 5.3). The viscous forces produce energy loss near the walls, where the velocity changes with position.
[Fig. 5.3: Relative particle velocity amplitude as a function of radial position r (in cm) in a small pipe of radius a = 0.1 cm, at frequency f = 200 Hz. After Kinsler and Frey, 1950, p. 238.]
The velocity profile in Figure 5.3 varies as

    v(r) = 1 − e^(−(0.1 − r)/δ),

where the "space constant" δ = [η/(ρ0 ω)]^(1/2), with η, the coefficient of shear viscosity, = 1.86x10-5 N·s/m² for air at STP; ρ0, the density of air, = 1.2 kg/m³; and ω, the radian frequency, = 2πf.

At 200 Hz, δ = 1.1x10-4 m = 0.011 cm (Figure 5.3).
At 20 Hz, δ = 3.5x10-4 m = 0.035 cm.
The effect of the viscous forces is insignificant when the radius of the tube is an order
of magnitude or more larger than the space constant and therefore we can ignore
viscosity for short tubes of moderate to large radius, 0.01λ < a <0.1λ.
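The quoted space-constant values and the narrow-tube resistance formula can be reproduced directly (η and ρ0 from the text; the tube dimensions in the R_A line are made up for illustration):

```python
import math

eta = 1.86e-5   # coefficient of shear viscosity of air, N*s/m^2 (from the text)
rho0 = 1.2      # density of air, kg/m^3 (from the text)

def delta(f):
    """Viscous boundary-layer 'space constant' delta = sqrt(eta/(rho0*omega))."""
    omega = 2 * math.pi * f
    return math.sqrt(eta / (rho0 * omega))

print(delta(200.0))   # ~1.1e-4 m = 0.011 cm
print(delta(20.0))    # ~3.5e-4 m = 0.035 cm

# Narrow-tube viscous resistance R_A = 8*eta*l/(pi*a^4), illustrative dimensions
a, l = 0.0005, 0.01
RA = 8 * eta * l / (math.pi * a**4)
print(RA)             # acoustic ohms, Pa*s/m^3
```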
2. An infinitely long tube
The action of an acoustic resistor is to absorb sound power. The viscous forces
within a narrow tube convert the sound power into heat that dissipates away. A second
type of acoustic resistance can be constructed from a long tube of moderate cross-
sectional dimensions (0.01 λ < a < 0.2 λ). Such a construction can conduct sound
power away from a system and can be treated as an acoustic resistance where:
    R = ρ0 c / (πa²).
There is a catch, however, in that this lumped element always has one end coupled to ground and therefore can only be used either to terminate acoustic circuits or to be placed in parallel with other elements.

[Figure: a tube whose length l is effectively infinite, driven by p(t) and u(t) relative to the static pressure P0; circuit analog: U1(jω) through R_A from node P1(jω) to ground]

There are ways of dealing with long tubes as a collection of series and parallel elements that have already been discussed in Lecture 2.
D. Two Mixed mass-resistance acoustic loads
1. A tube of intermediate radius (neither wide nor narrow) has an impedance
determined by the combination of an acoustic mass or inertance (associated with
accelerating the fluid mass within the tube) and a resistance (associated with
overcoming viscous drag at the stationary walls of the tube). Since the pressure drops across the resistance and the mass elements add, we think of these as an R and L in series.
[Figure: an intermediate tube of length l carrying u(t) between p1(t) and p2(t); electrical analog: inductor L_E and resistor R_E in series]

    ∆P = P1 − P2 = U (jωρ0 l/S + R),

where S is the cross-sectional area of the tube and R is the resistance.
2. The radiation impedance acts whenever sound radiates from some element
and is made up of an acoustic mass associated with accelerating the air particles
near the surface of the element and a resistance associated with the transmission
of sound energy into the far field. Since the volume velocities associated with
these two processes add (some fraction of U goes into accelerating the mass
layer, while the rest radiates away from the element), we can think of these as
two parallel elements.
Radiation from the end of an organ pipe of radius a can be modeled by the following:

[Circuit: volume velocity U at pressure P driving the parallel combination of a radiation mass L_R and a radiation resistance R_R]

    U/P = Y_Rad = 1/Z_Rad = 1/(jωL_R) + 1/R_R,

where L_R = ρ0 (0.8a)/(2πa²) and R_R = ρ0 c/(2πa²).
Note that the radiation mass is equivalent to the addition of a tube of radius a and
length 0.8a to the end of the pipe. This is the end correction!!
E. Range of applicability of acoustic circuit theory
1. Pressure and volume-velocity ranges consistent with "linear acoustics".
2. Frequency range limited by the assumption of "lumped" elements, i.e. the dimensions of the structures need to be small compared to a wavelength: a and l < 0.1λ.
VI. Circuit Descriptions of a Real Acoustic System
A Jug or Helmholtz Resonator
A. An Acoustic Circuit Description
If we are using acoustic volume velocity as a through variable; the flow of volume
velocity through the neck suggests a series combination of Acoustic Elements. The
volume velocity first flows through an a series combination of an acoustic inertance
LA, and an acoustic resistor RA, and then into the acoustic compliance CA of the closed
cavity, where:
    L_A = ρ0 l′ / (πa²);    R_A = g(l, a, frequency);    C_A = Volume / (γP0).

Furthermore, if we really treat the neck as an L and R combination, then U2 = U1.
In the sinusoidal steady state:

    P1(ω) = U1(ω) ( jωL_A + R_A + 1/(jωC_A) ).
The ratio of P1/U1 defines the acoustic input impedance of the bottle and in this case it
is equal to the series sum of the impedance of the three series elements.
30 Sept -2004
page 16
Lecture 7
Acoustics of Speech & Hearing
6.551 - HST 714J
    Z_IN(ω) = jωL_A + R_A + 1/(jωC_A).
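A numeric sketch of Z_IN for made-up jug dimensions; since R_A = g(l, a, frequency) is not given in closed form here, a constant placeholder resistance is used:

```python
import math

rho0 = 1.18                  # density of air, kg/m^3
gamma, P0 = 1.4, 1.013e5     # adiabatic constant and atmospheric pressure
a, l_eff = 0.01, 0.05        # neck radius and effective length, m (illustrative)
V = 0.5e-3                   # cavity volume, m^3 (illustrative)

LA = rho0 * l_eff / (math.pi * a**2)   # acoustic mass of the neck
CA = V / (gamma * P0)                  # acoustic compliance of the cavity
RA = 1.0e5                             # placeholder acoustic resistance, Pa*s/m^3

def Zin(f):
    """Series input impedance Z_IN = j*omega*LA + RA + 1/(j*omega*CA)."""
    w = 2 * math.pi * f
    return 1j * w * LA + RA + 1 / (1j * w * CA)

# At resonance the mass and compliance reactances cancel: f0 = 1/(2*pi*sqrt(LA*CA))
f0 = 1 / (2 * math.pi * math.sqrt(LA * CA))
print(f0)                     # resonance frequency, Hz (~196 Hz for these numbers)
print(abs(Zin(f0).imag))      # ~0: the reactances cancel, leaving Z_IN ~ RA
```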
B. An Electrical Analog of the Acoustic Circuit Description
In Electrical circuits the wires that connect the ideal elements are perfect conductors.
[Circuit: L_E and R_E in series between nodes E1 and E2, with C_E from E2 to ground; node voltages E1, E2, E3]

If the numerical values of L_E = L_A, R_E = R_A, and C_E = C_A, then I1 = U1, E1 = P1, and E2 = P2.
C. A Mechanical Analog of the Acoustic Circuit Description
In Mechanical circuits the rods that attach ideal mechanical elements are rigid and
massless.
If the numerical values of L_M = L_A, R_M = R_A, and C_M = C_A, then V1 = U1 = U2 and F = P1, so that

    F/V1 = jωL_M + R_M + 1/(jωC_M),

where the total force acting on the elements equals the sum of the forces acting on each:

    F = V1 ( jωL_M + R_M + 1/(jωC_M) ).
MIT OpenCourseWare
http://ocw.mit.edu
6.006 Introduction to Algorithms
Spring 2008
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Lecture 2 Ver 2.0
More on Document Distance
6.006 Spring 2008
Lecture 2: More on the Document Distance Problem
Lecture Overview
Today we will continue improving the algorithm for solving the document distance problem.
• Asymptotic Notation: Define notation precisely as we will use it to compare the
complexity and efficiency of the various algorithms for approaching a given problem
(here Document Distance).
• Document Distance Summary - place everything we did last time in perspective.
• Translate to speed up the ‘Get Words from String’ routine.
• Merge Sort instead of Insertion Sort routine
– Divide and Conquer
– Analysis of Recurrences
• Get rid of sorting altogether?
Readings
CLRS Chapter 4
Asymptotic Notation
General Idea
For any problem (or input), parametrize problem (or input) size as n. Now consider many different problems (or inputs) of size n. Then,
    T(n) = worst-case running time for input size n
         = max over {X : input of size n} of (running time on X)
How to make this more precise?
• Don’t care about T (n) for small n
• Don’t care about constant factors (these may come about differently with different
computers, languages, . . . )
For example, the time (or the number of steps) it takes to complete a problem of size n might be found to be T(n) = 4n² − 2n + 2 µs. From an asymptotic standpoint, since n²
will dominate over the other terms as n grows large, we only care about the highest order
term. We ignore the constant coefficient preceding this highest order term as well because
we are interested in rate of growth.
Formal Definitions
1. Upper Bound: We say T (n) is O(g(n)) if ∃ n0, ∃ c s.t. 0 ≤ T (n) ≤ c.g(n) ∀n ≥ n0
Substituting 1 for n0, we have 0 ≤ 4n2 − 2n + 2 ≤ 26n2 ∀n ≥ 1
∴ 4n2 − 2n + 2 = O(n2)
Some semantics:
• Read the ‘equal to’ sign as “is” or “∈” (belongs to a set).
• Read the O as ‘upper bound’
2. Lower Bound: We say T (n) is Ω(g(n)) if ∃ n0, ∃ d s.t. 0 ≤ d·g(n) ≤ T (n) ∀n ≥ n0
Substituting 1 for n0 and 1 for d, we have 0 ≤ n² ≤ 4n² + 22n − 12 ∀n ≥ 1
∴ 4n² + 22n − 12 = Ω(n²)
Semantics:
• Read the ‘equal to’ sign as “is” or “∈” (belongs to a set).
• Read the Ω as ‘lower bound’.
3. Order: We say T (n) is Θ(g(n)) iff T (n) = O(g(n)) and T (n) = Ω(g(n))
Semantics: Read the Θ as ‘high order term is g(n)’
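The two witness pairs claimed above (c = 26, n0 = 1 for the upper bound; d = 1, n0 = 1 for the lower bound) can be checked mechanically over a range of n:

```python
# Check the O(n^2) witness for T(n) = 4n^2 - 2n + 2 with c = 26, n0 = 1,
# and the Omega(n^2) witness for 4n^2 + 22n - 12 with d = 1, n0 = 1.
def T_upper(n):
    return 4 * n * n - 2 * n + 2

def T_lower(n):
    return 4 * n * n + 22 * n - 12

for n in range(1, 10001):
    assert 0 <= T_upper(n) <= 26 * n * n      # 4n^2 - 2n + 2 = O(n^2)
    assert 0 <= n * n <= T_lower(n)           # 4n^2 + 22n - 12 = Omega(n^2)

print("both witnesses hold for n in [1, 10000]")
```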
Document Distance so far: Review
To compute the ‘distance’ between 2 documents, perform the following operations:

For each of the 2 files:
    Read file / Make word list    Θ(n²)   (+ op on list)
    Count frequencies             Θ(n²)   (double loop)
    Sort in order                 Θ(n²)   (insertion sort, double loop)
Once vectors D1, D2 are obtained:
    Compute the angle  arccos( D1·D2 / (‖D1‖ ∗ ‖D2‖) )   Θ(n)
The following table summarizes the efficiency of our various optimizations for the Bobsey vs. Lewis comparison problem:

    Version  Optimizations                                              Time         Asymptotic
    V1       initial                                                    ?
    V2       add profiling                                              ?
    V3       wordlist.extend(. . . )                                    195 s
    V4       dictionaries in count-frequency                            84 s         Θ(n²) → Θ(n)
    V5       process words rather than chars in get words from string   41 s → 13 s  Θ(n²) → Θ(n)
    V6       merge sort rather than insertion sort                      6 s          Θ(n²) → Θ(n lg(n))
    V6B      eliminate sorting altogether                               1 s          a Θ(n) algorithm
The details for the version 5 (V5) optimization will not be covered in detail in this lecture.
The code, results and implementation details can be accessed at this link. The only big
obstacle that remains is to replace Insertion Sort with something faster because it takes
time Θ(n2) in the worst case. This will be | https://ocw.mit.edu/courses/6-006-introduction-to-algorithms-spring-2008/0084579211f6874fb95258a10a701ae3_lec2.pdf |
Insertion Sort with something faster because it takes
time Θ(n2) in the worst case. This will be accomplished with the Merge Sort improvement
which is discussed below.
Merge Sort
Merge Sort uses a divide/conquer/combine paradigm to scale down the complexity and
scale up the efficiency of the Insertion Sort routine.
Figure 1: Divide/Conquer/Combine Paradigm
[Figure 1 shows: an input array A of size n is divided into two arrays L and R of size n/2; each is sorted recursively into L′ and R′; merging L′ and R′ yields the sorted array of size n.]
Figure 2: “Two Finger” Algorithm for Merge
The above operations give us

    T(n) = C1 + 2·T(n/2) + C·n
          (divide) (recursion)  (merge)

Keeping only the higher order terms,

    T(n) = 2T(n/2) + C·n = C·n + 2 × (C·n/2 + 2(C·(n/4) + . . .))
Detailed notes on implementation of Merge Sort and results obtained with this improvement
are available here. With Merge Sort, the running time scales “nearly linearly” with the size
of the input(s), as n lg(n) is “nearly linear” in n.
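The divide/conquer/combine steps above can be sketched as follows; a minimal version with the "two finger" merge, not the course's actual docdist code, and the sample array is arbitrary:

```python
def merge_sort(A):
    """Sort A by divide (split), conquer (recurse), combine (two-finger merge)."""
    if len(A) <= 1:
        return A
    mid = len(A) // 2
    L = merge_sort(A[:mid])       # sorted left half  (L')
    R = merge_sort(A[mid:])       # sorted right half (R')
    # "Two finger" merge: i walks L, j walks R, always emitting the smaller element
    out, i, j = [], 0, 0
    while i < len(L) and j < len(R):
        if L[i] <= R[j]:
            out.append(L[i]); i += 1
        else:
            out.append(R[j]); j += 1
    out.extend(L[i:])             # one of these two tails is already empty
    out.extend(R[j:])
    return out

print(merge_sort([5, 4, 7, 3, 6, 1, 9, 2]))  # [1, 2, 3, 4, 5, 6, 7, 9]
```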
An Experiment

• Test Merge Routine: Merge Sort (in Python), Θ(n lg(n)) if n = 2^i, takes ≈ 2.2 n lg(n) µs
• Test Insert Routine: Insertion Sort (in Python), Θ(n²), takes ≈ 0.2 n² µs
• Built-in Sort, or sorted (in C), Θ(n lg(n)), takes ≈ 0.1 n lg(n) µs
The 20X constant factor difference comes about because Built-in Sort is written in C while Merge Sort is written in Python.
[Figure 2 shows a trace of the "two finger" merge: indices i and j advance through the two sorted halves, emitting the smaller element at each step, until array L and array R are both done.]
Figure 3: Efficiency of Running Time for Problem of size n is of order Θ(n lg(n))
Question: When is Merge Sort (in Python), at 2n lg(n) µs, better than Insertion Sort (in C), at 0.01n² µs?
Aside: Note the 20X constant factor difference between Insertion Sort written in Python
and that written in C
Answer: Merge Sort wins for n ≥ 2^12 = 4096
Take Home Point: A better algorithm is much more valuable than hardware or compiler
even for modest n
See recitation for more Python Cost Model experiments of this sort . . .
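The crossover claim can be reproduced by plugging the two cost models from the Question above into a short search (costs in µs; the models are coarse measured fits, not exact):

```python
import math

def merge_py(n):       # model: Merge Sort in Python, ~2 n lg(n) microseconds
    return 2 * n * math.log2(n)

def insert_c(n):       # model: Insertion Sort in C, ~0.01 n^2 microseconds
    return 0.01 * n * n

# Find the first power of two where Python Merge Sort beats C Insertion Sort.
n = 2
while merge_py(n) >= insert_c(n):
    n *= 2
print(n)  # 4096 = 2^12
```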
[Figure 3: the recursion tree for merge sort; each of the lg(n)+1 levels (including the leaves) costs C·n, so T(n) = C·n·(lg(n)+1) = Θ(n lg n).]
MIT OpenCourseWare
http://ocw.mit.edu
6.641 Electromagnetic Fields, Forces, and Motion, Spring 2005
Please use the following citation format:
Markus Zahn, 6.641 Electromagnetic Fields, Forces, and Motion, Spring
2005. (Massachusetts Institute of Technology: MIT OpenCourseWare).
http://ocw.mit.edu (accessed MM DD, YYYY). License: Creative
Commons Attribution-Noncommercial-Share Alike.
Note: Please use the actual date you accessed this material in your citation.
For more information about citing these materials or our Terms of Use, visit:
http://ocw.mit.edu/terms
6.641, Electromagnetic Fields, Forces, and Motion
Prof. Markus Zahn
Lecture 4: The Scalar Electric Potential and the Coulomb Superposition Integral
I. Quasistatics

Electroquasistatics (EQS) vs. Magnetoquasistatics (MQS). For EQS:

    ∇ × E = −∂(µ0H)/∂t ≈ 0

    ∇ · (ε0E) = ρ

    ∇ × H = J + ∂(ε0E)/∂t

    ∇ · J + ∂ρ/∂t = 0
II. Irrotational EQS Electric Field

1. Conservative Electric Field

    ∇ × E = −∂(µ0H)/∂t          ∇ · (µ0H) = 0
    ∇ × H = J + ∂(ε0E)/∂t        ∇ · J = 0
    ∇ · (ε0E) = ρ

    ∮_C E · ds = −(d/dt) ∫_S µ0H · da ≈ 0
6.641, Electromagnetic Fields, Forces, and Motion
Prof. Markus Zahn
Lecture 4
Page 1 of 6
    ∮_C E · ds = ∫_a^b (path I) E · ds + ∫_b^a (path II) E · ds = 0

    ⇒  ∫_a^b (path I) E · ds = ∫_a^b (path II) E · ds     (Electromotive Force, EMF)

The EMF between 2 points (a, b) is independent of path; the E field is conservative.
E field is conservative
− Φ r
( ref )
rref
∫
= E
r
i
ds
( )
Φ r
(cid:47) Scalar
electric potential
b
i
∫
E ds =
a
rref
∫
a
E ds +
i
b
∫
rref
E ds
i
= Φ (
(
) − Φ r
a
ref
)
(
+ Φ r
ref
)
− Φ (
( )
)b = Φ a − Φ (
)b
2. The Electric Scalar Potential
    r = x i_x + y i_y + z i_z

    ∆r = ∆x i_x + ∆y i_y + ∆z i_z

    ∆n = |∆r| cos θ
    ∆Φ = Φ(x + ∆x, y + ∆y, z + ∆z) − Φ(x, y, z)
       = Φ(x, y, z) + (∂Φ/∂x)∆x + (∂Φ/∂y)∆y + (∂Φ/∂z)∆z − Φ(x, y, z)
       = (∂Φ/∂x)∆x + (∂Φ/∂y)∆y + (∂Φ/∂z)∆z
       = [(∂Φ/∂x) i_x + (∂Φ/∂y) i_y + (∂Φ/∂z) i_z] · ∆r = (grad Φ) · ∆r = ∇Φ · ∆r

    ∇ = i_x ∂/∂x + i_y ∂/∂y + i_z ∂/∂z

    grad Φ = ∇Φ = i_x ∂Φ/∂x + i_y ∂Φ/∂y + i_z ∂Φ/∂z
    ∫_r^{r+∆r} E · ds = Φ(r) − Φ(r + ∆r) = −∆Φ = −∇Φ · ∆r = E · ∆r

    ⇒  E = −∇Φ
    ∆Φ = (∆Φ/∆n) |∆r| cos θ = (∆Φ/∆n) n · ∆r = ∇Φ · ∆r

    ∇Φ = (∆Φ/∆n) n = (∂Φ/∂n) n

The gradient is in the direction perpendicular to the equipotential surfaces.
III. Vector Identity

    ∇ × E = 0    and    E = −∇Φ,    since    ∇ × (∇Φ) = 0.
IV. Sample Problem

    Φ(x, y) = V0 xy / a²     (equipotential lines are the hyperbolas xy = constant)

    E = −∇Φ = −[(∂Φ/∂x) i_x + (∂Φ/∂y) i_y] = −(V0/a²)(y i_x + x i_y)

Electric Field Lines [lines tangent to the electric field]:

    dy/dx = E_y/E_x = x/y  ⇒  y dy = x dx  ⇒  y²/2 = x²/2 + C

    y² − x² = y0² − x0²    (hyperbolas orthogonal to xy = constant; lines pass through the point (x0, y0))
Courtesy of Hermann A. Haus and James R. Melcher. Used with permission.
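As a quick sanity check of the gradient and the orthogonal-hyperbola claim; a sketch with V0 = a = 1, and an arbitrary sample point:

```python
# Finite-difference check of E = -grad(Phi) for Phi = V0*x*y/a^2
# (V0 and a set to 1 purely for illustration), plus the orthogonality
# between field lines and equipotentials.
def phi(x, y):
    return x * y    # Phi with V0 = a = 1

def E_field(x, y, h=1e-6):
    # E = -grad(Phi), approximated by central differences
    Ex = -(phi(x + h, y) - phi(x - h, y)) / (2 * h)
    Ey = -(phi(x, y + h) - phi(x, y - h)) / (2 * h)
    return Ex, Ey

x0, y0 = 0.7, 1.3
Ex, Ey = E_field(x0, y0)
print(Ex, Ey)            # analytically E = -(y, x), i.e. (-1.3, -0.7)

# Tangent to the equipotential xy = const at (x0, y0): direction (x0, -y0).
# Tangent to the field line y^2 - x^2 = const:          direction (y0, x0).
dot = x0 * y0 + (-y0) * x0
print(dot)               # 0: the two families of hyperbolas are orthogonal
```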
V. Poisson’s Equation

    ∇ · E = ∇ · (−∇Φ) = ρ/ε0  ⇒  ∇²Φ = −ρ/ε0

    ∇²Φ = ∇ · (∇Φ) = [i_x ∂/∂x + i_y ∂/∂y + i_z ∂/∂z] · [∂Φ/∂x i_x + ∂Φ/∂y i_y + ∂Φ/∂z i_z]
        = ∂²Φ/∂x² + ∂²Φ/∂y² + ∂²Φ/∂z²
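The Laplacian expansion can be sanity-checked numerically: away from the charge, the point-charge potential of the next section (with q/(4πε0) scaled to 1) should satisfy ∇²Φ = 0. A sketch:

```python
import math

def phi(x, y, z):
    # Point-charge potential with q/(4*pi*eps0) set to 1
    return 1.0 / math.sqrt(x * x + y * y + z * z)

def laplacian(f, x, y, z, h=1e-3):
    # 7-point finite-difference approximation of d2f/dx2 + d2f/dy2 + d2f/dz2
    return (f(x + h, y, z) + f(x - h, y, z)
          + f(x, y + h, z) + f(x, y - h, z)
          + f(x, y, z + h) + f(x, y, z - h)
          - 6 * f(x, y, z)) / (h * h)

val = laplacian(phi, 1.0, 0.5, -0.7)
print(val)  # ~0: away from r = 0, del^2 Phi = -rho/eps0 = 0
```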
VI. Coulomb Superposition Integral
1. Point Charge
    E_r = −∂Φ/∂r = q/(4πε0 r²)  ⇒  Φ = q/(4πε0 r) + C

Take the reference Φ(r → ∞) = 0 ⇒ C = 0:

    Φ = q/(4πε0 r)
2. Superposition of Charges
    dΦ_T(P) = (1/(4πε0)) [ q1/|r − r1| + q2/|r − r2| + . . . + dq1/|r − r′1| + dq2/|r − r′2| + . . . ]

    Φ_T(P) = (1/(4πε0)) [ Σ_{n=1}^{N} q_n/|r − r_n| + ∫_{all line, surface, and volume charges} dq/|r − r′| ]

           = (1/(4πε0)) [ Σ_{n=1}^{N} q_n/|r − r_n| + ∫_L λ(r′) dl′/|r − r′| + ∫_S σ(r′) da′/|r − r′| + ∫_V ρ(r′) dV′/|r − r′| ]

Short-hand notation:

    Φ(r) = ∫_V ρ(r′) dV′ / (4πε0 |r − r′|)
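The discrete N-charge sum can be sketched in a few lines (units chosen so that 1/(4πε0) = 1; the charge layout is illustrative):

```python
import math

K = 1.0  # stands in for 1/(4*pi*eps0)

def potential(r, charges):
    """Phi(r) = sum_n K * q_n / |r - r_n|  (discrete Coulomb superposition)."""
    total = 0.0
    for q, rn in charges:
        total += K * q / math.dist(r, rn)
    return total

# Two equal and opposite charges (a dipole): the potential vanishes on the
# symmetry plane midway between them.
charges = [(+1.0, (0.0, 0.0, 0.5)), (-1.0, (0.0, 0.0, -0.5))]
print(potential((2.0, 3.0, 0.0), charges))   # 0 by symmetry

# A single charge reduces to Phi = K*q/r:
print(potential((0.0, 0.0, 2.0), [(1.0, (0.0, 0.0, 0.0))]))  # 0.5
```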
Chapter 7 Notes - Inference for Single Samples
• You know already that for a large sample you can invoke the CLT, so:

    X̄ ∼ N(µ, σ²/n).

Also for a large sample, you can replace an unknown σ by s.
• You know how to do a hypothesis test for the mean, either:
– calculate the z-statistic  z = (x̄ − µ0)/(σ/√n)
and compare it with zα or zα/2.
– calculate pvalue and compare with α or α/2.
– calculate CI and see whether µ0 is within it.
Let’s add a few more calculations.
1) Determine n to achieve a certain width for a 2-sided confidence interval. Of course, small
width → large n.
Derivation of Sample Size Calculation for CI
    n = ( z_{α/2} σ / E )²     (Sample Size Calculation)

where E is the half-width of the CI.
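A small sketch of the sample-size rule (the σ, E, and α values are made up for illustration; NormalDist is from the Python standard library):

```python
import math
from statistics import NormalDist

def sample_size_ci(sigma, E, alpha=0.05):
    """n = (z_{alpha/2} * sigma / E)^2, rounded UP to the next integer."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # z_{alpha/2}
    return math.ceil((z * sigma / E) ** 2)

# Example: sigma = 10, want a 95% CI with half-width E = 2
n = sample_size_ci(10.0, 2.0)
print(n)  # (1.95996 * 10 / 2)^2 = 96.04 -> rounds up to 97
```

Halving E roughly quadruples n, as the formula predicts.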
Example
2) Power Calculation
• For upper 1-sided z-tests:
    H0 : µ ≤ µ0
    H1 : µ > µ0;  in fact, we’ll take µ = µ1.
The calculation only makes sense if µ1 > µ0. We want to know what the power of the
test is to detect mean µ1. We’ll compute power as a function of µ1.
Derivation of Power Calculation for Upper 1-sided z-tests
    π(µ1) = P(test rejects H0 in favor of H1 | H1) = Φ( −z_α + (µ1 − µ0)/(σ/√n) ).
Now we can consider π(µ1) as a function of µ1. Again, the alternative hypothesis only makes sense if µ1 > µ0. As µ1 increases, what happens to π(µ1)?
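Plugging numbers into the power formula shows π(µ1) rising from α toward 1 as µ1 grows (µ0, σ, n, and α below are illustrative):

```python
import math
from statistics import NormalDist

def power_upper(mu1, mu0=0.0, sigma=1.0, n=25, alpha=0.05):
    """pi(mu1) = Phi(-z_alpha + (mu1 - mu0)/(sigma/sqrt(n))), upper 1-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    return NormalDist().cdf(-z_alpha + (mu1 - mu0) / (sigma / math.sqrt(n)))

# At mu1 = mu0 the power equals the size alpha; it then increases with mu1.
print(round(power_upper(0.0), 3))   # 0.05
print(round(power_upper(0.3), 3))
print(round(power_upper(0.6), 3))
```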
• For lower 1-sided tests,

    π(µ1) = Φ( −z_α + (µ0 − µ1)/(σ/√n) ).

The alternative hypothesis only makes sense when µ1 < µ0. As µ1 increases (and gets closer to µ0), what happens to π(µ1)?
• For 2-sided tests,

    π(µ1) = P( X̄ < µ0 − z_{α/2} σ/√n | µ = µ1 ) + P( X̄ > µ0 + z_{α/2} σ/√n | µ = µ1 )
          = Φ( −z_{α/2} + (µ0 − µ1)/(σ/√n) ) + Φ( −z_{α/2} + (µ1 − µ0)/(σ/√n) )

As µ1 changes, what happens to π(µ1)?
3) Sample size calculation for power. Want to find the n required to guarantee a certain
power, 1 − β, for an α-level z-test.
Let δ := µ1 − µ0 so that µ1 = µ0 + δ.
• For upper 1-sided, we have (look up at the power calculation we did for upper 1-sided):
    π(µ1) = π(µ0 + δ) = Φ( −z_α + δ/(σ/√n) ) = 1 − β.

Since our notation says that z_β is defined as the number where Φ(z_β) = 1 − β:

    −z_α + δ/(σ/√n) = z_β.

Now solve that for n:

    n = ( (z_α + z_β) σ / δ )².
• For lower 1-sided, n is the same by symmetry.
• For 2-sided, it turns out one of the two terms of π(µ1) can be ignored to get an approximation:

    n ≈ ( (z_{α/2} + z_β) σ / δ )².
Remember to round up to the next integer when doing sample-size calculations!
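A sketch that the sample-size formula delivers the requested power for the upper 1-sided test (δ, σ, α, and β below are illustrative):

```python
import math
from statistics import NormalDist

nd = NormalDist()

def n_for_power(delta, sigma, alpha=0.05, beta=0.10):
    """n = ((z_alpha + z_beta) * sigma / delta)^2 for a 1-sided z-test, rounded up."""
    z_a, z_b = nd.inv_cdf(1 - alpha), nd.inv_cdf(1 - beta)
    return math.ceil(((z_a + z_b) * sigma / delta) ** 2)

n = n_for_power(delta=0.5, sigma=1.0)
print(n)  # 35

# Verify: the achieved power at mu1 = mu0 + delta is at least 1 - beta = 0.90
z_a = nd.inv_cdf(0.95)
achieved = nd.cdf(-z_a + 0.5 / (1.0 / math.sqrt(n)))
print(round(achieved, 3))  # >= 0.90, slightly above because n was rounded up
```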
Example
7.2 Inferences on Small Samples
If n < 30, we often need to use the t-distribution rather than z-distribution N (0, 1) since s
doesn’t approximate σ very well. Need X1, . . . , Xn ∼ N (µ, σ2).
The bottom line is that we replace:
    Z = (X̄ − µ)/(σ/√n)    by    T = (X̄ − µ)/(S/√n)
for a t-test on the mean. Replace zα by tn−1,α. Replace σ by S. There’s a chart in your
book on page 253 that summarizes this.
Note that the power calculation is harder for t-tests, so for this class, just say S ≈ σ and
use the normal distribution power calculation. You’ll get an approximation.
Example
7.3 Inferences on Variances
Assume X1, . . . , Xn ∼ N(µ, σ²). Inferences on variance are very sensitive to this assumption, so make inferences only with caution!
The bottom line is that we replace:
    Z = (X̄ − µ)/(σ/√n)    by    χ² = (n − 1)S²/σ²

(and test for σ², not µ). Replace z_α by χ²_{n−1,1−α} and/or χ²_{n−1,α}.
Hypothesis tests on variance are not quite the same as on the mean. Let’s do some of the
computations to show you. First, we’ll compute the CI.
2-sided CI for σ². As usual, start with what we know:

    1 − α = P( χ²_{n−1,1−α/2} ≤ (n−1)S²/σ² ≤ χ²_{n−1,α/2} )
                   (*1)                        (*2)

and remember χ² = (n − 1)S²/σ², and we want:

    1 − α = P( L ≤ σ² ≤ U )  for some L and U.

Let’s solve it on the left for (*1) and on the right for (*2):

    σ² ≤ (n−1)S²/χ²_{n−1,1−α/2}        (n−1)S²/χ²_{n−1,α/2} ≤ σ²

Putting it together we have:

    1 − α = P( (n−1)S²/χ²_{n−1,α/2} ≤ σ² ≤ (n−1)S²/χ²_{n−1,1−α/2} )
          = P( L ≤ σ² ≤ U ).

The 100(1 − α)% confidence interval for σ² is then

    (n−1)s²/χ²_{n−1,α/2} ≤ σ² ≤ (n−1)s²/χ²_{n−1,1−α/2}.
Similarly, 1-sided CI’s for σ² are:

    (n−1)s²/χ²_{n−1,α} ≤ σ²    and    σ² ≤ (n−1)s²/χ²_{n−1,1−α}.
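A numeric sketch of the 2-sided CI for σ², for n = 10 and α = 0.05. The Python standard library has no chi-square inverse CDF, so the quantiles χ²_{9,0.025} ≈ 19.023 and χ²_{9,0.975} ≈ 2.700 are hard-coded from a standard table, and s² is a made-up value:

```python
# 95% two-sided CI for sigma^2 with n = 10 observations and sample variance s^2 = 4.0
n = 10
s2 = 4.0
chi2_upper = 19.023   # chi^2_{9, 0.025}  (tabulated)
chi2_lower = 2.700    # chi^2_{9, 0.975}  (tabulated)

L = (n - 1) * s2 / chi2_upper   # lower CI endpoint for sigma^2
U = (n - 1) * s2 / chi2_lower   # upper CI endpoint for sigma^2
print(L, U)   # roughly (1.89, 13.33)

assert L < s2 < U   # the point estimate lies inside its own interval
```

Note how asymmetric the interval is about s², a consequence of the skewed chi-square distribution.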
Hypothesis tests on Variance (a chi-square test)
To test H0 : σ² = σ²0 vs H1 : σ² ≠ σ²0, we can either:

• Compute the χ² statistic:

      χ² = (n − 1)s²/σ²0

  and reject H0 when either χ² > χ²_{n−1,α/2} or χ² < χ²_{n−1,1−α/2}.

• Compute the p-value:
First we calculate the probability to be as extreme in either direction:
    P_U = P( χ²_{n−1} ≥ χ² )    or    P_L = P( χ²_{n−1} ≤ χ² ),

depending on which is smaller (more extreme). The probability to obtain a χ² at least as extreme under H0 is:

    2 min(P_U, P_L).
This accounts for being extreme in either direction.
• Compute CI (already done)
Table 7.6 on page 257 summarizes the chi-square hypothesis test on variance.
Note that this is not the most commonly used chi-square test!
See Wikipedia: A chi-square test is any statistical hypothesis test in which the sampling
distribution of the test statistic is a chi-square distribution when the null hypothesis is true...
(In this case, we have normal random variables, so the distribution of the test statistic (n − 1)S²/σ² is chi-square.)

Example
MIT OpenCourseWare
http://ocw.mit.edu
15.075J / ESD.07J Statistical Thinking and Data Analysis
Fall 2011
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.