Note: as h → ∞, tanh kh → 1, and we recover the deep-water dispersion relation deduced in our wind-over-water lecture.
Chapter 19. Water waves

Physical Interpretation
• The relative importance of σ and g is prescribed by the Bond number

Bo = ρg / (σk²) = ρgλ² / ((2π)²σ) = λ² / ((2π)² ℓc²) ,

where ℓc = √(σ/ρg) is the capillary length.

• For air-water, Bo ∼ 1 for λ ∼ 2πℓc ∼ 1.7 cm.
• Bo ≫ 1, λ ≫ 2πℓc: surface effects negligible ⇒ gravity waves.
• Bo ≪ 1, λ ≪ 2πℓc: influence of g is negligible ⇒ capillary waves.
Special Cases: deep and shallow water. Can expand via Taylor series: for kh ≪ 1, tanh kh = kh − (1/3)(kh)³ + O((kh)⁵), and for kh ≫ 1, tanh kh ≈ 1.
A. Gravity waves, Bo ≫ 1: c² = (g/k) tanh kh.

Shallow water (kh ≪ 1) ⇒ c = √(gh). All wavelengths travel at the same speed (i.e. non-dispersive), so one can only surf in shallow water.

Deep water (kh ≫ 1) ⇒ c = √(g/k), so longer waves travel faster, e.g. drop a large stone into a pond.
B. Capillary waves, Bo ≪ 1: c² = (σk/ρ) tanh kh.

Deep water (kh ≫ 1) ⇒ c = √(σk/ρ), so short waves travel fastest, e.g. a raindrop in a puddle.

Figure 19.2: Deep water capillary waves, whose speed increases as wavelength decreases.

Shallow water (kh ≪ 1) ⇒ c = √(σhk²/ρ).

An interesting note: in lab modeling of shallow water waves (kh ≪ 1),

c ≈ [ (g/k + σk/ρ)(kh − (1/3)k³h³ + O((kh)⁵)) ]^{1/2} = [ gh + (σh/ρ − (1/3)gh³)k² + O((kh)⁴) gh ]^{1/2} .

In ripple tanks, choose h = (3σ/ρg)^{1/2} so that the k² terms cancel, to get a good approximation to nondispersive waves. In water, h = (3 · 70/10³)^{1/2} cm ≈ 0.5 cm (CGS units).

From c(k) one can deduce cmin = (4gσ/ρ)^{1/4} for kmin = (ρg/σ)^{1/2}.
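These numbers are easy to check. The sketch below (Python; the air-water constants g = 9.8 m/s², ρ = 1000 kg/m³, σ = 0.07 N/m are our assumed values, not from the notes) evaluates c(k) = [(g/k + σk/ρ) tanh kh]^{1/2} and confirms the 1.7 cm crossover and the deep-water minimum phase speed:

```python
import math

# Assumed air-water values (SI units); these constants are ours, not from the notes.
g = 9.8          # gravitational acceleration [m/s^2]
rho = 1000.0     # water density [kg/m^3]
sigma = 0.07     # surface tension [N/m]

def phase_speed(k, h=float('inf')):
    """c(k) = sqrt((g/k + sigma*k/rho) * tanh(kh)); h=inf gives deep water."""
    t = 1.0 if math.isinf(h) else math.tanh(k * h)
    return math.sqrt((g / k + sigma * k / rho) * t)

lc = math.sqrt(sigma / (rho * g))      # capillary length
print(2 * math.pi * lc)                # ~0.017 m: the 1.7 cm gravity/capillary crossover

k_min = math.sqrt(rho * g / sigma)     # wavenumber of minimum phase speed
c_min = (4 * g * sigma / rho) ** 0.25  # minimum deep-water phase speed
print(c_min, phase_speed(k_min))       # both ~0.23 m/s
```

Phase speeds at wavenumbers on either side of kmin come out larger, consistent with c(k) having its minimum there.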
Image courtesy of Andrew Davidhazy. Used with permission.

Group velocity: when c = c(λ), a wave is called dispersive since its different Fourier components (corresponding to different k or λ) separate or disperse, e.g. deep water gravity waves: c ∝ λ^{1/2}. In a dispersive system, the energy of a wave component does not propagate at the phase speed c = ω/k, but at the group velocity:
cg = dω/dk = d(ck)/dk .   (19.5)
Deep gravity waves: ω = ck = √(gk), so cg = ∂ω/∂k = (1/2)√(g/k) = c/2.

Deep capillary waves: c = √(σ/ρ) k^{1/2}, ω = √(σ/ρ) k^{3/2} ⇒ cg = ∂ω/∂k = (3/2)√(σ/ρ) k^{1/2} = (3/2) c.
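Both results can be cross-checked numerically by differentiating ω(k) with a central difference (a sketch; the air-water constants are our assumed values):

```python
import math

g, rho, sigma = 9.8, 1000.0, 0.07      # assumed air-water values (SI units)

def omega_gravity(k):                  # deep gravity waves: omega = sqrt(g k)
    return math.sqrt(g * k)

def omega_capillary(k):                # deep capillary waves: omega = sqrt(sigma/rho) k^(3/2)
    return math.sqrt(sigma / rho) * k ** 1.5

def group_speed(omega, k, dk=1e-6):    # central-difference d(omega)/dk
    return (omega(k + dk) - omega(k - dk)) / (2 * dk)

k = 10.0
print(group_speed(omega_gravity, k) / (omega_gravity(k) / k))      # ~0.5, i.e. cg = c/2
k = 5000.0
print(group_speed(omega_capillary, k) / (omega_capillary(k) / k))  # ~1.5, i.e. cg = 3c/2
```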
MIT OCW: 18.357 Interfacial Phenomena, Prof. John W. M. Bush
Flow past an obstacle.
If U < cmin, no steady waves are generated by the obstacle.
If U > cmin, there are two k-values for which c = U:
1. the smaller k is a gravity wave with cg = c/2 < c ⇒ energy swept downstream.
2. the larger k is a capillary wave with cg = 3c/2 > c, so the energy is swept upstream.
Figure 19.3: Phase speed c of surface waves as a function of their wavelength λ.
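The two wavenumbers with c = U can be found numerically by bisecting on either side of kmin, since the deep-water c(k) decreases on the gravity branch and increases on the capillary branch. A sketch (the obstacle speed U = 0.30 m/s and the root brackets are illustrative assumptions):

```python
import math

g, rho, sigma = 9.8, 1000.0, 0.07            # assumed air-water values (SI units)
k_min = math.sqrt(rho * g / sigma)           # wavenumber of minimum phase speed

def c_deep(k):                               # deep-water phase speed
    return math.sqrt(g / k + sigma * k / rho)

def bisect(f, a, b, n=100):
    """Simple bisection; assumes f(a) and f(b) have opposite signs."""
    for _ in range(n):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

U = 0.30                                     # obstacle speed [m/s], above c_min ~ 0.23 m/s
f = lambda k: c_deep(k) - U
k_gravity = bisect(f, 1e-3, k_min)           # smaller root: gravity branch
k_capillary = bisect(f, k_min, 1e6)          # larger root: capillary branch
print(k_gravity, k_capillary)                # the two wavenumbers with c = U
```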
MIT OpenCourseWare
http://ocw.mit.edu

18.357 Interfacial Phenomena
Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
LINEAR ALGEBRA: VECTOR SPACES AND OPERATORS

B. Zwiebach
October 21, 2013

Contents
1 Vector spaces and dimensionality
2 Linear operators and matrices
3 Eigenvalues and eigenvectors
4 Inner products
5 Orthonormal basis and orthogonal projectors
6 Linear functionals and adjoint operators
7 Hermitian and Unitary operators
In quantum mechanics the state of a physical system is a vector in a complex vector space. Observables
are linear operators, in fact, Hermitian operators acting on this complex vector space. The purpose
of this chapter is to learn the basics of vector spaces, the structures that can be built on those spaces,
and the operators that act on them.
Complex vector spaces are somewhat different from the more familiar real vector spaces; I would say they have more powerful properties. In order to understand complex vector spaces more generally, it is useful to compare them often to their real counterparts. We will follow here the discussion of the book Linear Algebra Done Right, by Sheldon Axler.
In a vector space one has vectors and numbers. We can add vectors to get vectors and we can
multiply vectors by numbers to get vectors. If the numbers we use are real, we have a real vector space.
If the numbers we use are complex, we have a complex vector space. More generally, the numbers
we use belong to what is called in mathematics a ‘field’ and denoted by the letter F. We will discuss
just two cases, F = R, meaning that the numbers are real, and F = C, meaning that the numbers are
complex.
The definition of a vector space is the same for F being R or C. A vector space V is a set of vectors with an operation of addition (+) that assigns an element u + v ∈ V to each u, v ∈ V. This means that V is closed under addition. There is also a scalar multiplication by elements of F, with av ∈ V for any a ∈ F and v ∈ V. This means the space V is closed under multiplication by numbers. These operations must satisfy the following additional properties:

1. u + v = v + u for all u, v ∈ V (addition is commutative).

2. u + (v + w) = (u + v) + w and (ab)u = a(bu) for any u, v, w ∈ V and a, b ∈ F (associativity).

3. There is a vector 0 ∈ V such that 0 + u = u for all u ∈ V (additive identity).

4. For each v ∈ V there is a u ∈ V such that v + u = 0 (additive inverse).

5. The element 1 ∈ F satisfies 1v = v for all v ∈ V (multiplicative identity).

6. a(u + v) = au + av and (a + b)v = av + bv for every u, v ∈ V and a, b ∈ F (distributive property).
This definition is very efficient. Several familiar properties follow from it by short proofs (which
we will not give, but are not complicated and you may try to produce):
• The additive identity is unique: any vector 0′ that acts like 0 is actually equal to 0.

• 0v = 0, for any v ∈ V, where the first zero is a number and the second one is a vector. This means that the number zero acts as expected when multiplying a vector.

• a0 = 0, for any a ∈ F. Here both zeroes are vectors. This means that the zero vector multiplied by any number is still the zero vector.

• The additive inverse of any vector v ∈ V is unique. It is denoted by −v, and in fact −v = (−1)v.
We must emphasize that while the numbers in F are sometimes real or complex, we never speak of the vectors themselves as real or complex. A vector multiplied by a complex number is not said to be a complex vector, for example! The vectors in a real vector space are not themselves real, nor are the vectors in a complex vector space complex. We have the following examples of vector spaces:
1. The set of N-component vectors

(a1, a2, . . . , aN) ,  ai ∈ R ,  i = 1, 2, . . . , N ,   (1.1)

forms a real vector space.

2. The set of M × N matrices with complex entries

[ a11 · · · a1N ; a21 · · · a2N ; . . . ; aM1 · · · aMN ] ,  aij ∈ C ,   (1.2)

is a complex vector space. Here multiplication by a constant multiplies each entry of the matrix by the constant.
3. We can have matrices with complex entries that naturally form a real vector space. The space
of two-by-two hermitian matrices defines a real vector space. They do not form a complex vector
space since multiplication of a hermitian matrix by a complex number ruins the hermiticity.
4. The set P(F) of polynomials p(z). Here the variable z ∈ F and p(z) ∈ F. Each polynomial p(z) has coefficients a0, a1, . . . , an also in F:

p(z) = a0 + a1 z + a2 z² + . . . + an zⁿ .   (1.3)

By definition, the integer n is finite but it can take any nonnegative value. Addition of polynomials works as expected and multiplication by a constant is also the obvious multiplication. The space P(F) of all polynomials so defined forms a vector space over F.
5. The set F∞ of infinite sequences (x1, x2, . . .) of elements xi ∈ F. Here

(x1, x2, . . .) + (y1, y2, . . .) = (x1 + y1, x2 + y2, . . .) ,
a(x1, x2, . . .) = (ax1, ax2, . . .) ,  a ∈ F .   (1.4)

This is a vector space over F.

6. The set of complex functions on an interval x ∈ [0, L] forms a vector space over C.
To better understand a vector space one can try to figure out its possible subspaces. A subspace
of a vector space V
is a subset of V that is also a vector space. To verify that a subset U of V
is a
subspace you must check that U contains the vector 0, and that U is closed under addition and scalar
multiplication.
Sometimes a vector space V can be described clearly in terms of a collection U1, U2, . . . , Um of subspaces of V. We say that the space V is the direct sum of the subspaces U1, U2, . . . , Um, and we write

V = U1 ⊕ U2 ⊕ · · · ⊕ Um ,   (1.5)

if any vector in V can be written uniquely as the sum u1 + u2 + . . . + um, where ui ∈ Ui. To check uniqueness one can, alternatively, verify that the only way to write 0 as a sum u1 + u2 + . . . + um with ui ∈ Ui is by taking all ui's equal to zero. For the case of two subspaces V = U ⊕ W, it suffices to prove that any vector can be written as u + w with u ∈ U and w ∈ W, and that U ∩ W = {0}.
Given a vector space we can produce lists of vectors. A list (v1, v2, . . . , vn) of vectors in V contains, by definition, a finite number of vectors. The number of vectors in the list is the length of the list. The span of a list of vectors (v1, v2, · · · , vn) in V, denoted span(v1, v2, · · · , vn), is the set of all linear combinations of these vectors:

a1v1 + a2v2 + · · · + anvn ,  ai ∈ F .   (1.6)

A vector space V is spanned by a list (v1, v2, · · · , vn) if V = span(v1, v2, · · · , vn).
Now comes a very natural definition: A vector space V is said to be finite dimensional if it is spanned by some list of vectors in V. If V is not finite dimensional, it is infinite dimensional. In such a case, no list of vectors from V can span V.
Let us show that the vector space of all polynomials p(z) considered in Example 4 is an infinite
dimensional vector space. Indeed, consider any list of polynomials. In this list there is a polynomial
of maximum degree (recall the list is finite). Thus polynomials of higher degree are not in the span of
the list. Since no list can span the space, it is infinite dimensional.
For example 1, consider the list of vectors (e1, e2, . . . , eN) with

e1 = (1, 0, . . . , 0) ,  e2 = (0, 1, . . . , 0) ,  . . . ,  eN = (0, 0, . . . , 1) .   (1.7)

This list spans the space (the vector displayed in (1.1) is a1e1 + a2e2 + . . . + aN eN). This vector space is finite dimensional.
A list of vectors (v1, v2, . . . , vn), with vi ∈ V, is said to be linearly independent if the equation

a1v1 + a2v2 + . . . + anvn = 0   (1.8)

only has the solution a1 = a2 = · · · = an = 0. One can show that the length of any linearly independent list is shorter than or equal to the length of any spanning list. This is reasonable, because spanning lists can be arbitrarily long (adding vectors to a spanning list still gives a spanning list), but a linearly independent list cannot be enlarged beyond a certain point.
Finally, we get to the concept of a basis for a vector space. A basis of V is a list of vectors in V that both spans V and is linearly independent. Mathematicians easily prove that any finite dimensional vector space has a basis. Moreover, all bases of a finite dimensional vector space have the same length. The dimension of a finite-dimensional vector space is given by the length of any list of basis vectors. One can also show that for a finite dimensional vector space a list of vectors of length dim V is a basis if it is a linearly independent list or if it is a spanning list.

For example 1 we see that the list (e1, e2, . . . , eN) in (1.7) is not only a spanning list but a linearly independent list (prove it!). Thus the dimensionality of this space is N.
For example 3, recall that the most general hermitian two-by-two matrix takes the form

[ a0 + a3    a1 − ia2 ]
[ a1 + ia2   a0 − a3  ] ,   a0, a1, a2, a3 ∈ R .   (1.9)

Now consider the following list of four 'vectors' (1, σ1, σ2, σ3). All entries in this list are hermitian matrices, so this is a list of vectors in the space. Moreover, they span the space since the most general hermitian matrix, as shown above, is simply a0 1 + a1σ1 + a2σ2 + a3σ3. The list is linearly independent, as a0 1 + a1σ1 + a2σ2 + a3σ3 = 0 implies that

[ a0 + a3    a1 − ia2 ]   [ 0  0 ]
[ a1 + ia2   a0 − a3  ] = [ 0  0 ] ,   (1.10)

and you can quickly see that this implies a0, a1, a2, and a3 are zero. So the list is a basis, and the space in question is a four-dimensional real vector space.

Exercise. Explain why the vector space in example 2 has dimension M · N.

It seems pretty obvious that the vector space in example 5 is infinite dimensional, but it actually takes a bit of work to prove it.
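The decomposition into the basis (1, σ1, σ2, σ3) can be made concrete: given a hermitian 2 × 2 matrix, eq. (1.9) can be inverted entry by entry to recover the four real coefficients. A minimal sketch (the helper names and the example matrix are ours, not from the text):

```python
# Recover the real coefficients (a0, a1, a2, a3) of a hermitian 2x2 matrix
# m = a0*1 + a1*s1 + a2*s2 + a3*s3 by inverting eq. (1.9) entry by entry.
I2 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

def coefficients(m):
    a0 = (m[0][0] + m[1][1]).real / 2
    a3 = (m[0][0] - m[1][1]).real / 2
    a1 = (m[0][1] + m[1][0]).real / 2
    a2 = (m[1][0] - m[0][1]).imag / 2
    return a0, a1, a2, a3

def combine(a0, a1, a2, a3):
    return [[a0 * I2[i][j] + a1 * s1[i][j] + a2 * s2[i][j] + a3 * s3[i][j]
             for j in range(2)] for i in range(2)]

m = [[2.0, 1 - 3j], [1 + 3j, -4.0]]    # hermitian: equal to its conjugate transpose
print(coefficients(m))                  # (-1.0, 1.0, 3.0, 3.0)
print(combine(*coefficients(m)) == m)   # True: the list spans and recovers m
```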
2 Linear operators and matrices
A linear map refers in general to a certain kind of function from one vector space V to another vector
space W . When the linear map takes the vector space V to itself, we call the linear map a linear
operator. We will focus our attention on those operators. Let us then define a linear operator.
A linear operator T on a vector space V is a function that takes V to V with the properties:

1. T(u + v) = T u + T v, for all u, v ∈ V.

2. T(au) = aT u, for all a ∈ F and u ∈ V.

We call L(V) the set of all linear operators that act on V. This can be a very interesting set, as we
will see below. Let us consider a few examples of linear operators.
1. Let V denote the space of real polynomials p(x) of a real variable x with real coefficients. Here are two linear operators:

• Let T denote differentiation: T p = p′. This operator is linear because (p1 + p2)′ = p1′ + p2′ and (ap)′ = ap′.

• Let S denote multiplication by x: Sp = xp. S is also a linear operator.
2. In the space F∞ of infinite sequences define the left-shift operator L by
L(x1, x2, x3, . . .) = (x2, x3, . . .) .
(2.11)
We lose the first entry, but that is perfectly consistent with linearity. We also have the right-shift
operator R that acts as follows:
R(x1, x2, . . .) = (0, x1, x2, . . .) .
(2.12)
Note that the first entry in the result is zero. It could not be any other number because the zero
element (a sequence of all zeroes) should be mapped to itself (by linearity).
3. For any V , the zero map 0 such that 0v = 0. This map is linear and maps all elements of V to
the zero element.
4. For any V, the identity map I for which Iv = v for all v ∈ V. This map leaves all vectors invariant.
Since operators on V can be added and can also be multiplied by numbers, the set L(V) introduced above is itself a vector space (the vectors being the operators!). Indeed, for any two operators T, S ∈ L(V) we have the natural definition

(S + T)v = Sv + T v ,  (aS)v = a(Sv) .   (2.13)

The additive identity in the vector space L(V) is the zero map of example 3.

In this vector space there is a surprising new structure: the vectors (the operators!) can be multiplied. There is a multiplication of linear operators that gives a linear operator. We just let one operator act first and the second later. So given S, T ∈ L(V) we define the operator ST as

(ST)v ≡ S(T v) .   (2.14)
You should convince yourself that ST is a linear operator. This product structure in the space of linear operators is associative: S(T U) = (ST)U, for S, T, U linear operators. Moreover it has an identity element: the identity map of example 4. Most crucially, this multiplication is, in general, noncommutative. We can check this using the two operators T and S of example 1 acting on the polynomial p = x^n. Since T differentiates and S multiplies by x, we get

(T S)x^n = T(Sx^n) = T(x^{n+1}) = (n + 1)x^n ,  while  (ST)x^n = S(T x^n) = S(nx^{n−1}) = nx^n .   (2.15)

We can quantify this failure of commutativity by writing the difference

(T S − ST)x^n = (n + 1)x^n − nx^n = x^n = I x^n ,   (2.16)

where we inserted the identity operator at the last step. Since this relation is true for any x^n, it also holds acting on any polynomial, namely on any element of the vector space. So we write

[T, S] = I ,   (2.17)

where we introduced the commutator [·, ·] of two operators X, Y, defined as [X, Y] ≡ XY − Y X.
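The relation [T, S] = I can be verified mechanically by acting with TS − ST on coefficient lists: the output is always the input polynomial itself. A sketch (the list representation is our choice, not from the text):

```python
# T = d/dx and S = (multiply by x) acting on polynomials stored as
# coefficient lists [a0, a1, a2, ...].
def T(p):                         # differentiation
    return [n * p[n] for n in range(1, len(p))]

def S(p):                         # multiplication by x
    return [0] + list(p)

def commutator(p):                # [T, S] p = (TS - ST) p
    ts = T(S(p))
    st = S(T(p))
    st = st + [0] * (len(ts) - len(st))   # pad with zeros to equal length
    return [a - b for a, b in zip(ts, st)]

p = [5, 0, 2, 7]                  # the polynomial 5 + 2x^2 + 7x^3
print(commutator(p))              # [5, 0, 2, 7]: [T, S] acts as the identity
```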
The most basic features of an operator are captured by two simple concepts: its null space and its range. Given some linear operator T ∈ L(V), it is of interest to consider those elements of V that are mapped to the zero element. The null space (or kernel) of T is the subset of vectors in V that are mapped to zero by T:

null T = { v ∈ V ; T v = 0 } .   (2.18)
Actually null T is a subspace of V (the only nontrivial part of this proof is to show that T (0) = 0.
This follows from T (0) = T (0 + 0) = T (0) + T (0) and then adding to both sides of this equation the
additive inverse to T (0)).
A linear operator T : V → V is said to be injective if T u = T v, with u, v ∈ V, implies u = v. An injective map is called a one-to-one map, because no two different elements can be mapped to the same one. In fact, physicist Sean Carroll has suggested that a better name would be two-to-two, as injectivity really means that two different elements are mapped by T to two different elements! We leave for you as an exercise to prove the following important characterization of injective maps:

Exercise. Show that T is injective if and only if null T = {0}.
Given a linear operator T on V it is also of interest to consider the elements of V of the form T v. The linear operator may not produce by its action all of the elements of V. We define the range of T as the image of V under the map T:

range T = { T v ; v ∈ V } .   (2.19)

Actually range T is a subspace of V (can you prove it?). The linear operator T is said to be surjective if range T = V, that is, if the image of V under T is the complete V.

Since both the null space and the range of a linear operator T : V → V are subspaces of V, one can assign a dimension to them, and the following theorem is nontrivial:

dim V = dim (null T) + dim (range T) .   (2.20)
Example. Describe the null space and range of the operator

T = [ 0 1 ; 0 0 ] .   (2.21)

Let us now consider invertible linear operators. A linear operator T ∈ L(V) is invertible if there exists another linear operator S ∈ L(V) such that ST and T S are identity maps (written as I). The linear operator S is called the inverse of T. The inverse is actually unique. Say S and S′ are inverses of T. Then we have

S = SI = S(T S′) = (ST)S′ = IS′ = S′ .   (2.22)

Note that we required the inverse S to be an inverse acting from the left and acting from the right. This is useful for infinite dimensional vector spaces. For finite-dimensional vector spaces one suffices; one can then show that ST = I if and only if T S = I.
It is useful to have a good characterization of invertible linear operators. For a finite-dimensional vector space V the following three statements are equivalent!

Finite dimension:  T is invertible  ←→  T is injective  ←→  T is surjective   (2.23)

For infinite dimensional vector spaces injectivity and surjectivity are not equivalent (each can fail independently). In that case invertibility is equivalent to injectivity plus surjectivity:

Infinite dimension:  T is invertible  ←→  T is injective and surjective   (2.24)

The left-shift operator L is not injective (it maps (x1, 0, 0, . . .) to zero) but it is surjective. The right-shift operator R is not surjective, although it is injective.
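These properties of L and R are easy to exhibit if we model an infinite sequence as a function from the index n = 1, 2, 3, . . . to its value (the modeling choice is ours): LR = I, since R followed by L restores the sequence, but RL ≠ I, since L discards the first entry for good.

```python
# Infinite sequences modeled as functions from the index n = 1, 2, 3, ...
# to the value x_n.
def L(x):                       # left shift: L(x1, x2, x3, ...) = (x2, x3, ...)
    return lambda n: x(n + 1)

def R(x):                       # right shift: R(x1, x2, ...) = (0, x1, x2, ...)
    return lambda n: 0 if n == 1 else x(n - 1)

x = lambda n: n ** 2            # the sequence (1, 4, 9, 16, ...)
print([L(R(x))(n) for n in range(1, 5)])   # [1, 4, 9, 16]: LR = I
print([R(L(x))(n) for n in range(1, 5)])   # [0, 4, 9, 16]: RL != I
```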
Now we consider the matrix associated to a linear operator T that acts on a vector space V. This matrix will depend on the basis we choose for V. Let us declare that our basis is the list (v1, v2, . . . , vn). It is clear that the full knowledge of the action of T on V is encoded in the action of T on the basis vectors, that is, in the values (T v1, T v2, . . . , T vn). Since T vj is in V, it can be written as a linear combination of basis vectors. We then have

T vj = T1j v1 + T2j v2 + . . . + Tnj vn ,   (2.25)

where we introduced the constants Tij, which are known if the operator T is known. As we will see, these are the entries that form the matrix representation of the operator T in the chosen basis. The above relation can be written more briefly as

T vj = Σ_{i=1}^{n} Tij vi .   (2.26)

When we deal with different bases it can be useful to use notation where we replace

Tij → Tij({v}) ,   (2.27)

so that it makes clear that T is being represented using the v basis (v1, . . . , vn).

I want to make clear why (2.25) is reasonable before we show that it makes for a consistent association between operator multiplication and matrix multiplication. The left-hand side, where we have the action of the matrix for T on the j-th basis vector, can be viewed concretely as the n × n matrix (Tij) acting on the column vector whose entries are all zero except for a 1 in the j-th position.   (2.28)

By the usual rule of matrix multiplication, the product is the column vector (T1j, T2j, . . . , Tnj)ᵀ, which can be expanded as

T1j e1 + T2j e2 + . . . + Tnj en  ←→  T1j v1 + . . . + Tnj vn ,   (2.29)

which we identify with the right-hand side of (2.25). So (2.25) is reasonable.
Exercise. Verify that the matrix representation of the identity operator is a diagonal matrix with an entry of one at each element of the diagonal. This is true for any basis.

Let us now examine the product of two operators and their matrix representation. Consider the operator T S acting on vj:

(T S)vj = T(Svj) = T( Σ_p Spj vp ) = Σ_p Spj T vp = Σ_p Spj Σ_i Tip vi ,   (2.30)

so that, changing the order of the sums, we find

(T S)vj = Σ_i ( Σ_p Tip Spj ) vi .   (2.31)

Using the identification implicit in (2.26) we see that the object in parentheses is the i, j matrix element of the matrix that represents T S. Therefore we found

(T S)ij = Σ_p Tip Spj ,   (2.32)

which is precisely the right formula for matrix multiplication. In other words, the matrix that represents T S is the product of the matrix that represents T with the matrix that represents S, in that order.
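Formula (2.32) can be checked directly: build the matrix of TS column by column from its action on basis vectors, per (2.25), and compare with the matrix product. A minimal sketch (the specific 2 × 2 matrices are illustrative choices of ours):

```python
# Build the matrix of TS from its action on basis vectors and compare it
# with the matrix product (TS)_ij = sum_p T_ip S_pj of eq. (2.32).
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][p] * b[p][j] for p in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(a, v):
    return [sum(a[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

T = [[1, 2], [3, 4]]
S = [[0, 1], [1, 1]]
basis = [[1, 0], [0, 1]]

# column j of the matrix representing TS holds the components of (TS) e_j
cols = [matvec(T, matvec(S, e)) for e in basis]
TS_from_action = [[cols[j][i] for j in range(2)] for i in range(2)]
print(TS_from_action == matmul(T, S))   # True
```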
Changing basis

While matrix representations are very useful for concrete visualization, they are basis dependent. It is a good idea to try to figure out if there are quantities that can be calculated using a matrix representation that are, nevertheless, guaranteed to be basis independent. One such quantity is the trace of the matrix representation of a linear operator. The trace is the sum of the matrix elements on the diagonal. Remarkably, that sum is the same independent of the basis used. Consider a linear operator T in L(V) and two sets of basis vectors (v1, . . . , vn) and (u1, . . . , un) for V. Using the explicit notation (2.27) for the matrix representation, we state this property as

tr T({v}) = tr T({u}) .   (2.33)

We will establish this result below. On the other hand, if this trace is actually basis independent, there should be a way to define the trace of the linear operator T without using its matrix representation. This is actually possible, as we will see. Another basis independent quantity is the determinant of the matrix representation of T.
Let us then consider the effect of a change of basis on the matrix representation of an operator.
Consider a vector space V and a change of basis from (v1, . . . , vn) to (u1, . . . , un) defined by the linear operator A as follows:

A : vk → uk ,  for k = 1, . . . , n .   (2.34)

This can also be written as

A vk = uk .   (2.35)

Since we know how A acts on every element of the basis, we know, by linearity, how it acts on any vector. The operator A is clearly invertible because, letting B : uk → vk, or

B uk = vk ,   (2.36)

we have

BA vk = B(A vk) = B uk = vk ,  AB uk = A(B uk) = A vk = uk ,   (2.37)

showing that BA = I and AB = I. Thus B is the inverse of A. Using the definition of matrix representation, the right-hand sides of the relations uk = A vk and vk = B uk can be written so that the equations take the form

uk = Ajk vj ,  vk = Bjk uj ,   (2.38)

where we used the convention that repeated indices are summed over. Aij are the elements of the matrix representation of A in the v basis and Bij are the elements of the matrix representation of B in the u basis. Replacing the second relation in the first, and then the first in the second, we get

uk = Ajk Bij ui = Bij Ajk ui ,  vk = Bjk Aij vi = Aij Bjk vi .   (2.39)

Since the u's and v's are basis vectors we must have

Bij Ajk = δik  and  Aij Bjk = δik ,   (2.40)

which means that the B matrix is the inverse of the A matrix. We have thus learned that

vk = (A⁻¹)jk uj .   (2.41)
We can now apply these preparatory results to the matrix representations of the operator T. We have, by definition,

T vk = Tik({v}) vi .   (2.42)

We now want to calculate T on uk so that we can read off the matrix for T in the u basis:

T uk = Tik({u}) ui .   (2.43)

Computing the left-hand side, using the linearity of the operator T, we have

T uk = T(Ajk vj) = Ajk T vj = Ajk Tpj({v}) vp ,   (2.44)

and using (2.41) we get

T uk = Ajk Tpj({v}) (A⁻¹)ip ui = (A⁻¹)ip Tpj({v}) Ajk ui = (A⁻¹ T({v}) A)ik ui .   (2.45)

Comparing with (2.43) we get

Tij({u}) = (A⁻¹ T({v}) A)ij  →  T({u}) = A⁻¹ T({v}) A .   (2.46)

This is the result we wanted to obtain.
The trace of a matrix Tij is given by Tii, where a sum over i is understood. To show that the trace of T is basis independent we write

tr(T({u})) = Tii({u}) = (A⁻¹)ij Tjk({v}) Aki = Aki (A⁻¹)ij Tjk({v}) = δkj Tjk({v}) = Tjj({v}) = tr(T({v})) .   (2.47)
For the determinant we recall that det(AB) = (det A)(det B). Therefore det(A) det(A⁻¹) = 1. From (2.46) we then get

det T({u}) = det(A⁻¹) det T({v}) det A = det T({v}) .   (2.48)

Thus the determinant of the matrix that represents a linear operator is independent of the basis used.
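Both invariances are easy to verify numerically: conjugate a matrix by any invertible A as in (2.46) and compare traces and determinants. A sketch with arbitrarily chosen 2 × 2 matrices (the specific values are ours):

```python
# Conjugate an operator matrix by an invertible change-of-basis matrix A,
# T({u}) = A^{-1} T({v}) A as in eq. (2.46), and compare trace and determinant.
def matmul(a, b):
    return [[sum(a[i][p] * b[p][j] for p in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(a):
    d = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / d, -a[0][1] / d], [-a[1][0] / d, a[0][0] / d]]

def tr(a):
    return a[0][0] + a[1][1]

def det(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

T_v = [[2.0, 1.0], [0.0, 3.0]]            # T in the v basis
A = [[1.0, 2.0], [1.0, 3.0]]              # invertible change of basis
T_u = matmul(inv2(A), matmul(T_v, A))     # T in the u basis

print(tr(T_v), tr(T_u))                   # 5.0 5.0
print(det(T_v), det(T_u))                 # 6.0 6.0
```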
3 Eigenvalues and eigenvectors

In quantum mechanics we need to consider eigenvalues and eigenstates of hermitian operators acting on complex vector spaces. These operators are called observables and their eigenvalues represent possible results of a measurement. In order to acquire a better perspective on these matters, we consider the eigenvalue/eigenvector problem in more generality.
One way to understand the action of an operator T ∈ L(V) on a vector space V is to understand how it acts on subspaces of V, as those are smaller than V and thus possibly simpler to deal with. Let U denote a subspace of V. In general, the action of T may take elements of U outside U. We have a noteworthy situation if T acting on any element of U gives an element of U. In this case U is said to be invariant under T, and T is then a well-defined linear operator on U. A very interesting situation arises if a suitable list of invariant subspaces gives the space V as a direct sum.

Of all subspaces, one-dimensional ones are the simplest. Given some vector u ∈ V one can consider the one-dimensional subspace U spanned by u:

U = { cu : c ∈ F } .   (3.49)

We can ask if the one-dimensional subspace U is left invariant by the operator T. For this, T u must be equal to a number times u, as this guarantees that T u ∈ U. Calling the number λ, we write

T u = λ u .   (3.50)
This equation is so ubiquitous that names have been invented to label the objects involved. | https://ocw.mit.edu/courses/8-05-quantum-physics-ii-fall-2013/04b0570b349e84d74129eef504498472_MIT8_05F13_Chap_03.pdf |
The number λ ∈ F is called an eigenvalue of the linear operator T if there is a nonzero vector u ∈ V such that the equation above is satisfied. Suppose we find for some specific λ a nonzero vector u satisfying this equation. Then it follows that cu, for any c ∈ F, also satisfies equation (3.50), so that the solution space of the equation includes the subspace U, which is now said to be an invariant subspace under T. It is convenient to call any vector that satisfies (3.50) for a given λ an eigenvector of T corresponding to λ. In doing so we are including the zero vector as a solution and thus as an eigenvector. It can often happen that for a given λ there are several linearly independent eigenvectors. In this case the invariant subspace associated with the eigenvalue λ is higher dimensional. The set of eigenvalues of T is called the spectrum of T.
Our equation above is equivalent to

(T − λI) u = 0 ,  (3.51)

for some nonzero u. It is therefore the case that

λ is an eigenvalue  ⟷  (T − λI) not injective.  (3.52)
Using (2.23) we conclude that λ being an eigenvalue also means that (T − λI) is not invertible, and not surjective. We also note that

Set of eigenvectors of T corresponding to λ = null (T − λI) .  (3.53)
It should be emphasized that the eigenvalues of T and the invariant subspaces (or eigenvectors associated with fixed eigenvalues) are basis-independent objects. Nowhere in our discussion did we have to invoke the use of a basis, nor did we have to use any matrix representation. Below, we will discuss the familiar calculation of eigenvalues and eigenvectors using a matrix representation of the operator T in some particular basis.
Let us consider some examples. Take a real three-dimensional vector space V (our space, to great accuracy!). Consider the rotation operator T that rotates all vectors by a fixed small angle about the z axis. To find eigenvalues and eigenvectors we just think of the invariant subspaces. We must ask: which are the vectors whose direction this rotation does not change, so that it effectively just multiplies them by a number? Only the vectors along the z-direction do not change direction upon this rotation. So the vector space spanned by ez is the invariant subspace, or the space of eigenvectors. The eigenvectors are associated with the eigenvalue one, as the vectors are not altered at all by the rotation.
Consider now the case where T is a rotation by ninety degrees on a two-dimensional real vector
space V . Are there one-dimensional subspaces left invariant by T ? No, all vectors are rotated, none
remains pointing in the same direction. Thus there are no eigenvalues, nor, of course, eigenvectors.
If you try calculating the eigenvalues by the usual recipe, you will find complex numbers. A complex
eigenvalue is meaningless in a real vector space.
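The last remark is easy to see numerically. In this sketch (not from the notes), the "usual recipe" applied to the ninety-degree rotation of R² produces the complex pair ±i:

```python
import numpy as np

# Rotation of R^2 by ninety degrees: no one-dimensional invariant
# subspace, hence no real eigenvalues. The characteristic equation
# lambda^2 + 1 = 0 has only the complex roots +/- i.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigvals = np.sort_complex(np.linalg.eigvals(R))
assert np.allclose(eigvals, [-1j, 1j])   # complex pair, no real roots
```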
Although we will not prove the following result, it follows from the facts we have introduced and no extra machinery. It is of interest as it is completely general and valid for both real and complex vector spaces:

Theorem: Let T ∈ L(V) and assume λ1, . . . , λn are distinct eigenvalues of T and u1, . . . , un are corresponding nonzero eigenvectors. Then (u1, . . . , un) are linearly independent.
Note that we cannot ask if the eigenvectors are orthogonal to each other, as we have not yet introduced an inner product on the vector space V. In this theorem there may be more than one linearly independent eigenvector associated with some eigenvalues. In that case any one eigenvector will do. Since an n-dimensional vector space V does not have more than n linearly independent vectors, no linear operator on V can have more than n distinct eigenvalues.
We saw that some linear operators in real vector spaces can fail to have eigenvalues. Complex
vector spaces are nicer. In fact, every linear operator on a finite-dimensional complex vector space has
at least one eigenvalue. This is a fundamental result. It can be proven without using determinants
with an elegant argument, but the proof using determinants is quite short.
When λ is an eigenvalue, we have seen that T − λI is not an invertible operator. This also means that, using any basis, the matrix representative of T − λI is non-invertible. The condition of non-invertibility of a matrix is identical to the condition that its determinant vanish:

det(T − λ1) = 0 .  (3.54)

This condition, in an N-dimensional vector space, looks like
det | T11 − λ    T12       . . .   T1N     |
    | T21        T22 − λ   . . .   T2N     |   = 0 .   (3.55)
    | . . .      . . .     . . .   . . .   |
    | TN1        TN2       . . .   TNN − λ |
The left-hand side is a polynomial f(λ) in λ of degree N, called the characteristic polynomial:

f(λ) = det(T − λ1) = (−λ)^N + b_{N−1} λ^{N−1} + . . . + b1 λ + b0 ,  (3.56)
where the bi are constants. We are interested in the equation f (λ) = 0, as this determines all possible
eigenvalues. If we are working on real vector spaces, the constants bi are real but there is no guarantee
of real roots for f (λ) = 0. With complex vector spaces, the constants bi will be complex, but a complex
solution for f (λ) = 0 always exists. Indeed, over the complex numbers we can factor the polynomial
f (λ) as follows
f (λ) = (
−
1)N (λ
λ1)(λ
−
−
λ2) . . . (λ
λN ) ,
−
(3.57)
13
where the notation does not preclude the possibility that some of the λi’s may be equal. The λi’s
are the eigenvalues, since they lead to f (λ) = 0 for λ = λi.
If all eigenvalues of T are different, the spectrum of T is said to be non-degenerate. If an eigenvalue appears k times it is said to be a degenerate eigenvalue of multiplicity k. Even in the most degenerate case we must have at least one eigenvalue. The eigenvectors exist because (T − λI) being non-invertible means it is not injective, and therefore there are nonzero vectors that are mapped to zero by this operator.
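A small illustration (with an assumed 2×2 example, not from the notes) of how the characteristic polynomial produces the eigenvalues:

```python
import numpy as np

# Characteristic polynomial f(lambda) = det(T - lambda*1) and its roots.
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(T)        # coefficients of lambda^2 - 4*lambda + 3
roots = np.roots(coeffs)   # the eigenvalues: 1 and 3
assert np.allclose(np.sort(roots), [1.0, 3.0])
```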
4 Inner products
We have been able to go a long way without introducing extra structure on the vector spaces. We
have considered linear operators, matrix representations, traces, invariant subspaces, eigenvalues and
eigenvectors. It is now time to put some additional structure on the vector spaces. In this section
we consider a function called an inner product that allows us to construct numbers from vectors. A
vector space equipped with an inner product is called an inner-product space.
An inner product on a vector space V over F is a machine that takes an ordered pair of elements
of V , that is, a first vector and a second vector, and yields a number in F. In order to motivate the
definition of an inner product we first discuss the familiar way in which we associate a length to a
vector.
The length of a vector, or norm |a| of a vector a, is a real number that is positive, or zero if the vector is the zero vector. In Rⁿ a vector a = (a1, . . . , an) has norm defined by

|a| = √( a1² + . . . + an² ) .  (4.58)

Squaring this, one may think of |a|² as the dot product of a with a:

|a|² = a · a = a1² + . . . + an² .  (4.59)

Based on this, the dot product of any two vectors a and b is defined by

a · b = a1 b1 + . . . + an bn .  (4.60)
If we try to generalize this dot product, we may require the following as needed properties:

1. a · a ≥ 0, for all vectors a.
2. a · a = 0 if and only if a = 0.
3. a · (b1 + b2) = a · b1 + a · b2. Additivity in the second entry.
4. a · (α b) = α a · b, with α a number.
5. a · b = b · a.
Along with these axioms, the length |a| of a vector a is the positive or zero number defined by the relation

|a|² = a · a .  (4.61)
These axioms are satisfied by the definition (4.60) but do not require it. A new dot product defined by a · b = c1 a1 b1 + . . . + cn an bn, with c1, . . . , cn positive constants, would do equally well! So whatever can be proven with these axioms holds true not only for the conventional dot product.
The above axioms guarantee that the Schwarz inequality holds:

|a · b| ≤ |a| |b| .  (4.62)

To prove this, consider two (nonzero) vectors a and b and then consider the shortest vector joining the tip of a to the line defined by the direction of b (see the figure below). This is the vector a⊥, given by
a⊥ ≡ a − [(a · b)/(b · b)] b .  (4.63)

The subscript ⊥ is there because the vector is perpendicular to b, namely a⊥ · b = 0, as you can quickly see. To write the above vector we subtracted from a the component of a parallel to b. Note that the vector a⊥ is not changed as b → c b; it does not depend on the overall length of b. Moreover, as it should, the vector a⊥ is zero if and only if the vectors a and b are parallel. All this is only motivation; we could have just said "consider the following vector a⊥".
Given axiom (1) we have that a⊥ · a⊥ ≥ 0, and therefore using (4.63)

a⊥ · a⊥ = a · a − (a · b)²/(b · b) ≥ 0 .  (4.64)

Since b is not the zero vector we then have

(a · b)² ≤ (a · a)(b · b) .  (4.65)
Taking the square root of this relation we obtain the Schwarz inequality (4.62). The inequality becomes
an equality only if a⊥ = 0 or, as discussed above, when a = cb with c a real constant.
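The a⊥ construction is easy to check numerically; the vectors below are assumed data:

```python
import numpy as np

# Subtracting from a its component along b leaves a vector orthogonal
# to b, and a_perp . a_perp >= 0 reproduces the Schwarz inequality.
a = np.array([3.0, 1.0, -2.0])
b = np.array([1.0, 2.0, 2.0])

a_perp = a - (np.dot(a, b) / np.dot(b, b)) * b

assert np.isclose(np.dot(a_perp, b), 0.0)        # perpendicular to b
assert abs(np.dot(a, b)) <= np.linalg.norm(a) * np.linalg.norm(b)
```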
For complex vector spaces some modification is necessary. Recall that the length |γ| of a complex number γ is given by |γ| = √(γ* γ), where the asterisk superscript denotes complex conjugation. It is
not hard to generalize this a bit. Let z = (z1, . . . , zn) be a vector in Cⁿ. Then the length |z| of the vector is a real number greater than or equal to zero, given by

|z| = √( z1* z1 + . . . + zn* zn ) .  (4.66)

We must use complex conjugates, denoted by the asterisk superscript, to produce a real number greater than or equal to zero. Squaring this we have

|z|² = z1* z1 + . . . + zn* zn .  (4.67)
This suggests that for vectors z = (z1, . . . , zn) and w = (w1, . . . , wn) an inner product could be given by

w1* z1 + . . . + wn* zn ,  (4.68)
and we see that we are not treating the two vectors in an equivalent way. There is a first vector, in this case w, whose components are conjugated, and a second vector z whose components are not
conjugated. If the order of vectors is reversed, we get for the inner product the complex conjugate of
the original value. As it was mentioned at the beginning of the section, the inner product requires an
ordered pair of vectors. It certainly does for complex vector spaces. Moreover, one can define an inner
product in general in a way that applies both to complex and real vector spaces.
An inner product on a vector space V over F is a map from an ordered pair (u, v) of vectors in V to a number ⟨u, v⟩ in F. The axioms for ⟨u, v⟩ are inspired by the axioms we listed for the dot product:

1. ⟨v, v⟩ ≥ 0, for all vectors v ∈ V.
2. ⟨v, v⟩ = 0 if and only if v = 0.
3. ⟨u, v1 + v2⟩ = ⟨u, v1⟩ + ⟨u, v2⟩. Additivity in the second entry.
4. ⟨u, α v⟩ = α ⟨u, v⟩, with α ∈ F. Homogeneity in the second entry.
5. ⟨u, v⟩ = ⟨v, u⟩*. Conjugate exchange symmetry.
This time the norm |v| of a vector v ∈ V is the positive or zero number defined by the relation

|v|² = ⟨v, v⟩ .  (4.69)

From the axioms above, the only major difference is in number five, where we find that the inner product is not symmetric. We know what complex conjugation is in C. For the above axioms to apply to vector spaces over R we just define the obvious: complex conjugation of a real number is the real number itself. In a real vector space the * conjugation does nothing and the inner product is strictly symmetric in its inputs.
A few comments. One can use (3) with v2 = 0 to show that ⟨u, 0⟩ = 0 for all u ∈ V, and thus, by (5), also ⟨0, u⟩ = 0. Properties (3) and (4) amount to full linearity in the second entry. It is important to note that additivity holds for the first entry as well:

⟨u1 + u2, v⟩ = ⟨v, u1 + u2⟩* = (⟨v, u1⟩ + ⟨v, u2⟩)* = ⟨v, u1⟩* + ⟨v, u2⟩* = ⟨u1, v⟩ + ⟨u2, v⟩ .  (4.70)

Homogeneity works differently on the first entry, however:

⟨α u, v⟩ = ⟨v, α u⟩* = (α ⟨v, u⟩)* = α* ⟨u, v⟩ .  (4.71)
Thus we get conjugate homogeneity on the first entry. This is a very important fact. Of course,
for a real vector space conjugate homogeneity is the same as just plain homogeneity.
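Conjugate homogeneity in the first entry can be verified with the standard inner product on Cⁿ; the vectors and the scalar below are made up for illustration:

```python
import numpy as np

# Standard inner product on C^2: the first slot is conjugated
# (numpy's vdot conjugates its first argument).
def ip(u, v):
    return np.vdot(u, v)

u = np.array([1.0 + 2.0j, -1.0j])
v = np.array([0.5, 3.0 - 1.0j])
alpha = 2.0 - 3.0j

assert np.isclose(ip(alpha * u, v), np.conj(alpha) * ip(u, v))  # eq. (4.71)
assert np.isclose(ip(u, alpha * v), alpha * ip(u, v))           # axiom 4
assert np.isclose(ip(u, v), np.conj(ip(v, u)))                  # axiom 5
```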
Two vectors u, v ∈ V are said to be orthogonal if ⟨u, v⟩ = 0. This, of course, means that ⟨v, u⟩ = 0 as well. The zero vector is orthogonal to all vectors (including itself). Any vector orthogonal to all vectors in the vector space must be equal to zero. Indeed, if x ∈ V is such that ⟨x, v⟩ = 0 for all v, pick v = x, so that ⟨x, x⟩ = 0 implies x = 0 by axiom 2. This property is sometimes stated as the non-degeneracy of the inner product. The "Pythagorean" identity holds for the norm-squared of orthogonal vectors in an inner-product vector space. As you can quickly verify,
|u + v|² = |u|² + |v|² ,  for u, v ∈ V orthogonal vectors.  (4.72)
The Schwarz inequality can be proven by an argument fairly analogous to the one we gave above for dot products. The result now reads:

Schwarz Inequality:  |⟨u, v⟩| ≤ |u| |v| .  (4.73)
The inequality is saturated if and only if one vector is a multiple of the other. Note that on the left-hand side | · | denotes the norm of a complex number and on the right-hand side each | · | denotes the norm of a vector. You will prove this identity in a slightly different way in the homework. You will also consider there the triangle inequality

|u + v| ≤ |u| + |v| ,  (4.74)

which is saturated when u = cv for c a real, positive constant. Our definition (4.69) of norm on a vector space V is mathematically sound: a norm is required to satisfy the triangle inequality. Other properties are required: (i) |v| ≥ 0 for all v, (ii) |v| = 0 if and only if v = 0, and (iii) |cv| = |c| |v| for c a constant.
A complex vector space with an inner product as we have defined is a Hilbert space if it is finite dimensional. If the vector space is infinite dimensional, an extra completeness requirement must be satisfied for the space to be a Hilbert space: all Cauchy sequences of vectors must converge to vectors in the space. An infinite sequence of vectors vi, with i = 1, 2, . . . , ∞, is a Cauchy sequence if for any ε > 0 there is an N such that |vn − vm| < ε whenever n, m > N.
5 Orthonormal basis and orthogonal projectors
In an inner-product space we can demand that basis vectors have special properties. A list of vectors
is said to be orthonormal if all vectors have norm one and are pairwise orthogonal. Consider a list
(e1, . . . , en) of orthonormal vectors in V . Orthonormality means that
⟨ei, ej⟩ = δij .  (5.75)
We also have a simple expression for the norm of a1 e1 + . . . + an en, with ai ∈ F:

|a1 e1 + . . . + an en|² = ⟨a1 e1 + . . . + an en , a1 e1 + . . . + an en⟩
                         = ⟨a1 e1, a1 e1⟩ + . . . + ⟨an en, an en⟩
                         = |a1|² + . . . + |an|² .  (5.76)

This result implies the somewhat nontrivial fact that the vectors in any orthonormal list are linearly independent. Indeed, if a1 e1 + . . . + an en = 0 then its norm is zero and so is |a1|² + . . . + |an|². This implies all ai = 0, thus proving the claim.
An orthonormal basis of V is a list of orthonormal vectors that is also a basis for V. Let (e1, . . . , en) denote an orthonormal basis. Then any vector v can be written as

v = a1 e1 + . . . + an en ,  (5.77)

for some constants ai that can be calculated as follows:

⟨ei, v⟩ = ⟨ei, ai ei⟩ = ai ,  (i not summed).  (5.78)

Therefore any vector v can be written as

v = ⟨e1, v⟩ e1 + . . . + ⟨en, v⟩ en = ∑i ⟨ei, v⟩ ei .  (5.79)
To find an orthonormal basis on an inner product space V we just need to start with a basis and then use an algorithm to turn it into an orthonormal basis. In fact, a little more generally:

Gram-Schmidt: Given a list (v1, . . . , vn) of linearly independent vectors in V one can construct a list (e1, . . . , en) of orthonormal vectors such that both lists span the same subspace of V.
The Gram-Schmidt algorithm goes as follows. You take e1 to be v1, normalized to have unit norm: e1 = v1/|v1|. Then take v2 + αe1 and fix the constant α so that this vector is orthogonal to e1. The answer is clearly v2 − ⟨e1, v2⟩ e1. This vector, normalized by dividing it by its norm, is set equal to e2. In fact we can write the general vector in a recursive fashion. If we know e1, e2, . . . , ej−1, we can write ej as follows:

ej = ( vj − ⟨e1, vj⟩ e1 − . . . − ⟨ej−1, vj⟩ ej−1 ) / | vj − ⟨e1, vj⟩ e1 − . . . − ⟨ej−1, vj⟩ ej−1 | .  (5.80)

It should be clear to you by inspection that this vector is orthogonal to the vectors ei with i < j and has unit norm. The Gram-Schmidt procedure is quite practical.
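A minimal sketch of the recursion (5.80) in code; the input list is assumed data, and the loop subtracts components one at a time (a modified-Gram-Schmidt variant, equivalent to (5.80) in exact arithmetic since the ei are orthonormal):

```python
import numpy as np

# Orthogonalize each v_j against the previously built e_i, then normalize.
def gram_schmidt(vectors):
    es = []
    for v in vectors:
        w = v.astype(complex)
        for e in es:
            w = w - np.vdot(e, w) * e     # subtract <e, w> e
        es.append(w / np.linalg.norm(w))  # normalize to unit norm
    return es

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)

G = np.array([[np.vdot(a, b) for b in es] for a in es])
assert np.allclose(G, np.eye(3))          # orthonormal: <e_i, e_j> = delta_ij
```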
With an inner product we can construct interesting subspaces of a vector space V. Consider a subset U of vectors in V (not necessarily a subspace). Then we can define a subspace U⊥, called the orthogonal complement of U, as the set of all vectors orthogonal to the vectors in U:

U⊥ = { v ∈ V | ⟨v, u⟩ = 0, for all u ∈ U } .  (5.81)
This is clearly a subspace of V. When U is a subspace, then U and U⊥ actually give a direct-sum decomposition of the full space:

Theorem: If U is a subspace of V, then V = U ⊕ U⊥.

Proof: This is a fundamental result and is not hard to prove. Let (e1, . . . , en) be an orthonormal basis for U. We can clearly write any vector v in V as

v = ( ⟨e1, v⟩ e1 + . . . + ⟨en, v⟩ en ) + ( v − ⟨e1, v⟩ e1 − . . . − ⟨en, v⟩ en ) .  (5.82)
(5.82)
On the right-hand side the first vector in parenthesis is clearly in U as it is written as a linear
combination of U basis vectors. The second vector is clearly in U ⊥ as one can see that it is orthogonal
to any vector in U . To complete the proof one must show that there is no vector except the zero
U ⊥ . Then v is in U
U ⊥ (recall the comments below (1.5)). Let v
U
vector in the intersection U
∩
and in | https://ocw.mit.edu/courses/8-05-quantum-physics-ii-fall-2013/04b0570b349e84d74129eef504498472_MIT8_05F13_Chap_03.pdf |
the comments below (1.5)). Let v
U
vector in the intersection U
∩
and in U ⊥ so it should satisfy
v, v
(
)
= 0. But then v = 0, completing the proof.
∈
∩
∈
Given this decomposition, any vector v ∈ V can be written uniquely as v = u + w, where u ∈ U and w ∈ U⊥. One can define a linear operator PU, called the orthogonal projection of V onto U, that acting on v above gives the vector u. It is clear from this definition that: (i) the range of PU is U, (ii) the null space of PU is U⊥, (iii) PU is not invertible, and (iv) acting on U, the operator PU is the identity operator. The formula for the vector u can be read from (5.82):

PU v = ⟨e1, v⟩ e1 + . . . + ⟨en, v⟩ en .  (5.83)
It is a straightforward but good exercise to verify that this formula is consistent with the fact that, acting on U, the operator PU is the identity operator. Thus if we act twice in succession with PU on a vector, the second action has no effect, as it is already acting on a vector in U. It follows from this that

PU PU = I PU = PU  →  PU² = PU .  (5.84)
The eigenvalues and eigenvectors of PU are easy to describe. Since all vectors in U are left invariant by the action of PU, an orthonormal basis of U provides a set of orthonormal eigenvectors of PU, all with eigenvalue one. If we choose on U⊥ an orthonormal basis, that basis provides orthonormal eigenvectors of PU, all with eigenvalue zero.

In fact equation (5.84) implies that the eigenvalues of PU can only be one or zero. The eigenvalues of an operator satisfy whatever equation the operator satisfies (as shown by letting the equation act on a presumed eigenvector); thus λ² = λ is needed, and this gives λ(λ − 1) = 0, and λ = 0, 1 as the only possibilities.
Consider a vector space V = U ⊕ U⊥ that is (n + k)-dimensional, where U is n-dimensional and U⊥ is k-dimensional. Let (e1, . . . , en) be an orthonormal basis for U and (f1, . . . , fk) an orthonormal basis for U⊥. We then see that the list of vectors (g1, . . . , gn+k) defined by

(g1, . . . , gn+k) = (e1, . . . , en, f1, . . . , fk)  (5.85)

is an orthonormal basis for V.
Exercise: Use PU ei = ei, for i = 1, . . . , n, and PU fi = 0, for i = 1, . . . , k, to show that in the above basis the projector operator is represented by the diagonal matrix

PU = diag( 1, . . . , 1, 0, . . . , 0 ) ,  with n entries equal to one and k entries equal to zero.  (5.86)

We see that, as expected from its non-invertibility, det(PU) = 0. But more interestingly we see that the trace of the matrix PU is n. Therefore

tr PU = dim U .  (5.87)
The dimension of U is the rank of the projector PU . Rank one projectors are the most common
projectors. They project to one-dimensional subspaces of the vector space.
Projection operators are useful in quantum mechanics, where observables are described by operators. The effect of measuring an observable on a physical state vector is to turn this original vector
instantaneously into another vector. This resulting vector is the orthogonal projection of the original
vector down to some eigenspace of the operator associated with the observable.
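A minimal numerical sketch of these facts, for a made-up two-dimensional subspace U of R⁴:

```python
import numpy as np

# Orthogonal projector (5.83) onto U = span(e1, e2), built from an
# orthonormal basis of U: P_U = sum_i |e_i><e_i|.
e1 = np.array([1.0, 0.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0, 0.0])
P = np.outer(e1, e1) + np.outer(e2, e2)

assert np.allclose(P @ P, P)              # P^2 = P, eq. (5.84)
assert np.isclose(np.trace(P), 2)         # tr P_U = dim U, eq. (5.87)
assert np.isclose(np.linalg.det(P), 0)    # not invertible
```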
6 Linear functionals and adjoint operators
When we consider a linear operator T on a vector space V that has an inner product, we can construct a related linear operator T† on V called the adjoint of T. This is a very useful operator and is typically different from T. When the adjoint T† happens to be equal to T, the operator is said to be Hermitian. To understand adjoints, we first need to develop the concept of a linear functional.
A linear functional φ on the vector space V is a linear map from V to the numbers F: for v ∈ V, φ(v) ∈ F. A linear functional has the following two properties:

1. φ(v1 + v2) = φ(v1) + φ(v2), with v1, v2 ∈ V.
2. φ(a v) = a φ(v), for v ∈ V and a ∈ F.
As an example, consider the three-dimensional real vector space R³ with inner product equal to the familiar dot product. Writing a vector v as the triplet v = (v1, v2, v3), we take

φ(v) = 3v1 + 2v2 − 4v3 .  (6.1)

Linearity is clear, as the right-hand side features the components v1, v2, v3 appearing linearly. We can use the vector u = (3, 2, −4) to write the linear functional as an inner product. Indeed, one can readily see that

φ(v) = ⟨u, v⟩ .  (6.2)

This is no accident, in fact. We can prove that any linear functional φ(v) admits such a representation with some suitable choice of vector u.
Theorem: Let φ be a linear functional on V. There is a unique vector u ∈ V such that φ(v) = ⟨u, v⟩ for all v ∈ V.

Proof: Consider an orthonormal basis (e1, . . . , en) and write the vector v as
v = ⟨e1, v⟩ e1 + . . . + ⟨en, v⟩ en .  (6.3)

When φ acts on v we find, first by linearity and then by conjugate homogeneity,

φ(v) = φ( ⟨e1, v⟩ e1 + . . . + ⟨en, v⟩ en )
     = ⟨e1, v⟩ φ(e1) + . . . + ⟨en, v⟩ φ(en)
     = ⟨φ(e1)* e1, v⟩ + . . . + ⟨φ(en)* en, v⟩
     = ⟨φ(e1)* e1 + . . . + φ(en)* en , v⟩ .  (6.4)
We have thus shown that, as claimed,

φ(v) = ⟨u, v⟩  with  u = φ(e1)* e1 + . . . + φ(en)* en .  (6.5)
Next, we prove that this u is unique. If there exists another vector u′ that also gives the correct result for all v, then ⟨u′, v⟩ = ⟨u, v⟩, and therefore ⟨u′ − u, v⟩ = 0 for all v. Taking v = u′ − u, we see that this shows u′ − u = 0, or u′ = u, proving uniqueness.¹
We can modify the notation a bit when needed, to write

φu(v) ≡ ⟨u, v⟩ ,  (6.6)

where the left-hand side makes it clear that this is a functional acting on v that depends on u.
We can now address the construction of the adjoint. Consider φ(v) = ⟨u, T v⟩, which is clearly a linear functional, whatever the operator T is. Since any linear functional can be written as ⟨w, v⟩, with some suitable vector w, we write

⟨u, T v⟩ = ⟨w, v⟩ .  (6.7)

¹This theorem holds for infinite-dimensional Hilbert spaces, for continuous linear functionals.
Of course, the vector w must depend on the vector u that appears on the left-hand side. Moreover, it must have something to do with the operator T, which does not appear anymore on the right-hand side. So we must look for some good notation here. We can think of w as a function of the vector u and thus write w = T†u, where T† denotes a map (not obviously linear) from V to V. So, we think of T†u as the vector obtained by acting with some function T† on u. The above equation is written as

⟨u, T v⟩ = ⟨T†u, v⟩ .  (6.8)
Our next step is to show that, in fact, T† is a linear operator on V. The operator T† is called the adjoint of T. Consider

⟨u1 + u2, T v⟩ = ⟨T†(u1 + u2), v⟩ ,  (6.9)

and work on the left-hand side to get

⟨u1 + u2, T v⟩ = ⟨u1, T v⟩ + ⟨u2, T v⟩ = ⟨T†u1, v⟩ + ⟨T†u2, v⟩ = ⟨T†u1 + T†u2, v⟩ .  (6.10)

Comparing the right-hand sides of the last two equations we get the desired

T†(u1 + u2) = T†u1 + T†u2 .  (6.11)

Having established additivity, we now establish homogeneity. Consider

⟨a u, T v⟩ = ⟨T†(a u), v⟩ .  (6.12)

The left-hand side is

⟨a u, T v⟩ = a* ⟨u, T v⟩ = a* ⟨T†u, v⟩ = ⟨a T†u, v⟩ .  (6.13)

This time we conclude that

T†(a u) = a T†u .  (6.14)

This concludes the proof that T†, so defined, is a linear operator on V.
A couple of important properties are readily proven:

Claim: (ST)† = T†S†. We can show this as follows:

⟨u, ST v⟩ = ⟨S†u, T v⟩ = ⟨T†S†u, v⟩ .
Claim: The adjoint of the adjoint is the original operator: (S†)† = S. We can show this as follows: ⟨u, S†v⟩ = ⟨(S†)†u, v⟩. Now, additionally, ⟨u, S†v⟩ = ⟨S†v, u⟩* = ⟨v, S u⟩* = ⟨S u, v⟩. Comparing with the first result, we have shown that (S†)†u = S u for any u, which proves the claim.
Example: Let v = (v1, v2, v3), with vi ∈ C, denote a vector in the three-dimensional complex vector space C³. Define a linear operator T that acts on v as follows:

T(v1, v2, v3) = ( 0v1 + 2v2 + iv3 , v1 − iv2 + 0v3 , 3iv1 + v2 + 7v3 ) .  (6.15)
Calculate the action of T† on a vector. Give the matrix representations of T and T† using the orthonormal basis e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1). Assume the inner product is the standard one on C³.

Solution: We introduce a vector u = (u1, u2, u3) and will use the basic identity ⟨u, T v⟩ = ⟨T†u, v⟩. The left-hand side of the identity gives:
⟨u, T v⟩ = u1*(2v2 + iv3) + u2*(v1 − iv2) + u3*(3iv1 + v2 + 7v3) .  (6.16)

This is now rewritten by factoring out the various vi's:

⟨u, T v⟩ = (u2* + 3i u3*) v1 + (2u1* − i u2* + u3*) v2 + (i u1* + 7u3*) v3 .  (6.17)

Identifying the right-hand side with ⟨T†u, v⟩, we now deduce that

T†(u1, u2, u3) = ( u2 − 3i u3 , 2u1 + i u2 + u3 , −i u1 + 7u3 ) .  (6.18)
This gives the action of T†. To find the matrix representation we begin with T. Using basis vectors, we have from (6.15)

T e1 = T(1, 0, 0) = (0, 1, 3i) = e2 + 3i e3 = T11 e1 + T21 e2 + T31 e3 ,  (6.19)

and deduce that T11 = 0, T21 = 1, T31 = 3i. This can be repeated, and the rule becomes clear quickly: the coefficients of vi read left to right fit into the i-th column of the matrix. Thus, we have

      ( 0    2    i )              (  0    1   −3i )
T  =  ( 1   −i    0 )   and  T† =  (  2    i    1  ) .   (6.20)
      ( 3i   1    7 )              ( −i    0    7  )
These matrices are related: one is the transpose and complex conjugate of the other! This is not an
accident.
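One can confirm the worked example numerically: with the matrix T of (6.20), its conjugate transpose satisfies the defining identity for randomly chosen u and v:

```python
import numpy as np

# The matrix T from (6.20); its conjugate transpose satisfies
# <u, T v> = <T^dagger u, v> for the standard inner product on C^3.
T = np.array([[0, 2, 1j],
              [1, -1j, 0],
              [3j, 1, 7]])
Tdag = T.conj().T

rng = np.random.default_rng(1)
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

assert np.isclose(np.vdot(u, T @ v), np.vdot(Tdag @ u, v))
```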
Let us reframe this using matrix notation. Let u = ei and v = ej, where ei and ej are orthonormal basis vectors. Then the definition ⟨u, T v⟩ = ⟨T†u, v⟩ can be written as

⟨T†ei, ej⟩ = ⟨ei, T ej⟩
⟨(T†)ki ek, ej⟩ = ⟨ei, Tkj ek⟩
(T†)ki* δkj = Tkj δik
(T†)ji* = Tij .  (6.21)
Relabeling i and j and taking | https://ocw.mit.edu/courses/8-05-quantum-physics-ii-fall-2013/04b0570b349e84d74129eef504498472_MIT8_05F13_Chap_03.pdf |
(
)
ei, Tkj ek
(
ji
)
(6.21)
Relabeling i and j and taking the complex conjugate we find the familiar relation between a matrix
and its adjoint:
(T †)ij = (Tji) ∗ .
(6.22)
If we did not, in the equation above the use of
The adjoint matrix is the transpose and complex conjugate matrix only if we use an orthonormal basis.
= gij , where
ei, ej
(
)
gij is some constant matrix that would appear in the rule for the construction of the adjoint matrix.
= δij would be replaced by
ei, ej
)
(
7 Hermitian and Unitary operators

Before we begin looking at special kinds of operators let us consider a very surprising fact about operators on complex vector spaces, as opposed to operators on real vector spaces.

Suppose we have an operator T such that for any vector v ∈ V the following inner product vanishes:

(v, T v) = 0 for all v ∈ V .   (7.23)

What can we say about the operator T ? The condition states that T is an operator that, starting from a vector, gives a vector orthogonal to the original one. In a two-dimensional real vector space, this is simply the operator that rotates any vector by ninety degrees! It is quite surprising and important that for complex vector spaces the result is very strong: any such operator T necessarily vanishes.
This is a theorem:

Theorem: Let T be a linear operator in a complex vector space V . If (v, T v) = 0 for all v ∈ V , then T = 0.   (7.24)
Proof: Any proof must be such that it fails to work for a real vector space. Note that the result follows if we could prove that (u, T v) = 0 for all u, v ∈ V . Indeed, if this holds, then take u = T v; then (T v, T v) = 0 for all v implies that T v = 0 for all v and therefore T = 0.

We will thus try to show that (u, T v) = 0 for all u, v ∈ V . All we know is that objects of the form (#, T #) vanish, whatever # is. So we must aim to form linear combinations of such terms in order to reproduce (u, T v). We begin by trying the following:

(u + v, T (u + v)) − (u − v, T (u − v)) = 2(u, T v) + 2(v, T u) .   (7.25)
We see that the “diagonal” terms vanished, but instead of getting just (u, T v) we also got (v, T u). Here is where complex numbers help: we can get the same two terms but with opposite signs by trying

(u + iv, T (u + iv)) − (u − iv, T (u − iv)) = 2i(u, T v) − 2i(v, T u) .   (7.26)
It follows from the last two relations that

(u, T v) = ¼ [ (u + v, T (u + v)) − (u − v, T (u − v)) + (1/i)(u + iv, T (u + iv)) − (1/i)(u − iv, T (u − iv)) ] .   (7.27)

The condition (v, T v) = 0 for all v implies that each term of the above right-hand side vanishes, thus showing that (u, T v) = 0 for all u, v ∈ V . As explained above this proves the result.
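The four-term combination in (7.27) is easy to check numerically; the following sketch (not part of the notes) recovers an off-diagonal element (u, T v) from diagonal elements alone, for an arbitrary operator:

```python
# Sketch (not from the notes): numerical check of the polarization-type
# identity (7.27) on a random operator T and random vectors u, v in C^3.
import numpy as np

def diag(T, w):
    """Diagonal element (w, T w) in the physics inner product."""
    return np.vdot(w, T @ w)

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
u = rng.normal(size=3) + 1j * rng.normal(size=3)
v = rng.normal(size=3) + 1j * rng.normal(size=3)

recovered = 0.25 * (diag(T, u + v) - diag(T, u - v)
                    + diag(T, u + 1j * v) / 1j
                    - diag(T, u - 1j * v) / 1j)
expected = np.vdot(u, T @ v)
assert np.isclose(recovered, expected)
```

In particular, if every diagonal element (w, T w) vanishes, the right-hand side forces (u, T v) = 0 for all u and v, which is exactly the theorem.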
An operator T is said to be Hermitian if T † = T . Hermitian operators are pervasive in quantum mechanics. The above theorem in fact helps us discover Hermitian operators. It is familiar that the expectation value of a Hermitian operator, on any state, is real. It is also true, however, that any operator whose expectation value is real for all states must be Hermitian:

T = T † if and only if (v, T v) ∈ R for all v .   (7.28)

To prove this first go from left to right. If T = T †,

(v, T v) = (T †v, v) = (T v, v) = (v, T v)∗ ,   (7.29)

showing that (v, T v) is real. To go from right to left first note that the reality condition means that

(v, T v) = (T v, v) = (v, T †v) ,   (7.30)

where the last equality follows because (T †)† = T . Now the leftmost and rightmost terms can be combined to give (v, (T − T †)v) = 0, which holding for all v implies, by the theorem, that T = T † .
We can prove two additional results about Hermitian operators rather easily. We have discussed earlier the fact that on a complex vector space any linear operator has at least one eigenvalue. Here we learn that the eigenvalues of a Hermitian operator are real numbers. Moreover, while we have noted that eigenvectors corresponding to different eigenvalues are linearly independent, for Hermitian operators they are guaranteed to be orthogonal. Thus we have the following theorems:

Theorem 1: The eigenvalues of Hermitian operators are real.

Theorem 2: Different eigenvalues of a Hermitian operator correspond to orthogonal eigenvectors.
Proof 1: Let v be a nonzero eigenvector of the Hermitian operator T with eigenvalue λ: T v = λv. Taking the inner product with v we have that

(v, T v) = (v, λv) = λ(v, v) .   (7.31)

Since T is Hermitian, we can also evaluate (v, T v) as follows:

(v, T v) = (T v, v) = (λv, v) = λ∗(v, v) .   (7.32)

The above equations give (λ − λ∗)(v, v) = 0, and since v is not the zero vector, we conclude that λ∗ = λ, showing the reality of λ.
Proof 2: Let v1 and v2 be eigenvectors of the operator T :

T v1 = λ1 v1 ,  T v2 = λ2 v2 ,   (7.33)

with λ1 and λ2 real (previous theorem) and different from each other. Consider the inner product (v2, T v1) and evaluate it in two different ways. First,

(v2, T v1) = (v2, λ1 v1) = λ1 (v2, v1) ,   (7.34)

and second, using the hermiticity of T ,

(v2, T v1) = (T v2, v1) = (λ2 v2, v1) = λ2 (v2, v1) .   (7.35)

From these two evaluations we conclude that

(λ1 − λ2)(v2, v1) = 0 ,   (7.36)

and the assumption λ1 ≠ λ2 leads to (v1, v2) = 0, showing the orthogonality of the eigenvectors.
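Both theorems are easy to see numerically; the sketch below (not part of the notes) builds a random Hermitian matrix and checks that its eigenvalues are real and its eigenvectors orthonormal:

```python
# Sketch (not from the notes): Theorems 1 and 2 for a random Hermitian matrix.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
T = A + A.conj().T  # Hermitian by construction: T-dagger = T

# eigh is specialized to Hermitian matrices and returns real eigenvalues.
eigvals, eigvecs = np.linalg.eigh(T)

# Theorem 1: the eigenvalues are real.
assert np.all(np.isreal(eigvals))

# Theorem 2: eigenvectors of distinct eigenvalues are orthogonal; the
# columns of eigvecs form an orthonormal set.
orthonormal = np.allclose(eigvecs.conj().T @ eigvecs, np.eye(4))
assert orthonormal
```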
Let us now consider another important class of linear operators on a complex vector space, the so-called unitary operators. An operator U ∈ L(V ) in a complex vector space V is said to be a unitary operator if it is surjective and does not change the magnitude of the vector it acts upon:

|U u| = |u| , for all u ∈ V .   (7.37)

We tailored the definition to be useful even for infinite-dimensional spaces. Note that U can only kill vectors of zero length, and since the only such vector is the zero vector, null U = 0, and U is injective. Since U is also assumed to be surjective, a unitary operator U is always invertible.

A simple example of a unitary operator is the operator λI with λ a complex number of unit norm: |λ| = 1. Indeed |λIu| = |λu| = |λ||u| = |u| for all u. Moreover, the operator is clearly surjective.
For another useful characterization of unitary operators we begin by squaring (7.37):

(U u, U u) = (u, u) .   (7.38)

By the definition of the adjoint,

(u, U †U u) = (u, u)  →  (u, (U †U − I)u) = 0 for all u .   (7.39)

So by our theorem U †U = I, and since U is invertible this means U † is the inverse of U and we also have U U † = I:

U †U = U U † = I .   (7.40)

Unitary operators preserve inner products in the following sense:

(U u, U v) = (u, v) .   (7.41)

This follows immediately by moving the second U to act on the first input and using U †U = I.
Assume the vector space V is finite dimensional and has an orthonormal basis (e1, . . . , en). Consider the new set of vectors (f1, . . . , fn) where the f 's are obtained from the e's by the action of a unitary operator U :

fi = U ei .   (7.42)

This also means that ei = U †fi. We readily see that the f 's are also a basis, because they are linearly independent: acting on a1 f1 + . . . + an fn = 0 with U † we find a1 e1 + . . . + an en = 0, and thus ai = 0. We now see that the new basis is also orthonormal:

(fi, fj) = (U ei, U ej) = (ei, ej) = δij .   (7.43)
The matrix elements of U in the e-basis are

Uki = (ek, U ei) .   (7.44)

Let us compute the matrix elements U ′ki of U in the f -basis:

U ′ki = (fk, U fi) = (U ek, U fi) = (ek, fi) = (ek, U ei) = Uki .   (7.45)

The matrix elements are the same! Can you find an explanation for this result?
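A numerical sketch (not part of the notes) of the unitary properties above, using the Q factor of a QR decomposition as a generic unitary matrix:

```python
# Sketch (not from the notes): U-dagger U = I, norm preservation, and the
# equality of matrix elements in the e-basis and the rotated f-basis.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)  # the Q factor of a QR decomposition is unitary

unitary_ok = np.allclose(U.conj().T @ U, np.eye(3))  # (7.40)
assert unitary_ok

u = rng.normal(size=3) + 1j * rng.normal(size=3)
assert np.isclose(np.linalg.norm(U @ u), np.linalg.norm(u))  # (7.37)

# Matrix elements in the e-basis vs. the basis f_i = U e_i, as in
# (7.44) and (7.45).
e = np.eye(3)
f = U @ e  # columns are f_i
U_e = np.array([[np.vdot(e[:, k], U @ e[:, i]) for i in range(3)]
                for k in range(3)])
U_f = np.array([[np.vdot(f[:, k], U @ f[:, i]) for i in range(3)]
                for k in range(3)])
same = np.allclose(U_e, U_f)
assert same  # the matrix elements are identical in the two bases
```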
MIT OpenCourseWare
http://ocw.mit.edu
8.05 Quantum Physics II
Fall 2013
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/8-05-quantum-physics-ii-fall-2013/04b0570b349e84d74129eef504498472_MIT8_05F13_Chap_03.pdf |
Lectures 13 & 14
Packet Multiple Access: The Aloha protocol
Eytan Modiano
Massachusetts Institute of Technology
Eytan Modiano
Slide 1
Multiple Access
• Shared Transmission Medium
– a receiver can hear multiple transmitters
– a transmitter can be heard by multiple receivers
•
the major problem with multi-access is allocating the channel
between the users; the nodes do not know when the other nodes
have data to send
– Need to coordinate transmissions
Examples of Multiple Access Channels
• Local area networks (LANs)
– Traditional Ethernet
– Recent trend to non-multi-access LANs
• satellite channels
• Multi-drop telephone
• Wireless radio
[Figure: protocol stack with NET above DLC above PHY; the DLC layer is split into LLC over MAC]
• Medium Access Control (MAC)
– Regulates access to channel
• Logical Link Control (LLC)
– All other DLC functions
Approaches to Multiple Access
• Fixed Assignment (TDMA, FDMA, CDMA)
– each node is allocated a fixed fraction of bandwidth
– Equivalent to circuit switching
– very inefficient for low duty factor traffic
• Contention systems
– Polling
– Reservations and Scheduling
– Random Access
Aloha
Single receiver, many transmitters
[Figure: many transmitters sharing one channel to a single receiver]
E.g., Satellite system, wireless
Slotted Aloha
• Time is divided into “slots” of one packet duration
– E.g., fixed size packets
• When a node has a packet to send, it waits until the start of the
next slot to send it
– Requires synchronization
•
If no other nodes attempt transmission during that slot, the
transmission is successful
– Otherwise “collision”
– Collided packet are retransmitted after a random delay
[Figure: five example slots labeled Success, Idle, Collision, Idle, Success]
Slotted Aloha Assumptions
• Poisson external arrivals
• No capture
– Packets involved in a collision are lost
– Capture models are also possible
• Immediate feedback
–
Idle (0) , Success (1), Collision (e)
• If a new packet arrives during a slot, transmit in next slot
• If a transmission has a collision, node becomes backlogged
– while backlogged, transmit in each slot with probability qr until
successful
• Infinite nodes where each arriving packet arrives at a new node
– Equivalent to no buffering at a node (queue size = 1)
– Pessimistic assumption gives a lower bound on Aloha performance
Markov chain for slotted aloha
[Figure: Markov chain on the number of backlogged nodes, states 0, 1, 2, 3, ..., with transition probabilities such as P03, P10, P13, P34]
• state (n) of system is the number of backlogged nodes.
pi,i-1 = prob. of one backlogged attempt and no new arrival
pi,i = prob. of one new arrival and no backlogged attempts, or no new arrival and no success
pi,i+1 = prob. of one new arrival and one or more backlogged attempts
pi,i+j = prob. of j new arrivals and one or more backlogged attempts, or j+1 new arrivals and no backlogged attempts
• Steady state probabilities do not exists
– Backlog tends to infinity => system unstable
– More later
slotted aloha
•
let g(n) be the attempt rate (the expected number of packets
transmitted in a slot) in state n
g(n) = λ + nqr
• The number of attempted packets per slot in state n is
approximately a Poisson random variable of mean g(n)
– P(m attempts) = g(n)^m e^(−g(n)) / m!
– P(idle) = probability of no attempts in a slot = e^(−g(n))
– P(success) = probability of one attempt in a slot = g(n) e^(−g(n))
– P(collision) = P(two or more attempts) = 1 − P(idle) − P(success)
Throughput of Slotted Aloha
• The throughput is the fraction of slots that contain a successful
transmission = P(success) = g(n)e-g(n)
– When system is stable throughput must also equal the external
arrival rate (λ)
[Figure: departure rate g(n)e^(−g(n)) vs. g(n), with maximum 1/e at g(n) = 1]
– What value of g(n) maximizes throughput?
– g(n) < 1 => too many idle slots
– g(n) > 1 => too many collisions

d/dg(n) [g(n)e^(−g(n))] = e^(−g(n)) − g(n)e^(−g(n)) = 0
⇒ g(n) = 1
⇒ P(success) = g(n)e^(−g(n)) = 1/e ≈ 0.36
If g(n) can be kept close to 1, an external arrival rate of 1/e packets
per slot can be sustained
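A short Monte Carlo sketch (not on the slides; the node count and slot count are arbitrary choices) confirms the g e^(−g) throughput curve at its optimum:

```python
# Sketch (not from the slides) of slotted Aloha with a fixed attempt rate g:
# each of n_nodes transmits independently with probability g/n_nodes per slot,
# so the number of attempts is approximately Poisson(g). The fraction of slots
# with exactly one attempt should approach g * e^(-g), i.e. 1/e at g = 1.
import math
import random

def slotted_aloha_throughput(g, n_nodes=50, n_slots=40_000, seed=1):
    """Fraction of slots containing exactly one transmission attempt."""
    rng = random.Random(seed)
    p = g / n_nodes  # per-node attempt probability per slot
    successes = 0
    for _ in range(n_slots):
        attempts = sum(rng.random() < p for _ in range(n_nodes))
        if attempts == 1:
            successes += 1
    return successes / n_slots

measured = slotted_aloha_throughput(1.0)
predicted = 1 / math.e  # g * e^(-g) at the optimum g = 1
assert abs(measured - predicted) < 0.02
```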
Instability of slotted aloha
•
if backlog increases beyond unstable point (bad luck) then it tends
to increase without limit and the departure rate drops to 0
• Drift in state n, D(n) is the expected change in backlog over one
time slot
– D(n) = λ - P(success) = λ - g(n)e-g(n)
[Figure: departure rate Ge^(−G) vs. attempt rate G = λ + nqr, with the arrival rate λ drawn as a horizontal line; the lower crossing is a stable equilibrium (the drift pushes the backlog back toward it) and the upper crossing is unstable (beyond it the drift stays positive and the backlog grows)]
Stabilizing slotted aloha
• choosing qr small increases the backlog at which instability
occurs ( since g(n) = λ + nqr), but also increases delay (since mean
retry time is 1/qr)
• solution: estimate the backlog (n) from past feedback
– Given the backlog estimate, choose qr to keep g(n) = 1
Assume all arrivals are immediately backlogged:
g(n) = nqr ,  P(success) = nqr (1 − qr)^(n−1)
To maximize P(success) choose qr = min{1,1/n}
– When the estimate of n is perfect:
idles occur with probability 1/e,
successes with 1/e, and
collisions with 1-2/e.
– When the estimate is too large, too many idle slots occur
– When the estimate is too small, too many collisions occur
• Nodes can use feedback information (0,1,e) to make estimates
– A good rule is increase the estimate of n on each collision, and to
decrease it on each idle slot or successful slot
note that the increase on a collision should be (e-2)-1 times as large as the
decrease on an idle slot
stabilized slotted aloha
• assume all arrivals are immediately backlogged
– g(n) = nqr = attempt rate
– p(success) = nqr (1-qr)n-1
for max throughput set g(n) = 1 => qr = min{1,1/n’}
where n’ is the estimate of n
– Let nk = estimate of backlog after kth slot
nk+1 = max{λ, nk + λ − 1}   on idle or success
nk+1 = nk + λ + (e − 2)^(−1)   on collision
– Can be shown to be stable for λ < 1/e
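The update rule above can be sketched in simulation (not from the slides; Bernoulli(λ) arrivals per slot stand in for Poisson arrivals, and every arrival is immediately backlogged as the slide assumes):

```python
# Sketch (not from the slides) of stabilized slotted Aloha: q_r = min(1, 1/n_est)
# keeps the attempt rate near 1, and the backlog estimate is corrected from the
# idle/success/collision feedback of each slot using the update rule above.
import math
import random

def stabilized_aloha(lam=0.25, n_slots=50_000, seed=1):
    rng = random.Random(seed)
    backlog = 0      # true backlog; every arrival is immediately backlogged
    n_est = 1.0      # backlog estimate n_k
    delivered = 0
    for _ in range(n_slots):
        if rng.random() < lam:       # Bernoulli(lam) arrival (Poisson stand-in)
            backlog += 1
        qr = min(1.0, 1.0 / max(n_est, 1.0))
        attempts = sum(rng.random() < qr for _ in range(backlog))
        if attempts == 1:            # success
            backlog -= 1
            delivered += 1
        if attempts <= 1:            # idle or success feedback
            n_est = max(lam, n_est + lam - 1)
        else:                        # collision feedback
            n_est = n_est + lam + 1 / (math.e - 2)
    return delivered / n_slots

# For lam < 1/e the system is stable, so throughput matches the arrival rate.
throughput = stabilized_aloha()
assert abs(throughput - 0.25) < 0.02
```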
TDM vs. slotted aloha
[Figure: delay vs. arrival rate for slotted Aloha and for TDM with m = 8 and m = 16 users; Aloha's delay is low at small arrival rates, while TDM delay grows with the number of users]
• Aloha achieves lower delays when arrival rates are low
• TDM results in very large delays with large number of users, while
Aloha is independent of the number of users
Pure (unslotted) Aloha
• New arrivals are transmitted immediately (no slots)
– No need for synchronization
– No need for fixed length packets | https://ocw.mit.edu/courses/6-263j-data-communication-networks-fall-2002/04ca100a8b247ecf18d328b752f1b929_Lectures13_14.pdf |
• A backlogged packet is retried after an exponentially distributed
random delay with some mean 1/x
• The total arrival process is a time varying Poisson process of rate
g(n) = λ + nx (n = backlog, 1/x = ave. time between retransmissions)
• Note that an attempt suffers a collision if the previous attempt is not
yet finished (ti-ti-1<1) or the next attempt starts too soon (ti+1-ti<1)
[Figure: unslotted Aloha timeline; packets arriving at times t1, ..., t5 are transmitted immediately, overlapping transmissions collide and are retransmitted after a random delay]
Retransmission
Throughput of Unslotted Aloha
• An attempt is successful if the inter-attempt intervals on both
sides exceed 1 (for unit duration packets)
– P(success) = e^(−g(n)) · e^(−g(n)) = e^(−2g(n))
– Throughput (success rate) = g(n) e^(−2g(n))
– Max throughput at g(n) = 1/2: Throughput = 1/(2e) ≈ 0.18
– stabilization issues are similar to slotted aloha
–
advantages of unslotted aloha are simplicity and possibility of
unequal length packets
Splitting Algorithms
• More efficient approach to resolving collisions
– Simple feedback (0,1,e)
– Basic idea: assume only two packets are involved in a collision
Suppose all other nodes remain quiet until collision is resolved, and
nodes in the collision each transmit with probability 1/2 until one is
successful
On the next slot after this success, the other node transmits
The expected number of slots for the first success is 2, so the expected
number of slots to transmit 2 packets is 3 slots
Throughput over the 3 slots = 2/3
–
In practice above algorithm cannot really work
Cannot assume only two users involved in collision
Practical algorithm must allow for collisions involving unknown number
of users
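The 3-slot argument for the idealized two-packet collision is easy to reproduce by simulation (a sketch, not from the slides):

```python
# Sketch (not from the slides) of the simple splitting argument: after a
# collision of exactly two packets, each node retransmits with probability 1/2
# in every slot until one gets through alone; the other node then transmits in
# the next slot. Expected slots = 2 (geometric wait for first success) + 1 = 3,
# so the throughput over the resolution period is 2/3.
import random

def slots_to_resolve_two(rng):
    slots = 0
    while True:
        slots += 1
        a = rng.random() < 0.5
        b = rng.random() < 0.5
        if a != b:            # exactly one transmitted: success
            return slots + 1  # the other node uses the following slot
        # both or neither transmitted (collision or idle): try again

rng = random.Random(0)
n = 50_000
avg = sum(slots_to_resolve_two(rng) for _ in range(n)) / n
assert abs(avg - 3.0) < 0.05
```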
Tree algorithms
• After a collision, all new arrivals and all backlogged packets not
in the collision wait
• Each colliding packet randomly joins either one of two groups
(Left and Right groups)
– Toss of a fair coin
– Left group transmits during next slot while Right group waits
If collision occurs Left group splits again (stack algorithm)
Right group waits until Left collision is resolved
– When Left group is done, right group transmits
[Figure: tree-splitting example. Packets (1,2,3,4) collide and split into a Left group (1,2,3) and a Right group (4). The Left group collides again and splits into (1), which succeeds, and (2,3); (2,3) collides and splits into an empty group (an idle slot) and (2,3), which collides once more and finally splits into (2) and (3), each succeeding; then the Right group (4) succeeds.]

Notice that after the idle slot, the collision between (2,3) was sure to happen and could have been avoided. Many variations and improvements on the original tree splitting algorithm exist.
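The tree procedure can be sketched recursively (not from the slides). Unlike the earlier 3-slot argument for the idealized two-packet case, this count includes the initial collision slot and any idle slots from empty subgroups, which is why two colliders average 5 slots here:

```python
# Sketch (not from the slides) of basic binary tree splitting: every collider
# flips a fair coin; the Left group retries first while the Right group waits,
# recursively, until every packet is sent.
import random

def resolve(k, rng):
    """Slots to resolve k colliding packets, counting the initial
    collision slot and any idle slots from empty subgroups."""
    if k <= 1:
        return 1  # a single idle (k = 0) or success (k = 1) slot
    left = sum(rng.random() < 0.5 for _ in range(k))  # fair-coin split
    return 1 + resolve(left, rng) + resolve(k - left, rng)

rng = random.Random(2)
n = 50_000
avg_slots = sum(resolve(2, rng) for _ in range(n)) / n
# The expected resolution length for two colliders works out to 5 slots.
assert abs(avg_slots - 5.0) < 0.1
```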
Throughput comparison
• stabilized pure aloha T = 0.184 = (1/(2e))
• stabilized slotted aloha T = 0.368 = (1/e)
• Basic tree algorithm T = 0.434
• Best known variation on tree algorithm T = 0.4878
• Upper bound on any collision resolution algorithm with (0,1,e)
feedback T <= 0.568
• TDM achieves throughputs up to 1 packet per slot, but the delay
increases linearly with the number of nodes
Lecture Notes on Geometrical Optics (02/18/14)
2.71/2.710 Introduction to Optics –Nick Fang
Outline:
A. Optical Invariant
B. Composite Lenses
C. Ray Vector and Ray Matrix
D. Location of Principal Planes for an Optical System
E. Aperture Stops, Pupils and Windows
A. Optical Invariant
- What happens to an arbitrary “axial” ray that originates from the axial intercept of the object, after passing through a series of lenses?

If we make use of the relationship between the launching angle and the imaging conditions, we have:

θin = xin/so  and  θout = −xin/si .

Rearranging, we obtain:

θin/θout = −si/so = hi/ho
⇒ θin ho = θout hi
We see that the product of the image height and the angle with respect to the axis
(the components of the ray vector!) remains a constant. Indeed a more general
result, 𝑛ℎ𝑜𝑠𝑖𝑛𝜃𝑖𝑛 = 𝑛′ℎ𝑖𝑠𝑖𝑛𝜃𝑜𝑢𝑡 is a constant (often referred as a Lagrange
invariant in different textbooks) across any surface of the imaging system.
- The invariant may be used to deduce other quantities of the optical system, without
the necessity of certain intermediate ray-tracing calculations.
- You may regard it as a precursor to wave optics: the angles are approximately
proportional to lateral momentum of light, and the image height is equivalent to
separation of two geometric points. For two points that are separated far apart,
there is a limiting angle to transmit their information across the imaging system.
B. Composite Lenses
To elaborate the effect of lens in combinations, let’s consider first two lenses
separated by a distance d. We may apply the thin lens equation and cascade the
imaging process by taking the image formed by lens 1 as the object for lens 2.
1/so1 + 1/si2 = (1/f1 + 1/f2) − d/((d − si1) si1)

A few limiting cases:

a) Parallel beams from the left: si2 is the back-focal length (BFL):

1/BFL = (1/f1 + 1/f2) − d/((d − f1) f1)

b) Collimated beams to the right: so1 is the front-focal length (FFL):

1/FFL = (1/f1 + 1/f2) − d/((d − f2) f2)
The composite lens does not have the same apparent focusing length in front and
back end!
c) d=f1+f2: parallel beams illuminating the composite lens will remain parallel at
the exit; the system is often called afocal. This is in fact the principle used in
most telescopes, as the object is located at infinity and the function of the
instrument is to send the image to the eye with a large angle of view. On the
other hand, a point source located at the left focus of the first lens is imaged at
the right focus of the second lens (the two are called conjugate points). This is
often used as a condenser for illumination.
[Figure: two thin lenses of focal lengths f1 and f2 separated by a distance d]
Practice Example: Huygens eyepiece
A Huygens eyepiece is designed with two plano-convex lenses separated by the
average of the two focal length. Ideally, such eyepiece should produce a virtual image at
infinity distance. Let f1=30cm and f2=10cm, so the spacing d=20cm, let’s find these
parameters:
a) BFL and FFL,
b) the location of PPs,
c) the EFL.
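Working the example with the BFL/FFL formulas from part B (a sketch, not the official solution; the two-thin-lens EFL formula EFL = f1 f2/(f1 + f2 − d) is assumed from standard optics):

```python
# Sketch (not from the notes) applying the composite-lens formulas to the
# Huygens eyepiece: f1 = 30 cm, f2 = 10 cm, d = (f1 + f2)/2 = 20 cm.

f1, f2 = 30.0, 10.0          # cm
d = (f1 + f2) / 2            # 20 cm, the Huygens spacing

inv_bfl = (1/f1 + 1/f2) - d / ((d - f1) * f1)
inv_ffl = (1/f1 + 1/f2) - d / ((d - f2) * f2)
bfl = 1 / inv_bfl
ffl = 1 / inv_ffl
efl = f1 * f2 / (f1 + f2 - d)  # standard two-thin-lens EFL (assumed here)

print(f"BFL = {bfl:.1f} cm, FFL = {ffl:.1f} cm, EFL = {efl:.1f} cm")
# BFL = 5.0 cm, FFL = -15.0 cm, EFL = 15.0 cm
```

The asymmetry (BFL = 5 cm but FFL = −15 cm) illustrates the remark above that a composite lens does not have the same apparent focal length at its front and back ends.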
C. Ray Vector and Ray Matrix
In principle, ray tracing can help us to analyze image formation in any given optical
system as the rays refract or reflect at all interfaces in the optical train. If we restrict
the analysis to paraxial rays only, then
such process can be described in a
matrix approach.
In the Feb 10 lecture, we defined a light ray by two co-ordinates:
a. its position, x
b. its slope, θ

[Figure: optical system represented by an ABCD ray matrix mapping (xin, θin) to (xout, θout)]
These parameters define a ray vector, which will change with distance and as the
ray propagates through optics.
Associated with the input ray vector (xin, θin) and the output ray vector (xout, θout), we can express the effect of the optical elements in the general form of a 2x2 ray matrix:

(xout)   [ A  B ] (xin)
(θout) = [ C  D ] (θin)

These matrices are often (uncreatively) called ABCD Matrices.

Since the displacements and angles are assumed to be small, we can think in terms of partial derivatives. Therefore, we can connect the matrix components with the functions of the imaging elements:

A = (∂xout/∂xin) : spatial magnification;
B = (∂xout/∂θin) : mapping angles (momentum) to position (function of a prism);
C = (∂θout/∂xin) : mapping position to angles (momentum) (also function of a prism);
D = (∂θout/∂θin) : angular magnification.

For cascaded elements, we simply multiply ray matrices. (Please notice the order of matrices starts from left to right on the optical axis!!)

Significance of the matrix elements: (Pedrotti Figure 18.9)
© Pearson Prentice Hall. All rights reserved. This content is excluded from our Creative
Commons license. For more information, see http://ocw.mit.edu/fairuse.
(a) If the input surface is at the front focal plane, the outgoing ray angles depend
only on the incident height.
(b) Similarly, if the output surface is at the back focal plane, the outgoing ray heights
depend only on the incoming angles.
(c) If the input and output plane are conjugate, then all incoming rays from constant
height y0 will converge at a constant height regardless of their angle.
(d) When the system is “afocal”, the refracting angles of the outgoing beams are
independent of the input positions.
Example 1: refraction matrix from a spherical interface (only changes θ but not x)

Right at the interface, xin = xout, and the paraxial refraction condition gives

n1(θin + xin/R) ≈ n2(θout + xin/R)
θout ≈ (n1/n2) θin + [(n1/n2) − 1] xin/R

So we can write the matrix:

(xout)   [ 1                 0     ] (xin)
(θout) = [ ((n1/n2) − 1)/R   n1/n2 ] (θin)
Example 2: matrix of a ray propagating in a medium (changes x but not θ):

(xout)   [ 1  d ] (xin)
(θout) = [ 0  1 ] (θin)

Example 3: refraction matrix through a thin lens (the two curved-interface refractions combined):

O_thin lens = O_curved interface 2 · O_curved interface 1 = [ 1  0 ; −1/f  1 ] ,  with 1/f = (n − 1)(1/R1 − 1/R2)

Example 4: imaging matrix through a thick lens (combined refraction and translation)

[Figure: thick lens with surface radii R1 and R2 separated by d; object distance so1, intermediate image distance si1, final image distance si2; index n outside, n′ inside]

From left to right:
- Translation O1: [ 1  so1 ; 0  1 ]
- Refraction O2: [ 1  0 ; ((n/n′) − 1)/R1  n/n′ ]
- Translation O3: [ 1  d ; 0  1 ]
- Refraction O4: [ 1  0 ; ((n′/n) − 1)/(−R2)  n′/n ]
- Translation O5: [ 1  si2 ; 0  1 ]
D. Location of Principal Planes for an Optical System

A ray matrix of the optical system (composite lenses and other elements) can give us a complete description of the rays passing through the overall optical train. In this session, we show how the ray matrix determines the focusing properties of the composite lens, such as the principal planes.

In order to facilitate our analysis, we choose the input plane to be the front surface of the lens array, and the output plane to be the back surface of the lenses.

[Figure: input plane, output plane and 2nd principal plane, with a parallel input ray of height x0 steered through the back focal point; adapted from Pedrotti Figure 18-12]

Let's start with the process of focusing at the back focus first. In this case, an incoming parallel ray (x0, 0) is refracted at the 2nd principal plane (PP) so it passes through the back focal point (BF). At the output plane, the ray vector of the refracted ray reads (xf, −θf):

(xf )    [ A  B ] (x0)
(−θf ) = [ C  D ] ( 0 )

This gives xf = A x0 and −θf = C x0.

Using the small angle approximation, we can connect the ratio of the beam height x0 and the effective focal length (EFL) by the steering angle θf:

θf = x0/EFL .

Thus EFL = −1/C.

Also from the similar triangles, xf/x0 = BFL/EFL. We can find the BFL:

BFL = −A/C .

Thus the 2nd PP is located at a distance from the output plane given by:

BFL − EFL = −(A − 1)/C .
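Cascading thin-lens and translation matrices reproduces these quantities for the Huygens eyepiece of the earlier practice example (a sketch, not the official solution; ideal thin lenses are assumed):

```python
# Sketch (not from the notes): EFL = -1/C and BFL = -A/C for the Huygens
# eyepiece (f1 = 30 cm, f2 = 10 cm, d = 20 cm), with the input and output
# planes at the first and last lens.

def matmul(m, n):
    """2x2 matrix product m @ n, written out to stay dependency-free."""
    return [[m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
            [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]]]

def thin_lens(f):
    return [[1.0, 0.0], [-1.0/f, 1.0]]

def translation(d):
    return [[1.0, d], [0.0, 1.0]]

f1, f2, d = 30.0, 10.0, 20.0
# The matrix of the first element the ray meets sits rightmost in the product.
M = matmul(thin_lens(f2), matmul(translation(d), thin_lens(f1)))
A, C = M[0][0], M[1][0]

EFL = -1.0 / C
BFL = -A / C
print(f"EFL = {EFL:.1f} cm, BFL = {BFL:.1f} cm")  # EFL = 15.0 cm, BFL = 5.0 cm
```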
Likewise, we can find the FFL and the first principal plane from the matrix components.

[Figure: input plane, output plane and 1st principal plane, with a ray from the front focal point emerging parallel at height x′0]

(x′0)   [ A  B ] (−x′f )
( 0 ) = [ C  D ] (−θ′f )

You could consider this as an inverse problem of the previous example, or solve the relationships:

x′0 = −A x′f − B θ′f
0 = −C x′f − D θ′f
θ′f = x′0/EFL  and  θ′f = x′f/FFL
So how is the ray matrix experimentally determined by ray tracing?

Generally, for a given (2D) optical system with unknown details, one way to determine the transfer matrix is to take measurements of two arbitrary input and output rays. To elaborate on that idea, we can treat a pair of the input ray vectors as a 2x2 matrix:

( x1out  x2out )   [ A  B ] ( x1in  x2in )
( θ1out  θ2out ) = [ C  D ] ( θ1in  θ2in )

Therefore

[ A  B ]   ( x1out  x2out ) ( x1in  x2in )^(−1)
[ C  D ] = ( θ1out  θ2out ) ( θ1in  θ2in )

         = 1/(x1in θ2in − x2in θ1in) · ( x1out θ2in − x2out θ1in   x2out x1in − x1out x2in )
                                        ( θ1out θ2in − θ2out θ1in   θ2out x1in − θ1out x2in )

As a special case you may select the two rays to be the marginal and chief rays as defined in the following section.
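The two-ray recovery can be sketched as follows (not from the notes; the "unknown" system here is a thin lens with f = 10 in arbitrary units, chosen only to generate the simulated measurements):

```python
# Sketch (not from the notes): recover an unknown ABCD matrix from two
# measured input/output ray pairs by inverting the 2x2 matrix of input rays.

def apply(m, ray):
    x, theta = ray
    return (m[0][0]*x + m[0][1]*theta, m[1][0]*x + m[1][1]*theta)

hidden = [[1.0, 0.0], [-1.0/10.0, 1.0]]  # thin lens, f = 10 (assumed example)

# Two linearly independent input rays and their "measured" outputs.
ray1_in, ray2_in = (1.0, 0.0), (0.0, 0.1)
ray1_out, ray2_out = apply(hidden, ray1_in), apply(hidden, ray2_in)

# [A B; C D] = M_out @ M_in^(-1), with the input rays as columns of M_in.
det = ray1_in[0]*ray2_in[1] - ray2_in[0]*ray1_in[1]
A = (ray1_out[0]*ray2_in[1] - ray2_out[0]*ray1_in[1]) / det
B = (ray2_out[0]*ray1_in[0] - ray1_out[0]*ray2_in[0]) / det
C = (ray1_out[1]*ray2_in[1] - ray2_out[1]*ray1_in[1]) / det
D = (ray2_out[1]*ray1_in[0] - ray1_out[1]*ray2_in[0]) / det

print([[A, B], [C, D]])  # recovers the hidden matrix, up to float rounding
```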
Practice Example: Rays Going Through 2F/4F Lens system

Please determine the ray transfer matrix of the following lens elements, with their input and output planes located at the front and back focal point of the corresponding lens.

[Figure: a lens of focal length f with input and output planes one focal length before and after it]
E. Aperture Stops, Pupils and Windows
o The Aperture Stops and Numerical Aperture
o Numerical Aperture(NA):
-
- also defines the resolution (or resolving power) of the optical system
limits the optical flux that is admitted through the system;
o The concept of marginal rays and chief rays
- Marginal ray: the ray that passes through the edge of the aperture.
- Chief ray (also called principal rays): the ray from an object point that
passes through the axial point of the aperture stop (also appears as
emitting from the axis of exit pupil).
Together, the C.R. and M.R. define the angular acceptance of spherical ray
bundles originating from an off-axis object.
o The entrance and exit pupils
11
multi-elementoptical systemaperturestopimage throughpreceding elementsimage throughsucceeding elements
Lecture Notes on Geometrical Optics (02/18/14)
2.71/2.710 Introduction to Optics –Nick Fang
o The field stop and corresponding windows
o Field stop:
- Limits the angular acceptance of Chief Rays
- Defines the Field of View
- Proper FS should be at intermediate image plane
o Entrance & Exit Windows
[Figure: field stop imaged through the preceding elements and through the succeeding elements]
o Effect of Aperture and field stops
[Figure: effect of apertures and field stops in (momentum, location) phase space; NA set by entrance pupil, aperture stop, and exit pupil; FoV set by entrance window, field stop, and exit window]
Practice Example: Single lens camera:
© Pearson Prentice Hall. All rights reserved. This content is excluded from our Creative
Commons license. For more information, see http://ocw.mit.edu/fairuse.
- Please determine the position and size of the image.
- Please determine the entrance and exit pupils.
- Please sketch the chief ray and marginal rays from the top of the object to the image.
MIT OpenCourseWare
http://ocw.mit.edu
2.71 / 2.710 Optics
Spring 2014
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/2-71-optics-spring-2014/04d4bff4ff4a9ae5e2c8ba7500905557_MIT2_71S14_lec4_notes.pdf |
Day 3
Hashing, Collections,
and Comparators
Wed. January 25th 2006
Scott Ostler
Hashing
Yesterday we overrode .equals()
Today we override .hashCode()
Goal: understand why we need to, and how
to do it
What is a Hash?
An integer that “stands in” for an object
Quick way to check for inequality, construct
groupings
Equal things (should) have equal hashes
What is .hashCode()
Well known method name that returns int
Is defined in java.lang.Object to return a
value mostly unique to that instance
All classes either inherit it, or override it
hashCode Object Contract
An object’s hashcode cannot change until it
is no longer equal to what it was
Two equal objects must have an equal
hashCode
It is good if two unequal objects have distinct
hashes
Hashcode Examples
String scott = "Scotty";
String scott2 = “Scotty”;
String corey = "Corey";
System.out.println(scott.hashCode());
System.out.println(scott2.hashCode());
System.out.println(corey.hashCode());
=> -1823897190, -1823897190, 65295514
Integer int1 = 123456789;
Integer int2 = 123456789;
System.out.println(int1.hashCode());
System.out.println(int2.hashCode());
=> 123456789, 123456789
A Name Class with equals()
public class Name {
public String first;
public String last;
public Name(String first, String last) {
this.first = first;
this.last = last;
}
public String toString() {
return first + " " + last;
}
public boolean equals(Object o) {
return (o instanceof Name &&
((Name) o).first.equals(this.first) &&
((Name) o).last.equals(this.last));
}}
Do our Names work?
Name kyle = new Name(“Kyle”, “MacLaughlin”);
Name jack = new Name("Jack", "Nance");
Name jack2 = new Name("Jack", "Nance");
System.out.println(kyle.equals(jack));
System.out.println(jack.equals(jack2));
System.out.println(kyle.hashCode());
System.out.println(jack.hashCode());
System.out.println(jack2.hashCode());
⇒ false, true, 6718604, 7122755, 14718739
Objects are equal, hashCodes aren’t
Who cares about hashCode?
Name code seems to work
Is this really a problem?
If we don’t use hashCode(), why bother
writing it?
ANSWER: JAVA CARES!
We have violated the Object contract
We have embarked upon a path filled with
Bad, Strange Things
Bad, Strange Thing #1
Set<String> strings = new HashSet<String>();
Set<Name> names = new HashSet<Name>();
strings.add("jack");
names.add(new Name("Jack", "Nance"));
System.out.println(strings.contains("jack"));
System.out.println(names.contains(
    new Name("Jack", "Nance")));
=> true, false
Solution? make .hashCode()
Remember our requirements:
hashCode() must obey equality
hashCode() must be consistent
hashCode() must generate int
hashCode() should recognize inequality
Possible Implementation
public class Name {
…
public int hashCode() {
return first.hashCode()
+ last.hashCode();
}
}
Does this work?
Good, Normal Thing #1
Set<Name> names = new HashSet<Name>();
names.add(jack);
System.out.println(names.contains(
    new Name("Jack", "Nance")));
⇒ true
Could it be better?
A Better Implementation
public class Name {
…
public int hashCode() {
return first.hashCode() * 37
+ last.hashCode();
}
}
Why is it better? (remember contract)
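A quick check (not from the slides) of why the multiplier helps: the additive hash collides whenever first and last names are swapped, while the ×37 version keeps these inputs distinct.

```java
// Sketch: additive vs. multiply-by-37 hash combining. Swapping first and last
// names always collides for the additive version (addition commutes).
public class HashQuality {
    public static int additive(String first, String last) {
        return first.hashCode() + last.hashCode();
    }
    public static int multiplicative(String first, String last) {
        return first.hashCode() * 37 + last.hashCode();
    }
    public static void main(String[] args) {
        System.out.println(additive("Jack", "Nance") == additive("Nance", "Jack"));
        System.out.println(multiplicative("Jack", "Nance")
                == multiplicative("Nance", "Jack"));
        // prints true, then false
    }
}
```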
hashCode Object Contract
An object’s hashcode cannot change until it
is no longer equal to what it was
Two equal objects must have an equal
hashCode
It is good if two unequal objects have distinct
hashes
Ex: Jack Nance will be different from Nance Jack
Before We Switch Topics
Any questions about hashCode, please ask!
It will be an important point later today
It will cause bizarre problems if you don’t
understand it
What Collections Do
“Framework” of Interfaces and Classes to
handle:
Collecting objects
Storing objects
Sorting objects
Retrieving objects
Provides common syntax across variety of
different Collection implementations
How to use Collections
add import java.util.*; to the top of every java
file
package lab2;
import java.util.*;
public class CollectionUser {
List<String> list = new ArrayList<String>();
… //rest of class
}
Basic Collection<Foo> Syntax
boolean add(Foo o);
boolean contains(Object o);
boolean remove(Object o);
int size();
Example Usage
List<Name> iapjava = new ArrayList<Name>();
iapjava.add(new Name("Laura", "Dern"));
iapjava.add(new Name("Toby", "Keeler"));
System.out.println(iapjava.size()); => 2
iapjava.remove(new Name("Toby", "Keeler"));
System.out.println(iapjava.size()); => 1
List<Name> iapruby = new ArrayList<Name>();
iapruby.add(new Name("Scott", "Ostler"));
iapjava.addAll(iapruby);
System.out.println(iapjava.size()); => 2
Generic Collections
We can specify the type of object that a
collection will hold
Ex: List<String> strings
We are reasonably sure that strings contains
only String objects
Is optional, but very useful
Why Use Generics?
List untyped = new ArrayList();
List<String> typed = new ArrayList<String>();
Object obj = untyped.get(0);
String | https://ocw.mit.edu/courses/6-092-java-preparation-for-6-170-january-iap-2006/050457e9d4f48a612421484aa1ee573c_lecture3.pdf |
Why Use Generics?
List untyped = new ArrayList();
List<String> typed = new ArrayList<String>();
Object obj = untyped.get(0);
String sillyString = (String) obj;
String smartString = typed.get(0);
Retrieving objects
Given Collection<Foo> coll
Iterator:
Iterator<Foo> it = coll.iterator();
while (it.hasNext()) {
Foo obj = it.next();
// do something with obj
}
For each:
for (Foo obj : coll) {
// do something with obj
}
Object Removing Caveat
Can’t remove objects from a Collection while
iterating over it
for (Foo obj : coll) {
    coll.remove(obj); // ConcurrentModificationException
}
Only the Iterator can remove an object it’s iterating over
Iterator<Foo> it = coll.iterator();
while (it.hasNext()) {
    Foo obj = it.next();
    it.remove(); // NOT coll.remove(obj);
}
Note that Iterator.remove is optional, and not all Iterator objects
will support it
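Putting the safe pattern together in runnable form (a sketch; the class and method names are illustrative, not from the slides):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch: dropping short names from a list while iterating, using the
// iterator's own remove() so no ConcurrentModificationException is thrown.
public class SafeRemoval {
    public static List<String> dropShortNames(List<String> names, int minLen) {
        Iterator<String> it = names.iterator();
        while (it.hasNext()) {
            String name = it.next();
            if (name.length() < minLen) {
                it.remove(); // the iterator's remove, never names.remove(name)
            }
        }
        return names;
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<String>();
        names.add("Bob");
        names.add("Alexandra");
        names.add("Jo");
        System.out.println(dropShortNames(names, 4)); // [Alexandra]
    }
}
```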
General Collection Types
List
ArrayList
Set
HashSet
TreeSet
Map
HashMap
List Overview
Ordered list of objects, similar to Array
Unlike Array, no set size
List order generally equals insert order
List<String> strings = new ArrayList<String>();
strings.add(“one”);
strings.add(“two”);
strings.add(“three”);
// strings = [ “one”, “two”, “three”]
Other Ways
Insert at an index
List<String> strings = new ArrayList<String>();
strings.add(“one”);
strings.add(“three”);
strings.add(1, “two”);
// strings = [ “one”, “two”, “three”]
Retrieve objects with an index:
s.o.print(strings.get(0))
s.o.print(strings.indexOf(“one”))
// => “one”
// => 0
Set Overview
No set size, no set order
No duplicate objects allowed!
Set<Name> names = new HashSet | https://ocw.mit.edu/courses/6-092-java-preparation-for-6-170-january-iap-2006/050457e9d4f48a612421484aa1ee573c_lecture3.pdf |
Set Overview
No set size, no set order
No duplicate objects allowed!
Set<Name> names = new HashSet<Name>();
names.add(new Name(“Jack”, “Nance”));
names.add(new Name(“Jack”, “Nance”));
System.out.println(names.size()); => 1
Set Contract
A set element cannot be changed in a way
that affects its equality
This is a danger of object mutability
If you don’t obey the contract, prepare for
Bad, Strange Things
Bad, Strange Thing #2
Set<Name> names = new HashSet<Name>();
Name jack = new Name(“Jack”, “Nance”);
names.add(jack);
System.out.println(names.size());
System.out.println(names.contains(jack)); => true;
jack.last = "Vance";
System.out.println(names.contains(jack)); => false
System.out.println(names.size()); => 1
Solutions to the Problem?
None.
So don’t do it.
If at all possible, use immutable set elements
Otherwise, be careful
Map Overview
Mapping between a set of “Key-Value Pairs”
That is, for every Key object, there is a Value
object
Essentially a “lookup service”
Keys must be unique, but values don’t have
to be
Note: Map is not a Collection
Map doesn’t support:
boolean add(Foo obj);
boolean contains(Object obj);
Rather, it supports:
Bar put(Foo key, Bar value); // returns the previous value for the key, or null
boolean containsKey(Foo key);
boolean containsValue(Bar value);
Sample Map Usage
Map<String, String> dns = new HashMap<String, String>();
dns.put(“scotty.mit.edu”, “18.227.0.87”);
System.out.println(dns.get(“scotty.mit.edu”));
System.out.println(dns.containsKey(“scotty.mit.edu”));
System.out.println(dns.containsValue(“18.227.0.87”));
dns.remove(“scotty.mit.edu”);
System.out.println(dns.containsValue(“18.227.0.87”));
// => “18.227.0.87”, true, true, false
Other Useful Methods
keySet() - returns a Set of all the keys
values() - returns a Collection of all the values
entrySet() - returns a Set of Key,Value Pairs
Each pair is a Map.Entry object
Map.Entry supports getKey, getValue, setValue
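A small runnable sketch (illustrative names and data, not from the slides) of walking a Map through entrySet(), reading each key's value in one pass:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: iterating a Map's entrySet() to visit key-value pairs together.
public class EntrySetDemo {
    public static int totalPopulation(Map<String, Integer> cityPops) {
        int total = 0;
        for (Map.Entry<String, Integer> entry : cityPops.entrySet()) {
            total += entry.getValue(); // getKey() is also available here
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Integer> pops = new HashMap<String, Integer>();
        pops.put("Boston", 600000);
        pops.put("Cambridge", 100000);
        System.out.println(totalPopulation(pops)); // 700000
    }
}
```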
Dangers of Key Mutability
A key must always be equal to what it was
This is a restatement of the Set discussion
If a key changes, it and its value will be
“lost”
Bad, Strange Thing #3
Name isabella = new Name("Isabella", "Rosellini");
Map<Name, String> directory = new HashMap<Name, String>();
directory.put(isabella, "123-456-7890");
System.out.println(directory.get(isabella));
isabella.first = "Dennis";
System.out.println(directory.get(isabella));
directory.put(new Name("Isabella", "Rosellini"), "555-555-1234");
isabella.first = "Isabella";
System.out.println(directory.get(isabella));
What happens?
Two Answers
Right Answer:
// => 123-456-7890, null, 555-555-1234
Righter Answer:
Doesn’t matter because we shouldn’t be doing
it
Unspecified behavior
How to Fix Mutable Keys?
We want to be able to use any object to
stand in for another
But mutable objects are dangerous
Copy the Key
Name dennis = new Name(“Dennis”, “Hopper”);
Name copy = new Name(dennis.first, dennis.last);
map.put(copy, “555-555-1234”);
Now changes to dennis don’t mess up map
But the keys themselves can still be | https://ocw.mit.edu/courses/6-092-java-preparation-for-6-170-january-iap-2006/050457e9d4f48a612421484aa1ee573c_lecture3.pdf |
-1234”);
Now changes to dennis don’t mess up map
But the keys themselves can still be changed
for (Name name : map.keySet()) {
    name.first = "u r wrecked"; // uh oh
}
Make Immutable Keys
public class Name {
public final String first;
public final String last;
public Name(String first, String last) {
this.first = first;
this.last = last;
}
public boolean equals(Object o) {
return (o instanceof Name &&
((Name) o).first.equals(this.first) &&
((Name) o).last.equals(this.last));
}}
Immutable Proxy for Keys
Map<String, String> dir = new HashMap<String, String>();
Name naomi = new Name(“Naomi”, “Watts”);
String key = naomi.first + “,” + naomi.last;
dir.put(key, “888-444-1212”);
Strings are immutable, so our Maps will be safe
“Freeze” Keys
public class Name {
private String first;
private String last;
private boolean frozen = false;
…
public void setFirst(String s) {
if (!frozen) first = s;
}
… // do same with setLast
public void freeze() {
frozen = true;
}
}
Summary: Mutable Keys
Each approach has tradeoffs
But where appropriate, choose the simplest,
strongest solution
If a key cannot ever be changed, there will
never be problems
“Put and Pray” only as a last resort
Collection Wrap-up
Common problems
Sharing objects between Collections
Trying to remove an Object during iteration
Mutable Keys, Sets
Any questions?
Comparing and Sorting
Used to decide, between two objects, if one
is bigger or they are equal
(a.compareTo(b)) should result in:
< 0 if a < b
= 0 if a = b
> 0 if a > b
Comparison Example
Integer one = 1;
System.out.println(one.compareTo(3));
System.out.println(one.compareTo(-50));
String frank = “Frank”;
System.out.println(frank.compareTo(“Booth”));
System.out.println(frank.compareTo(“Hopper”));
// => -1 , 1, 4, -2
Sorting a List Alphabetically
List<String> names = new ArrayList<String>();
names.add(“Sailor”);
names.add(“Lula”);
names.add(“Bobby”);
names.add(“Santos”);
names.add(“Dell”);
Collections.sort(names);
// names => [ “Bobby”, “Dell”, “Lula”, “Sailor”,
“Santos” ]
Comparable Interface
We can sort Strings because they implement
Comparable
That is, they have a “Natural Ordering”.
To make Foo class Comparable, we have to
implement:
int compareTo(Foo obj);
A Sortable Name
public class Name implements Comparable<Name> {
…
public int compareTo(Name o) {
int compare = this.last.compareTo(o.last);
if (compare != 0)
return compare;
else return this.first.compareTo(o.first);
}
}
Sorting Names in Action
List<Name> names = new ArrayList<Name>();
names.add(new Name("Nicolas", "Cage"));
names.add(new Name("Laura", "Dern"));
names.add(new Name("Harry", "Stanton"));
names.add(new Name("Diane", "Ladd"));
names.add(new Name("William", "Morgan"));
names.add(new Name("Crispin", "Glover"));
names.add(new Name("Johnny", "Cage"));
names.add(new Name("Metal", "Cage"));
System.out.println(names);
Collections.sort(names);
System.out.println(names);
// => [Johnny Cage, Metal Cage, Nicolas Cage, Laura Dern, Crispin Glover,
Diane Ladd, William Morgan, Harry Stanton]
Comparator Objects
To create multiple sortings for a given Type,
we can define Comparator classes
A Comparator takes in two objects, and
determines which is bigger
For type Foo, a Comparator<Foo> has:
int compare(Foo o1, Foo o2);
A First-Name-First Comparator
public class FirstNameFirst implements
Comparator<Name> {
public int compare(Name n1, Name n2) {
int ret = n1.first.compareTo(n2.first);
if (ret != 0)
return ret;
else return n1.last.compareTo(n2.last);
}
}
This goes in a separate file, FirstNameFirst.java
Does it Work?
List<Name> names = new ArrayList<Name>();
..
Comparator<Name> first = new FirstNameFirst();
Collections.sort(names, first);
System.out.println(names);
// => [Crispin Glover, Diane Ladd, Harry Stanton, Johnny
Cage, Laura Dern, Metal Cage, Nicolas Cage, William
Morgan]
It works!
Comparison Contract
Once again, there are rules that we must
follow
Specifically, be careful when
(compare(e1, e2)==0) != e1.equals(e2)
With such a sorting, using SortedSet or
SortedMap will cause Bad, Strange Things
Another Way of Sorting
Use a TreeSet - automatically kept sorted!
Either the Objects in TreeSet must implement Comparable
Or give a Comparator Object when making the TreeSet
SortedSet<Name> names = new TreeSet<Name>(new
FirstNameFirst());
names.add(new Name("Laura", "Dern"));
names.add(new Name("Harry", "Stanton"));
names.add(new Name("Diane", "Ladd"));
System.out.println(names);
// => [Diane Ladd, Harry Stanton, Laura Dern]
Day 3 Wrap-Up
Ask questions!
There was more here than anyone could get
or remember
Think of what you want your code to do, and
the best way to express that
Read Sun’s Java Documentation:
http://java.sun.com/j2se/1.5.0/docs/api
No one can keep Java in their head
Everytime you code, have this page open | https://ocw.mit.edu/courses/6-092-java-preparation-for-6-170-january-iap-2006/050457e9d4f48a612421484aa1ee573c_lecture3.pdf |
6.776
High Speed Communication Circuits
Lecture 3
Wave Guides and Transmission Lines
Massachusetts Institute of
Technology
February 8, 2005
Copyright © 2005 by Hae-Seung Lee and Michael H. Perrott
Maxwell’s Equations in Free Space
Take Curl of (1):

∇ × ∇ × E = −∇ × ( μ ∂H/∂t ) = −μ ∂(∇ × H)/∂t    (5)

From (2):

μ ∂(∇ × H)/∂t = με ∂²E/∂t²    (6)

Vector identity + (3):

∇ × ∇ × E = ∇(∇ · E) − ∇²E = −∇²E    (7)
H.-S. Lee & M.H. Perrott
MIT OCW
Simplified Maxwell’s Equations
Putting together (5), (6) and (7):

∇²E − με ∂²E/∂t² = 0    (8)

Similarly for H:

∇²H − με ∂²H/∂t² = 0    (9)

For simplicity, assume variation only in the z-direction:

∇²E = ∂²E/∂z²  and  ∇²H = ∂²H/∂z²    (10)
Solutions to Maxwell’s Equations
(10) reduces to

∂²E/∂z² − με ∂²E/∂t² = 0    (11)

Similarly for H:

∂²H/∂z² − με ∂²H/∂t² = 0    (12)

(11) and (12) can be satisfied by any function in the form f(z ± vt), where

v = 1/√(με)
Calculating Propagation Speed
- The function f is a function of time AND position
- Velocity calculation: setting z ± vt = constant gives ∂z/∂t = ∓v
- The solution propagates in the z or −z direction with a velocity of v = 1/√(με)
Assume Sinusoidal Steady-State
E and H solutions are in the form

Ae^(jω(t ± z/v)) = Ae^(j(ωt ± kz))

where

k = ω/v = ω√(με)
Assumptions
- Orientation and direction
  - E field is in the x-direction and traveling in the z-direction
  - H field is in the y-direction and traveling in the z-direction
  - In freespace: [Figure: Ex along x, Hy along y, direction of travel along z]
- For transmission line (TEM mode): [Figure: parallel plates of width a and separation b; Ex and Hy between the plates, direction of travel along z]
Solutions
- Fields change only in time and in the z-direction
- Implications:

Evaluate Curl Operations in Maxwell’s Formula
- Definition
- Given the previous assumptions

Now Put All the Pieces Together
- Solve Maxwell’s Equations (1) and (2)
Freespace Values
- Constants
- Impedance
- Propagation speed
- Wavelength of 30 GHz signal
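The free-space numbers behind those bullets can be checked directly (a sketch, not the lecture's own code; μ0 and ε0 are the standard SI values):

```java
// Sketch: free-space transmission-line constants. MU0 and EPS0 are the
// standard SI values (assumed, not read from the slides).
public class FreespaceValues {
    public static final double MU0 = 4e-7 * Math.PI; // H/m
    public static final double EPS0 = 8.854e-12;     // F/m

    public static double impedance() { return Math.sqrt(MU0 / EPS0); }   // ohms
    public static double speed() { return 1.0 / Math.sqrt(MU0 * EPS0); } // m/s
    public static double wavelength(double freqHz) { return speed() / freqHz; }

    public static void main(String[] args) {
        System.out.printf("eta0 = %.1f ohms%n", impedance());  // ~376.7
        System.out.printf("v = %.3e m/s%n", speed());          // ~3.0e8
        System.out.printf("lambda(30 GHz) = %.2f cm%n",
                wavelength(30e9) * 100);                       // ~1 cm
    }
}
```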
Voltage and Current
- Definitions: [Figure: parallel-plate line of width a and separation b; voltage V across the Ex field between the plates, current I along the plates with the Hy field]
Parallel Plate Waveguide
- E-field and H-field are influenced by the plates. [Figure: plates of width a and separation b, with Ex and Hy between them]
Current and H-Field
- Assume that (AC) current I is flowing
- Current flowing down the waveguide influences the H-field
- Flux from one plate interacts with flux from the other plate
- Approximate the H-field to be uniform and restricted to lie between the plates
Voltage and E-Field
- Approximate the E-field to be uniform and restricted to lie between the plates. [Figure: surface current J on the plates; the E-field between the plates sets the voltage V across the gap]
Back to Maxwell’s Equations
- From previous analysis
- These can be equivalently written as
- Where

Wave Equation for Transmission Line (TEM)
- Key formulas
- Substitute (2) into (1)
- Characteristic impedance (use Equation (1))

Connecting to the Real World
- Typical of sinusoidal analysis using phasors, the solutions are complex
- Take the real part of the solution to find the real-world solution:
Calculating Propagation Speed
- The resulting cosine wave is a function of time AND position. [Figure: Ex(z,t) snapshots versus z at successive times t; direction of travel along z]
- Consider “riding” one part of the wave
- Velocity calculation
Integrated Circuit Values
- Constants
- Impedance (geometry/material dependent)
- Propagation speed (geometry independent, material dependent)
- Wavelength of 30 GHz signal in silicon dioxide
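A numeric check of the on-chip values (a sketch; the relative permittivity εr = 3.9 is the textbook value for silicon dioxide, assumed here rather than read from the slides):

```java
// Sketch: propagation speed and wavelength on a line with SiO2 dielectric.
// EPS_R = 3.9 is the usual textbook value for silicon dioxide (assumed).
public class OnChipValues {
    public static final double C0 = 2.998e8; // free-space speed, m/s
    public static final double EPS_R = 3.9;  // SiO2 (assumed)

    public static double speed() { return C0 / Math.sqrt(EPS_R); }
    public static double wavelength(double freqHz) { return speed() / freqHz; }

    public static void main(String[] args) {
        System.out.printf("v = %.2e m/s%n", speed());  // ~1.5e8, about c/2
        System.out.printf("lambda(30 GHz) = %.1f mm%n",
                wavelength(30e9) * 1e3);               // ~5 mm
    }
}
```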
LC Network Analogy of Transmission Line (TEM)
- LC network analogy: [Figure: ladder of series L and shunt C sections with input impedance Zin]
- Calculate input impedance
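One way to see the ladder's input impedance settle near √(L/C) is to build a finite ladder numerically. This sketch is not the lecture's derivation: per-section values are chosen so √(L/C) = 50 Ω, the drive frequency is well below the ladder cutoff, and the far end is terminated in a matched 50 Ω load.

```java
// Sketch: input impedance of a finite LC ladder, built section by section
// from a matched 50-ohm termination. L = 0.25 uH and C = 100 pF per section
// give sqrt(L/C) = 50 ohms (values chosen for illustration).
public class LcLadder {
    // Complex numbers as {re, im} pairs.
    static double[] add(double[] a, double[] b) {
        return new double[]{a[0] + b[0], a[1] + b[1]};
    }
    static double[] div(double[] a, double[] b) {
        double d = b[0] * b[0] + b[1] * b[1];
        return new double[]{(a[0] * b[0] + a[1] * b[1]) / d,
                            (a[1] * b[0] - a[0] * b[1]) / d};
    }

    /** Zin after n sections of series jwL and shunt jwC, terminated in zLoad. */
    public static double[] zin(int n, double w, double l, double c, double[] zLoad) {
        double[] z = zLoad;
        for (int i = 0; i < n; i++) {
            // shunt C in parallel with z (add admittances), then series L:
            double[] y = add(new double[]{0, w * c}, div(new double[]{1, 0}, z));
            z = add(new double[]{0, w * l}, div(new double[]{1, 0}, y));
        }
        return z;
    }

    public static void main(String[] args) {
        double w = 2 * Math.PI * 1e6; // 1 MHz, well below the ladder cutoff
        double[] z = zin(100, w, 0.25e-6, 100e-12, new double[]{50, 0});
        System.out.printf("Zin ~ %.2f %+.2fj ohms%n", z[0], z[1]); // magnitude ~50
    }
}
```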
How are Lumped LC and Transmission Lines Different?
- In a transmission line, the L and C values are infinitely small
  - It is always true that
  [Figure: LC ladder with input impedance Zin]
- For lumped LC, L and C have finite values
  - Finite frequency range for
Lossy Transmission Lines
(cid:131) Practical transmission lines have losses in their
conductor and dielectric material
- We model such loss by including resistors in the LC
model
[Figure: lossy-line model: ladder of series R and L sections with shunt 1/G and C sections, input impedance Zin]
(cid:131) The presence of such losses has two effects on
signals traveling through the line
- Attenuation
- Dispersion (i.e., bandwidth degradation)
(cid:131) See textbook for analysis
18.354J Nonlinear Dynamics II: Continuum Systems Lecture 2
Spring 2015
2 Dimensional analysis
Before moving on to more ‘sophisticated things’, let us think a little about dimensional
analysis and scaling. On the one hand these are trivial, and on the other they give a simple
method for getting answers to problems that might otherwise be intractable. The idea
behind dimensional analysis is very simple: Any physical law must be expressible in any
system of units that you use. There are two consequences of this:
• One can often guess the answer just by thinking about what the dimensions of the
answer should be, and then expressing the answer in terms of quantities that are
known to have those dimensions1.
• The scientifically interesting results are always expressible in terms of quantities that
are dimensionless, not depending on the system of units that you are using.
One example of a dimensionless number relevant for fluid dynamics that we have already
encountered in the introductory class is the Reynolds number, which quantifies the relative
strength of viscous and inertial forces. Another example of dimensional analysis that we will
study in detail is the solution to the diffusion equation for the spreading of a point source.
The only relevant physical parameter is the diffusion constant D, which has dimensions of
L2T −1. We denote this by writing [D] = L2T −1. Therefore the characteristic scale over
which the solution varies after time t must be √(Dt). This might seem like a rather simple
result, but it expresses the essence of solutions to the diffusion equation. Of course, we will
be able to solve the diffusion equation exactly, so this argument wasn’t really necessary. In
practice, however, we will rarely find useful exact solutions to the Navier-Stokes equations,
and so dimensional analysis will often give us insight before diving into the mathematics
or numerical simulations. Before formalising our approach, let us consider a few examples
where simple dimensional arguments intuitively lead to interesting results.
2.1 The pendulum

This is a trivial problem that you know quite well. Consider a pendulum with length L and
mass m, hanging in a gravitational field of strength g. What is the period of the pendulum?
We need a way to construct a quantity with units of time involving these numbers. The only
possible way to do this is with the combination √(L/g). Therefore, we know immediately
that

τ = c√(L/g).    (23)

This result might seem trivial to you, as you will probably remember (e.g., from a previous
course) that c = 2π, if one solves the full dynamical problem for small amplitude
oscillations. However, the above formula works even for large amplitude oscillations.
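A quick numeric check of this result (a sketch, not from the notes; c = 2π is the small-amplitude value quoted above):

```java
// Sketch: the dimensional-analysis result tau = c*sqrt(L/g), using the
// small-amplitude constant c = 2*pi.
public class Pendulum {
    public static double period(double lengthM, double g) {
        return 2 * Math.PI * Math.sqrt(lengthM / g);
    }
    public static void main(String[] args) {
        // A 1 m pendulum on Earth: about 2.0 s.
        System.out.printf("tau = %.3f s%n", period(1.0, 9.81));
    }
}
```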
2.2 Pythagorean theorem
Now we try to prove the Pythagorean theorem by dimensional analysis. Suppose you are
given a right triangle, with hypotenuse length L and smallest acute angle φ. The area of
1Be careful to distinguish between dimensions and units. For example mass (M ), length (L) and time
(T ) are dimensions, and they can have different units of measurement (e. g. length may be in feet or meters)
the triangle is clearly
A = A(L, φ).    (24)

Since φ is dimensionless, it must be that

A = L²f(φ),    (25)
where f is some function we don’t know.
Now the triangle can be divided into two little right triangles by dropping a line from
the right angle which is perpendicular to the hypotenuse. The two right triangles have
hypotenuses that happen to be the other two sides of our original right triangle, let’s call
them a and b. So we know that the areas of the two smaller triangles are a2f (φ) and b2f (φ)
(where elementary geometry shows that the acute angle φ is the same for the two little
triangles as the big triangle). Moreover, since these are all right triangles, the function f is
the same for each. Therefore, since the area of the big triangle is just the sum of the areas
of the little ones, we have
L²f = a²f + b²f,

or

L² = a² + b².    (26)
2.3 The gravitational oscillation of a star
It is known that the sun, and many other stars undergo some mode of oscillation. The
question we might ask is how does the frequency of oscillation ω depend on the properties
of that star? The first step is to identify the physically relevant variables. These are the
density ρ, the radius R and the gravitational constant G (as the oscillations are due to
gravitational forces). So we have
ω = ω(ρ, R, G).    (27)
The dimensions of the variables are [ω] = T −1, [ρ] = M L−3, [R] = L and [G] = M −1L3T −2.
The only way we can combine these to give a quantity with the dimensions of frequency is
through the relation
ω = c√(Gρ).    (28)
Thus, we see that the frequency of oscillation is proportional to the square root of the density
and independent of the radius. The determination of c requires a real stellar observation, but
we have already determined a lot of interesting details from dimensional analysis alone. For
the sun, ρ = 1400kg/m3, giving ω ∼ 3 × 10−4s−1. The period of oscillation is approximately
1 hour, which is reasonable. However, for a neutron star (ρ = 7 × 1011kgm−3) we predict
ω ∼ 7000s−1, corresponding to a period in the milli-second range.
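A numeric check (a sketch, not from the notes): G is the SI gravitational constant, and the neutron-star density is taken as roughly 7 × 10^17 kg/m³, the value consistent with the quoted ω ∼ 7000 s−1.

```java
// Sketch: omega = c*sqrt(G*rho) with c taken as 1. G is the SI gravitational
// constant; densities: sun ~1400 kg/m^3, neutron star ~7e17 kg/m^3 (assumed).
public class StarOscillation {
    public static final double G = 6.674e-11; // m^3 kg^-1 s^-2

    public static double omega(double rho) { return Math.sqrt(G * rho); }

    public static void main(String[] args) {
        System.out.printf("sun: omega = %.1e 1/s%n", omega(1400));        // ~3e-4
        System.out.printf("neutron star: omega = %.1e 1/s%n", omega(7e17)); // ~7e3
    }
}
```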
2.4 The oscillation of a droplet
What happens if instead of considering a large body of fluid, such as a star, we consider a
smaller body of fluid, such as a raindrop. Well, in this case we argue that surface tension γ
10
provides the relevant restoring force and we can neglect gravity. γ has dimensions of en-
ergy/area, so that [γ] = M T −2. The only quantity we can now make with the dimensions
of T −1 using our physical variables is
ω = c√(γ/(ρR³)),    (29)
which is not independent of the radius. For water γ = 0.07Nm−1 giving us a characteristic
frequency of 3Hz for a raindrop.
One final question we might ask ourselves before moving on is how big does the droplet
have to be for gravity to have an effect? We reason that the crossover will occur when the
two models give the same frequency of oscillation. Thus, when
we find that
(cid:112)ρG =
(cid:114) γ
ρR3
Rc ∼
(cid:19) 1
3
(cid:18) γ
ρ2G
(30)
(31)
This gives a crossover radius of about 10m for water.
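That crossover radius follows directly from (31); a quick check with water's values (a sketch, not from the notes):

```java
// Sketch: gravity/surface-tension crossover radius Rc ~ (gamma/(rho^2 G))^(1/3)
// for water: gamma = 0.07 N/m, rho = 1000 kg/m^3, G the SI gravitational constant.
public class CrossoverRadius {
    public static double rc(double gamma, double rho, double bigG) {
        return Math.cbrt(gamma / (rho * rho * bigG));
    }
    public static void main(String[] args) {
        System.out.printf("Rc = %.1f m%n", rc(0.07, 1000, 6.674e-11)); // ~10 m
    }
}
```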
2.5 Water waves
This is a subject we will deal with in greater detail towards the end of the course, but for
now we look to obtain a basic understanding of the motion of waves on the surface of water.
For example, how does the frequency of the wave depend on the wavelength λ? This is
called the dispersion relation.
If the wavelength is long, we expect gravity to provide the restoring force, and the
relevant physical variables in determining the frequency would appear to be the mass density
ρ, the gravitational acceleration g and the wave number k = 2π/λ. The dimensions of these
quantities are [ρ] = M L−3, [g] = LT −2 and [k] = L−1. We can construct a quantity with
the dimensions of T −1 through the relation
ω = c√(gk).    (32)
We see that the frequency of water waves is proportional to the square root of the wavenum-
ber, in contrast to light waves for which the frequency is proportional to the wavenumber.
As with a droplet, we might worry about the effects of surface tension when the wave-
length gets small. In this case we replace g with γ in our list of physically relevant variables.
Given that [γ] = M T −2, the dispersion relation must be of the form
ω = c√(γk³/ρ),    (33)
which is very different to that for gravity waves. If we look for a crossover, we find that the
frequencies of gravity waves and capillary waves are equal when
k ∼ √(ρg/γ).    (34)
This gives a wavelength of 1cm for water waves.
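A numeric check of the crossover (a sketch, not from the notes), using γ = 0.07 N/m, ρ = 1000 kg/m³, and g = 9.81 m/s²; the corresponding wavelength 2π/k comes out at the quoted centimetre scale:

```java
// Sketch: gravity/capillary crossover wavenumber k ~ sqrt(rho*g/gamma) for
// water, and the corresponding wavelength 2*pi/k.
public class CapillaryCrossover {
    public static double k(double rho, double g, double gamma) {
        return Math.sqrt(rho * g / gamma);
    }
    public static void main(String[] args) {
        double kc = k(1000, 9.81, 0.07);
        System.out.printf("k = %.0f 1/m, lambda = %.1f cm%n",
                kc, 200 * Math.PI / kc); // lambda ~ 1.7 cm
    }
}
```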
MIT OpenCourseWare
http://ocw.mit.edu
18.354J / 1.062J / 12.207J Nonlinear Dynamics II: Continuum Systems
Spring 2015
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/18-354j-nonlinear-dynamics-ii-continuum-systems-spring-2015/051766b6177c971b4cd0f005ddfdfe17_MIT18_354JS15_Ch2.pdf |