ŷ satisfy the relations

s² = 1,  sx = −xs,  sŷ = −ŷs,  ŷx − xŷ = 1 − 2cs.

The algebra generated by s, x, ŷ with these relations is the rational Cherednik algebra
H_{1,c}(Z₂, C), where the action of Z₂ on C is given by z ↦ −z.
7.12. Affine and extended affine Weyl groups. Let R = {α} ⊂ Rⁿ be a root system
with respect to a nondegenerate symmetric bilinear form (·, ·) on Rⁿ. We will assume that
R is reduced. Let {α_i}_{i=1}^n ⊂ R be the set of simple roots and R₊ (respectively R₋) be the
set of positive (respectively negative) roots. The coroots are denoted by α∨ = 2α/(α, α).
Let Q∨ = ⊕_{i=1}^n Z α_i∨ be the coroot lattice and P∨ = ⊕_{i=1}^n Z ω_i∨ the coweight lattice, where
the ω_i∨'s are the fundamental coweights, i.e., (ω_i∨, α_j) = δ_{ij}. Let θ be the maximal positive root,
and assume that the bilinear form is normalized by the condition (θ, θ) = 2. Let W be the
Weyl group, which is generated by the reflections s_α (α ∈ R).
By definition, the affine root system is

R^a = {α̃ = [α, j] ∈ Rⁿ × R | α ∈ R, j ∈ Z}.

The set of positive affine roots is R^a_+ = {[α, j] | j ∈ Z_{>0}} ∪ {[α, 0] | α ∈ R₊}.
Define α₀ = [−θ, 1]. We will identify α ∈ R with α̃ = [α, 0] ∈ R^a.
For an arbitrary affine root ˜α = [α, j] and a vector ˜z = [z, ζ] ∈ Rn × R, the corresponding
affine reflection is defined as follows:
s_α̃(z̃) = z̃ − 2 ((z, α)/(α, α)) α̃ = z̃ − (z, α∨) α̃.
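As a quick numerical sanity check (not part of the original notes), applying this formula twice returns z̃, i.e. s_α̃ is an involution. A minimal sketch in Python, with a sample root in R²:

```python
import numpy as np

def coroot(alpha):
    # alpha_vee = 2*alpha / (alpha, alpha)
    return 2.0 * alpha / np.dot(alpha, alpha)

def affine_reflection(z_tilde, alpha, j):
    # s_[alpha,j]([z, zeta]) = [z, zeta] - (z, alpha_vee) * [alpha, j]
    z = z_tilde[:-1]
    c = np.dot(z, coroot(alpha))
    return z_tilde - c * np.append(alpha, j)

alpha = np.array([1.0, -1.0])        # a sample root in R^2, (alpha, alpha) = 2
z_tilde = np.array([0.3, 1.7, 0.5])  # [z, zeta] in R^2 x R

once = affine_reflection(z_tilde, alpha, 1)
twice = affine_reflection(once, alpha, 1)
assert np.allclose(twice, z_tilde)   # s_alpha~ is an involution
```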
The affine Weyl group W^a is generated by the affine reflections {s_α̃ | α̃ ∈ R^a_+}, and we have
an isomorphism

W^a ≅ W ⋉ Q∨,

where the translation α∨ ∈ Q∨ is naturally identified with the composition s_{[−α,1]} s_α ∈ W^a.
Define the extended affine Weyl group to be W^a_ext = W ⋉ P∨, acting on R^{n+1} via b(z̃) =
[z, ζ − (b, z)] for z̃ = [z, ζ], b ∈ P∨. Then W^a ⊂ W^a_ext. Moreover, W^a is a normal subgroup of
W^a_ext, and W^a_ext/W^a ≅ P∨/Q∨. The latter group can be identified with the group Π = {π_r} of
the elements of W^a_ext permuting the simple affine roots under their action in R^{n+1}. It is a
commutative subgroup of Aut = Aut(Dyn^a) (Dyn^a denotes the affine Dynkin diagram).
The quotient Aut/Π is isomorphic to the group of the automorphisms preserving α₀, i.e. the
group Aut_Dyn of automorphisms of the finite Dynkin diagram.
7.13. Cherednik’s double affine Hecke algebra of a root system. In this subsection,
we will give an explicit presentation of Cherednik’s DAHA for a root system, defined in
Example 7.18. This is done by giving an explicit presentation of the corresponding braid
group (which is called the elliptic braid group), and then imposing quadratic relations on the
generators corresponding to reflections.
For a root system R, let m = 2 if R is of type D_{2k}, m = 1 if R is of type B_{2k}, C_k, and
otherwise m = |Π|. Let m_{ij} be the number of edges between vertex i and vertex j in the
affine Dynkin diagram of Ra . Let Xi (i = 1, . . . , n) be a family of pairwise commutative and
algebraically independent elements. Set

X_{[b,j]} = ∏_{i=1}^n X_i^{l_i} q^j,  where b = ∑_{i=1}^n l_i ω_i ∈ P, j ∈ (1/m)Z.

For an element ŵ ∈ W^a_ext, we can define an action on these X_{[b,j]} by ŵ(X_{[b,j]}) = X_{ŵ[b,j]}.
Definition 7.24 (Cherednik). The double affine Hecke algebra (DAHA) of the root system
R, denoted by HH, is an algebra defined over the field C_{q,t} = C(q^{1/m}, t^{1/2}), generated by
T_i, i = 0, . . . , n, Π, and X_b, b ∈ P, subject to the following relations:

(1) T_i T_j T_i · · · = T_j T_i T_j · · · , with m_{ij} factors on each side;
(2) (T_i − t_i)(T_i + t_i^{−1}) = 0 for i = 0, . . . , n;
(3) π T_i π^{−1} = T_{π(i)}, for π ∈ Π and i = 0, . . . , n;
(4) π X_b π^{−1} = X_{π(b)}, for π ∈ Π, b ∈ P;
(5) T_i X_b T_i = X_b X_{α_i}^{−1} if i > 0 and (b, α_i∨) = 1; T_i X_b = X_b T_i if i > 0 and (b, α_i∨) = 0;
(6) T_0 X_b T_0 = X_{b−α_0} if (b, θ) = −1; T_0 X_b = X_b T_0 if (b, θ) = 0.

Here the t_i are parameters attached to the simple affine roots (so that roots of the same length
give rise to the same parameters).
The degenerate double affine Hecke algebra (trigonometric Cherednik algebra) HH_trig is
generated by the group algebra of W^a_ext, Π, and pairwise commutative elements

ỹ_{b̃} = ∑_{i=1}^n (b, α_i∨) y_i + u,  for b̃ = [b, u] ∈ P × Z,

with the following relations:

s_i y_b − y_{s_i(b)} s_i = −k_i (b, α_i∨), for i = 1, . . . , n,
s_0 y_b − y_{s_0(b)} s_0 = k_0 (b, θ),
π_r y_b π_r^{−1} = y_{π_r(b)} for π_r ∈ Π.
Remark 7.25. This degeneration can be obtained from the DAHA similarly to the case of
A1, which is described above.
7.14. Algebraic flatness of Hecke algebras of polygonal Fuchsian groups. Let W
be the Coxeter group of rank r corresponding to a Coxeter datum:
m_{ij} (i, j = 1, . . . , r, i ≠ j), such that 2 ≤ m_{ij} ≤ ∞ and m_{ij} = m_{ji}.
So the group W has generators s_i, i = 1, . . . , r, and defining relations

s_i² = 1,  (s_i s_j)^{m_{ij}} = 1 if m_{ij} ≠ ∞.
It has a sign character ξ : W → {±1} given by ξ(si) = −1. Denote by W+ the kernel of ξ
(the even subgroup of W ). It is generated by a_{ij} = s_i s_j with relations:

a_{ij} = a_{ji}^{−1},  a_{ij} a_{jk} a_{ki} = 1,  a_{ij}^{m_{ij}} = 1 if m_{ij} < ∞.

We can deform the group algebra C[W ] as follows. Define the algebra A(W ) with invertible
generators s_i, and t_{ij,k}, i, j = 1, . . . , r, k ∈ Z_{m_{ij}} for (i, j) such that m_{ij} < ∞, and defining
relations

s_i² = 1,  t_{ij,k} = t_{ji,−k}^{−1},  [t_{ij,k}, t_{i′j′,k′}] = 0,  s_p t_{ij,k} = t_{ji,k} s_p,

∏_{k=1}^{m_{ij}} (s_i s_j − t_{ij,k}) = 0 if m_{ij} < ∞.
Notice that if we set tij,k = exp(2πki/mij ), we get C[W ].
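A small numerical illustration of this last point (my own, not from the notes): at the special values t_{ij,k} = exp(2πik/m_{ij}), the product relation just says that the rotation a_{ij} = s_i s_j satisfies a_{ij}^{m_{ij}} = 1, as it does in C[W]. For a dihedral group realized by 2×2 reflection matrices:

```python
import numpy as np

m = 5  # dihedral Coxeter group I2(5): (s1 s2)^m = 1

def reflection(theta):
    # reflection of R^2 across the line at angle theta
    return np.array([[np.cos(2 * theta),  np.sin(2 * theta)],
                     [np.sin(2 * theta), -np.cos(2 * theta)]])

s1 = reflection(0.0)
s2 = reflection(np.pi / m)
a = s1 @ s2               # a rotation by 2*pi/m (up to sign), so a^m = I
I = np.eye(2)

# prod_k (a - zeta^k I) over all m-th roots of unity equals a^m - I
P = I.astype(complex)
for k in range(1, m + 1):
    zeta_k = np.exp(2j * np.pi * k / m)
    P = P @ (a - zeta_k * I)

assert np.allclose(P, 0)                                # deformed relation at special t's
assert np.allclose(np.linalg.matrix_power(a, m), I)     # i.e. a^m = 1, as in C[W]
```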
Define also the algebra A₊(W ) over R := C[t_{ij,k}] (with t_{ij,k} = t_{ji,−k}^{−1}) by generators a_{ij}, i ≠ j
(with a_{ij} = a_{ji}^{−1}), and relations

∏_{k=1}^{m_{ij}} (a_{ij} − t_{ij,k}) = 0 if m_{ij} < ∞,  a_{ij} a_{jp} a_{pi} = 1.
If w is a word in the letters s_i, let T_w be the corresponding element of A(W ). Choose a
function w(x) which attaches to every element x ∈ W a reduced word w(x) representing x
in W.

Theorem 7.26 (Etingof, Rains, [ER]).
(i) The elements T_{w(x)}, x ∈ W, form a spanning set in A(W ) as a left R-module.
(ii) The elements T_{w(x)}, x ∈ W₊, form a spanning set in A₊(W ) as a left R-module.
(iii) The elements T_{w(x)}, x ∈ W, are linearly independent if W has no finite parabolic
subgroups of rank 3.

Proof. We only give the proof of (i). Statement (ii) follows from (i). The proof of (iii), which is
quite nontrivial, can be found in [ER] (it uses the geometry of constructible sheaves on the
Coxeter complex of W ).
Let us write the relation ∏_{k=1}^{m_{ij}} (s_i s_j − t_{ij,k}) = 0 as a deformed braid relation:

s_j s_i s_j . . . + S.L.T. = t_{ij} s_i s_j s_i . . . + S.L.T.,

where t_{ij} = (−1)^{m_{ij}+1} t_{ij,1} · · · t_{ij,m_{ij}}, S.L.T. means “smaller length terms”, and the products
on both sides have length m_{ij}. This can be done by multiplying the relation by s_i s_j · · · (m_{ij}
factors).
Now let us show that Tw(x) span A(W ) over R. Clearly, Tw for all words w span A(W ).
So we just need to take any word w and express Tw via Tw(x).
It is well known from the theory of Coxeter groups (see e.g. [B]) that using the braid
relations, one can turn any non-reduced word into a word that is not square free, and any
reduced expression of a given element of W into any other reduced expression of the same
element. Thus, if w is non-reduced, then by using the deformed braid relations we can reduce
Tw to a linear combination of Tu with words u of smaller length than w. On the other hand,
if w is a reduced expression for some element x ∈ W , then using the deformed braid relations
we can reduce T_w to a linear combination of T_u with u shorter than w, together with T_{w(x)}.
Thus the T_{w(x)} are a spanning set. This proves (i). □
Thus, A₊(W ) is a “deformation” of C[W₊] over R, and similarly A(W ) is a “twisted
deformation” of C[W ].
Now let Γ = Γ(m1, . . . , mr), r ≥ 3, be the Fuchsian group defined by generators cj ,
j = 1, . . . , r, with defining relations
c_j^{m_j} = 1,  ∏_{j=1}^r c_j = 1.

Here 2 ≤ m_j < ∞.
Suppose Γ acts on H, where H is a simply connected complex Riemann surface, as in
Section 7.7. We have the Hecke algebra of Γ, H_τ(H, Γ), defined by the same (invertible)
generators c_j and relations

∏_k (c_j − exp(2πik/m_j) q_{jk}) = 0,  ∏_{j=1}^r c_j = 1,

where q_{jk} = exp(τ_{jk}).
We saw above (Theorem 7.15) that if the τ_{jk} are formal, the algebra H_τ(Γ, H) is flat in τ if
|Γ| is infinite (i.e., H is Euclidean or hyperbolic). Here is a much stronger non-formal version
of this theorem.

Theorem 7.27. The algebra H_τ(Γ, H) is free as a left module over R := C[q_{jk}^{±1}] if and only
if ∑_j (1 − 1/m_j) ≥ 2 (i.e., H is Euclidean or hyperbolic).
Proof. Let us consider the Coxeter datum: m_{ij}, i, j = 1, . . . , r, such that m_{i,i+1} := m_i
(i ∈ Z/rZ), and m_{ij} = ∞ otherwise. Suppose the corresponding Coxeter group is W. Then
we can see that Γ = W₊. Notice that the algebra H_τ(Γ, H) for genus 0 orbifolds is the
algebra A₊(W ), i.e., we have H_τ(Γ, H) = A₊(W ).

The condition ∑_j (1 − 1/m_j) ≥ 2 is equivalent to the condition that W has no finite
parabolic subgroups of rank 3. From Theorem 7.26 (ii) and Theorem 7.15, we can see that
A₊(W ) is free as a left module over R. We are done. □
7.15. Notes. Section 7.8 follows Section 6 of the paper [EOR]; Cherednik’s definition of the
double affine Hecke algebra of a root system is from Cherednik’s book [Ch]; Sections 7.7 and
7.14 follow the paper [ER]; the other parts of this section follow the paper [E1].
MIT OpenCourseWare
http://ocw.mit.edu
18.735 Double Affine Hecke Algebras in Representation Theory, Combinatorics, Geometry,
and Mathematical Physics
Fall 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
3.044 MATERIALS PROCESSING
LECTURE 4
General Heat Conduction Solutions:

ρ c_p ∂T/∂t = ∇ · (k∇T),  T(x̄, t)

Trick one: steady state, ∇²T = 0, giving T(x)

Trick two: low Biot number, ∂T/∂t ∝ h(T_s − T_f), giving T(t)
Low Biot Number Solutions: Newtonian Heating / Cooling

Global Heat Balance: q_conv = q_lost

A h(T − T_f) = −ρ c_p V (∂T/∂t)

(1/(T − T_f)) (∂T/∂t) = −hA/(ρ c_p V)

ln(T − T_f) = −(hA/(ρ c_p V)) t + C

At t = 0, T = T_s, so C = ln(T_s − T_f), and therefore

ln[(T − T_f)/(T_s − T_f)] = −(hA/(ρ c_p V)) t

(T − T_f)/(T_s − T_f) = e^{−hAt/(ρ c_p V)}

Date: February 21st, 2012.
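The exponential result is easy to check against a direct time integration of the global heat balance. A short sketch, with illustrative parameter values of my own choosing (not from the lecture):

```python
import math

# illustrative values, SI units (assumed, not from the lecture)
h, A = 100.0, 0.01          # convection coefficient, surface area
rho, cp, V = 1000.0, 1000.0, 1e-4
Ts, Tf = 900.0, 300.0       # initial and fluid temperatures
tau = rho * cp * V / (h * A)  # time constant rho*cp*V/(h*A)

def T_exact(t):
    # (T - Tf)/(Ts - Tf) = exp(-h*A*t / (rho*cp*V))
    return Tf + (Ts - Tf) * math.exp(-t / tau)

# forward-Euler integration of dT/dt = -(h*A)/(rho*cp*V) * (T - Tf)
T, dt = Ts, 0.01
for _ in range(int(2 * tau / dt)):
    T += dt * (-(T - Tf) / tau)

assert abs(T - T_exact(2 * tau)) < 1.0   # Euler tracks the exact exponential
```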
Transient Heat Conduction: depends on position and time
∂T/∂t = α ∇²T
You should know:
1) Some common solutions for simple geometries
2) Where to find solutions
3) How to build up complex solutions using simple solutions
Semi-Infinite Solid
- constant T1 at surface
- initially T0 everywhere
(T − T0)/(T1 − T0) = erfc( x / (2√(αt)) )
erf(z) = (2/√π) ∫₀ᶻ e^{−x²} dx,  erfc(z) = 1 − erf(z)

T(x) = (T1 − T0) erfc( x / (2√(αt)) ) + T0
Flipping and shifting this solution builds others. Multiplying by (−1) and using erf = 1 − erfc:

(T − T0)/(T1 − T0) = erfc( x / (2√(αt)) )

(T0 − T)/(T1 − T0) = −erfc( x / (2√(αt)) )

(T − T1)/(T1 − T0) = −erf( x / (2√(αt)) )

(T − T1)/(T0 − T1) = erf( x / (2√(αt)) )

Semi-Infinite Solid
- convection at surface: q_lost = h(T − T_f)

Θ = erfc( x / (2√(αt)) ) − exp( hx/k + h²αt/k² ) · erfc( x / (2√(αt)) + (h/k)√(αt) )
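The erfc solutions can be evaluated directly with `math.erfc`; a quick sketch (my own, with assumed property values) confirms the boundary and initial limits, and that the flipped erf form agrees with the base solution:

```python
import math

alpha = 1e-5       # assumed thermal diffusivity, m^2/s
T0, T1 = 300.0, 600.0

def T(x, t):
    # T = (T1 - T0) * erfc(x / (2*sqrt(alpha*t))) + T0
    return (T1 - T0) * math.erfc(x / (2.0 * math.sqrt(alpha * t))) + T0

t = 10.0
assert abs(T(0.0, t) - T1) < 1e-9     # surface held at T1
assert abs(T(1.0, t) - T0) < 1e-6     # far field still at the initial T0

# the erf form referenced to T1 gives the same profile:
x = 0.003
lhs = (T(x, t) - T1) / (T0 - T1)
assert abs(lhs - math.erf(x / (2.0 * math.sqrt(alpha * t)))) < 1e-12
```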
Where to find these solutions:
- Carslaw & Jaeger
- Crank

Dimensionless Numbers:

(T − T0)/(T1 − T0) = Θ = erfc( x / (2√(αt)) )

χ = x/L, so x = Lχ;  τ = αt/L², so t = L²τ/α

Θ = erfc( Lχ / (2√(α · L²τ/α)) ) = erfc( χ / (2√τ) )
MIT OpenCourseWare
http://ocw.mit.edu

3.044 Materials Processing
Spring 2013

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Finite Element Modeling of the
Detachment of Soft Adhesives
Stick-slip phenomena and Schallamach waves
captured using reversible cohesive elements
Evelyne Ringoot
• BSc in Engineering Sciences at VUB Brussels 2018
• MSc in Civil Engineering at VUB Brussels 2020
  o Specialization in geomechanics and numerical methods
• Visiting Student at École polytechnique fédérale de Lausanne
• Visiting Student at Massachusetts Institute of Technology
Soft Adhesives
And the remarkable reversible capacities of natural adhesives
How to explain reattachment and reversible adhesion?
© National Geographic Society. All rights reserved. This content is excluded from
our Creative Commons license. For more information, see https://ocw.mit.edu/fairuse.
Stefan Sirucek (2014) How gecko’s turn their stickiness on and off in ‘National Geographic’.
Medical: tissue repair,
wound scaffolds or drug patches
High-precision
non-damaging
soft grippers
© UC San Diego Jacobs School of Engineering. All rights reserved. This
content is excluded from our Creative Commons license. For more
information, see https://ocw.mit.edu/fairuse.
Soft Adhesive
Applications
© Karp Laboratory. All rights reserved. This content is excluded from our Creative
Commons license. For more information, see https://ocw.mit.edu/fairuse.
© The European Space Agency. All rights reserved. This
content is excluded from our Creative Commons license. For
more information, see https://ocw.mit.edu/fairuse.
Climbing robots for
dangerous environments
UC San Diego Jacobs School of Engineering (2018), Tolley Gecko Gripper on Flickr, consulted in Sept 2020 on https://www.flickr.com/photos/jsoe/albums/72157695462669655/with/40449351705/
The European Space Agency (2014), Wall-crawling gecko robots can stick in space too, consulted in Sept 2020 http://www.esa.int/Enabling_Support/Space_Engineering_Technology/Wall-crawling_gecko_robots_can_stick_in_space_too
The Karplab (2014), Worm-Inspired Glue Mends Broken Hearts, consulted on Sept 2020 on https://www.karplab.net | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/03385313c1d67aae2fd49bdd6d040b32_MIT18_085Summer20_lec_ER.pdf |
plab (2014), Worm-Inspired Glue Mends Broken Hearts, consulted on Sept 2020 on https://www.karplab.net/portfolio-item/worm-inspired-glue-mends-broken-hearts
Experimental
observations
Research questions in
mechanics of solids: how to
explain, predict and influence
physical realities?
Analytical theory
Cohesive elements represent
surface strength assumptions
Numerical
solutions
Finite Element Models of solid deformation
Differential equations governing the conservation
of mass and momentum:
+ constitutive equations linking stress induced by
forces to strain encountered by the material
+Boundary conditions on the stress or strain state
applied on the borders of the material
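To make those three ingredients concrete, here is a minimal finite element sketch of my own (an illustration, not the course software): a 1D elastic bar with linear elements, where equilibrium is the conservation statement, Hooke's law is the constitutive relation, and a fixed displacement plus an end traction are the boundary conditions. All parameter values are assumed.

```python
import numpy as np

# 1D bar: -d/dx (E A du/dx) = 0, with u(0) = 0 and traction F at x = L
E, A, L, F = 200e9, 1e-4, 1.0, 1000.0   # assumed material/geometry values
n_el = 10
n_nodes = n_el + 1
h = L / n_el

# assemble the global stiffness matrix from 2x2 linear-element blocks
K = np.zeros((n_nodes, n_nodes))
k_el = (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(n_el):
    K[e:e+2, e:e+2] += k_el

f = np.zeros(n_nodes)
f[-1] = F                 # traction (force) boundary condition at the free end

# displacement boundary condition u(0) = 0: eliminate the constrained dof
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# exact solution u(x) = F x / (E A); linear elements reproduce it exactly
x = np.linspace(0.0, L, n_nodes)
assert np.allclose(u, F * x / (E * A))
```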
Commercial Finite Element Modeling software
ANSYS
ABAQUS
NX NASTRAN
Or code developed in research groups: Akantu
The detachment and re-attachment of adhesives with
multiple layers when loaded parallel to their substrate
Adapting a FEM framework allowed us to numerically
replicate a physical phenomenon that is still not fully
understood: soft adhesive detachment
Experiment
Simulation
Attached
Detached
Attached
Detached
MIT OpenCourseWare
https://ocw.mit.edu
18.085 Computational Science and Engineering I
Summer 2020
For information about citing these materials or our Terms of Use,
visit: https://ocw.mit.edu/terms.
MORE MATLAB INSTRUCTIONS
1. Plotting functions
In this section, the basics of plotting functions in MATLAB are described. Throughout we work
with the example of two functions, f (t) = (2t + 1)e−t sin(t), and g(t) = (t − 1)e−t cos(t).
Step 1, Specify the domain: Functions are defined on an interval called the domain. To plot
the function in MATLAB, you need to specify the domain. Every domain has a left endpoint, a,
and a right endpoint, b. Of course MATLAB does not plot the value of the function at every point
between the a and b, only finitely many points with a regular spacing h. The syntax to specify the
domain is,
>> t = a : h : b
For example, to plot our function on the interval [−1, 1] with step size 0.05, the syntax is,
>> t = -1:0.05:1
One word about this. Technically t is a data type called an array: just the ordered list of the
numbers a, a + h, a + 2h, . . . . The syntax for arithmetic with an array in MATLAB is different than
the syntax for arithmetic with a number.
Step 2, Specify the function: Here is a list of common operations used to define functions, and
the corresponding syntax in MATLAB. In the list, y(t) and z(t) are names for functions or pieces
of functions that are already specified.
Operation        MATLAB Syntax
y(t) + z(t)      y + z
y(t)z(t)         y .* z
y(t)^n           y.^n
y(t)/z(t)        y./z
sin(y(t))        sin(y)
cos(y(t))        cos(y)
e^y(t)           exp(y)
ln(y(t))         log(y)
log10(y(t))      log10(y)
For example, if the range t has already been defined, the function (2t + 1)e−t sin(t) is specified by,
>> y = ( 2 .* t + 1 ).* exp( -1.* t ).* sin( t )
Similarly, the function (t − 1)e−t cos(t) is specified by,
>> z = ( t - 1 ).* exp( -1.* t ).* cos( t )
Step 3, Plot the function: The syntax to produce a 2D plot whose domain is t and whose function
is y is,
>> h = plot(t,y)
Note, you do not need to say “h =”, but this can be useful if you want to manipulate the plot later.
MATLAB will produce the plot in a new window.
Step 4, Plotting a parametrized curve; Several plots at once: MATLAB can plot a
parametrized figure. For instance, for the parametrized curve (y, z) where y(t) = (2t + 1)e−t sin(t),
z(t) = (t − 1)e−t cos(t), the syntax is,
>> i = plot(y,z)
where y and z are specified as above. Note that when plotting parametrized curves, it is still
necessary to specify the t-domain. But t doesn't explicitly appear in the syntax of the plot.
Also, MATLAB can plot several graphs (or parametrized curves) simultaneously. For simplicity,
think of a graph as a parametrized curve (t, y(t)). For a number of parametrized curves, say
(y1(t), z1(t)), (y2(t), z2(t)), the syntax to plot both of these curves in a single figure is,
>> j = plot(y1,z1,y2,z2)
Any number of curves can be plotted in a single figure: just write plot(y1, z1, . . . , yn, zn), where
the functions y1, . . . , yn and z1, . . . , zn have already been specified. To simultaneously graph the
functions y and z above over the interval t, the syntax is,
>> k = plot(t,y,t,z)
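Putting Steps 1 through 4 together, one possible complete script (my own consolidation of the commands above) is:

```matlab
% Step 1: domain [-1, 1] with step 0.05
t = -1:0.05:1;
% Step 2: the two example functions
y = ( 2 .* t + 1 ) .* exp( -1 .* t ) .* sin( t );   % f(t) = (2t+1)e^-t sin(t)
z = ( t - 1 ) .* exp( -1 .* t ) .* cos( t );        % g(t) = (t-1)e^-t cos(t)
% Steps 3-4: both graphs in one figure, then the parametrized curve (f, g)
k = plot(t, y, t, z);
figure;
i = plot(y, z);
```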
Step 5, Print or export your plot: To either print your plot or to export it as a JPEG file, click
on the "File" button of the new window and then click on "Print" or "Export" in the popup menu.
There are other extras that you can find out by experimenting (such as adding labels to your axes).
MIT OpenCourseWare
http://ocw.mit.edu
18.306 Advanced Partial Differential Equations with Applications
Fall 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Discrete to Continuum Modeling.

Rodolfo R. Rosales *

MIT, March, 2001.

Abstract
These notes give a few examples illustrating how continuum models can be derived from special
limits of discrete models. Only the simplest cases are considered, illustrating some of the most
basic ideas. These techniques are useful because continuum models are often much easier to deal
with than discrete models with very many variables, both conceptually and computationally.
Contents

1 Introduction.

2 Wave Equations from Mass-Spring Systems.
   Longitudinal Motion
   Nonlinear Elastic Wave Equation (for a Rod)
   Example: Uniform Case
      Sound Speed
   Example: Small Disturbances
      Linear Wave Equation, and Solutions
   Fast Vibrations
      Dispersion
      Long Wave Limit
   Transversal Motion
   Stability of the Equilibrium Solutions
   Nonlinear Elastic Wave Equation (for a String)
   Example: Uniform String with Small Disturbances
      Uniform String Nonlinear Wave Equation
      Linear Wave Equation
      Stability and Laplace's Equation
      Ill-posed Time Evolution
   General Motion: Strings and Rods

3 Torsion Coupled Pendulums: Sine-Gordon Equation.
   Hooke's Law for Torsional Forces
   Equations for N torsion coupled equal pendulums
   Continuum Limit
      Sine-Gordon Equation
      Boundary Conditions
   Kinks and Breathers for the Sine Gordon Equation
   Example: Kink and Anti-Kink Solutions
   Example: Breather Solutions
   Pseudo-spectral Numerical Method for the Sine-Gordon Equation

4 Suggested problems.

* MIT, Department of Mathematics, room 2-337, Cambridge, MA 02139.
1 Introduction.

Continuum approximations are useful in describing discrete systems with a large number of degrees
of freedom. In general, a continuum approximation will not describe all possible solutions of the
discrete system, but some special class that will depend on the approximations and assumptions
made in deriving the continuum model. Whether or not the approximation is useful in describing
a particular situation will depend on the appropriate approximations being made. The most
successful models arise in situations where most solutions of the discrete model evolve rapidly in
time towards configurations where the assumptions behind the continuum model apply.

The basic step in obtaining a continuum model from a discrete system is to identify some basic
configuration (solution of the discrete model) that can be described by a few parameters. Then one
assumes that the full solution of the system can be described, near every point in space and at every
time, by this configuration, for some value of the parameters. The parameters are then assumed
to vary in space and time, but on scales (macro-scales) that are much larger than the ones associated
with the basic configuration (micro-scales). Then one attempts to derive equations describing the
evolution of these parameters on the macro-scales, thus averaging the micro-scales out of the
problem. There is a close connection between this approach and the "quasi-equilibrium" approximations
that are often invoked to "close" continuum sets of equations derived using conservation laws.
For example, when deriving the equations for Gas Dynamics in Statistical Mechanics, it is assumed
that the local particle interactions rapidly exchange energy and momentum between the molecules,
so that the local probability distributions for velocities take a standard form (equivalent to local
thermodynamic equilibrium). What exactly makes these assumptions work (in terms of properties
of the governing, micro-scale, equations) is rather poorly understood. But that they work rather
well cannot be denied. In these notes we will consider examples that are rather simpler than these,
however, where the "local configurations" tend to be rather trivial.
2 Wave Equations from Mass-Spring Systems.

Longitudinal Motion. Consider an array of bodies/particles, connected by springs, and restricted
to move on a straight line.¹ Let the positions of the bodies be given by x_n = x_n(t), with
n = 0, ±1, ±2, . . . , and let M_n be the mass of the n-th particle. Furthermore, let the force law
for the spring between particles n and n + 1 be given by: force = f_{n+1/2}(Δx), where Δx is the
distance between the particles, and f_{n+1/2} is positive when the spring is under tension.²

If there are no other forces involved (e.g. no friction), the governing equations for the system are:

    M_n (d²x_n/dt²) = f_{n+1/2}(x_{n+1} − x_n) − f_{n−1/2}(x_n − x_{n−1}),    (2.1)

for n = 0, ±1, ±2, . . . The simplest solution for this system of equations is equilibrium. In this
case all the accelerations vanish, so that the particle positions are given by the series of algebraic
equations

    0 = f_{n+1/2}(x_{n+1} − x_n) − f_{n−1/2}(x_n − x_{n−1}).    (2.2)

¹ By some device: say the bodies are sliding inside a hollow tube.
² If the spring obeys Hooke's law, then f_{n+1/2}(Δx) = k_{n+1/2}(Δx − L_{n+1/2}), where k_{n+1/2} > 0 and
L_{n+1/2} > 0 are the spring constant and equilibrium length, respectively.
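A direct numerical check of (2.1)-(2.2) (my own sketch, using Hooke's-law springs as in footnote 2): a uniform chain placed at its equilibrium spacing has all right-hand-side forces cancel, while a perturbed chain accelerates.

```python
import numpy as np

n, k, L0 = 8, 2.0, 1.0   # uniform chain: spring constant k, rest length L0 (assumed values)

def accelerations(x, M=1.0):
    # M x_n'' = f_{n+1/2}(x_{n+1}-x_n) - f_{n-1/2}(x_n-x_{n-1}),
    # with Hooke's law f(dx) = k (dx - L0); the end particles have one spring only
    f = k * (np.diff(x) - L0)   # f_{n+1/2} for n = 0..n-2
    a = np.zeros_like(x)
    a[:-1] += f                 # + f_{n+1/2}
    a[1:]  -= f                 # - f_{n-1/2}
    return a / M

x_eq = L0 * np.arange(n)        # equilibrium configuration, satisfying (2.2)
assert np.allclose(accelerations(x_eq), 0.0)

x_pert = x_eq + 0.01 * np.sin(np.arange(n))
assert not np.allclose(accelerations(x_pert), 0.0)
```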
This is the basic configuration (solution) that we will use in obtaining a continuum approximation.
Note that this is a one parameter family: if the forces are monotone functions of the displacements
Δx, then once any one of them is given, the others follow from (2.2).

Before proceeding any further, it is a good idea to non-dimensionalize the equations. We will
assume that:

A. All the springs are roughly similar, so that we can talk of a typical spring force f, and a typical
spring length L. Thus we can write

    f_{n+1/2}(Δx) = f F_{n+1/2}(Δx/L),    (2.3)

where F_{n+1/2} is a non-dimensional mathematical function, of O(1) size, and with O(1) derivatives.
A further assumption is that F_{n+1/2} changes slowly with n, so that two nearby springs
are nearly equal. Mathematically, this is specified by stating that:

    F_{n+1/2}(η) = F(ε(n + 1/2), η),    (2.4)

where 0 < ε ≪ 1, and F is a "nice" (mathematical) function of its two variables.

B. All the particles have roughly the same mass m, and their masses change slowly
with n, so that we can write:

    M_n = m M(ε n),    (2.5)

where M is a nice mathematical function, with O(1) size, and with O(1) derivatives.
Remark 2.1 Why do we need these assumptions? This has to do with the questions of validity
discussed in the introduction. Suppose that these hypotheses are violated, with the masses and
springs jumping wildly in characteristics. Then the basic configuration described by (2.2) will still
be a solution. However, as soon as there is any significant motion, neighboring parts of the chain
will respond very differently, and the solution will move away from the local equilibrium implied by
(2.2). There is no known method to, generically, deal with these sorts of problems, which turn out
to be very important: see remark 2.2.
From the assumptions in A and B above, we see that:
changes in the mass-spring system occur over length scales
$$ \ell = L/\epsilon. \tag{2.6} $$
Using this scale to non-dimensionalize space, namely $x_n = \ell\, X_n$, and a yet to be specified time
scale $\tau$ to non-dimensionalize time, namely $t = \tau\, T$, the equations become:
$$ M(\epsilon n)\,\frac{d^2 X_n}{dT^2} = \frac{\epsilon f \tau^2}{m L}\left(F_{n+\frac{1}{2}}\!\left(\frac{X_{n+1} - X_n}{\epsilon}\right) - F_{n-\frac{1}{2}}\!\left(\frac{X_n - X_{n-1}}{\epsilon}\right)\right). \tag{2.7} $$
A and B above also imply that, for the solution in (2.2), the inter-particle distance $x_{n+1} - x_n$ varies
slowly: an $O(\epsilon)$ fractional amount per step in $n$. Thus we propose solutions for (2.7) of the form:
$$ X_n(t) = X(s_n,\, t), \quad\text{where}\quad s_n = n\,\epsilon, \tag{2.8} $$
and $X = X(s, t)$ is some smooth function of its arguments.
Discrete to Continuum Modeling. MIT, March, 2001. Rosales.
Substituting (2.8) into (2.7), and using (2.4) and (2.5), we obtain
$$ M(s)\,\frac{\partial^2 X}{\partial T^2} = \frac{\epsilon^2 f \tau^2}{m L}\left(\frac{\partial}{\partial s} F\!\left(s,\, \frac{\partial X}{\partial s}\right) + O(\epsilon^2)\right). \tag{2.9} $$
Here we have used that:
$$ \frac{X_{n+1} - X_n}{\epsilon} = \frac{\partial X}{\partial s}\!\left(s + \tfrac{1}{2}\epsilon,\; t\right) + O(\epsilon^2) \quad\text{and}\quad \frac{X_n - X_{n-1}}{\epsilon} = \frac{\partial X}{\partial s}\!\left(s - \tfrac{1}{2}\epsilon,\; t\right) + O(\epsilon^2), $$
with a similar formula applying to the difference $F_{n+\frac{1}{2}} - F_{n-\frac{1}{2}}$.
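These half-step formulas are easy to check numerically. A minimal sketch (the smooth test function $X(s) = \sin s$ and the sample point $s = 0.7$ are arbitrary choices made here for illustration):

```python
import numpy as np

# Check: (X(s + eps) - X(s)) / eps = dX/ds (s + eps/2) + O(eps^2),
# for a smooth test function. X(s) = sin(s) is an arbitrary choice.
X, dX = np.sin, np.cos
s = 0.7
errs = []
for eps in (0.1, 0.05, 0.025):
    fd = (X(s + eps) - X(s)) / eps          # scaled forward difference
    errs.append(abs(fd - dX(s + eps / 2)))  # compare with half-step derivative
ratios = [errs[i] / errs[i + 1] for i in (0, 1)]
# an O(eps^2) error is divided by about 4 each time eps is halved
assert all(r > 3.5 for r in ratios)
```

Halving $\epsilon$ cuts the error by a factor close to 4, which is exactly the $O(\epsilon^2)$ behavior claimed above.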
Equation (2.9) suggests that we should take
$$ \tau = \sqrt{\frac{m L}{\epsilon^2 f}}, \tag{2.10} $$
for the un-specified time scale in (2.7). Then equation (2.9) leads to the continuum limit
approximation (valid for $0 < \epsilon \ll 1$)
2
@
@
@
M (s)
X =
F
s;
X
:
(2.11)
2
@T
@ s
@ s
!
The mass-spring system introduced in equation (2.1) can be thought of as a simple model for an
elastic rod under (only) longitudinal forces. Then we see that (2.11) is a model (nonlinear wave)
equation for the longitudinal vibrations of an elastic rod, with $s$ a Lagrangian coordinate
for the points in the rod, $M = M(s)$ the mass density along the rod, $X$ giving the position of
the point $s$ as a function of time, and $F$ a function characterizing the elastic response of the rod.
Of course, in practice $F$ must be obtained from laboratory measurements.
Remark 2.2 The way in which the equations for nonlinear elasticity can be derived for a crystalline
solid is not too different³ from the derivation of the wave equation (2.11) for longitudinal vibrations.
Then a very important question arises (see first paragraph in section 1): What important behaviors
are missed due to the assumptions in the derivation? How can they be modeled? In particular,
what happens if there are "defects" in the crystal structure (see remark 2.1)? These are all very
important, and open, problems of current research interest.
Example 2.1 Uniform Rod.
If all the springs and all the particles are equal, then we can take $M \equiv 1$ and $F$ is independent of
$s$. Furthermore, if we take $L$ to be the (common) equilibrium length of the springs, we then have
$$ \frac{\partial^2 X}{\partial T^2} = \frac{\partial}{\partial s}\, F\!\left(\frac{\partial X}{\partial s}\right) = c^2\!\left(\frac{\partial X}{\partial s}\right)\frac{\partial^2 X}{\partial s^2}, \tag{2.12} $$
where $c^2 = c^2(\eta) = \frac{dF}{d\eta}(\eta) > 0$, and $F(1) = 0$ (equilibrium length). The unperturbed "rod" cor-
responds to $X \equiv s$, while $X \equiv \alpha s$ corresponds to the rod under uniform tension ($\alpha > 1$), or com-
pression ($\alpha < 1$). Also, note that $c$ is a (non-dimensional) speed: the speed at which elastic
disturbances along the rod propagate, i.e. the sound speed.
³ At least qualitatively, though it is technically far more challenging.
Example 2.2 Small Disturbances.
Consider a uniform rod in a situation where the departures from uniform equilibrium are small.
That is, $\partial X/\partial s \approx \alpha$, where $\alpha$ is a constant. Then equation (2.12) can be approximated by the
linear wave equation
$$ X_{TT} = c^2\, X_{ss}, \tag{2.13} $$
where $c = c(\alpha)$ is a constant. The general solution to this equation has the form
$$ X = g(s - c\,T) + h(s + c\,T), \tag{2.14} $$
where $g$ and $h$ are arbitrary functions. This solution clearly shows that $c$ is the wave propagation
velocity.
Remark 2.3 Fast vibrations.
The vibration frequency for a typical mass $m$, attached to a typical spring in the chain, is:
$$ \omega = \sqrt{\frac{f}{m L}} = \frac{1}{\epsilon \tau}. \tag{2.15} $$
This corresponds to a time scale much shorter than the one involved in the solution in (2.8–2.11).
What role do the motions in these scales play in the behavior of the solutions of (2.1), under the
assumptions made earlier in A and B?
For real crystal lattices, which are definitely not one dimensional (as the one in (2.1)), these fast time
scales correspond to thermal energy (energy stored in the local vibrations of the atoms, relative to
their equilibrium positions). It is believed that the nonlinearities in the lattice act so as to randomize
these vibrations, so that the energy they contain propagates as heat (diffuses). In one dimension,
however, this does not generally happen, with the vibrations remaining coherent enough to propagate
with a strong wave component. The actual processes involved are very poorly understood, and the
statements just made result, mainly, from numerical experiments with nonlinear lattices.
Just to be a bit more precise: consider the situation where all the masses are equal, $M_n = m$
for all $n$, and all the springs are equal and satisfy Hooke's law (linear elasticity):
$$ f_{n+\frac{1}{2}}(\Delta x) = k\,(\Delta x - L) = f\left(\frac{\Delta x}{L} - 1\right), \tag{2.16} $$
where $k$ is the spring constant, $L$ is the equilibrium length, and $f = k L$. Then equation (2.1) takes
the form
$$ \frac{d^2 x_n}{dt^2} = \omega^2\,(x_{n+1} - 2 x_n + x_{n-1}), \tag{2.17} $$
where $\omega$ is as in (2.15). Because this system is linear, we can write its general solution as a linear
superposition of eigenmodes, which are solutions of the form⁴
$$ x_n = \exp(i \kappa n - i \sigma t), \quad\text{where}\quad \sigma = \pm\, 2\,\omega \sin\!\left(\frac{\kappa}{2}\right) \quad\text{and}\quad -\infty < \kappa < \infty \text{ is a constant.} \tag{2.18} $$
These must be added to an equilibrium solution $x_n = \alpha L n = s_n$, where $\alpha > 0$ is a constant.
⁴ Check that these are solutions.
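The check requested in footnote 4 takes only a few lines: plugging the eigenmode (2.18) into (2.17), the residual vanishes to round-off. The particular values of $\omega$, $\kappa$, $t$ and $n$ below are arbitrary test choices:

```python
import numpy as np

omega, kappa, t, n = 1.3, 0.7, 0.42, 5     # arbitrary test values
sigma = 2.0 * omega * np.sin(kappa / 2.0)  # dispersion relation from (2.18)

def x(n):
    # eigenmode (2.18): x_n = exp(i*kappa*n - i*sigma*t)
    return np.exp(1j * (kappa * n - sigma * t))

lhs = -sigma**2 * x(n)                                # d^2 x_n / dt^2 for this mode
rhs = omega**2 * (x(n + 1) - 2 * x(n) + x(n - 1))     # right-hand side of (2.17)
assert abs(lhs - rhs) < 1e-12
```

The identity behind the check is $e^{i\kappa} - 2 + e^{-i\kappa} = -4\sin^2(\kappa/2)$, which is how the dispersion relation $\sigma = \pm 2\omega\sin(\kappa/2)$ arises.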
Relative to the mean position $s_n$ along the lattice, each solution in (2.18) can be written as
$$ x_n = \exp\!\left(i\,\frac{\kappa}{\alpha L}\, s_n - i \sigma t\right). $$
Thus we see that it represents a wave of wavelength $\lambda = 2\pi\alpha L/\kappa$, and speed
$$ c_w = \frac{\alpha L \sigma}{\kappa} = \pm\,\frac{2\alpha L \omega}{\kappa}\sin\!\left(\frac{\kappa}{2}\right) = \pm\,\frac{2c}{\kappa}\sin\!\left(\frac{\kappa}{2}\right) \tag{2.19} $$
propagating along the lattice, where $c = \alpha L \omega$ is a speed. Note that the speed of propagation is a
function of the wavelength: this phenomenon is known by the name of dispersion. We also note
that the maximum frequency these eigenmodes can have is $\sigma = 2\omega$,⁵ and corresponds to wavelengths
of the order of the lattice separation.
In the case of equations (2.16–2.17) there is no intrinsic $\epsilon$ in the equations: it must arise from the
initial conditions. That is to say: assume that the wavelength $\ell$ with which the lattice is excited is
much larger than the lattice equilibrium separation $L$, i.e. $\ell \gg L$, with $\epsilon = L/\ell$. This corresponds to
solutions (2.18) with $\kappa$ small. In this long wave limit we see that (2.19) implies that the solutions
have the same wave speed $c_w = \pm c$. This corresponds to the situation in (2.13–2.14).
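A quick numerical look at (2.19) confirms both statements: long waves travel at speed $c$, while shorter waves travel slower (dispersion). Here $c$ is set to 1 for convenience:

```python
import numpy as np

c = 1.0                                        # c = alpha*L*omega, set to 1 here
kappas = np.array([1e-3, 0.5, 1.0, np.pi])
c_w = (2 * c / kappas) * np.sin(kappas / 2)    # equation (2.19), "+" branch
assert abs(c_w[0] - c) < 1e-6                  # long wave limit: c_w -> c
assert np.all(np.diff(c_w) < 0)                # speed decreases as kappa grows
```

At $\kappa = \pi$ (wavelength comparable to the lattice spacing) the speed has dropped to $2c/\pi \approx 0.64\,c$.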
It is clear that, in the linear lattice situation described above, we cannot dismiss the fast vibration
excitations (with frequencies of the order of $\omega$) as constituting some sort of energy "bath" to be
interpreted as heat. The energy in these vibrations propagates as waves through the media, with
speeds which are of the same order of magnitude as that of the sound waves equation (2.13) describes.
Before the advent of computers it was believed that nonlinearity would destroy the coherence of
these fast vibrations. Numerical experiments, however, have shown that this is not (generally) true
for one dimensional lattices,⁶ though it seems to be true in higher dimensions. Exactly why, and
how, this happens is a subject of some current interest.
Transversal Motion.
We consider now a slightly different situation, in which the masses are allowed to move only in
the direction perpendicular to the $x$ axis. To be precise: consider a sequence of masses $M_n$ in
the plane, whose $x$ coordinates are given by $x_n = n L$. Each mass is restricted to move only in the
orthogonal coordinate direction, with $y_n = y_n(t)$ giving its $y$ position. The masses are connected by
springs, with $f_{n+\frac{1}{2}}(\Delta r_{n+\frac{1}{2}})$ the force law, where $\Delta r_{n+\frac{1}{2}} = \sqrt{L^2 + (y_{n+1} - y_n)^2}$ is the distance between
masses. Assuming that there are no other forces involved, the governing equations for the system
are:
$$ M_n\,\frac{d^2 y_n}{dt^2} = \frac{y_{n+1} - y_n}{\Delta r_{n+\frac{1}{2}}}\; f_{n+\frac{1}{2}}(\Delta r_{n+\frac{1}{2}}) - \frac{y_n - y_{n-1}}{\Delta r_{n-\frac{1}{2}}}\; f_{n-\frac{1}{2}}(\Delta r_{n-\frac{1}{2}}), \tag{2.20} $$
for $n = 0, \pm 1, \pm 2, \ldots$ (you should convince yourself that this is the case).
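As a sketch of that check, the code below evaluates the right-hand side of (2.20) for a uniform chain of Hooke springs. The parameter values, and the specific Hooke force law, are illustrative choices, not part of the derivation. It confirms that aligned masses feel no transversal force, while a single displaced mass is pulled back:

```python
import numpy as np

L, k, l0, m = 1.0, 1.0, 0.8, 1.0           # illustrative values, with l0 < L

def accel(y):
    """Transversal accelerations (2.20) for the interior masses y[1:-1]."""
    dy = np.diff(y)                        # y_{n+1} - y_n
    dr = np.sqrt(L**2 + dy**2)             # spring lengths Delta r_{n+1/2}
    f = k * (dr - l0)                      # Hooke force law (illustrative)
    fy = (dy / dr) * f                     # transversal component of each spring force
    return (fy[1:] - fy[:-1]) / m

y_eq = np.full(8, 0.3)                     # aligned masses: y_{n+1} = y_n
assert np.allclose(accel(y_eq), 0.0)       # all accelerations vanish
y = y_eq.copy(); y[3] += 0.1               # displace one mass
assert accel(y)[2] < 0                     # it is pulled back toward the line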
The simplest solution for this system of equations is equilibrium, with all the masses lined up
horizontally, $y_{n+1} = y_n$, so that all the accelerations vanish. Again, one can use this (one
parameter) family of solutions to obtain a continuum approximation for the system in (2.20),
under the same assumptions earlier in A and B.
⁵ The reason for the 2 relative to (2.15) is that the masses are coupled, and not attached to a single spring.
⁶ The first observation of this general phenomenon was reported by E. Fermi, J. Pasta and S. Ulam, in 1955: Studies
of Non Linear Problems, Los Alamos Report LA-1940 (1955); pp. 978–988 in Collected Papers of Enrico Fermi, vol. II,
The University of Chicago Press, Chicago, (1965).
Remark 2.4 Stability of the Equilibrium Solutions.
It should be intuitively obvious that the equilibrium solutions described above will be stable only if the
equilibrium lengths $L_{n+\frac{1}{2}}$ of the springs are smaller than the horizontal separation $L$ between the masses,
namely: $L_{n+\frac{1}{2}} < L$. This is so that none of the springs is under compression in the solution, since any
mass in a situation where its springs are under compression will easily "pop" out of alignment with
the others; see example 2.3.
Introduce now the non-dimensional variables $Y = \epsilon\, y/L$ and $X = \epsilon\, x/L$ (note that, since $x_n = nL$, in
fact $X$ plays here the same role that $s$ played in the prior derivation⁷), and $T = t/\tau$, where $\tau$ is as
in (2.10). Then the continuum limit for the equations in (2.20) is given by
$$ M(X)\,\frac{\partial^2 Y}{\partial T^2} = \frac{\partial}{\partial X}\!\left(\frac{F(X,\, S)}{S}\,\frac{\partial Y}{\partial X}\right), \tag{2.21} $$
where $Y = Y(X, T)$ and
$$ S = \sqrt{1 + \left(\frac{\partial Y}{\partial X}\right)^2}. $$
The derivation of this equation is left as an exercise to the reader.
The mass-spring system introduced in (2.20) can be thought of as a simple model for an elastic
string restricted to move in the transversal direction only. Then we see that (2.21) is a model
(nonlinear wave) equation for the transversal vibrations of a string, where $X$ is the
longitudinal coordinate along the string position, $Y$ is the transversal coordinate, $M = M(X)$ is
the mass density along the string, and $F = F(X, S)$ describes the elastic properties of the string.⁸
In the non-dimensional coordinates, the (local) equilibrium length for the string is given by $e_\ell = L_{n+\frac{1}{2}}/L$.
That is, the elastic forces vanish for this length:
$$ F(X,\, e_\ell(X)) \equiv 0, \quad\text{where}\quad e_\ell < 1 \text{ (for stability, see remark 2.4).} \tag{2.22} $$
We also assume that $\dfrac{\partial}{\partial S} F(X,\, S) > 0$.
Example 2.3 Uniform String with Small Disturbances.
Consider now a uniform string (neither $M$, nor $F$, depend on $X$) in a situation where the departures
from equilibrium are small ($\partial Y/\partial X$ is small).
For a uniform string we can assume $M \equiv 1$, and $F$ is independent of $X$. Thus equation (2.21)
reduces to
$$ \frac{\partial^2 Y}{\partial T^2} = \frac{\partial}{\partial X}\!\left(\frac{F(S)}{S}\,\frac{\partial Y}{\partial X}\right). \tag{2.23} $$
Next, for small disturbances we have $S \approx 1$, and (2.23) can be approximated by the linear wave
equation
$$ Y_{TT} = c^2\, Y_{XX}, \tag{2.24} $$
where $c^2 = F(1)$ is a constant (see equations (2.13–2.14)).
⁷ The coordinate $s$ is simply a label for the masses. Since in this case the masses do not move horizontally, $X$ can
be used as the label.
⁸ Notice that $S$ is the local stretching of the string, due to its inclination relative to the horizontal position (actual
length divided by horizontal length).
Notice how the stability condition $e_\ell < 1$ in (2.22) guarantees that $c^2 > 0$ in (2.23). If this were not
the case, instead of the linear wave equation, the linearized equation would have been of the form
$$ Y_{TT} + d^2\, Y_{XX} = 0, \tag{2.25} $$
with $d > 0$. This is Laplace's equation, which is ill-posed as an evolution in time problem.
To see this, it is enough to notice that (2.25) has the following solutions:
$$ Y = e^{d\,|k|\,t} \sin(k X), \quad\text{for any}\quad -\infty < k < \infty. \tag{2.26} $$
These solutions grow arbitrarily fast in time, the faster the shorter the wavelength ($|k|$ larger).
This is just the mathematical form of the obvious physical fact that a straight string (with no bending
strength) is not a very stable object when under compression.
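The ill-posedness claim is easy to verify by finite differences: the functions (2.26) satisfy (2.25) pointwise, while their amplitude grows like $e^{d|k|t}$, faster for larger $|k|$. The values of $d$, $k$ and the sample point below are arbitrary:

```python
import numpy as np

d, k, h = 0.7, 3.0, 1e-4                   # arbitrary values; h is the step size

def Y(X, T):
    return np.exp(d * abs(k) * T) * np.sin(k * X)   # solution (2.26)

X0, T0 = 0.3, 0.2
Y_TT = (Y(X0, T0 + h) - 2 * Y(X0, T0) + Y(X0, T0 - h)) / h**2
Y_XX = (Y(X0 + h, T0) - 2 * Y(X0, T0) + Y(X0 - h, T0)) / h**2
assert abs(Y_TT + d**2 * Y_XX) < 1e-4      # residual of (2.25) vanishes
```

Doubling $k$ squares the amplification factor over any fixed time interval, so no continuity with respect to the initial data is possible.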
General Motion: Strings and Rods.
If no restrictions to longitudinal (as in (2.1)) or transversal (as in (2.20)) motion are imposed on the
mass-spring chain, then (in the continuum limit) general equations including both longitudinal and
transversal modes of vibration for a string are obtained. Since strings have no bending strength,
these equations will be well behaved only as long as the string is under tension everywhere.
Bending strength is easily incorporated into the mass-spring chain model. Basically, what we need
to do is to incorporate, at the location of each mass point, a bending spring. These springs apply
a torque when their ends are bent, and will exert a force whenever the chain is not straight. The
continuum limit of a model like this will be equations describing the vibrations of a rod.
We will not develop these model equations here.
3 Torsion Coupled Pendulums: Sine-Gordon Equation.
Consider a horizontal axle A, of total length $\ell$, suspended at its ends by "frictionless" bearings. Along
this axle, at equally spaced intervals, there are $N$ equal pendulums. Each pendulum consists of a rigid
rod, attached perpendicularly to the axle, with a mass at the end. When at rest, all the pendulums
point down the vertical. We now make the following assumptions and approximations:
1. Each pendulum has a mass $M/N$. The distance from its center of mass to the axle center is $L$.
2. The axle A is free to rotate, and we can ignore any frictional forces (i.e.: they are small). In
fact, the only forces that we will consider are gravity, and the torsional forces induced on the
axle when the pendulums are not all aligned.
3. Any deformations to the axle and rod shapes are small enough that we can ignore them. Thus
the axle and rod are assumed straight at all times.
4. The mass of the axle is small compared to $M$, so we ignore it (this assumption is not strictly
needed, but we make it to keep matters simple).
Our aim is to produce a continuum approximation for this system, as $N \to \infty$, with everything else fixed.
Each one of the pendulums can be characterized by the angle $\theta_n = \theta_n(t)$ that its suspending
rod makes with the vertical direction. Each pendulum is then subject to three forces:
(a) Gravity, for which only the component perpendicular to the pendulum rod is considered.⁹
(b) Axle torsional force due to the twist $\theta_{n+1} - \theta_n$. This couples each pendulum to the next one.
(c) Axle torsional force due to the twist $\theta_n - \theta_{n-1}$. This couples each pendulum to the prior one.
We will assume that the amount of twist per unit length in the axle is small, so that Hooke's law applies.
Remark 3.1 Hooke's Law for Torsional Forces.
In the Hooke's law regime, for a given fixed bar, the torque generated is directly proportional to the
angle of twist, and inversely proportional to the distance over which the twist occurs.
To be specific: in the problem here, imagine that a section of length $\Delta\ell$ of the axle has been twisted
by an amount (angle) $\Psi$. Then, if $T$ is the torque generated by this twist, one can write
$$ T = \frac{\kappa\,\Psi}{\Delta\ell}, \tag{3.1} $$
where $\kappa$ is a constant that depends on the axle material and the area of its cross-section (assume
that the axle is an homogeneous cylinder). The dimensions of $\kappa$ are given by:
$$ [\kappa] = \frac{\text{mass} \times \text{length}^3}{\text{time}^2 \times \text{angle}} = \frac{\text{force} \times \text{area}}{\text{angle}}. \tag{3.2} $$
This torque then translates onto a tangential force of magnitude $F = T/L$, on a mass attached to
the axle at a distance $L$. The sign of the force is such that it opposes the twist.
Let us now go back to our problem, and write the equations of motion for the $N$ pendulums. We
will assume that:
• The horizontal separation between pendulums is $\ell/(N + 1)$.
• The first and last pendulum are at a distance $\ell/(2(N + 1))$ from the respective ends of the axle.
The tangential force (perpendicular to the pendulum rod) due to gravity on each of the masses is
$$ F_g = -\frac{1}{N}\, M g \sin \theta_n, \quad\text{where}\quad n = 1, \ldots, N. \tag{3.3} $$
N
For any two successive masses, there is also a torque whenever (cid:18)
= (cid:18)
. This is generated by the
+1
n
n
twist in the axle, of magnitude (cid:18)
(cid:18)
, over the segment of length ‘=(N + 1) connecting the two
+1
n
n
(cid:0)
rods. Thus each of the masses experiences a force (equal in magnitude and opposite in sign)
F
=
(N + 1)
((cid:18)
(cid:18)
) ;
(3.4)
+1
T
n
n
(cid:6)
(cid:0)
‘L
(cid:20)
where the signs are such that the forces tend to make (cid:18)
= (cid:18)
. Putting all this together, we obtain
+1
n
n
the following set of equations for the angles:
$$ \frac{1}{N}\, M L\,\frac{d^2 \theta_1}{dt^2} = -\frac{1}{N}\, M g \sin \theta_1 + \frac{(N + 1)\,\kappa}{\ell L}\,(\theta_2 - \theta_1), \tag{3.5} $$
9
The component along the rod is balanced by the rod itself, which we approximate as being rigid.
6
$$ \frac{1}{N}\, M L\,\frac{d^2 \theta_n}{dt^2} = -\frac{1}{N}\, M g \sin \theta_n + \frac{(N + 1)\,\kappa}{\ell L}\,(\theta_{n+1} - \theta_n) - \frac{(N + 1)\,\kappa}{\ell L}\,(\theta_n - \theta_{n-1}), \tag{3.6} $$
for $n = 2, \ldots, N - 1$; and
$$ \frac{1}{N}\, M L\,\frac{d^2 \theta_N}{dt^2} = -\frac{1}{N}\, M g \sin \theta_N - \frac{(N + 1)\,\kappa}{\ell L}\,(\theta_N - \theta_{N-1}). \tag{3.7} $$
These are the equations for $N$ torsion coupled equal pendulums.
Remark 3.2 To check that the signs for the torsion forces selected in these equations are correct,
take the difference between the n-th and (n+1)-th equations. Then you should see that the torsion
force (due to the portion of the axle connecting the n-th and (n+1)-th pendulums) is acting so as to
make the angles equal.
Remark 3.3 Note that the equations for the first and last angle are different, because the first and
last pendulum experience a torsion force from only one side. How would you modify these
equations to account for having one (or both) ends of the axle fixed?
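A small sketch of the system (3.5)-(3.7), with illustrative parameter values, makes the sign check of remark 3.2 concrete: the hanging rest state is an equilibrium, and a single twisted pendulum is pushed back while its aligned neighbors are dragged along:

```python
import numpy as np

M, g, L, l, kappa, N = 2.0, 9.8, 0.5, 3.0, 0.1, 6   # illustrative values

def accel(theta):
    """d^2 theta_n / dt^2 from equations (3.5)-(3.7)."""
    K = (N + 1) * kappa / (l * L)              # torsion coefficient
    tors = np.zeros(N)
    d = theta[1:] - theta[:-1]                 # twists between neighbors
    tors[:-1] += K * d                         # coupling to the next pendulum
    tors[1:] -= K * d                          # coupling to the prior pendulum
    return (tors - (M / N) * g * np.sin(theta)) / ((M / N) * L)

assert np.allclose(accel(np.zeros(N)), 0.0)    # rest state is an equilibrium
theta = np.zeros(N); theta[2] = 0.1            # twist one pendulum
a = accel(theta)
assert a[2] < 0 and a[1] > 0 and a[3] > 0      # torsion acts to equalize angles
```

Note that the end pendulums (indices 0 and N-1) automatically get the one-sided coupling of (3.5) and (3.7), which is exactly the point of remark 3.3.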
Continuum Limit.
Now we consider the continuum limit, in which we let $N \to \infty$ and assume that the n-th angle
can be written in the form:
$$ \theta_n(t) = \theta(x_n,\, t), \tag{3.8} $$
where $\theta = \theta(x, t)$ is a "nice" function (with derivatives) and $x_n = \dfrac{n + 1/2}{N + 1}\,\ell$ is the position of the
n-th pendulum along the axle. In particular, note that:
$$ \Delta x = x_{n+1} - x_n = \frac{\ell}{N + 1}. \tag{3.9} $$
Take equation (3.6), and multiply it by $N/\ell$. Then we obtain
$$ \rho L\,\frac{d^2 \theta_n}{dt^2} = -\rho\, g \sin \theta_n + \frac{N (N + 1)\,\kappa}{\ell^2 L}\,(\theta_{n+1} - 2\theta_n + \theta_{n-1}), $$
where $\rho = M/\ell$ is the mass density per unit length in the $N \to \infty$ limit. Using equation
(3.9), this can be written in the form:
$$ \rho L\,\frac{d^2 \theta_n}{dt^2} = -\rho\, g \sin \theta_n + \frac{N}{N + 1}\,\frac{\kappa}{L}\,\frac{\theta_{n+1} - 2\theta_n + \theta_{n-1}}{(\Delta x)^2}. \tag{3.10} $$
From equation (3.8) we see that, in the limit $N \to \infty$ (where $\Delta x \to 0$), we have:
$$ \frac{\theta_{n+1} - 2\theta_n + \theta_{n-1}}{(\Delta x)^2} \to \frac{\partial^2 \theta}{\partial x^2}(x_n,\, t). $$
Thus, finally, we obtain (for the continuum limit) the nonlinear wave equation (the "Sine-Gordon"
equation):
$$ \theta_{tt} - c^2\,\theta_{xx} = -\omega^2 \sin \theta, \tag{3.11} $$
where $\omega = \sqrt{\dfrac{g}{L}}$ is the pendulum angular frequency, and $c = \sqrt{\dfrac{\kappa}{\rho L^2}}$ is a wave propagation speed
(check that the dimensions are correct).
Remark 3.4 Boundary Conditions.
What happens with the first (3.5) and last (3.7) equations in the limit $N \to \infty$?
As above, multiply (3.5) by $1/\ell$. Then the equation becomes:
$$ \frac{\rho L}{N}\,\frac{d^2 \theta_1}{dt^2} = -\frac{\rho g}{N} \sin \theta_1 + \frac{(N + 1)\,\kappa}{\ell^2 L}\,(\theta_2 - \theta_1) = -\frac{\rho g}{N} \sin \theta_1 + \frac{\kappa}{\ell L}\,\frac{\theta_2 - \theta_1}{\Delta x}. $$
Thus, as $N \to \infty$ one obtains
$$ \theta_x(0,\, t) = 0. $$
This is just the statement that there are no torsion forces at the $x = 0$ end (since the axle is free to
rotate there). Similarly, one obtains:
$$ \theta_x(\ell,\, t) = 0, $$
at the other end of the axle. How would these boundary conditions be modified if the axle
were fixed at one (or both) ends?
Kinks and Breathers for the Sine Gordon Equation.
Equation (3.11), whose non-dimensional form is
$$ \theta_{tt} - \theta_{xx} = -\sin \theta, \tag{3.12} $$
has a rather interesting history. Its first appearance is not in a physical context at all,
but in the study of the geometry of surfaces with constant negative Gaussian curvature. Physical
problems for which it has been used include: Josephson junction transmission lines, dislocations in
crystals, propagation in ferromagnetic materials of waves carrying rotations in the magnetization
direction, etc. Mathematically, it is very interesting because it is one of the few physically
important nonlinear partial differential equations that can be solved explicitly¹⁰ (by a
technique known as Inverse Scattering, which we will not describe here).
An important consequence of the exact solvability of equation (3.12) is that it possesses particle-like
solutions, known as kinks, anti-kinks, and breathers. These are localized traveling distur-
bances, which preserve their identity when they interact. In fact, the only effect of an interaction
is a phase shift in the particle positions after the interaction: effectively, the "particles" approach
each other, stay together briefly while they interact (this causes the "phase shift") and then depart,
preserving their identities and original velocities. This can all be shown analytically, but here we
will only illustrate the process, using some computational examples.
¹⁰ For reviews see:
A. C. Scott, 1970, Active and Nonlinear Wave Propagation in Electronics, Wiley Interscience, New York (page 250).
A. Barone, F. Esposito, C. J. Magee, and A. C. Scott, 1971, Theory and Applications of the Sine Gordon Equation,
Rivista del Nuovo Cimento, vol. 1, pp. 227–267.
The first step is to present analytical expressions for the various particle-like solutions of
equation (3.12). These turn out to be relatively simple to write.
Example 3.1 Kinks and Anti-Kinks.
Equation (3.12) has some interesting solutions, that correspond to giving the pendulums a full $2\pi$
twist (e.g.: take one end pendulum, and give it a full $2\pi$ rotation). This generates a $2\pi$ twist wave
that propagates along the pendulum chain. These waves are known as kinks or anti-kinks (depending
on the sign of the rotation), and can be written explicitly. In fact, they are steady wave solutions,¹¹
for which the equation reduces to an O.D.E., which can be explicitly solved.
Let $-1 < c < 1$ be a constant (kink, or anti-kink, speed), and let $z = x - c\, t - x_0$ be a moving
coordinate, where the solution is steady: the "twist" will be centered at $x = c\, t + x_0$, where $x_0$ is
the position at time $t = 0$. Then the kink solution is given by
$$ \theta = 2 \arccos\!\left(\frac{e^{2 z/\beta} - 1}{e^{2 z/\beta} + 1}\right) = 4 \arctan\!\left(\exp\!\left(-\frac{z}{\beta}\right)\right), \tag{3.13} $$
where $\beta = \sqrt{1 - c^2}$ is the kink width. This solution represents a propagating clock-wise $2\pi$ rotation,
from $\theta = 2 m \pi$ as $x \to -\infty$ (where $m$ is an integer) to $\theta = 2 (m - 1) \pi$ as $x \to \infty$, with most of the
rotation concentrated in a region of width $O(\beta)$ near $x = c\, t + x_0$. The parameter $c$ is determined
(for example) by how fast the initial twist is introduced when the kink is generated.
We note now that:
• From (3.13) it follows that $\theta_t = -c\,\theta_x = \dfrac{2c}{\beta} \sin\!\left(\dfrac{\theta}{2}\right)$. Using this, it is easy to show that (3.13)
is a solution of equation (3.12).
• The Sine-Gordon equation is the simplest of a "class" of models proposed for nuclear inter-
actions. In this interpretation, the kinks are nuclear particles. Since (in the non-dimensional
version (3.12)) the speed of light is 1, the restriction $-1 < c < 1$ is the relativistic restriction,
and the factor $\beta$ incorporates the usual relativistic contraction.
The anti-kink solution follows by replacing $x \to -x$ and $t \to -t$ in (3.13). It corresponds to a
propagating counter-clock-wise $2\pi$ rotation, and it is given by
$$ \theta = 2 \arccos\!\left(\frac{1 - e^{2 z/\beta}}{1 + e^{2 z/\beta}}\right) = 4 \arctan\!\left(\exp\!\left(\frac{z}{\beta}\right)\right). \tag{3.14} $$
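Both the kink formula and the identity $\theta_t = -c\,\theta_x = (2c/\beta)\sin(\theta/2)$ are easy to test by centered differences. The speed $c = 0.6$ and the sample point below are arbitrary choices ($x_0 = 0$ for simplicity):

```python
import numpy as np

c = 0.6                                    # kink speed, -1 < c < 1
beta = np.sqrt(1.0 - c**2)                 # kink width

def theta(x, t):
    z = x - c * t                          # moving coordinate (x0 = 0 here)
    return 4.0 * np.arctan(np.exp(-z / beta))   # kink (3.13)

h, x, t = 1e-4, 0.37, 0.11
th_tt = (theta(x, t + h) - 2 * theta(x, t) + theta(x, t - h)) / h**2
th_xx = (theta(x + h, t) - 2 * theta(x, t) + theta(x - h, t)) / h**2
assert abs(th_tt - th_xx + np.sin(theta(x, t))) < 1e-5   # residual of (3.12)

th_t = (theta(x, t + h) - theta(x, t - h)) / (2 * h)
th_x = (theta(x + h, t) - theta(x - h, t)) / (2 * h)
assert abs(th_t + c * th_x) < 1e-6                       # theta_t = -c theta_x
assert abs(th_t - (2 * c / beta) * np.sin(theta(x, t) / 2)) < 1e-6
```

The residual of (3.12) is at round-off plus discretization level, consistent with (3.13) being an exact solution.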
The kinks and anti-kinks are very non-linear solutions. Thus, it is of some interest to study how
they interact with each other. Because they are very localized solutions (non-trivial only in a small
region), when their centers are far enough apart they can be added. Thus, numerically it is rather easy to
study their interactions, by setting up initial conditions that correspond to kinks and anti-kinks far
enough apart that they do not initially interact. Then they are followed until they collide. In the lectures
the results of numerical experiments of this type will be shown (the numerical method used in the
experiments is a "pseudo-spectral" method).
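As a rough stand-in for those experiments (the lectures use a pseudo-spectral method; here a plain leapfrog finite-difference scheme is enough for a qualitative check, and all grid parameters are ad-hoc choices), one can launch a single kink and verify that it translates at speed $c$ without changing shape:

```python
import numpy as np

c = 0.5
beta = np.sqrt(1 - c**2)
nx, dt, nt = 800, 0.02, 500                    # ad-hoc grid and step sizes
x = np.linspace(-20.0, 20.0, nx, endpoint=False)
dx = x[1] - x[0]

kink = lambda x, t: 4 * np.arctan(np.exp(-(x - c * t) / beta))
th_old, th = kink(x, -dt), kink(x, 0.0)        # exact data at t = -dt and t = 0
for _ in range(nt):                            # leapfrog for theta_tt = theta_xx - sin(theta)
    lap = np.zeros_like(th)
    lap[1:-1] = (th[2:] - 2 * th[1:-1] + th[:-2]) / dx**2
    th_new = 2 * th - th_old + dt**2 * (lap - np.sin(th))
    th_new[0], th_new[-1] = th[0], th[-1]      # ends held fixed (kink is flat there)
    th_old, th = th, th_new

err = np.max(np.abs(th - kink(x, nt * dt)))    # compare with the exact moved kink
assert err < 0.05
```

With separated kinks and anti-kinks superposed in the initial data, the same loop can be used to watch a collision; the exact translated-kink comparison above is simply the cheapest correctness check.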
¹¹ Solutions of the form $\theta = \theta(x - c\, t)$, where $c$ is a constant: the speed of propagation.
Example 3.2 Breathers.
A different kind of interesting solution is provided by the "breathers", which we handle next. A
breather is a wave-package kind of solution (an oscillatory wave, with an envelope that limits
the wave to reside in a bounded region of space). These solutions vanish (exponentially) as $x \to \pm\infty$.
This last property allows for easy numerical simulations of interactions of breathers (and kinks).
One can set up initial conditions corresponding to the interaction of as many kinks and/or breathers
as one may wish (limited only by the numerical resolution of the computation), simply by separating
them in space.
A breather solution is characterized by two arbitrary constants −1 < d, V < 1. Then define

    A = d / sqrt(1 − d²),    B = 1 / sqrt(1 − V²),    C = sqrt(1 − d²),

    p = C B (V x − t + t₀),
    q = d B (x − V t − x₀),
    Q = A sin(p) / cosh(q),                                (3.15)

where x₀ and t₀ are constants, centering the envelope and the phase, respectively. Notice that the partial derivatives of Q (with respect to p and q) are given by

    Q_p = A cos(p) / cosh(q)    and    Q_q = −Q tanh(q).        (3.16)
The breather solution (and its time derivative) is then given by:

    θ = 4 arctan(Q),
    θ_t = −4 (C B Q_p + d B V Q_q) / (1 + Q²).        (3.17)
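Formulas (3.15)–(3.17) can be checked numerically: evaluating θ and forming centered differences, the Sine-Gordon residual θ_tt − θ_xx + sin θ should vanish to finite-difference accuracy. The parameter values (d, V) below are illustrative, and `breather_theta` is just a hypothetical helper name.

```python
import numpy as np

# Evaluate the breather (3.15): d, V, x0, t0 as in the text; the
# particular values chosen here are illustrative.
def breather_theta(x, t, d=0.5, V=0.2, x0=0.0, t0=0.0):
    A = d / np.sqrt(1 - d ** 2)
    B = 1 / np.sqrt(1 - V ** 2)
    C = np.sqrt(1 - d ** 2)
    p = C * B * (V * x - t + t0)
    q = d * B * (x - V * t - x0)
    Q = A * np.sin(p) / np.cosh(q)
    return 4 * np.arctan(Q)

# Centered-difference residual of theta_tt - theta_xx + sin(theta) at one point.
h = 1e-3
xs, ts = 0.7, 0.3
f = breather_theta
th_tt = (f(xs, ts + h) - 2 * f(xs, ts) + f(xs, ts - h)) / h ** 2
th_xx = (f(xs + h, ts) - 2 * f(xs, ts) + f(xs - h, ts)) / h ** 2
residual = th_tt - th_xx + np.sin(f(xs, ts))   # O(h^2) small
```

The same helper can be used to produce the plots suggested in the problems at the end of these notes.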
The breather solution is a wave-package type of solution, with the phase controlled by p, and the envelope (causing the exponential vanishing of the solution) by q. The wave-package details are given by:

    Phase:      speed  c_p = 1/V,
                period  T_p = 2π/(B C),                    (3.18)
                wave-length  λ_p = 2π/(B C V);

    Envelope:   speed  c_e = V,
                width  λ_e = 2π/(d B).                     (3.19)

Notice that, while the phase moves faster than the speed of "light" (i.e.: 1), the envelope always moves with a speed −1 < V < 1, and has width proportional to sqrt(1 − V²).
Finally, in case you are familiar with the notion of group speed, notice that (for the linearized Sine-Gordon equation: θ_tt − θ_xx + θ = 0) we have: (group speed) = 1/(phase speed), which is exactly the relationship satisfied by c_e = V and c_p = 1/V for a breather. This is because, for |x| large, the breathers must satisfy the linearized equation. Thus the envelope must move at the group velocity corresponding to the oscillations' wave-length.
Remark 3.5 Pseudo-spectral Numerical Method for the Sine-Gordon Equation.

Here we will give a rough idea of a numerical method that can be used to solve the Sine-Gordon equation. This remark will only make sense to you if you have some familiarity with Fourier Series for periodic functions.

The basic idea in spectral methods is that the numerical differentiation of a (smooth) periodic function can be done much more efficiently (and accurately) on the "Fourier Side", since there it amounts to a term by term multiplication of the n-th Fourier coefficient by (i n). On the other hand, non-linear operations (such as calculating the square, point by point, of the solution) can be done efficiently on the "Physical Side".

Thus, in a numerical computation using a pseudo-spectral method, all the operations involving taking derivatives are done on the Fourier Side, while all the non-linear operations are done directly on the numerical solution. The back-and-forth calculation of Fourier Series and their inverses is carried out by the FFT (Fast Fourier Transform) algorithm, which is a very efficient algorithm for doing Fourier calculations.
Unfortunately, a naive implementation of a spectral scheme to solve the Sine-Gordon equation would require solutions that are periodic in space. But we need to be able to solve for solutions that are only mod-2π periodic (such as the kinks and anti-kinks), since the solutions to the equation are angles. Thus, we need to get around this problem.
In a naive implementation of a spectral method, we would write the equation as

    u_t = v,
    v_t = u_xx − sin u,        (3.20)

where u = θ and v = θ_t. Next we would discretize space using a periodic uniform mesh (with a large enough period), and would evaluate the right hand side using FFTs to calculate derivatives. This would reduce the P.D.E. to some large O.D.E., involving all the values of the solution (and its time derivative) at the nodes in the space grid. This O.D.E. could then be solved using a standard O.D.E. solver, say, ode45 in MatLab.
In order to use the idea above in a way that allows us to solve the equation with mod-2π periodicity in space, we need to be able to evaluate the derivative u_xx in a way that ignores jumps by multiples of 2π in u. The following trick works: introduce U = e^{iu}. Then

    u_xx = i ( (U_x)² − U U_xx ) / U²        (3.21)

gives a formula for u_xx that ignores 2π jumps in u. Warning: In the actual implementation one must use

    u_xx = −imag( ( (U_x)² − U U_xx ) / U² )

to avoid small imaginary parts in the answer (caused by numerical errors).
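The trick in (3.21) is easy to exercise on a smooth test angle whose second derivative is known in closed form, using FFT spectral derivatives for U_x and U_xx. The grid size N and period P below are illustrative choices.

```python
import numpy as np

N, P = 256, 40.0                                 # illustrative grid size and period
x = P * np.arange(N) / N
k = 2j * np.pi * np.fft.fftfreq(N, d=P / N)      # spectral derivative factors (i k_n)

u = 0.5 * np.sin(2 * np.pi * x / P)              # smooth test angle, u_xx known exactly
U = np.exp(1j * u)                               # any 2*pi jump in u is invisible in U
Ux = np.fft.ifft(k * np.fft.fft(U))
Uxx = np.fft.ifft(k ** 2 * np.fft.fft(U))
u_xx = -np.imag((Ux ** 2 - U * Uxx) / U ** 2)    # formula (3.21), "imag" form

u_xx_exact = -0.5 * (2 * np.pi / P) ** 2 * np.sin(2 * np.pi * x / P)
err = np.max(np.abs(u_xx - u_xx_exact))          # spectrally small for smooth u
```

In a full solver one would wrap this evaluation of u_xx into the right hand side of (3.20) and hand it to a standard O.D.E. integrator, as described above.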
4 Suggested problems.
A list of suggested problems that go along with these notes follow:
1. Check the derivation of the system of equations (2.20).
2. Derive the continuum equation in (2.21).
3. Look at the end of section 2, under the title "General Motion: String and Rods". Derive
continuum equations describing the motion (in the plane) of a string without constraints.
4. Look at the end of section 2, under the title "General Motion: String and Rods". Add bending
springs to the model, and derive continuum equations describing the motion (in the plane) of
a rod without constraints.
5. Do the check stated in remark 3.2.
6. Answer the question in remark 3.3.
7. Do the dimensions check stated below equation (3.11).
8. Answer the question in remark 3.4.
9. Show that (3.13) is a solution (there is a hint about how to do this a few lines below the
equation).
10. Use a computer to plot the solution in (3.13), as a function of z , for a few choices of c.
11. Show that (3.17) is a solution.
12. Use a computer to plot the solution in (3.17), as a function of x, for various times and choices
of parameters.
13. Implement a numerical code to calculate interactions of kinks, breathers, etc., using the ideas
sketched in remark 3.5. | https://ocw.mit.edu/courses/18-306-advanced-partial-differential-equations-with-applications-fall-2009/0356741cb05f4800faa412e2d9bc4b84_MIT18_306f09_lec25_Discrete_to_Contin.pdf |
MIT OpenCourseWare
http://ocw.mit.edu
8.512 Theory of Solids II
Spring 2009
Lecture 6: Scaling Theory of Localization

6.1 Notion of dimensionless conductance.

    g = G / (e²/πℏ),    G = σ L^{d−2},

where G is the conductance ((resistance)^{−1}) and L is the dimension (linear size) of the system.
Recall Einstein's formula which relates conductivity to diffusion:

    σ = 2e² (N(0)/Ω) D,

where N(0) = # of states per unit energy, Ω = unit volume, and D = diffusion coefficient.
The way to derive the above equation is to calculate the charge current density in two ways:

    j = −e D ∇n        (current as diffusion of charge carriers),
    j = −σ ∇V = −σ (dV/dn) ∇n        (V is the local electrical potential).

Now, the local chemical potential is linearly related to V, i.e. μ = eV + constant,

    ∴ dn/dV = e dn/dμ = e · 2N(0)/Ω        (the 2 is for spin),

    ∴ j = −( σ Ω / (e · 2N(0)) ) ∇n.
Comparing this to Einstein's formula, one gets

    σ = 2e² (N(0)/Ω) D
    ⇒ G = 2e² (N(0)/Ω) D L^{d−2} = (2e²/ℏ) (ℏD/L²) N(0),        (1)

where in the last step Ω = L^d, so that N(0) now denotes the total number of states per unit energy in the sample.
6.2 Thouless Energy

The last equation suggests defining a quantity called the Thouless energy, E_T:

    E_T = ℏD/L² = ℏ/τ_T.

Since N(0) ∼ 1/Δ, where Δ is the level spacing,

    ⇒ G ∼ (2e²/ℏ) (E_T/Δ)    ⇒    g ∼ E_T/Δ.

Physical interpretation
Assume the one box problem is solved and we want to study the behavior of the system
as it gets bigger. Each box has its own distribution of energy levels. As the two boxes
are in contact, the wavefunctions that were initially localized in separate boxes would mix.
For weak coupling, one can do perturbation theory if mixing is smaller than typical level
spacing Δ. For weak mixing, the wavefunctions resemble the initial states and so one gets
localized distribution. For strong coupling, the final state would be a complicated mixture of
all initial states and we get extended states. Physically, one can think of diffusion as setting the lifetime for a particle to leak out of the box, and so it is reasonable to think of the Thouless energy as an effective coupling. Thus the ratio E_T/Δ, which also represents the dimensionless conductance g, characterizes the further scaling properties.
6.3 Sensitivity of energy eigenvalues to boundary conditions and its relation to E_T.
Here we derive another form of E_T. Usually one uses periodic boundary conditions

    ψ_α(x + L) = ψ_α(x)        (ψ_α is an eigenstate of the Hamiltonian),

but one can as well use the twisted boundary condition

    ψ_α(x + L) = e^{iφ} ψ_α(x)        (φ is the same for all α).

Such boundary conditions can arise if a magnetic field is present. Consider applying a uniform magnetic field (confined to a flux tube) along the z direction, and study the behavior of the eigenstates sitting on a cylindrical surface whose axis is parallel to the field:

    (1/2m) ( (ℏ/i)∇ + eA/c )² ψ_α + V(r) ψ_α = E_α ψ_α.

The gauge transformation

    ψ → ψ' exp( −(ie/ℏc) ∫₀^x A · dℓ )

gives

    −(ℏ²/2m) ∇² ψ'_α(r) + V(r) ψ'_α(r) = E_α ψ'_α(r).

Thus the gauge transformation has removed A from the Schrödinger equation, but now ψ'_α(r) has different boundary conditions:

    ψ'(x + L) = ψ'(x) exp( (ie/ℏc) ∫₀^L A · dℓ ) = ψ'(x) exp(iφ_B),

where φ_B is the magnetic flux (in units of ℏc/e). We will compute ΔE_α using perturbation theory, with the perturbation

    H' = (e/c) V_x A,

where V_x is the velocity operator. To first order in H', ΔE_α = 0, as ⟨H'⟩ ∼ ⟨V_x⟩ = 0.
For perturbation theory second order in A, we have two terms. The diamagnetic term gives a constant shift to all energy levels:

    (e²/2mc²) A².

The paramagnetic term gives, to second order in A (or φ),

    ΔE_α = (φ²/L²) Σ_{β≠α} |⟨β|V_x|α⟩|² / (E_α − E_β),

so that

    ∂²E_α/∂φ² = 1/(mL²) + (2/L²) Σ_{β≠α} |⟨β|V_x|α⟩|² / (E_α − E_β).        (4)

Thus one gets the same matrix element ⟨β|V_x|α⟩ that appeared in the derivation of the Kubo formula.
Typical variation of the energy levels with φ can be sketched as follows. As the first order shift is zero, all curves have zero slope at φ = 0. The low lying states don't undergo much variation, as they occur in potential valleys and hence are localized.

One can do a sanity check on Eq. (4). On general grounds, there is equal probability for any level to go up or down, and so the average variation in the E_α's should be zero. By using the Thomas-Reiche-Kuhn f-sum rule, one can indeed show that

    Σ_α ∂²E_α/∂φ² = 0.
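The vanishing average of the curvatures can be seen in a toy model: a one-dimensional tight-binding ring with on-site disorder and the twist inserted into the boundary hopping. The lattice size L, disorder strength W, and step h below are illustrative; in this model the sum rule follows simply from Tr H(φ) being independent of φ.

```python
import numpy as np

# 1D disordered tight-binding ring with twisted boundary condition
# psi(x + L) = e^{i phi} psi(x); L, W, and h are illustrative choices.
def spectrum(phi, L=40, W=0.5, seed=0):
    rng = np.random.default_rng(seed)               # same disorder for every phi
    H = np.diag(rng.uniform(-W / 2, W / 2, L)).astype(complex)
    for n in range(L - 1):
        H[n, n + 1] = H[n + 1, n] = -1.0
    H[L - 1, 0] = -np.exp(1j * phi)                 # twist enters the boundary bond
    H[0, L - 1] = np.conj(H[L - 1, 0])              # keep H Hermitian
    return np.linalg.eigvalsh(H)

# Second differences approximate the curvatures d^2 E_alpha / d phi^2 at phi = 0.
h = 1e-2
curv = (spectrum(h) - 2 * spectrum(0.0) + spectrum(-h)) / h ** 2
total = curv.sum()                                  # sum rule: should vanish
```

Individual curvatures are of order the Thouless scale, while their sum cancels, in line with the sum-rule statement above.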
Now, |⟨β|V_x|α⟩|² ∼ V². The fluctuation of such terms is dominated by the term with maximum weight, i.e., by the smallest E_α − E_β:

    variance( Σ_{β≠α} |⟨β|V_x|α⟩|² / (E_α − E_β) ) ∼ V²/Δ.

This is the average fluctuation in the curvature. Thus, typically,

    |∂²E_α/∂φ²| ∼ E_t,    with    E_t ∼ (1/L²)(V²/Δ) ∼ (1/L²) V² N(0).

Now,

    σ = 2e²π (N²(0)/Ω) V² = 2e² (N(0)/Ω) D,

by comparing the Boltzmann and Einstein formulas for σ,

    ⇒ D = π V² N(0)    ⇒    E_t = D/L² ∼ E_T.

Thus for large E_T/Δ one gets extended states and hence large conductance, while for small E_T/Δ one gets localized states and hence small conductance. Thus we have established the relevance of the Thouless energy for conductance.
6.4 Application to a 1-d System

One expects Ohm's law behavior for small length,

    g(L) ∼ 1/L,

so as L increases, g(L) decreases. Once

    g(L) < 1  ⇒  E_T/Δ < 1,

the states are localized, and for large L,

    g(L) ∼ e^{−L/ξ},        where ξ is the localization length.

Due to inelastic scattering the eigenvalues become ill-defined because of dephasing. Let us define a quantity τ_φ as the time over which a state loses the information about its phase. τ_φ for various scattering mechanisms can be estimated by

    1/τ_ee ∼ T²/ε_F,    1/τ_{e−phonon} ∼ T³/ω_D².

Let us define L_φ as the length over which the dephasing occurs,

    L_φ = √(D₀ τ_φ).

So our argument works if L_φ >> the width of the sample. Otherwise we are considering a system to which our derivation based on quantum mechanical ideas doesn't apply, and the system should rather be treated classically; in such a situation one uses Ohm's law. So to observe non-metallic behavior of a copper wire, one needs a very small wire thickness or a very low temperature, as L_φ increases with decreasing temperature.

Thus one expects the following behavior of the conductivity of a one-dimensional system. For L_φ > ξ, we can think of conductivity in terms of hopping of electrons over a distance scale ξ in time τ_φ. This gives rise to a diffusion constant of ξ²/τ_φ and a resistivity which is proportional to τ_φ ∝ T^{−p}. [Reference: the book by Y. Imry]
Lecture 3
Properties of MLE: consistency, asymptotic normality. Fisher information.

In this section we will try to understand why MLEs are 'good'.
Let us recall two facts from probability that will be used often throughout this course.
• Law of Large Numbers (LLN):

If the distribution of the i.i.d. sample X₁, …, Xₙ is such that X₁ has finite expectation, i.e. E|X₁| < ∞, then the sample average

    X̄ₙ = (X₁ + … + Xₙ)/n

converges to its expectation EX₁ in probability, which means that for any arbitrarily small ε > 0,

    P(|X̄ₙ − EX₁| > ε) → 0 as n → ∞.

Note. Whenever we use the LLN below we will simply say that the average converges to its expectation and will not mention in what sense. More mathematically inclined students are welcome to carry out these steps more rigorously, especially when we use the LLN in combination with the Central Limit Theorem.
• Central Limit Theorem (CLT):

If the distribution of the i.i.d. sample X₁, …, Xₙ is such that X₁ has finite expectation and variance, i.e. E|X₁| < ∞ and σ² = Var(X₁) < ∞, then

    √n (X̄ₙ − EX₁) →_d N(0, σ²)

converges in distribution to the normal distribution with zero mean and variance σ², which means that for any interval [a, b],

    P( √n (X̄ₙ − EX₁) ∈ [a, b] ) → ∫ₐᵇ (1/(√(2π) σ)) e^{−x²/(2σ²)} dx.

In other words, the random variable √n (X̄ₙ − EX₁) will behave like a random variable from a normal distribution when n gets large.

Exercise. Illustrate the CLT by generating 100 Bernoulli random variables B(p) (or one Binomial r.v. B(100, p)) and then computing √n (X̄ₙ − EX₁). Repeat this many times and use 'dfittool' to see that this random quantity will be well approximated by a normal distribution.
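A minimal version of the exercise can be run without MATLAB's dfittool, by checking the mean and standard deviation of the simulated quantity instead of fitting its histogram; p, n, and the number of repetitions below are illustrative.

```python
import numpy as np

# For Bernoulli(p), sqrt(n) * (Xbar_n - p) should look like N(0, p(1-p)).
rng = np.random.default_rng(1)
p, n, reps = 0.3, 100, 20000
xbar = rng.binomial(n, p, size=reps) / n       # each entry is one Xbar_n
z = np.sqrt(n) * (xbar - p)

print(z.mean())   # close to 0
print(z.std())    # close to sqrt(p * (1 - p))
```

A histogram of z (e.g. with matplotlib) would show the familiar bell shape.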
We will prove that the MLE satisfies (usually) the following two properties, called consistency and asymptotic normality.

1. Consistency. We say that an estimate θ̂ is consistent if θ̂ → θ₀ in probability as n → ∞, where θ₀ is the 'true' unknown parameter of the distribution of the sample.

2. Asymptotic Normality. We say that θ̂ is asymptotically normal if

    √n (θ̂ − θ₀) →_d N(0, σ²_{θ₀}),

where σ²_{θ₀} is called the asymptotic variance of the estimate θ̂. Asymptotic normality says that the estimator not only converges to the unknown parameter, but it converges fast enough, at a rate 1/√n.
Consistency of MLE.

To make our discussion as simple as possible, let us assume that the likelihood function is smooth and behaves in a nice way like shown in Figure 3.1, i.e. its maximum is achieved at a unique point θ̂.

[Figure 3.1: Maximum Likelihood Estimator (MLE); plot of the function ℓ(θ) with its maximum attained at θ̂.]
Suppose that the data X₁, …, Xₙ is generated from a distribution with unknown parameter θ₀ and θ̂ is the MLE. Why does θ̂ converge to the unknown parameter θ₀? This is not immediately obvious and in this section we will give a sketch of why this happens.

First of all, the MLE θ̂ is the maximizer of

    Lₙ(θ) = (1/n) Σ_{i=1}^n log f(Xᵢ|θ),

which is the log-likelihood function normalized by 1/n (of course, this does not affect the maximization). Notice that the function Lₙ(θ) depends on the data. Let us consider the function l(X|θ) = log f(X|θ) and define

    L(θ) = E_{θ₀} l(X|θ),

where E_{θ₀} denotes the expectation with respect to the true unknown parameter θ₀ of the sample X₁, …, Xₙ. If we deal with continuous distributions then

    L(θ) = ∫ ( log f(x|θ) ) f(x|θ₀) dx.

By the law of large numbers, for any θ,

    Lₙ(θ) → E_{θ₀} l(X|θ) = L(θ).

Note that L(θ) does not depend on the sample, it only depends on θ. We will need the following

Lemma. We have that, for any θ,

    L(θ) ≤ L(θ₀).

Moreover, the inequality is strict, L(θ) < L(θ₀), unless

    P_{θ₀}( f(X|θ) = f(X|θ₀) ) = 1,

which means that P_θ = P_{θ₀}.
Proof. Let us consider the difference

    L(θ) − L(θ₀) = E_{θ₀}( log f(X|θ) − log f(X|θ₀) ) = E_{θ₀} log [ f(X|θ) / f(X|θ₀) ].

Since log t ≤ t − 1, we can write

    E_{θ₀} log [ f(X|θ)/f(X|θ₀) ] ≤ E_{θ₀} [ f(X|θ)/f(X|θ₀) − 1 ]
        = ∫ ( f(x|θ)/f(x|θ₀) − 1 ) f(x|θ₀) dx = ∫ f(x|θ) dx − ∫ f(x|θ₀) dx = 1 − 1 = 0.

Both integrals are equal to 1 because we are integrating probability density functions. This proves that L(θ) − L(θ₀) ≤ 0. The second statement of the Lemma is also clear.
We will use this Lemma to sketch the proof of consistency of the MLE.

Theorem: Under some regularity conditions on the family of distributions, the MLE θ̂ is consistent, i.e. θ̂ → θ₀ as n → ∞.

The statement of this Theorem is not very precise, but rather than proving a rigorous mathematical statement our goal here is to illustrate the main idea. Mathematically inclined students are welcome to come up with a precise statement.

[Figure 3.2: Illustration to Theorem: Lₙ(θ) is close to L(θ), so their maximizers θ̂ and θ₀ are close.]

Proof. We have the following facts:

1. θ̂ is the maximizer of Lₙ(θ) (by definition).
2. θ₀ is the maximizer of L(θ) (by the Lemma).
3. For all θ we have Lₙ(θ) → L(θ) by the LLN.

This situation is illustrated in Figure 3.2. Therefore, since the two functions Lₙ and L are getting closer, their points of maximum should also get closer, which exactly means that θ̂ → θ₀.
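The convergence can be watched numerically. For the exponential family E(λ) treated at the end of these notes, the MLE is λ̂ = 1/X̄; the true λ₀ and the sample sizes below are illustrative choices.

```python
import numpy as np

# Consistency in action: the MLE 1/Xbar for E(lambda) approaches lambda0.
rng = np.random.default_rng(0)
lam0 = 2.0
est = {}
for n in (10, 100, 10_000, 1_000_000):
    x = rng.exponential(1 / lam0, size=n)   # E(lambda) has mean 1/lambda
    est[n] = 1 / x.mean()
print(est)   # estimates concentrate around lam0 = 2.0 as n grows
```

Repeating the experiment with fresh seeds shows the spread of the estimates shrinking roughly like 1/√n, in line with the rate discussed above.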
Asymptotic normality of MLE. Fisher information.

We want to show the asymptotic normality of the MLE, i.e. to show that

    √n (θ̂ − θ₀) →_d N(0, σ²_MLE) for some σ²_MLE,

and to compute σ²_MLE. This asymptotic variance in some sense measures the quality of the MLE. First, we need to introduce the notion called Fisher information.

Let us recall that above we defined the function l(X|θ) = log f(X|θ). To simplify the notations we will denote by l'(X|θ), l''(X|θ), etc. the derivatives of l(X|θ) with respect to θ.

Definition. (Fisher information.) The Fisher information of a random variable X with distribution P_{θ₀} from the family {P_θ : θ ∈ Θ} is defined by

    I(θ₀) = E_{θ₀} ( l'(X|θ₀) )² ≡ E_{θ₀} [ ( (∂/∂θ) log f(X|θ) |_{θ=θ₀} )² ].
Remark. Let us give a very informal interpretation of Fisher information. The derivative

    l'(X|θ₀) = ( log f(X|θ₀) )' = f'(X|θ₀) / f(X|θ₀)

can be interpreted as a measure of how quickly the distribution density or p.f. will change when we slightly change the parameter θ near θ₀. When we square this and take the expectation, i.e. average over X, we get an averaged version of this measure. So if the Fisher information is large, this means that the distribution will change quickly when we move the parameter, so the distribution with parameter θ₀ is 'quite different' and 'can be well distinguished' from the distributions with parameters not so close to θ₀. This means that we should be able to estimate θ₀ well based on the data. On the other hand, if the Fisher information is small, this means that the distribution is 'very similar' to distributions with parameter not so close to θ₀ and, thus, more difficult to distinguish, so our estimation will be worse. We will see precisely this behavior in the Theorem below.
The next lemma gives another often convenient way to compute the Fisher information.

Lemma. We have

    E_{θ₀} l''(X|θ₀) ≡ E_{θ₀} (∂²/∂θ²) log f(X|θ₀) = −I(θ₀).

Proof. First of all, we have

    l'(X|θ) = ( log f(X|θ) )' = f'(X|θ) / f(X|θ)

and

    ( log f(X|θ) )'' = f''(X|θ)/f(X|θ) − ( f'(X|θ) )² / f²(X|θ).

Also, since the p.d.f. integrates to 1,

    ∫ f(x|θ) dx = 1,

if we take derivatives of this equation with respect to θ (and interchange derivative and integral, which can usually be done) we will get

    ∫ (∂/∂θ) f(x|θ) dx = 0   and   ∫ (∂²/∂θ²) f(x|θ) dx = ∫ f''(x|θ) dx = 0.

To finish the proof we write the following computation:

    E_{θ₀} l''(X|θ₀) = E_{θ₀} (∂²/∂θ²) log f(X|θ₀)
        = ∫ ( log f(x|θ₀) )'' f(x|θ₀) dx
        = ∫ [ f''(x|θ₀)/f(x|θ₀) − ( f'(x|θ₀)/f(x|θ₀) )² ] f(x|θ₀) dx
        = ∫ f''(x|θ₀) dx − E_{θ₀}( l'(X|θ₀) )² = 0 − I(θ₀) = −I(θ₀).
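The Lemma invites a quick Monte Carlo check. For the exponential family E(λ) computed below, l'(x|λ) = 1/λ − x and l''(x|λ) = −1/λ², so the sample average of (l')² should approach I(λ) = 1/λ², while the average of l' itself should approach 0. The values of λ and the sample size are illustrative.

```python
import numpy as np

# Monte Carlo check of the Fisher-information identities for E(lambda).
rng = np.random.default_rng(3)
lam, n = 2.0, 1_000_000
x = rng.exponential(1 / lam, size=n)

score = 1 / lam - x                  # l'(x | lam) for the exponential family
mean_score = score.mean()            # should be ~0 (the score has mean zero)
info_hat = (score ** 2).mean()       # estimates E (l')^2 = I(lam) = 1/lam^2
```

Here −l'' = 1/λ² is constant in x, so the second form of the Lemma is exact without any averaging.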
We are now ready to prove the main result of this section.

Theorem. (Asymptotic normality of the MLE.) We have

    √n (θ̂ − θ₀) → N( 0, 1/I(θ₀) ).

As we can see, the asymptotic variance/dispersion of the estimate around the true parameter will be smaller when the Fisher information is larger.
Proof. Since the MLE θ̂ is the maximizer of Lₙ(θ) = (1/n) Σ_{i=1}^n log f(Xᵢ|θ), we have

    L'ₙ(θ̂) = 0.

Let us use the Mean Value Theorem,

    ( f(a) − f(b) ) / (a − b) = f'(c)   or   f(a) = f(b) + f'(c)(a − b) for c ∈ [a, b],

with f(θ) = L'ₙ(θ), a = θ̂ and b = θ₀. Then we can write

    0 = L'ₙ(θ̂) = L'ₙ(θ₀) + L''ₙ(θ̂₁)(θ̂ − θ₀)

for some θ̂₁ ∈ [θ̂, θ₀]. From here we get that

    θ̂ − θ₀ = − L'ₙ(θ₀) / L''ₙ(θ̂₁)   and   √n (θ̂ − θ₀) = − √n L'ₙ(θ₀) / L''ₙ(θ̂₁).        (3.0.1)

Since by the Lemma in the previous section we know that θ₀ is the maximizer of L(θ), we have

    L'(θ₀) = E_{θ₀} l'(X|θ₀) = 0.        (3.0.2)

Therefore, the numerator in (3.0.1),

    √n L'ₙ(θ₀) = √n ( (1/n) Σ_{i=1}^n l'(Xᵢ|θ₀) − 0 )
               = √n ( (1/n) Σ_{i=1}^n l'(Xᵢ|θ₀) − E_{θ₀} l'(X₁|θ₀) )
               →_d N( 0, Var_{θ₀}( l'(X₁|θ₀) ) ),        (3.0.3)

converges in distribution by the Central Limit Theorem.

Next, let us consider the denominator in (3.0.1). First of all, we have that for all θ,

    L''ₙ(θ) = (1/n) Σ_{i=1}^n l''(Xᵢ|θ) → E_{θ₀} l''(X₁|θ) by the LLN.        (3.0.4)

Also, since θ̂₁ ∈ [θ̂, θ₀] and, by the consistency result of the previous section, θ̂ → θ₀, we have θ̂₁ → θ₀. Using this together with (3.0.4) we get

    L''ₙ(θ̂₁) → E_{θ₀} l''(X₁|θ₀) = −I(θ₀) by the Lemma above.

Combining this with (3.0.3) we get

    − √n L'ₙ(θ₀) / L''ₙ(θ̂₁) →_d N( 0, Var_{θ₀}( l'(X₁|θ₀) ) / (I(θ₀))² ).

Finally, the variance

    Var_{θ₀}( l'(X₁|θ₀) ) = E_{θ₀}( l'(X|θ₀) )² − ( E_{θ₀} l'(X|θ₀) )² = I(θ₀) − 0,

where in the last equality we used the definition of Fisher information and (3.0.2). Therefore, √n (θ̂ − θ₀) →_d N(0, I(θ₀)/(I(θ₀))²) = N(0, 1/I(θ₀)), which finishes the proof.
Let us compute the Fisher information for some particular distributions.

Example 1. The family of Bernoulli distributions B(p) has p.f.

    f(x|p) = pˣ (1 − p)^{1−x},

and taking the logarithm,

    log f(x|p) = x log p + (1 − x) log(1 − p).

The derivatives with respect to the parameter p are

    (∂/∂p) log f(x|p) = x/p − (1 − x)/(1 − p),
    (∂²/∂p²) log f(x|p) = −x/p² − (1 − x)/(1 − p)².

Then the Fisher information can be computed as

    I(p) = −E (∂²/∂p²) log f(X|p) = EX/p² + (1 − EX)/(1 − p)²
         = p/p² + (1 − p)/(1 − p)² = 1/p + 1/(1 − p) = 1/( p(1 − p) ).

The MLE of p is p̂ = X̄ and the asymptotic normality result states that

    √n (p̂ − p₀) → N( 0, p₀(1 − p₀) ),

which, of course, also follows directly from the CLT.

Example. The family of exponential distributions E(λ) has p.d.f.
    f(x|λ) = λ e^{−λx} for x ≥ 0, and 0 for x < 0,

and, therefore,

    log f(x|λ) = log λ − λx    ⇒    (∂²/∂λ²) log f(x|λ) = −1/λ².

This does not depend on X and we get

    I(λ) = −E (∂²/∂λ²) log f(X|λ) = 1/λ².

Therefore, the MLE λ̂ = 1/X̄ is asymptotically normal and

    √n (λ̂ − λ₀) → N(0, λ₀²).
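This last claim can be checked by simulation: across many replications, √n (λ̂ − λ₀) should have spread close to λ₀ = 1/√I(λ₀). The values of λ₀, n, and the number of replications below are illustrative.

```python
import numpy as np

# Check: sqrt(n) * (lambdahat - lambda0) has standard deviation ~ lambda0.
rng = np.random.default_rng(2)
lam0, n, reps = 1.5, 400, 20000
x = rng.exponential(1 / lam0, size=(reps, n))
lamhat = 1 / x.mean(axis=1)                 # the MLE for each replication
z = np.sqrt(n) * (lamhat - lam0)

print(z.std())   # close to lam0 = 1.5
```

At finite n the estimator carries a small O(1/n) bias, so the mean of z is small but not exactly zero; the asymptotic statement concerns the n → ∞ limit.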
8.701
0. Introduction
0.6 Particles
Introduction to Nuclear
and Particle Physics
Markus Klute - MIT
Introduction

[Figure: length scales; 10^-10 m, a few 10^-15 m, and 10^-15 m = 1 fm.]
Force Particles

[Image © Mattson Rosenbaum (on PBworks); excluded from the OCW Creative Commons license.]
Matter Particles

[Image © Mattson Rosenbaum (on PBworks); excluded from the OCW Creative Commons license.]
The Higgs Boson

[Images © CERN and © Nature Publishing Group; excluded from the OCW Creative Commons license.]
Elementary Particle

[Image © CERN; excluded from the OCW Creative Commons license.]
Timeline of Discoveries

[Image © The Economist Newspaper Limited; excluded from the OCW Creative Commons license.]
Composite Particles and Hadrons

Mesons: quark-antiquark states; bosons. Example: the pion.
Baryons: three-quark states; fermions. Example: the proton.

[Images courtesy of Arpad Horvath on Wikimedia (CC BY-SA), © CERN, and public domain.]
Nuclei

Bound state of protons and neutrons through the strong force. It can be described by the number of protons, Z (the atomic number), and the number of neutrons, N. The sum Z + N is denoted the atomic mass number, A.

[Image courtesy of Sjlegg on Wikimedia (CC BY-SA).]
MIT OpenCourseWare
https://ocw.mit.edu
8.701 Introduction to Nuclear and Particle Physics
Fall 2020
Turbulent Flow and Transport
1 Review of Fundamental Laws and Constitutive Equations

1.1 Fundamental laws governing continuum flow, expressed in terms of (i) material volumes (closed systems) and (ii) control volumes.
1.2 Mass conservation equation; integral and differential forms.
1.3 The equation of motion in terms of the stress tensor. Integral and differential forms. Stress tensor for a Newtonian fluid. The Navier−Stokes equation.
1.4
The energy equation (First Law) in integral and differential forms. The equations
for the kinetic, potential, and internal energies. Physical significance of the
various terms. The viscous dissipation function.
1.5 The Second Law of thermodynamics.

1.6 The equation for entropy production (Gibbs' equation). The Second Law as a statement that the viscous dissipation function is greater than or equal to zero.

1.7 The thermodynamic equations of state: differential expressions for all the thermodynamic properties of a fluid in terms of three measurable properties which are functions of pressure and temperature − the coefficient of thermal expansion, the isothermal compressibility, and the specific heat at constant pressure.

1.8 The differential equation for temperature and heat flux. Example: thermal effects due to viscous heating in Couette flow.

1.9 Some low−speed approximations: (i) "incompressible flow," (ii) the neglect of the isentropic compression term in the temperature equation. Criteria for validity.
1.10
The conservation equation for a molecular species. Mass transfer.
1.11
Introduction to the molecular basis of viscosity, heat conduction and diffusion in
terms of a simplified kinetic theory of gases. (This serves as an important but
relatively simple analogy for the random−walk transport that also occurs in
turbulence.)
References:
Sec. 1.1−1.4: Sonin, Fundamental Laws of Motion: Particles, Material Volumes, and Control Volumes; Sonin, The Equation of Motion for Viscous Fluids, both available at http://web.mit.edu/2.25/www/; White, Viscous Fluid Flow, 2nd ed. (1991): 59−89, 96−100; Pope, Ch. 2; or other books.
Secs. 1.4−1.6: Class notes plus summaries handed out by Sonin.
Sec. 1.7−1.8: Handout: Sonin, "The Thermodynamic Constitutive Equations and the Equation for Temperature."
Sec. 1.10: Class notes.
Sec. 1.11: See for example Bird, Curtiss, and Hirschfelder, Molecular Theory of Gases and Liquids (1954): 8−16, or Vincenti & Kruger, Introduction to Physical Gas Dynamics (1965): 15−20.
6.001 Structure and Interpretation of Computer Programs. Copyright © 2004 by Massachusetts Institute of Technology.
6.001 Notes: Section 7.1
Slide 7.1.1
In the past few lectures, we have seen a series of tools for
helping us create procedures to compute a variety of
computational processes. Before we move on to more complex
issues in computation, it is useful to step back and look at more
general issues in the process of creating procedures.
In particular, we want to spend a little bit of time talking about
good programming practices. This sounds a little bit like
lecturing about "motherhood and apple pie", that is, a bit like
talking about things that seem obvious, apparent, and boring in
that everyone understands and accepts them. However, it is
surprising how many "experienced" programmers don't follow
good programming practices, and we want to get you started on
the right track.
Slide 7.1.2
Thus, in this lecture we are going to look briefly at several
methodological aspects of creating procedures: designing the
components of our code, debugging our code when it doesn't
run correctly, writing documentation for our code, and testing
our code. We will highlight some standard practices for each
stage, and indicate why these practices lead to efficient and
effective generation of code.
Slide 7.1.3
Let’s start with the issue of how to design code, given a
problem statement. There are many ways to do this, but most
of them involve some combination of the following steps:
• Design of data structures
• Design of computational modules
• Design of interfaces between modules
Once we have laid out the general design of these stages, we
follow by creating specific instantiations of the actual
components. We have not yet talked about data structures in
Scheme, and will return to this issue in a few lectures. For our
purposes here, the key thing to note is that when designing a
computational system, it is extremely valuable to decide what kinds of information naturally should be grouped
together, and to then create structures that perform that grouping, while maintaining interfaces to the structures that
hide the details. For example, one thinks naturally of a vector as a pairing of an x and y coordinate. One wants to
be able to get out the coordinates when needed, but in many cases, one thinks naturally of manipulating a vector as
a unit. Similarly, one can imagine aggregating together a set of vectors, to form a polygon, and again one can think
of manipulating the polygon as a unit. Thus, a key stage in designing a computational system is determining the
natural data structures of the system.
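As a concrete sketch of this kind of grouping (the constructor and selector names here are illustrative, not taken from the lecture):

```scheme
;; A vector grouped as a single unit, behind an interface that hides
;; the representation (names are assumptions for illustration):
(define (make-vect x y) (list x y))   ; constructor
(define (vect-x v) (car v))           ; selector: x coordinate
(define (vect-y v) (cadr v))          ; selector: y coordinate

;; Clients manipulate vectors as units, never touching the list directly:
(define (vect-add a b)
  (make-vect (+ (vect-x a) (vect-x b))
             (+ (vect-y a) (vect-y b))))
```

If we later change the representation (say, from a list to a pair), only the constructor and selectors change; every procedure written in terms of them, like vect-add, is unaffected.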
Slide 7.1.4
A second stage in designing a computational system is deciding
how best to break the computation into modules or pieces. This
is often as much art as science, but there are some general
guidelines that help us separate out modules in our design. For
example, is there part of the problem that defines a computation
that is likely to be used many times? Are there parts of the
problem that can be conceptualized in terms of their behavior,
e.g. how they convert certain inputs into certain types of
outputs, without worrying about the details of how that is done?
Does this help us focus on other parts of the computation? Or
said a bit differently, can one identify parts of the computation
in terms of their role, and think about that role in the overall computation, without having to know details of the
computation?
If one can, these parts of the computation are good candidates for separate modules, since we can focus on their
use while ignoring the details of how they achieve that computation.
Slide 7.1.5
Finally, given that one can identify data structures, whose
information is to be manipulated; and stages of computation, in
which that information is transformed; one wants to decide the
overall flow of information between the modules. What types
of inputs does each module need? What types of data does each
module return? How does one ensure that the correct types are
provided, in the correct order?
These kinds of questions need to be addressed in designing the
overall flow between the computational modules.
Slide 7.1.6
This is perhaps more easily seen by thinking about an example
– and in fact you have already seen one such example, our
implementation of sqrt. When we implemented our
method for square roots, we actually engaged in many of these
stages. We didn’t worry about data structures, since we were
simply interested in numbers. We did, however, spend some
effort in separating out modules. Remember our basic
computation: we start with a guess; if it is good enough, we
stop; otherwise we make a new guess by averaging the current
guess, and the ratio of the target number and the guess, and
continue.
To design this system, we separated out several modules: the notion of averaging, the notion of measuring “good
enough”. We saw that some of these modules might themselves rely on other procedural abstractions; for example,
our particular version of “good enough” needed to use the absolute value procedure, though other versions might
not.
Slide 7.1.7
Once we had separated out these notions of different
computations: average and good-enough, we considered the
overall flow of information through the modules. Note by the
way that we can consider each of theses processes as a black
box abstraction, meaning that we can focus on using these
procedures without having to have already designed the specific
implementation of each.
Now what about the flow between these modules? In our case,
we began with a guess, and tested to see if it was good enough.
If it was, we could then stop, and just return the value of the
guess.
Slide 7.1.8
If it was not, then we needed to average the current guess and
the ratio of our target number to the guess.
Slide 7.1.9
And then we need to repeat the entire process, with this new
value as our new guess.
The point of laying out these modules, or black boxes, is that
we can use them to decide how to divide up the code, and how
to isolate details of a procedure from its use. As we saw when
we implemented our sqrt procedure, we can change details
of a procedure, such as average, without having to
change any of the procedures that use that particular
component. As well, the flow of information between the
modules helps guide us in the creation of the overall set of
procedures.
Thus, when faced with any new computational problem, we want to try to engage in the same exercise: block out
chunks of the computation that can be easily isolated; identify the inputs and outputs from each chunk; and lay out
the overall flow of information through the system. Then we can turn to implementing each of the units separately,
and testing the entire system while isolating the effects of each unit.
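The decomposition described above might be sketched as follows (the lecture's exact code is not reproduced in these notes; the name good-enuf? follows the text, and the tolerance is an assumption):

```scheme
(define (average a b) (/ (+ a b) 2))      ; an isolated, reusable module

(define (good-enuf? guess x)              ; itself relies on another
  (< (abs (- (* guess guess) x)) 0.0001)) ; abstraction: abs

(define (sqrt-iter guess x)
  (if (good-enuf? guess x)
      guess                                  ; good enough: return the guess
      (sqrt-iter (average guess (/ x guess)) ; otherwise average the guess with
                 x)))                        ; x/guess and repeat

(define (sqrt x) (sqrt-iter 1.0 x))
```

Because average and good-enuf? are separate black boxes, either can be replaced (say, with a tighter tolerance) without touching sqrt-iter.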
Slide 7.1.10
A second key element to good programming practice is code
documentation. Unfortunately, this is one of the least well-
practiced elements – far too often programmers are in such a
hurry to get things written that they skip by the documentation
stage. While this may seem reasonable at the time of code
creation, when the design choices are fresh in the program
creator’s mind, six months later when one is trying to read the
code (even one’s own), it may be very difficult to reconstruct
why certain choices were made. Indeed, in many commercial
programming settings, more time is spent on code maintenance
and modification than on code generation, yet without good
documentation it can be very difficult or inefficient to
understand existing code and change it.
As well, good documentation can serve as a valuable source of information about the behavior of each module,
enabling a programmer to maintain the isolation of the details of the procedural abstraction from the use of that
abstraction. This information can be of help when debugging procedures.
Slide 7.1.11
As with designing procedural modules, the creation of good
documentation is as much art as science. Nonetheless, here are
some standard elements of well-documented code. We are
going to illustrate each of these with an example.
Slide 7.1.12
First, describe the goal of the procedure. Is it intended to be part
of some other computation (as this helper function is)? If so,
what is the rough description of the process? Note that here we
have been a bit cryptic (in order to fit things on the slide) and
we might well want to say more about “successive refinement”
(though we could defer that to the documentation under the
improve procedure). We also identify the role of each
argument to the procedure.
Slide 7.1.13
Second, describe the types of values used in the computation.
In this case, the inputs or parameters are both numbers, and the
returned value is also a number. Actually, if we were more
careful here, we would require that X be a positive number, and
we would place a check somewhere to ensure that this is true.
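Such a check might be sketched as follows (the error message, and the helper sqrt-iter assumed to carry out the iteration, are illustrative assumptions):

```scheme
;; A guard enforcing the documented requirement that x be a
;; non-negative number (placement and wording are assumptions):
(define (sqrt x)
  (if (and (number? x) (>= x 0))
      (sqrt-iter 1.0 x)
      (error "sqrt: expected a non-negative number" x)))
```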
Slide 7.1.14
Third, describe constraints, either desired or required, on the
computation. Here, we know that squaring the guess should get
us something close to the target value, although we really don’t
guarantee this until we reach the termination stage.
Slide 7.1.15
And fourth, describe the expected state of the computation and
the goal at each stage in the process. For example, here we
indicate what good-enuf? should do, namely test if
our approximation is sufficiently accurate. Then we indicate
that if this is the case, we can stop and what value to return to
satisfy the contract of the entire procedure. And we indicate
how to continue the process, though we could probably say a
bit more about what improve should do.
Notice how we can use the documentation to check some
aspects of our procedure’s “contract”. Here, we have indicated
that the procedure should return a number. By examining the
if expression, we can see that in the consequent clause, if the input parameter guess is a number, then we
are guaranteed to return a number. For the alternative clause, we can use induction to reason that given numbers as
input, we also return a number, and hence the entire procedure returns a value of the correct type.
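Putting the four elements together, a documented version of the square-root helper might look like this sketch (the slide's exact text is not reproduced here; good-enuf? and improve are the modules named in the discussion):

```scheme
;; sqrt-helper: approximate the square root of x by successive refinement.
;;   guess -- number, the current approximation
;;   x     -- number (must be positive), the target value
;; Returns: a number whose square is close to x.
;; Constraint: squaring guess should approach x as the process continues.
(define (sqrt-helper guess x)
  (if (good-enuf? guess x)       ; is the approximation sufficiently accurate?
      guess                      ; yes: stop; this value satisfies the contract
      (sqrt-helper (improve guess x) x)))  ; no: continue with a better guess
```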
Slide 7.1.16
In general, taking care to meet each of the stages when you
create code will often ensure an easier time when you have to
refine or replace code. Getting into the habit of doing this
every time you write something, even if you are only minutes
away from some problem set deadline, will greatly improve
your productivity!
6.001 Notes: Section 7.2
Slide 7.2.1
While we would like to believe that the code we write will
always run correctly, the first time we try it, experience shows
that this is a fortunate happenstance. Typically, especially with
complex code, things will not work right, and we need to debug
our code. Debugging is in part an acquired skill – with lots of
practice you will develop your own preferred approach. Here,
we are going to describe some of the common sources of errors
in code, and standard tools for finding the causes of the errors
and fixing them.
Slide 7.2.2
A common and simple bug in code arises when we use an
unbound variable. From the perspective of Scheme, this
means that somewhere in our code we try to reference (or look
up the value of) a variable that does not have one. This can
occur for several reasons. The simplest is that we mistyped – a
spelling error. The solution in this case is pretty
straightforward – simply search through the code file using
editor tools to find the offending instance and correct it.
Slide 7.2.3
Sometimes, however, we are using a legal variable (that is, one
that we intended to hold some value) but the evaluator still
complains that this variable is unbound. How can that be?
Remember that in Scheme a variable gets bound to a value in
one of several ways. We may define it at “top level”, that is, we
may directly tell the interpreter to give a variable some value.
We may define it internally within some procedure. Or, we
may use it as a formal parameter of a procedure, in which case
it gets locally bound to a value when the procedure is applied.
In the last two cases, if we attempt to reference the variable
outside the scope of the binding, that is, somewhere outside the
bounds of the lambda expression in which the variable is being
used, we will get an unbound variable error. This means that we have tried to use a variable outside its legal
domain, and we need to correct this. This probably means we have a coding error, but we can isolate the problem
either by searching for instances of the variable in the code file, or by using the debugger.
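A minimal illustration of this scoping rule (the procedure and names are hypothetical):

```scheme
(define (double n) (* 2 n))  ; n is a formal parameter: bound only
                             ; while double is being applied
(double 4)                   ; n is bound to 4 during this application
;; n                         ; referencing n here, outside the scope of the
;;                           ; lambda, would signal an unbound variable error
```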
Slide 7.2.4
So what does a debugger do to help us find errors? Each
programming language will have its own flavor of debugger;
for an interpreted language like Scheme, the debugger actually
places us inside the state of the computation. That is, when an
error occurs, the debugger provides us access to the state of the
computation at the time of the error, including access to the
values of the variables within the computation. Moreover, we
can step around inside the environment of the computation: we
can work back up the chain of computational steps, examining
what values were produced during reductions (where
computation is reduced to a simpler expression), and examining
what values were produced during substitutions (where the computation was converted to a simpler version of
itself).
Slide 7.2.5
For example, here is a simple procedure, which we have called
with argument 2. Notice what happens when we hit the
unbound variable error and enter the debugger. We are placed
at the spot in the computation at which the error occurred. If
we choose to step back through the chain of evaluations, we can
see what expressions were reduced to get to this point, and what
recursive versions of the same problem were invoked in
reaching this stage.
In this case, we note that foo was initially called with
argument 2, and after a reduction through an if expression,
we arrived at an expression that contained within it a simpler
version of the same problem. This reduction stage repeated again, until we apparently reached the base case of the
if expression, where we hit the unbound variable. We can see in this simple case that our unbound error is
coming from within the body of foo and is in the base case of the decision process.
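The slide's code is not reproduced in these notes, but a procedure exhibiting exactly this behavior might look like the following sketch (the misspelled name is the hypothetical bug):

```scheme
(define (foo n)
  (if (= n 0)
      bsae                    ; misspelled name: unbound, but only reached
      (* n (foo (- n 1)))))   ; once the recursion hits the base case

;; (foo 2) reduces through the if, recurses to (foo 1), then (foo 0),
;; and only then signals something like "Unbound variable: bsae".
```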
Slide 7.2.6
A second class of errors deals with mistakes in syntax –
creating expressions that do not satisfy the programming
language’s rules for creating legal expressions. A simple one
of these is an expression in which the wrong number of
arguments is provided to the procedure. If this occurs while
attempting to evaluate the offending expression, we will usually
be thrown into the debugger – a system intended to help us
determine the source of the error. In Scheme, the debugger
provides us with information about the environment in which
the offending expression occurred. It supplies tools for
examining the values associated with variable names, and for
examining the sequence of expressions that have been
evaluated leading up to this error. By stepping through the frames of the debugger, we can often isolate where in
our code the incorrect expression resides.
Slide 7.2.7
A more insidious syntax error occurs when we use an
expression of the wrong type somewhere in our code.
If we use an expression whose value is not a procedure as the
first subexpression of a combination, we will get an error that
indicates we have tried to apply a non-procedure object. As
before, the debugger can often help us isolate the location of
this error, though it may not provide much insight into why an
incorrect object was used as a procedure. For that, we may
have to trace back through our code, to determine how this
value was supplied to the offending expression.
The harder error to isolate is one in which one of the argument
expressions to a combination is of the wrong type. The reason
this is harder to track down is that the cause of the creation of an incorrect object type may have occurred far
upstream, that is, some other part of our code may have created an incorrect object, which has been passed through
several levels of procedure calls before causing an error. Tracking down the original source of this error can be
difficult, as we need to chase our way back through the sequence of expression evaluations to find where we
accidentally created the wrong type of argument.
Slide 7.2.8
The most common sorts of errors, though, are structural ones.
This means that our code is syntactically valid – composed of
correctly phrased expressions, but the code does not compute
what we intended, because we have made an error somewhere
in the code design. This could be for a variety of reasons: we
started a recursive process with the wrong initial values, or we
are ending at the wrong place, or we are updating parameters
incorrectly, or we are using the wrong procedure somewhere,
and so on. Finding these errors is tougher, since the code may
run without causing a language error, but the results we get are
erroneous.
Slide 7.2.9
This is where having good test cases is important. For
example, when testing a recursive procedure, it is valuable to
try it using the base case values of the parameters, to ensure
that the procedure is terminating at the right place, and
returning the right value. It is also valuable to select input
parameter values that sample or span the range of legal values –
does it work with small values, with large values; does
changing the input value by a small increment cause the
expected change in output value?
Slide 7.2.10
And what do we do if we find we have one of these structure
errors? Well, our goal is to isolate the location of our
misconception within the code, and to do this, there are two
standard tools.
The most common one is to use a print or display expression –
that is, to insert into our code, expressions that will print out for
us useful information at different stages of the computation.
For example, we might insert a display expression within the
recursive loop of a procedure, which will print out information
about the values of parameters. This will allow us to check that
parameters are being updated correctly, and that end cases are
correctly seeking the right termination point. We might
similarly print out the values of intermediate computations within recursive loops, again to ascertain that the
computation is operating with the values we expect, and is computing the values we expect.
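As a small sketch of the display tactic (the procedure and the probe's wording are illustrative):

```scheme
(define (fact n)
  (display "fact called with n = ")  ; probe: show the parameter on each call
  (display n)
  (newline)
  (if (= n 0)
      1
      (* n (fact (- n 1)))))

(fact 3)  ; the probe prints n = 3, 2, 1, 0, confirming the parameter
          ; counts down correctly to the base case; the result is 6
```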
A related tool, supplied for example with Scheme, is a tracer. This allows us to ask the evaluator to inform us
about the calling conventions of procedures – that is, to print out the values of the parameters supplied before each
application of the procedure we designate, and the value returned by each such procedure call. This is similar to
our use of display expressions, but is handled automatically for us. It applies only to parameters of procedure calls,
however, so if we want to examine more detailed states of the computation, we need to fall back on the display
tactic.
In some cases, it may help to actually walk through the substitution model, that is, to see each step of the
evaluation. Many languages, including Scheme, provide a means for doing this – in our case called the stepper.
This is a mechanism that lets us control each step of the substitution model in the evaluation of the expression. It is
obviously tedious, but works best when we need to isolate a very specific spot at which an error is occurring, and
we don’t want to insert a ton of display expressions.
Perhaps the best way to see the role of these tools is to look at an example, which we do next.
6.001 Notes: Section 7.3
Slide 7.3.1
Let’s use an example of a debugging session to highlight these
ideas. This will be primarily to fix a structural error, but we
will see how the other tools come into play as we do this.
Suppose we want to compute an approximation to the sine
function. Here is a mathematical approximation that will give
us a pretty good solution. So let’s try coding this up.
Slide 7.3.2
So here is a first attempt at some code to do this. We will
assume that fact and small-enuf? already
exist. The basic idea behind this procedure is quite similar to
what we did for square roots. We start with a guess. We then
see how to improve the guess, in this case by computing the
next term in the approximation, which we would like to add in.
If this improvement is small enough, we are done and can
return the desired value. If not, we repeat the process with a
better guess, by adding in the improvement to the current guess.
Slide 7.3.3
Now, let’s try it out on some test cases. One nice test case is
the base case, of x equal to 0. That clearly works. Another
nice test case is when x is equal to pi, where we know the result
should also be close to 0. Oops! That didn’t work. Nor does
the code work for x equal to pi half. Both of these latter cases
give results that are much too large.
Slide 7.3.4
Okay, we need to figure out where our conceptual error lies.
Let’s try to isolate this by tracing through the computation. In
particular, we will add some display expressions that will show
us the state of the computation each time through the recursion.
Slide 7.3.5
And let’s try this again. Here we have used the test case of x
equal to pi. And we can see the trace of the computation. If we
compare this to the mathematical equation we can see one
problem. We really only want terms where n is odd, but clearly
we are getting all terms for n. So we need to fix this. Most
likely this is because we are not changing our parameters
properly.
Slide 7.3.6
So here is the correction. We will need to increment our
parameter by 2 each time, not by 1 – an easy mistake to make,
and to miss!
Slide 7.3.7
So let’s try this again. Hmm. We have gotten better as we are
only computing the odd terms for n, but we are still not right. If
we look again at the mathematical equation, we can see that we
should be alternating signs on each term. Or said another way,
the successive approximations should go up, then down, then
up, then down, and so on. Note that we could have also spotted
this if we had chosen to display the value of next at each
step.
So we need to keep track of some additional information, in this
case whether the term should be added or subtracted from the
current guess.
Slide 7.3.8
Well, we can handle that. We add another parameter to our
helper procedure, which keeps track of whether to add the term
(if the value is 1) or whether to subtract the term (if the value is
−1). And of course we will need to change how we update the
guess, and how we update the value of this parameter.
Slide 7.3.9
Oops! We blew it somewhere! We could enter the debugger to
locate the problem, but we can already guess that since we
changed the aux procedure, that must be the cause.
Slide 7.3.10
And clearly the solution is to make sure we call this procedure
with the right number of arguments. Notice that in this case it
is easy to spot this error, but in general, we should get into the
habit of checking all calls to a procedure when we alter its set
of parameters.
Slide 7.3.11
Now, if we try this on the test case of x equal pi, this works!
But if we try it on the test case of pi half, it doesn’t! The
answer should be close to 1, but we are getting something close
to -1. Note that this reinforces why we want to try a range of
test cases – if we had stopped with x equal pi, we would not
have spotted this problem.
Here is the bug. We started with the wrong initial value – a
common error. By fixing this, we can try again and …
Slide 7.3.13
… finally we get correct performance. Note how we have used
printing of values to isolate changes, as well as using the
debugger to find syntax errors.
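The lecture's code itself is not shown in these notes, but a version consistent with the final, corrected behavior described above might be sketched as follows (fact and small-enuf? are assumed to exist, as the text states; the parameter names and tolerance are illustrative):

```scheme
(define (fact n) (if (= n 0) 1 (* n (fact (- n 1)))))
(define (small-enuf? t) (< (abs t) 0.0001))

(define (sine x)
  ;; guess: current partial sum; n: exponent of the next term
  ;; (odd, incremented by 2 each time); sign: 1 to add, -1 to subtract
  (define (aux guess n sign)
    (let ((next (/ (expt x n) (fact n))))
      (if (small-enuf? next)
          guess
          (aux (+ guess (* sign next)) (+ n 2) (- sign)))))
  (aux x 3 -1))  ; correct initial values: first term is x itself,
                 ; the next term has exponent 3 and is subtracted

;; (sine 3.14159265)  ; close to 0
;; (sine 1.57079633)  ; close to 1
```

Each bug from the session corresponds to one spot here: the (+ n 2) increment, the alternating sign parameter, and the initial values passed to aux.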
Slide 7.3.14
In general, we want you to get into the habit of doing the same
things. Developing good programming methodology habits
now will greatly help you when you have to deal with large,
complex, bodies of code. Good programming discipline means
being careful and thorough in the creation and refinement of
code of all sizes and forms, so start exercising your
“programming muscles” now!
6.001 Notes: Section 7.4
Slide 7.4.1
One other tool that we have in our armamentarium of
debugging is the use of types. In particular, type specifications,
that is, constraints on what types of objects are passed as
arguments to procedures and what types of objects are returned
as values by procedures, can help us both in planning and
designing code, and in debugging existing code.
Here, we are going to briefly explore both of these ideas, both
to demonstrate why careful program practice can lead to
efficient generation of robust code; and to illustrate why
thinking about types of procedures and objects is a valuable
practice.
Slide 7.4.2
To motivate the idea of types as a tool in designing code, let's
consider an example. Suppose we want to create a procedure,
let's call it repeated, that will apply any other procedure
some specified number of times. Since this is a vague
description, let's look at a specific motivating example.
We saw earlier the idea that we could implement multiplication
as a successive set of additions, and that we could implement
exponentiation as a successive set of multiplications. If we look
at these two procedures, we can see that there is a general
pattern here. There is a base case value to return (0 in one case,
1 in the other). And there is the idea of applying an operation to
an input value and the result of repeating that process one fewer times. Repeated is intended to capture that
common pattern of operation.
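The two motivating procedures might be sketched like this (a hypothetical Python rendering of the Scheme idea; the names and the simple linear-recursive structure are illustrative):

```python
def mul(a, b):
    """Multiplication as repeated addition: base case value is 0."""
    if b == 0:
        return 0
    return a + mul(a, b - 1)

def power(a, b):
    """Exponentiation as repeated multiplication: base case value is 1."""
    if b == 0:
        return 1
    return a * power(a, b - 1)

print(mul(3, 4))    # 12
print(power(3, 4))  # 81
```

In both, the operation (addition or multiplication) is applied to an input value and to the result of repeating the process one fewer times; only the operation and the base case value differ.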
Slide 7.4.3
So here is what we envision: we want our repeated
procedure to take a procedure to repeat, and the number of
times to repeat it. It should return a procedure that will actually
do that, when applied to some value. Here we can see that the
procedure being applied would change in each case, and the
initial value to which to apply it would change, but otherwise
the overall operation is the same.
The question is: how do we create repeated, and why
does the call to repeated have that funny structure, with
two open parens?
Slide 7.4.4
First, what is the type of repeated?
Well, from the previous slide, we know that as given, it should
take two arguments. The first should be a procedure of one
argument. We don't necessarily know what type of argument
this procedure should take (in the two examples shown, the
input was a number, but we might want to be more general than
this). What we do know, is that whatever type of argument this
procedure takes, it needs to return a value of the same type,
since it is going to apply that procedure again to that value.
Hence the first argument to repeated must be a procedure
of type A to A.
The second argument to repeated must be an integer, since we can only apply an operation an integer
number of times. Actually, it should probably be a non-negative integer.
And as we argued, the returned object needs to be a procedure of the same type: A to A because the idea is to
use repeated recursively on itself.
Slide 7.4.5
Okay, now how does this help us in designing the actual
procedure?
We know the rough form that repeated should take. It
should have a test for the base case, which is when there are no
more repetitions to make. In the base case, it needs to do
something, which we have to figure out. And in the recursive
case, we expect to use repeated to solve the smaller
problem of repetition, plus some additional operations, which
we need to figure out.
Slide 7.4.6
For the base case, what do we know?
We know that by the type information, this must return a
procedure of a single argument that returns a value of the same
type.
We also know that if we are in the base case, there is really
nothing to do. We don't want to apply our procedure any more
times. Hence, we can deduce that we need to return a procedure
that serves as the identity: it simply returns whatever value was
passed in.
Slide 7.4.7
Now, what about the recursive case?
Well, the idea is to apply the input procedure to the result of
repeating the operation n-1 times. How do we use this idea to
figure out the correct code?
First, we know that whatever we write must have type A to
A by the specification of repeated.
Slide 7.4.8
Next we know that we want to apply the input procedure
proc to the result of solving the same problem, n-1 times.
So we ought to have something that has these pieces in it.
Slide 7.4.9
But let's check the types. We know that repeated has type
A to A, and the proc expects only an argument of type
A. So clearly we need to apply repeated to an argument
before passing the result on to proc. Hence we have the form
shown.
Note how this fairly complex piece of code can be easily
deduced by using types of procedures to determine interactions.
Of course, to be sure we did it right, we should now test this on
some test cases, for example, by running mul or exp on
known cases.
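Putting the pieces together, the deduced procedure can be sketched as follows (a hypothetical Python rendering — the course's code is Scheme, with lambda playing the same role):

```python
def repeated(proc, n):
    """Return a procedure that applies proc n times.

    Base case: nothing left to do, so return the identity procedure.
    Recursive case: apply proc to the result of repeating n-1 times."""
    if n == 0:
        return lambda x: x
    return lambda x: proc(repeated(proc, n - 1)(x))

square = lambda x: x * x
# The "two open parens": repeated returns a procedure, which we then apply.
print(repeated(square, 2)(3))  # (3^2)^2 = 81
```

Testing it on known cases, as suggested: applying "add one" five times to 0 should give 5, and repeating any procedure zero times should leave its argument unchanged.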
Slide 7.4.10
A second way that types can help us is in debugging code. In
particular, we can use the information about types of arguments
and types of return values explicitly to check that procedures
are interacting correctly. And in some cases, where there are
constraints on the actual values being returned, we can also
enforce a check.
Slide 7.4.11
As an example, here is our sqrt code from before. One of
the conditions we have is that the input arguments need to be
numbers. And we could check that numbers are being correctly
passed in by inserting an explicit check. In this case, it is
probably redundant since the only code that calls sqrt-helper
is itself, but in general, when multiple procedures
might be involved, you can see how this check is valuable.
Note that one can insert this check only while debugging, as a
tool for deducing which procedure is incorrectly supplying
arguments. But one can also use it regularly, to ensure robust
operation of the code.
Clearly one could add a check on the return value in a similar fashion.
Slide 7.4.12
But there are other things one could use to ensure correct
operation. For example, the number whose square root we are
seeking should be a positive number, and we could check that
as shown.
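Both kinds of check — on the type and on the value of an argument — can be sketched like this (a hypothetical Python rendering; the helper name, tolerance, and messages are illustrative):

```python
def sqrt_helper(x, guess):
    """Newton's-method square root with explicit argument checks."""
    # Type check: catch a caller that passes a non-number.
    assert isinstance(x, (int, float)), "sqrt_helper: x must be a number"
    # Value check: the radicand should be a positive number.
    assert x > 0, "sqrt_helper: x must be positive"
    if abs(guess * guess - x) < 1e-10:
        return guess
    return sqrt_helper(x, (guess + x / guess) / 2)

print(round(sqrt_helper(2.0, 1.0), 6))  # 1.414214
```

A similar assertion on the return value would catch the opposite failure: a procedure producing a result outside its declared type.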
Thus we see that types also serve as a useful tool in good
programming methodology.
Slide 7.4.13
To summarize, we have seen a set of tools for good
programming practices: ways of designing code, debugging
code, evaluating code, and using knowledge of code structure
to guide the design. | https://ocw.mit.edu/courses/6-001-structure-and-interpretation-of-computer-programs-spring-2005/03e0268a995c2575b1039dd867c8068b_lecture7webhand.pdf |
MIT OpenCourseWare
http://ocw.mit.edu
6.641 Electromagnetic Fields, Forces, and Motion, Spring 2005
Please use the following citation format:
Markus Zahn, 6.641 Electromagnetic Fields, Forces, and Motion, Spring
2005. (Massachusetts Institute of Technology: MIT OpenCourseWare).
http://ocw.mit.edu (accessed MM DD, YYYY). License: Creative
Commons Attribution-Noncommercial-Share Alike.
Note: Please use the actual date you accessed this material in your citation.
For more information about citing these materials or our Terms of Use, visit:
http://ocw.mit.edu/terms
6.641, Electromagnetic Fields, Forces, and Motion
Prof. Markus Zahn
Lecture 1: Integral Form of Maxwell’s Equations
I. Maxwell’s Equations in Integral Form in Free Space
1. Faraday’s Law
$$\oint_C \mathbf{E}\cdot d\mathbf{s} \;=\; -\,\frac{d}{dt}\int_S \mu_0\,\mathbf{H}\cdot d\mathbf{a}$$
(circulation of E on the left; magnetic flux rate of change on the right)

µ0 = 4π × 10⁻⁷ henries/meter [magnetic permeability of free space]

EQS form: $\oint_C \mathbf{E}\cdot d\mathbf{s} = 0$ (Kirchhoff's Voltage Law, conservative electric field)

MQS circuit form: $v = L\,\dfrac{di}{dt}$ (inductor)
2. Ampère's Law (with displacement current)
$$\oint_C \mathbf{H}\cdot d\mathbf{s} \;=\; \int_S \mathbf{J}\cdot d\mathbf{a} \;+\; \frac{d}{dt}\int_S \varepsilon_0\,\mathbf{E}\cdot d\mathbf{a}$$
(circulation of H = conduction current + displacement current)

MQS form: $\oint_C \mathbf{H}\cdot d\mathbf{s} = \int_S \mathbf{J}\cdot d\mathbf{a}$

EQS circuit form: $i = C\,\dfrac{dv}{dt}$ (capacitor)
3. Gauss' Law for Electric Field
$$\oint_S \varepsilon_0\,\mathbf{E}\cdot d\mathbf{a} = \int_V \rho\,dV$$
$$\varepsilon_0 \approx 8.854\times 10^{-12}\ \text{farads/meter} \approx \frac{10^{-9}}{36\pi}$$
$$c = \frac{1}{\sqrt{\varepsilon_0\mu_0}} \approx 3\times 10^{8}\ \text{meters/second (speed of electromagnetic waves in free space)}$$
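The wave-speed relation can be checked numerically (a quick sketch, not part of the original notes):

```python
import math

mu0 = 4 * math.pi * 1e-7   # magnetic permeability of free space [H/m]
eps0 = 8.854e-12           # permittivity of free space [F/m]

c = 1 / math.sqrt(eps0 * mu0)  # speed of electromagnetic waves in free space
print(f"c = {c:.4e} m/s")      # about 3 x 10^8 m/s
```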
4. Gauss’ Law for Magnetic Field
(cid:118)∫ µ 0H da = 0
i
S
In free space:
B = µ H0
magnetic
flux
density
(Teslas)
magnetic
field
intensity
(amperes/meter)
5. Conservation of Charge
Take Ampère's Law with displacement current and let the contour C → 0:
$$\lim_{C\to 0}\oint_C \mathbf{H}\cdot d\mathbf{s} = 0 = \oint_S \mathbf{J}\cdot d\mathbf{a} + \frac{d}{dt}\oint_S \varepsilon_0\,\mathbf{E}\cdot d\mathbf{a}$$
Substituting Gauss' law, $\oint_S \varepsilon_0\,\mathbf{E}\cdot d\mathbf{a} = \int_V \rho\,dV$, gives
$$\oint_S \mathbf{J}\cdot d\mathbf{a} + \frac{d}{dt}\int_V \rho\,dV = 0$$
(total current leaving the volume through the surface plus the rate of change of total charge inside the volume is zero).

6. Lorentz Force Law
$$\mathbf{f} = q\left(\mathbf{E} + \mathbf{v}\times\mu_0\mathbf{H}\right)$$
II. Electric Field from Point Charge
$$\oint_S \varepsilon_0\,\mathbf{E}\cdot d\mathbf{a} = \varepsilon_0 E_r\,4\pi r^2 = q \quad\Rightarrow\quad E_r = \frac{q}{4\pi\varepsilon_0 r^2}$$
$$T\sin\theta = f_c = \frac{q^2}{4\pi\varepsilon_0 r^2}\,,\qquad T\cos\theta = Mg$$
$$\tan\theta = \frac{q^2}{4\pi\varepsilon_0 r^2\,Mg} = \frac{r}{2l} \quad\Rightarrow\quad q = \left[\frac{2\pi\varepsilon_0 r^3 Mg}{l}\right]^{1/2}$$
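A quick numerical consistency check of this result (the values of M, l, and r below are illustrative, not from the notes):

```python
import math

eps0 = 8.854e-12           # permittivity of free space [F/m]
g = 9.8                    # gravitational acceleration [m/s^2]
M, l, r = 0.01, 1.0, 0.05  # mass [kg], string length [m], separation [m] -- illustrative

# q = [2*pi*eps0*r^3*M*g / l]^(1/2), from the force balance above
q = math.sqrt(2 * math.pi * eps0 * r**3 * M * g / l)

# Consistency: tan(theta) = q^2/(4*pi*eps0*r^2*M*g) should equal r/(2l)
tan_theta = q**2 / (4 * math.pi * eps0 * r**2 * M * g)
print(tan_theta, r / (2 * l))  # the two agree
```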
III. Faraday Cage
$$\oint_S \mathbf{J}\cdot d\mathbf{a} = i = -\,\frac{d}{dt}\int_V \rho\,dV = -\,\frac{d}{dt}(-q) = \frac{dq}{dt}$$
$$\int i\,dt = q$$
IV. Boundary Conditions
1. Gauss’ Continuity Condition
Courtesy of Krieger Publishing. Used with permission.
$$\oint_S \varepsilon_0\,\mathbf{E}\cdot d\mathbf{a} = \int_S \sigma_s\,dS \quad\Rightarrow\quad \varepsilon_0\left(E_{2n} - E_{1n}\right)dS = \sigma_s\,dS$$
$$\varepsilon_0\left(E_{2n} - E_{1n}\right) = \sigma_s \quad\Rightarrow\quad \mathbf{n}\cdot\left[\varepsilon_0\left(\mathbf{E}_2 - \mathbf{E}_1\right)\right] = \sigma_s$$
2. Continuity of Tangential E
Courtesy of Krieger Publishing. Used with permission.
$$\oint_C \mathbf{E}\cdot d\mathbf{s} = \left(E_{1t} - E_{2t}\right)dl = 0 \quad\Rightarrow\quad E_{1t} - E_{2t} = 0$$
$$\mathbf{n}\times\left(\mathbf{E}_1 - \mathbf{E}_2\right) = 0$$
Equivalent to $\Phi_1 = \Phi_2$ along the boundary.
3. Normal H
$$\nabla\cdot\mu_0\mathbf{H} = 0 \quad\Rightarrow\quad \oint_S \mu_0\,\mathbf{H}\cdot d\mathbf{a} = 0$$
$$\mu_0\left(H_{an} - H_{bn}\right)A = 0 \quad\Rightarrow\quad H_{an} = H_{bn}\,,\qquad \mathbf{n}\cdot\left[\mathbf{H}^a - \mathbf{H}^b\right] = 0$$
4. Tangential H
$$\nabla\times\mathbf{H} = \mathbf{J} \quad\Rightarrow\quad \oint_C \mathbf{H}\cdot d\mathbf{s} = \int_S \mathbf{J}\cdot d\mathbf{a}$$
$$H_{bt}\,ds - H_{at}\,ds = K\,ds \quad\Rightarrow\quad H_{bt} - H_{at} = K\,,\qquad \mathbf{n}\times\left[\mathbf{H}^a - \mathbf{H}^b\right] = \mathbf{K}$$
5. Conservation of Charge Boundary Condition
$$\nabla\cdot\mathbf{J} + \frac{\partial\rho}{\partial t} = 0 \quad\Rightarrow\quad \oint_S \mathbf{J}\cdot d\mathbf{a} + \frac{d}{dt}\int_V \rho\,dV = 0$$
$$\mathbf{n}\cdot\left[\mathbf{J}^a - \mathbf{J}^b\right] + \frac{\partial\sigma_s}{\partial t} = 0$$
Kepler’s Second Law
By studying the Danish astronomer Tycho Brahe’s data about the motion of the planets,
Kepler formulated three empirical laws; two of them can be stated as follows:
Second Law A planet moves in a plane, and the radius vector (from the sun to the
planet) sweeps out equal areas in equal times.
First Law The planet’s orbit in that plane is an ellipse, with the sun at one focus.
From these laws, Newton deduced that the force keeping the planets in their orbits had
magnitude 1/d², where d was the distance to the sun; moreover, it was directed toward the
sun, or as was said, central, since the sun was placed at the origin.
Using a little vector analysis (without coordinates), this section is devoted to showing
that the Second Law is equivalent to the force being central.
It is harder to show that an elliptical orbit implies the magnitude of the force
is of the form K/d2, and vice-versa; this uses vector analysis in polar coordinates
and requires the solution of non-linear differential equations.
1. Differentiation of products of vectors
Let r(t) and s(t) be two differentiable vector functions in 2- or 3-space. Then
(1)
$$\frac{d}{dt}(\mathbf{r}\cdot\mathbf{s}) = \frac{d\mathbf{r}}{dt}\cdot\mathbf{s} + \mathbf{r}\cdot\frac{d\mathbf{s}}{dt}\,;\qquad
\frac{d}{dt}(\mathbf{r}\times\mathbf{s}) = \frac{d\mathbf{r}}{dt}\times\mathbf{s} + \mathbf{r}\times\frac{d\mathbf{s}}{dt}\,.$$
These rules are just like the product rule for differentiation. Be careful in the second
rule to get the multiplication order correct on the right, since $\mathbf{a}\times\mathbf{b} \neq \mathbf{b}\times\mathbf{a}$ in general.
The two rules can be proved by writing everything out in terms of i , j , k components and
differentiating. They can also be proved directly from the definition of derivative, without
resorting to components, as follows:
Let t increase by Δt. Then r increases by Δr, and s by Δs, and the corresponding change
in r · s is given by
Δ(r · s) = (r + Δr) · (s + Δs) − r · s ,
so if we expand the right side out and divide all terms by Δt, we get
$$\frac{\Delta(\mathbf{r}\cdot\mathbf{s})}{\Delta t} = \frac{\Delta\mathbf{r}}{\Delta t}\cdot\mathbf{s} + \mathbf{r}\cdot\frac{\Delta\mathbf{s}}{\Delta t} + \frac{\Delta\mathbf{r}}{\Delta t}\cdot\Delta\mathbf{s}\,.$$
Now let Δt → 0; then Δs → 0 since s(t) is continuous, and we get the first equation in (1).
The second equation in (1) is proved the same way, replacing · by × everywhere.
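Both rules can also be checked numerically with finite differences; a sketch (the sample vector functions and the step size are chosen arbitrarily):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Sample vector functions r(t), s(t) and their exact derivatives.
r  = lambda t: (t, t**2, 1.0)
dr = lambda t: (1.0, 2*t, 0.0)
s  = lambda t: (t**3, 1.0, t)
ds = lambda t: (3*t**2, 0.0, 1.0)

t, h = 0.7, 1e-6
# Central-difference derivative of r x s at t.
num = tuple((p - m) / (2*h)
            for p, m in zip(cross(r(t+h), s(t+h)), cross(r(t-h), s(t-h))))
# Product rule: dr/dt x s + r x ds/dt.
exact = tuple(p + q for p, q in zip(cross(dr(t), s(t)), cross(r(t), ds(t))))
print(max(abs(p - q) for p, q in zip(num, exact)))  # close to zero
```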
2. Kepler’s second law and the central force. To show that the force being
central (i.e., directed toward the sun) is equivalent to Kepler's second law, we need to
translate that law into calculus. “Sweeps out equal areas in equal times” means:
the radius vector sweeps out area at a constant rate .
The first thing therefore is to obtain a mathematical expression for this rate. Referring
to the picture, we see that as the time increases from t to t + Δt, the corresponding change
in the area A is given approximately by
$$\Delta A \approx \text{area of the triangle} = \tfrac{1}{2}\,|\mathbf{r}\times\Delta\mathbf{r}|\,,$$
since the triangle has half the area of the parallelogram formed by r and Δr; thus,
$$2\,\frac{\Delta A}{\Delta t} \approx \left|\mathbf{r}\times\frac{\Delta\mathbf{r}}{\Delta t}\right|,$$
and as $\Delta t \to 0$, this becomes
(2)
$$2\,\frac{dA}{dt} = \left|\mathbf{r}\times\frac{d\mathbf{r}}{dt}\right| = |\mathbf{r}\times\mathbf{v}|\,,\qquad \text{where } \mathbf{v} = \frac{d\mathbf{r}}{dt}\,.$$
[Figure: the radius vectors r and r + Δr, with increment Δr]
Using (2), we can interpret Kepler’s second law mathematically. Since the area is swept
out at a constant rate, dA/dt is constant, so according to (2),
(3) $|\mathbf{r}\times\mathbf{v}|$ is a constant.
Moreover, since Kepler’s law says r lies in a plane, the velocity vector v also lies in the same
plane, and therefore
(4)
r × v has constant direction (perpendicular to the plane of motion). | https://ocw.mit.edu/courses/18-02sc-multivariable-calculus-fall-2010/040b72c08d4f0ef67faf085066e1cc71_MIT18_02SC_MNotes_k.pdf |
Since the direction and magnitude of r × v are both constant,
(5)
$$\mathbf{r}\times\mathbf{v} = \mathbf{K}, \text{ a constant vector,}$$
and from this we see that
(6)
$$\frac{d}{dt}(\mathbf{r}\times\mathbf{v}) = 0\,.$$
But according to the rule (1) for differentiating a vector product,
(7)
$$\frac{d}{dt}(\mathbf{r}\times\mathbf{v}) = \mathbf{v}\times\mathbf{v} + \mathbf{r}\times\mathbf{a} = \mathbf{r}\times\mathbf{a}\,,\qquad \text{where } \mathbf{a} = \frac{d\mathbf{v}}{dt}\,,$$
since s × s = 0 for any vector s.
Now (6) and (7) together imply
(8)
r × a = 0,
which shows that the acceleration vector a is parallel to r, but in the opposite direction,
since the planets do go around the sun, not shoot off to infinity.
Thus a is directed toward the center (i.e., the sun), and since F = ma, the force F is
also directed toward the sun. (Note that “center” does not mean the center of the elliptical
orbits, but the mathematical origin, i.e., the tail of the radius vector r, which we are taking
to be the sun’s position.)
The reasoning is reversible, so for motion under any type of central force, the path of
motion will lie in a plane and area will be swept out by the radius vector at a constant rate.
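As a numerical illustration (not part of the original notes), one can integrate planar motion under an inverse-square central force and watch r × v stay constant; the initial conditions and step size below are arbitrary:

```python
import math

def accel(r):
    # Inverse-square central force directed toward the origin (unit strength).
    d = math.hypot(r[0], r[1])
    return (-r[0] / d**3, -r[1] / d**3)

def cross_z(r, v):
    # z-component of r x v for motion in the plane.
    return r[0] * v[1] - r[1] * v[0]

r, v, dt = [1.0, 0.0], [0.0, 1.2], 1e-3
K0 = cross_z(r, v)

a = accel(r)
for _ in range(20000):  # leapfrog (kick-drift-kick) integration
    v = [v[i] + 0.5 * dt * a[i] for i in range(2)]
    r = [r[i] + dt * v[i] for i in range(2)]
    a = accel(r)
    v = [v[i] + 0.5 * dt * a[i] for i in range(2)]

print(K0, cross_z(r, v))  # the sweep rate 2 dA/dt = |r x v| is unchanged
```

Each "kick" adds a multiple of r to v and each "drift" adds a multiple of v to r, so r × v is preserved exactly at every step, mirroring the argument in the text.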
MIT OpenCourseWare
http://ocw.mit.edu
18.02SC Multivariable Calculus
Fall 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/18-02sc-multivariable-calculus-fall-2010/040b72c08d4f0ef67faf085066e1cc71_MIT18_02SC_MNotes_k.pdf |
1.25. The Quantum Group Uq (sl2). Let us consider the Lie algebra
sl2. Recall that there is a basis h, e, f ∈ sl2 such that [h, e] = 2e, [h, f] =
−2f, [e, f] = h. This motivates the following definition.
Definition 1.25.1. Let q ∈ k, q ≠ ±1. The quantum group Uq(sl2) is
generated by elements E, F and an invertible element K with defining
relations
$$KEK^{-1} = q^{2}E,\qquad KFK^{-1} = q^{-2}F,\qquad [E, F] = \frac{K - K^{-1}}{q - q^{-1}}.$$
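As an illustration (not in the original text), these relations can be verified in the standard 2-dimensional representation, taking E and F to be the usual raising and lowering matrices and K = diag(q, q⁻¹):

```python
from fractions import Fraction

q = Fraction(3, 2)  # any q != +-1 works for this check

# 2x2 matrices as nested lists
E = [[0, 1], [0, 0]]
F = [[0, 0], [1, 0]]
K = [[q, 0], [0, 1 / q]]
Kinv = [[1 / q, 0], [0, q]]

def mat(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

# KEK^{-1} = q^2 E  and  KFK^{-1} = q^{-2} F
print(mat(mat(K, E), Kinv) == scale(q**2, E))    # True
print(mat(mat(K, F), Kinv) == scale(q**-2, F))   # True

# [E, F] = (K - K^{-1}) / (q - q^{-1})
comm = sub(mat(E, F), mat(F, E))
print(comm == scale(1 / (q - 1 / q), sub(K, Kinv)))  # True
```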
Theorem 1.25.2. There exists a unique Hopf algebra structure on
Uq(sl2), given by
• Δ(K) = K ⊗ K (thus K is a grouplike element);
• Δ(E) = E ⊗ K + 1 ⊗ E;
• Δ(F) = F ⊗ 1 + K⁻¹ ⊗ F
(thus E, F are skew-primitive elements).
Exercise 1.25.3. Prove Theorem 1.25.2.
Remark 1.25.4. Heuristically, K = q^h, and thus
$$\lim_{q\to 1}\frac{K - K^{-1}}{q - q^{-1}} = h.$$
So in the limit q → 1, the relations of Uq(sl2) degenerate into the
relations of U(sl2), and thus Uq(sl2) should be viewed as a Hopf algebra
deformation of the enveloping algebra U(sl2). In fact, one can make
this heuristic idea into a precise statement, see e.g. [K].
If q is a root of unity, one can also define a finite dimensional version
of Uq(sl2). Namely, assume that the order of q is an odd number ℓ. Let
uq(sl2) be the quotient of Uq(sl2) by the additional relations
$$E^{\ell} = F^{\ell} = K^{\ell} - 1 = 0.$$
Then it is easy to show that uq(sl2) is a Hopf algebra (with the
coproduct inherited from Uq(sl2)). This Hopf algebra is called the small
quantum group attached to sl2.
1.26. The quantum group Uq(g). The example of the previous subsection
can be generalized to the case of any simple Lie algebra. Namely,
let g be a simple Lie algebra of rank r, and let A = (aij) be its Cartan
matrix. Recall that there exist unique relatively prime positive integers
di, i = 1, . . . , r such that di aij = dj aji. Let q ∈ k, q ≠ ±1.
Definition 1.26.1.
• The q-analog of n is
$$[n]_q = \frac{q^{n} - q^{-n}}{q - q^{-1}}.$$
• The q-analog of the factorial is
$$[n]_q! = \prod_{l=1}^{n}[l]_q = \frac{(q - q^{-1})(q^{2} - q^{-2})\cdots(q^{n} - q^{-n})}{(q - q^{-1})^{n}}.$$
Definition 1.26.2. The quantum group Uq(g) is generated by elements
Ei, Fi and invertible elements Ki, with defining relations
$$K_iK_j = K_jK_i,\qquad K_iE_jK_i^{-1} = q^{a_{ij}}E_j,\qquad K_iF_jK_i^{-1} = q^{-a_{ij}}F_j,$$
$$[E_i, F_j] = \delta_{ij}\,\frac{K_i^{d_i} - K_i^{-d_i}}{q^{d_i} - q^{-d_i}}\,,$$
and the q-Serre relations:
(1.26.1)
$$\sum_{l=0}^{1-a_{ij}} \frac{(-1)^{l}}{[l]_{q_i}!\,[1-a_{ij}-l]_{q_i}!}\;E_i^{1-a_{ij}-l}E_jE_i^{\,l} = 0,\qquad i \neq j,$$
and
(1.26.2)
$$\sum_{l=0}^{1-a_{ij}} \frac{(-1)^{l}}{[l]_{q_i}!\,[1-a_{ij}-l]_{q_i}!}\;F_i^{1-a_{ij}-l}F_jF_i^{\,l} = 0,\qquad i \neq j,$$
where $q_i = q^{d_i}$.
More generally, the same definition can be made for any symmetrizable
Kac-Moody algebra g.
Theorem 1.26.3. (see e.g. [CP]) There exists a unique Hopf algebra
structure on Uq(g), given by
• Δ(Ki) = Ki ⊗ Ki;
• Δ(Ei) = Ei ⊗ Ki + 1 ⊗ Ei;
• Δ(Fi) = Fi ⊗ 1 + Ki⁻¹ ⊗ Fi.
Remark 1.26.4. Similarly to the case of sl2, in the limit q → 1, these
relations degenerate into the relations for U(g), so Uq(g) should be
viewed as a Hopf algebra deformation of the enveloping algebra U(g).
1.27. Categorical meaning of skew-primitive elements. We have
seen that many interesting Hopf algebras contain nontrivial skew-primitive
elements. In fact, the notion of a skew-primitive element has a categorical
meaning. Namely, we have the following proposition.
Proposition 1.27.1. Let g, h be grouplike elements of a coalgebra
C, and Prim_{h,g}(C) be the space of skew-primitive elements of type
h, g. Then the space Prim_{h,g}(C)/k(h − g) is naturally isomorphic to
Ext¹(g, h), where g, h are regarded as 1-dimensional right C-comodules.
Proof. Let V be a 2-dimensional H-comodule, such that we have an
exact sequence
$$0 \to h \to V \to g \to 0.$$
Then V has a basis v0, v1 such that
$$\pi(v_0) = v_0 \otimes h,\qquad \pi(v_1) = v_0 \otimes x + v_1 \otimes g.$$
The condition that this is a comodule yields that x is a skew-primitive
element of type (h, g). So any extension defines a skew-primitive element
and vice versa. Also, we can change the basis by v0 → v0,
v1 → v1 + λv0, which modifies x by adding a trivial skew-primitive
element. This implies the result.
Example 1.27.2. The category C of finite dimensional comodules over
uq(sl2) is an example of a finite tensor category in which there are
objects V such that V∗∗ is not isomorphic to V. Namely, in this
category, the functor V ↦ V∗∗ is defined by the squared antipode
S², which is conjugation by K: S²(x) = KxK⁻¹. Now, we have
Ext¹(K, 1) = Y = ⟨E, FK⟩, a 2-dimensional space. The set of
isomorphism classes of nontrivial extensions of K by 1 is therefore the
projective line PY. The operator of conjugation by K acts on Y with
eigenvalues q², q⁻², hence nontrivially on PY. Thus for a generic
extension V, the object V∗∗ is not isomorphic to V.
However, note that some power of the functor ∗∗ on C is isomorphic
(in fact, monoidally) to the identity functor (namely, this power is the
order of q). We will later show that this property holds in any finite
tensor category.
Note also that in the category C, V∗∗ ≅ V if V is simple. This clearly
has to be the case in any tensor category where all simple objects
are invertible. We will also show (see Proposition 1.41.1 below) that
this is the case in any semisimple tensor category. An example of a
tensor category in which V∗∗ is not always isomorphic to V even for
simple V is the category of finite dimensional representations of the
Yangian H = Y(g) of a simple complex Lie algebra g, see [CP,
12.1]. Namely, for any finite dimensional representation V of H and
any complex number z one can define the shifted representation V(z)
(such that V(0) = V). Then V∗∗ ≅ V(2h∨), where h∨ is the dual
Coxeter number of g, see [CP, p.384]. If V is a non-trivial irreducible
finite dimensional representation then V(z) ≇ V for z ≠ 0. Thus,
V∗∗ ≇ V. Moreover, we see that the functor ∗∗ has infinite order even
when restricted to simple objects of C.
However, the representation category of the Yangian is infinite, and
the answer to the following question is unknown to us.
Question 1.27.3. Does there exist a finite tensor category, in which
there is a simple object V such that V ∗∗ is not isomorphic to V ? (The
answer is unknown to the authors).
Theorem 1.27.4. Assume that k has characteristic 0. Let C be a finite
ring category over k with simple object 1. Then Ext¹(1, 1) = 0.
Proof. Assume the contrary, and suppose that V is a nontrivial extension
of 1 by itself. Let P be the projective cover of 1. Then Hom(P, V)
is a 2-dimensional space, with a filtration induced by the filtration on
V, and both quotients naturally isomorphic to E := Hom(P, 1). Let
v0, v1 be a basis of Hom(P, V) compatible to the filtration, i.e. v0 spans
the 1-dimensional subspace defined by the filtration. Let A = End(P)
(this is a finite dimensional algebra). Let ε : A → k be the character
defined by the (right) action of A on E. Then the matrix of a ∈ A in
the basis v0, v1 has the form
(1.27.1)
$$[a]_1 = \begin{pmatrix} \varepsilon(a) & \chi_1(a) \\ 0 & \varepsilon(a) \end{pmatrix}$$
where χ1 ∈ A∗ is nonzero. Since a ↦ [a]₁ is a homomorphism, χ1 is a
derivation: χ1(xy) = χ1(x)ε(y) + ε(x)χ1(y).
Now consider the representation V ⊗ V. Using the exactness of
tensor products, we see that the space Hom(P, V ⊗ V) is 4-dimensional,
and has a 3-step filtration, with successive quotients E, E ⊕ E, E, and
basis v00; v01, v10; v11, consistent with this filtration. The matrix of
a ∈ End(P) in this basis is
(1.27.2)
$$[a]_2 = \begin{pmatrix}
\varepsilon(a) & \chi_1(a) & \chi_1(a) & \chi_2(a) \\
0 & \varepsilon(a) & 0 & \chi_1(a) \\
0 & 0 & \varepsilon(a) & \chi_1(a) \\
0 & 0 & 0 & \varepsilon(a)
\end{pmatrix}.$$
Since a ↦ [a]₂ is a homomorphism, we find
$$\chi_2(ab) = \varepsilon(a)\chi_2(b) + \chi_2(a)\varepsilon(b) + 2\chi_1(a)\chi_1(b).$$
We can now proceed further (i.e. consider V ⊗ V ⊗ V etc.) and define for
every positive n, a linear function χn ∈ A∗ which satisfies the equation
$$\chi_n(ab) = \sum_{j=0}^{n}\binom{n}{j}\chi_j(a)\chi_{n-j}(b),$$
where χ0 = ε. Thus for any s ∈ k, we can