is called the exceptional locus.

Next, observe that Â^n is covered by n affine charts. More explicitly, the i-th chart Â^n_i ⊆ Â^n ⊆ A^n × P^{n−1} satisfies Â^n_i ≅ A^n, with coordinates (t^i_1, . . . , t^i_{i−1}, x_i, t^i_{i+1}, . . . , t^i_n), where x_j = t^i_j x_i for j ≠ i. On this chart the defining equations become: if P(x_1, . . . , x_n) ∈ I, then P(t_1 x_i, . . . , t_{i−1} x_i, x_i, . . .) ∈ I_{X̃ ∩ Â^n_i}.
Example 1. Let X = (y² = x³ + x²) ⊆ A². Suppose y = tx; then t²x² = x³ + x² ⟹ t² = x + 1, so the preimage of (0, 0) is {(t = ±1, x = 0)}. Thus X is not normal, because the map X̃ → X is not 1-to-1 even though deg(X̃ → X) = 1 (recall that a finite birational morphism to a normal variety is an isomorphism).
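In the chart y = tx with coordinates (x, t), the computation in Example 1 can be written out in full (a routine check of the strict transform, spelled out here for convenience):

```latex
\[
  t^2 x^2 = x^3 + x^2
  \quad\Longleftrightarrow\quad
  x^2\left(t^2 - x - 1\right) = 0 .
\]
% The factor x^2 cuts out the exceptional divisor; the strict transform is
\[
  \widetilde{X} = \{\, t^2 = x + 1 \,\},
\]
% which is smooth and meets the exceptional divisor \{x = 0\} at t = \pm 1.
```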
Definition 1. Let X be an affine variety and x ∈ X. We write Bl_x(X) = X̃_x to denote X̃ for an embedding X ⊆ A^n under which x ↦ 0.

Remark 1. Bl_x(X) contains X \ {x} as an open set, so this generalizes to any variety X.
Proposition 1. Suppose X embeds via two embeddings i_1, i_2 into A^n and A^m respectively, such that there exists some x with i_1(x) = i_2(x) = 0. Then X̃_1 = X̃_2 for the two blowups at x.

In particular, this tells us that blowup is an intrinsic operation that does not depend on the embedding.
18.725 Algebraic Geometry I, Lecture 9
Proof. First consider the special case X = A^n, i_1 = id, and i_2 given by (x_1, . . . , x_n) ↦ (x_1, . . . , x_n, f) for some polynomial f. Write Â^{n+1} = ∪_{i=1}^{n+1} Â^{n+1}_i and observe that ∪_{i=1}^{n} Â^{n+1}_i = Â^{n+1} \ {(0 : 0 : . . . : 0 : 1) ∈ P^n}. Call that point ∞; then one can check that ∞ ∉ Ã^n. Now note that Ã^n ∩ Â^{n+1}_i ≅ A^n (locally write it as t_{n+1} x_i = f(t_1 x_i, . . . , x_i, . . . , t_n x_i), and observe we have an x_i on both sides, so the closure is of the shape t_{n+1} = f′(t_1, . . . , x_i, . . . , t_n), which gives an entire A^n), so together we see that the blowup is nothing but Â^n.

Second, consider X = A^n, i_1 = id, and i_2 : A^n ↪ A^{n+m} the graph of a morphism A^n → A^m. This can be reduced to the first case by induction on m (or really, just the exact same argument applied several times).

Now consider the general case of arbitrary i_1, i_2. First extend the embedding i_2 : X → A^m to a map A^n → A^m by lifting each generator (one can switch to the algebraic side: suppose X = Spec A; then we get two surjective maps ψ_1 : k[x_1, . . . , x_m] → A and ψ_2 : k[y_1, . . . , y_n] → A, and we lift ψ_1 to ψ_2 ∘ φ for φ : k[x_1, . . . , x_m] → k[y_1, . . . , y_n], where we map each x_i into A and then lift). Then one can use the second case: x ↦ i_1(x) has the same blowup as x ↦ (i_1(x), i_2(x)), which has the same blowup as x ↦ i_2(x) by the same argument applied in the other direction.
As an application, consider an example of a complete non-projective surface: start with P¹ × P¹, blow it up at (0, 0), and consider the projection to the second factor. For any x ≠ 0, the preimage of x is a projective line; for x = 0, the preimage is the union of two projective lines (one can see this by passing to an affine chart and then taking the closure). Consider two copies of this blowup, call them X and Y, and call the two exceptional lines L_1, L_2 for both of them. Now consider the disjoint union of X and Y where we identify L_1 of X with the fiber of ∞ of Y, and vice versa.
MIT OpenCourseWare
http://ocw.mit.edu

18.725 Algebraic Geometry
Fall 2015

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
18.175: Lecture 4
Integration
Scott Sheffield
MIT
18.175 Lecture 4
Outline

Integration

Expectation
Recall definitions

- Probability space is triple (Ω, F, P) where Ω is sample space, F is set of events (the σ-algebra) and P : F → [0, 1] is the probability function.
- σ-algebra is collection of subsets closed under complementation and countable unions. Call (Ω, F) a measure space.
- Measure is function µ : F → R satisfying µ(A) ≥ µ(∅) = 0 for all A ∈ F and countable additivity: µ(∪_i A_i) = Σ_i µ(A_i) for disjoint A_i.
- Measure µ is probability measure if µ(Ω) = 1.
- The Borel σ-algebra B on a topological space is the smallest σ-algebra containing all open sets.
Recall definitions

- Real random variable is function X : Ω → R such that the preimage of every Borel set is in F.
- Note: to prove X is measurable, it is enough to show that the pre-image of every open set is in F.
- Can talk about σ-algebra generated by random variable(s): smallest σ-algebra that makes a random variable (or a collection of random variables) measurable.
Lebesgue integration

- Lebesgue: If you can measure, you can integrate.
- In more words: if (Ω, F) is a measure space with a measure µ with µ(Ω) < ∞ and f : Ω → R is F-measurable, then we can define ∫ f dµ (for non-negative f ; also if both f ∨ 0 and −f ∧ 0 have finite integrals. . . )
- Idea: define integral, verify linearity and positivity (a.e. non-negative functions have non-negative integrals) in 4 cases:
  - f takes only finitely many values.
  - f is bounded (hint: reduce to previous case by rounding down or up to nearest multiple of ε for ε → 0).
  - f is non-negative (hint: reduce to previous case by taking f ∧ N for N → ∞).
  - f is any measurable function (hint: treat positive/negative parts separately, difference makes sense if both integrals finite).
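For the first of these cases, the integral has an explicit formula (the standard definition for simple functions, written out here for reference):

```latex
% f simple: f = \sum_{i=1}^n c_i \mathbf{1}_{A_i} with c_i \in \mathbb{R}
% and A_i \in \mathcal{F} disjoint. Define
\[
  \int f \, d\mu \;=\; \sum_{i=1}^{n} c_i \, \mu(A_i).
\]
% Linearity and positivity are immediate from this formula; the bounded,
% non-negative, and general cases then follow by the limiting steps listed above.
```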
Lebesgue integration

- Can we extend previous discussion to case µ(Ω) = ∞?
- Theorem: if f and g are integrable then:
  - If f ≥ 0 a.s. then ∫ f dµ ≥ 0.
  - For a, b ∈ R, have ∫ (af + bg) dµ = a ∫ f dµ + b ∫ g dµ.
  - If g ≤ f a.s. then ∫ g dµ ≤ ∫ f dµ.
  - If g = f a.e. then ∫ g dµ = ∫ f dµ.
  - |∫ f dµ| ≤ ∫ |f| dµ.
- When (Ω, F, µ) = (R^d, R^d, λ), write ∫_E f(x) dx = ∫ 1_E f dλ.
MIT OpenCourseWare
http://ocw.mit.edu

18.175 Theory of Probability
Spring 2014

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
C/C++ empowerment
What is C?
The C memory machine
Logistics
Goodbye
The Adventures of Malloc and New
Lecture 1: The Abstract Memory Machine
Eunsuk Kang and Jean Yang
MIT CSAIL
January 19, 2010
Eunsuk Kang and Jean Yang
The Adventures of Malloc and New
C: outdated, old, antiquated. . .
Photograph removed due to copyright restrictions. Please see
http://www.psych.usyd.edu.au/pdp-11/Images/ken-den_s.jpeg.
Figure: Dennis Ritchie and Ken Thompson in 1972.
C: fast, faster, fastest
Figure: Benchmark times from the Debian language shootout.
Congratulations on choosing to spend your time wisely!
Figure: XKCD knows that tools are important.
Courtesy of xkcd.com. Original comic is available here: http://xkcd.com/519/
Lecture plan
1. Course goals and prerequisites.
2. Administrative details (syllabus, homework, grading).
3. High-level introduction to C.
4. C philosophy: “the abstract memory machine.”
5. How to get started with C.
6. Wrap-up and homework.
6.088: a language (rather than programming) course
Images of Wonder Woman and circuit boards removed due to copyright restrictions.
Course goal: to help proficient programmers understand how and
when to use C and C++.
Background check
Expected knowledge
• Basic data structures (linked lists, binary search trees, etc.)?
• Familiarity with basic imperative programming concepts.
• Variables (scoping, global/local).
• Loops.
• Functions and function abstraction.
Other knowledge
• Functional programming?
• Systems programming?
• Hardware?
• OOP with another language?
Course syllabus

Day  Date  Topic                              Lecturer
1    1/19  Meet C and memory management       Jean
2    1/20  Memory management logistics        Jean
3    1/21  More advanced memory management    Jean
4    1/22  Meet C++ and OOP                   Eunsuk
5    1/23  More advanced OOP                  Eunsuk
6    1/24  Tricks of the trade, Q & A         Eunsuk
Administrivia

Homework
• Daily homework to be submitted via the Stellar site.
• Graded ✓+, ✓, or ✓−.
• Homework i will be due 11:59 PM the day after Lecture i; late submissions up to one day (with deductions).
• Solutions will be released one day following the due date.

Requirements for passing
• Attend lectures–sign in at back.
• Complete all 5 homework assignments with a ✓ average.
Recommended references
Books
Cover images of the following books removed due to copyright restrictions:
Kernighan, Brian, and Dennis Ritchie. The C Programming Language.
Upper Saddle River, NJ: Prentice Hall, 1988. ISBN: 9780131103627.
Roberts, Eric. The Art and Science of C. Reading, MA: Addison-Wesley,
1994. ISBN: 9780201543223.
Online resources
http://www.cprogramming.com
The C family
C
• Developed in 1972 by Dennis Ritchie at Bell Labs.
• Imperative systems language.
C++
• Developed in 1979 by Bjarne Stroustrup at Bell Labs.
• Imperative, object-oriented language with generics.
C# (outside scope of course)
• Multi-paradigm language with support for imperative,
functional, generic, and OO programming and memory
management.
• Developed at Microsoft, released circa 2001.
Vocabulary check
• Imperative, declarative, functional
• Compiled, interpreted
• Static, dynamic
• Memory-managed
Typically, C is. . .
• Compiled.
• Imperative.
• Manually memory-managed.
• Used when at least one of the following matters:
• Speed.
• Memory.
• Low-level features (moving the stack pointer, etc.).
Thinking about C in terms of memory. . .
Figure: Women operating the ENIAC.
Layers of abstraction over memory

Level of abstraction          Languages
Directly manipulate memory    Assembly (x86, MIPS)
Access to memory              C, C++
Memory managed                Java, C#, Scheme/Lisp, ML
It’s a memory world

[Processor block diagram: Controller, Control/Status, ALU, IR, PC, Registers, I/O, Memory]

Figure: Processors read from memory, do things, and write to memory.
Figure by MIT OpenCourseWare.
C access to memory: the heap

The heap is a chunk of memory for the C program to use.
• Can think of it as a giant array.
• Access heap using special pointer syntax.
• The whole program has access to the heap*.

*Depending on what the operating system allows.

Addr.  Contents
...    ...
0xbee  0xbeef
0xbf4  0xfeed
...    ...
Manual memory management

Goals
• Want to allow the program to be able to designate chunks of memory as currently in use.
• Want to be able to re-designate a piece of memory as “freed” when the program is done with it.

C support
Standard library (stdlib.h) has malloc and free functions.
The other C memory: the stack
C functions get allocated on the stack.
• Functions are “pushed on” to the stack when called.
• Functions are “popped off” the stack when they return.
• Functions can access any memory below the current top of
the stack.
Memory layout: process context

[Process memory layout, from high addresses down to 0: Stack, Heap, Bss (uninitialized variables), Data (initialized variables), Text (instructions)]

Figure by MIT OpenCourseWare.
Getting started with C

Photograph removed due to copyright restrictions.
Please see http://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV4002.html.

Figure: IBM 29 card punch, introduced late 1964.
Using C
1. Obtain a C compiler (GCC recommended–more instructions
on site for downloading GCC or using it on MIT servers.)
2. Write a simple C program.
#include <stdio.h>  /* Headers to include. */

int main() {
    printf("Hello world!");
}

3. Compile: gcc -o run_hello hello.c
4. Run: ./run_hello
Functions

void print_sum(int arg1, int arg2) {
    int sum = arg1 + arg2;
    /* Printf is a special function taking a variable
       number of arguments. */
    printf("The sum is %d\n", sum);
    /* The return is optional. */
    return;
}

/* Each executable needs to have a main function with type int. */
int main() {
    print_sum(3, 4);
    return 0;
}
Local and global variables

int x;
int y, z;
x = 1;

/* Functions can have local variables. */
void foo() {
    int x;
    x = 2;
}

/* Arguments are locally scoped. */
void bar(int x) {
    x = 3;
}
Conditionals

int foo(int x) {
    /* C has the usual boolean operators. */
    if (3 == x) {
        return 0;
    }
}

/* Note that conditions are of integer type, where 1 is true! */
int bar() {
    if (1) {
        return 0;
    }
}
Loops

For loops

void foo() {
    int i;
    for (i = 1; i < 10; ++i) {
        printf("%d\n", i);
    }
}

While loops

void bar() {
    int lcv = 0;
    while (lcv < 10) {
        printf("%d\n", lcv);
        ++lcv;
    }
}
When can we call what?

Each function needs to be declared (but not necessarily defined) before we call it.

/* Declaration. */
void print_sum(int, int);

/* Each executable needs to have a main function with type int. */
int main() {
    print_sum(3, 4);
    return 0;
}

/* Definition. */
void print_sum(int arg1, int arg2) {
    /* Body defined here. */
}
Including headers

Header definitions allow us to use things defined elsewhere.
• Header files (.h files) typically contain declarations (variables, types, functions). Declarations tell the compiler “these functions are defined somewhere.”
• Function definitions typically go in .c files.
• Angle brackets indicate library header files; quotes indicate local header files.

#include <stdio.h>  /* Library file. */
#include "mylib.h"  /* Local file. */

• The compiler’s -I flag indicates where to look for library files (gcc -I [libdir] -o [output] [file]).
Until tomorrow. . .

Homework (due tomorrow)
• Get a C compiler up and running.
• Compile and run “Hello world.” Make a small extension to print the system time.
• Play around with gdb and valgrind.
• More details on the course website.

Questions?
• The course staff will be available after class.
MIT OpenCourseWare
http://ocw.mit.edu
6.088 Introduction to C Memory Management and C++ Object-Oriented Programming
January IAP 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
15. Basic Properties of Rings
We first prove some standard results about rings.
Lemma 15.1. Let R be a ring and let a and b be elements of R.
Then
(1) a0 = 0a = 0.
(2) a(−b) = (−a)b = −(ab).
Proof. Let x = a0. We have
x = a0
= a(0 + 0)
= a0 + a0
= x + x.
Adding −x to both sides, we get x = 0, which is (1).
Let y = a(−b). We want to show that y is the additive inverse of ab,
that is we want to show that y + ab = 0. We have
y + ab = a(−b) + ab
= a(−b + b)
= a0
= 0,
by (1). Hence (2).
□
Lemma 15.2. Let R be a set that satisfies all the axioms of a ring,
except possibly a + b = b + a.
Then R is a ring.
Proof. It suffices to prove that addition is commutative. We compute
(a + b)(1 + 1), in two different ways. Distributing on the right,
(a + b)(1 + 1) = (a + b)1 + (a + b)1
= a + b + a + b
= a + (b + a) + b.
On the other hand, distributing this product on the left we get
(a + b)(1 + 1) = a(1 + 1) + b(1 + 1)
= a + a + b + b.
Thus
a + (b + a) + b = (a + b)(1 + 1) = a + a + b + b.
MIT OCW: 18.703 Modern Algebra, Prof. James McKernan

Cancelling an a on the left and a b on the right, we get

b + a = a + b,

which is what we want.
□

Note the following identity.
Lemma 15.3. Let R be a ring and let a and b be any two elements of
R.
Then
(a + b)² = a² + ab + ba + b².
Proof. Easy application of the distributive laws.
□
Definition 15.4. Let R be a ring. We say that R is commutative if
multiplication is commutative, that is
a · b = b · a.
Note that most of the rings introduced in the first section are not commutative. Nevertheless it turns out that there are many interesting commutative rings. Compare this with the study of groups, where abelian groups are not considered very interesting.
Definition-Lemma 15.5. Let R be a ring. We say that R is boolean
if for every a ∈ R, a2 = a.
Every boolean ring is commutative.
Proof. We compute (a + b)²:

a + b = (a + b)²
      = a² + ba + ab + b²
      = a + ba + ab + b.

Cancelling we get ab = −ba. If we take b = 1, then a = −a, so that −(ba) = (−b)a = ba. Thus ab = ba. □
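A standard concrete example of a boolean ring (my addition, not in the notes) is the power set of a set, where the lemma's conclusion can be seen directly:

```latex
% The power set 2^S of any set S is a ring with
%   a + b := (a \setminus b) \cup (b \setminus a)  (symmetric difference),
%   a \cdot b := a \cap b, \quad 0 := \emptyset, \quad 1 := S.
% Every element is idempotent:
\[
  a^2 = a \cap a = a \qquad \text{for all } a \in 2^S,
\]
% so 2^S is boolean, and commutativity is visible directly:
% a \cdot b = a \cap b = b \cap a = b \cdot a.
```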
Definition 15.6. Let R be a ring. We say that R is a division ring if R − {0} is a group under multiplication. If in addition R is commutative, we say that R is a field.

Note that a ring is a division ring iff every non-zero element has a multiplicative inverse. Similarly for commutative rings and fields.
Example 15.7. The following tower of subsets
Q ⊂ R ⊂ C
is in fact a tower of subfields. Note that Z is not a field however, as 2
does not have a multiplicative inverse. Further the subring of Q given by those rational numbers with odd denominator is not a field either. Again 2 does not have a multiplicative inverse.
Lemma 15.8. The quaternions are a division ring.

Proof. It suffices to prove that every non-zero quaternion has a multiplicative inverse.

Let q = a + bi + cj + dk be a quaternion. Let

q̄ = a − bi − cj − dk,

the conjugate of q. Note that

qq̄ = a² + b² + c² + d².

As a, b, c and d are real numbers, this product is non-zero iff q is non-zero. Thus

p = q̄ / (a² + b² + c² + d²)

is the multiplicative inverse of q. □
Here is an obvious necessary condition for division rings:

Definition-Lemma 15.9. Let R be a ring. We say that a ∈ R, a ≠ 0, is a zero-divisor if there is an element b ∈ R, b ≠ 0, such that either

ab = 0    or    ba = 0.

If a has a multiplicative inverse in R then a is not a zero divisor.
Proof. Suppose that ba = 0 and that c is the multiplicative inverse of a. We compute bac in two different ways. On the one hand,

bac = (ba)c = 0c = 0.

On the other hand,

bac = b(ac) = b1 = b.

Thus b = bac = 0. Thus a is not a zero divisor. □
Definition-Lemma 15.10. Let R be a ring. We say that R is a
domain if R has no zero-divisors. If in addition R is commutative,
then we say that R is an integral domain.
Every division ring is a domain.
Unfortunately the converse is not true.
Example 15.11. Z is an integral domain but not a field.
In fact any subring of a division ring is clearly a domain. Many of
the examples of rings that we have given are in fact not domains.
Example 15.12. Let X be a set with more than one element and let R be any ring. Then the set of functions from X to R is not a domain. Indeed pick any partition of X into two parts, X1 and X2 (that is, suppose that X1 and X2 are disjoint, both non-empty and that their union is the whole of X). Define f : X → R by

f(x) = 0 if x ∈ X1, and 1 if x ∈ X2,

and g : X → R by

g(x) = 1 if x ∈ X1, and 0 if x ∈ X2.

Then f g = 0, but neither f nor g is zero. Thus f is a zero-divisor.
Now let R be any ring, and suppose that n > 1. I claim that Mn(R) is not a domain. We will do this in the case n = 2. The general case is not much harder, just more involved notationally. Set

A = B = ( 0 1 )
        ( 0 0 ).

Then it is easy to see that

AB = ( 0 0 )
     ( 0 0 ).
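Writing out the product entry by entry (a routine check, spelled out for convenience):

```latex
\[
  AB = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
       \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
     = \begin{pmatrix} 0\cdot 0 + 1\cdot 0 & 0\cdot 1 + 1\cdot 0 \\
                       0\cdot 0 + 0\cdot 0 & 0\cdot 1 + 0\cdot 0 \end{pmatrix}
     = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},
\]
% so A is a zero-divisor in M_2(R) even though A \neq 0.
```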
Note that the definition of an integral domain involves a double negative. In other words, R is an integral domain iff whenever

ab = 0,

where a and b are elements of R, then either a = 0 or b = 0.
MIT OpenCourseWare
http://ocw.mit.edu
18.703 Modern Algebra
Spring 2013
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/18-703-modern-algebra-spring-2013/0c771e6ff658b800f22aa90016880028_MIT18_703S13_pra_l_15.pdf |
MEASURE AND INTEGRATION: LECTURE 15

Lp spaces. Let 0 < p < ∞ and let f : X → C be a measurable function. We define the Lp norm to be

‖f‖_p = (∫_X |f|^p dµ)^(1/p),

and the space Lp to be

Lp(µ) = {f : X → C | f is measurable and ‖f‖_p < ∞}.

Observe that ‖f‖_p = 0 if and only if f = 0 a.e. Thus, if we make the equivalence relation f ∼ g ⟺ f = g a.e., then ‖·‖ makes Lp a normed space (we will define this later).
If µ is the counting measure on a countable set X, then

∫_X f dµ = Σ_{x∈X} f(x).

Then Lp is usually denoted ℓp, the set of sequences {s_n} such that

(Σ_{n=1}^∞ |s_n|^p)^(1/p) < ∞.
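As a quick illustration (my addition, not from the notes), the harmonic sequence separates the ℓp spaces:

```latex
% s_n = 1/n lies in \ell^p exactly when p > 1:
\[
  \sum_{n=1}^{\infty} \left|\tfrac{1}{n}\right|^{p}
  \begin{cases}
    < \infty, & p > 1, \\[2pt]
    = \infty, & p = 1 \quad (\text{harmonic series}),
  \end{cases}
\]
% so (1/n)_{n \ge 1} \in \ell^2 \setminus \ell^1, and \ell^1 \subsetneq \ell^2.
```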
A function f is essentially bounded if there exists 0 ≤ M < ∞ such that |f(x)| ≤ M for a.e. x ∈ X. The space L∞ is defined as

L∞(µ) = {f : X → C | f essentially bounded}

with the L∞ norm

‖f‖_∞ = inf{M : |f(x)| ≤ M a.e. x ∈ X}.

Proposition 0.1. If f ∈ L∞, then |f(x)| ≤ ‖f‖_∞ a.e.
Proof. By definition of inf, there exist M_k → ‖f‖_∞ such that |f(x)| ≤ M_k a.e., or, equivalently, there exist N_k with µ(N_k) = 0 such that |f(x)| ≤ M_k for all x ∈ N_k^c. Let N = ∪_{k=1}^∞ N_k. Then µ(N) = 0. If x ∈ N^c = ∩_{k=1}^∞ (N_k)^c, then |f(x)| ≤ M_k for every k. Since M_k → ‖f‖_∞, |f(x)| ≤ ‖f‖_∞ for all x ∈ N^c. □

Date: October 23, 2003.
Theorem 0.2. Let 1 ≤ p ≤ ∞ and 1/p + 1/q = 1. Let f ∈ Lp(µ) and g ∈ Lq(µ). Then f g ∈ L1(µ) and

‖f g‖_1 ≤ ‖f‖_p ‖g‖_q,

i.e.,

∫ |f g| dµ ≤ (∫ |f|^p)^(1/p) (∫ |g|^q)^(1/q).

Proof. If 1 < p < ∞, this is simply Hölder's inequality. If p = 1, q = ∞, then |f(x)g(x)| ≤ ‖g‖_∞ |f(x)| a.e. Thus,

∫ |f g| ≤ ‖g‖_∞ ∫ |f|. □
Theorem 0.3. Let 1 ≤ p ≤ ∞. Let f, g ∈ Lp(µ). Then f + g ∈ Lp(µ) and ‖f + g‖_p ≤ ‖f‖_p + ‖g‖_p.

Proof. If 1 < p < ∞, this is simply Minkowski's inequality. If p = 1, then

∫ |f + g| ≤ ∫ |f| + ∫ |g| ⟹ ‖f + g‖_1 ≤ ‖f‖_1 + ‖g‖_1.

If p = ∞, then |f + g| ≤ |f| + |g| ⟹ ‖f + g‖_∞ ≤ ‖f‖_∞ + ‖g‖_∞. □
Normed space and Banach spaces. A normed space is a vector space V together with a function ‖·‖ : V → R such that

(a) 0 ≤ ‖x‖ < ∞.
(b) ‖x‖ = 0 ⟺ x = 0.
(c) ‖αx‖ = |α| ‖x‖ for all α ∈ C.
(d) ‖x + y‖ ≤ ‖x‖ + ‖y‖.

For example, Lp(µ) is a normed space if two functions f, g are considered equal if and only if f = g a.e. Also, R^n with the Euclidean norm is a normed space.
A metric space is a set M together with a function d : M × M → R such that

(a) 0 ≤ d(x, y) < ∞.
(b) d(x, x) = 0.
(c) d(x, y) > 0 if x ≠ y.
(d) d(x, y) = d(y, x).
(e) d(x, y) ≤ d(x, z) + d(z, y).

A normed space is a metric space with metric d(f, g) = ‖f − g‖.
Recall that x_n → x ∈ M if lim_{n→∞} d(x_n, x) = 0. A sequence {x_i} is Cauchy if for every ε > 0 there exists N(ε) such that d(x_j, x_k) ≤ ε for all j, k ≥ N(ε).

Claim: if x_n → x, then {x_n} is Cauchy. We know that lim_{n→∞} d(x_n, x) = 0, so given ε > 0, there exists N such that d(x_k, x) < ε/2 for all k > N. Then for j, k > N, d(x_k, x_j) ≤ d(x_j, x) + d(x, x_k) < ε. □
However, a Cauchy sequence does not have to converge. For example,
consider the space R \ {0} (the punctured real line) with the absolute
value norm. The sequence xn = 1/n is Cauchy but it does not converge
to a point in the space.
A metric space is called complete if every Cauchy sequence converges.
By the BolzanoWeierstrass theorem, Rn is complete. (Every Cauchy
sequence is bounded, so it has a convergent subsequence and must
converge.)
A normed space (V, �·�) that is complete under the induced metric
d(f, g) = �f − g� is called a Banach space.
Riesz-Fischer theorem.

Lemma 0.4. If {f_n} is Cauchy, then there exists a subsequence {f_{n_k}} such that d(f_{n_{k+1}}, f_{n_k}) ≤ 2^{−k}.
Theorem 0.5. For 1 ≤ p ≤ ∞ and for any measure space (X, M, µ),
the space Lp(µ) is a Banach space.
Proof. Let 1 ≤ p < ∞ and let {f_n} ∈ L^p(µ) be a Cauchy sequence. By the lemma, there exists a subsequence n_1 < n_2 < · · · such that ‖f_{n_{k+1}} − f_{n_k}‖_p < 2^{−k}. Let

g_k = Σ_{i=1}^{k} |f_{n_{i+1}} − f_{n_i}|   and   g = lim_{k→∞} g_k .

By Minkowski's inequality,

‖g_k‖_p ≤ Σ_{i=1}^{k} ‖f_{n_{i+1}} − f_{n_i}‖_p < Σ_{i=1}^{k} 2^{−i} < 1 .
Consider g_k^p. By Fatou's lemma,

∫ lim inf g_k^p ≤ lim inf ∫ g_k^p ,

and so ∫ g^p ≤ 1 ⇒ g(x) < ∞ a.e. Thus, the series

f_{n_1}(x) + Σ_{i=1}^{∞} (f_{n_{i+1}}(x) − f_{n_i}(x))

converges absolutely a.e. Define

f(x) = f_{n_1}(x) + Σ_{i=1}^{∞} (f_{n_{i+1}}(x) − f_{n_i}(x)) where the series converges; f(x) = 0 otherwise.
The partial sum

f_{n_1}(x) + Σ_{i=1}^{k−1} (f_{n_{i+1}}(x) − f_{n_i}(x)) = f_{n_k}(x) ,

and so lim_{k→∞} f_{n_k}(x) = f(x) a.e.
Thus we have shown that every Cauchy sequence has a convergent subsequence, and we need to show that f_{n_k} → f in L^p. Given ε > 0, there exists N such that ‖f_n − f_m‖_p < ε for all n, m > N. We have that

|f − f_m|^p = lim inf |f_{n_k} − f_m|^p

since f_{n_k} → f a.e. Thus, by Fatou's lemma,

∫_X |f − f_m|^p = ∫_X lim inf |f_{n_k} − f_m|^p ≤ lim inf ∫_X |f_{n_k} − f_m|^p < ε^p .

This implies that ‖f − f_m‖_p < ε, and thus

‖f‖_p = ‖f − f_m + f_m‖_p ≤ ‖f − f_m‖_p + ‖f_m‖_p < ∞ .

We conclude that f ∈ L^p and ‖f − f_m‖_p → 0 as m → ∞.
Now let p = ∞ and let {f_n} be a Cauchy sequence in L^∞(µ). Let

A_k = {x : |f_k(x)| > ‖f_k‖_∞}

and

B_{m,n} = {x : |f_n(x) − f_m(x)| > ‖f_n − f_m‖_∞} .

These sets all have measure zero. Let

N = ∪_{k=1}^{∞} A_k ∪ ∪_{n,m=1}^{∞} B_{m,n} .

Then N has measure zero. For x ∈ N^c, {f_n(x)} is a Cauchy sequence of complex numbers, so by completeness of C, f_n → f uniformly on N^c. Since {‖f_k‖_∞} is bounded, |f_k(x)| < M for all x ∈ N^c. Thus |f(x)| ≤ M for all x ∈ N^c. Setting f = 0 on N, we have ‖f‖_∞ < ∞ and ‖f_n − f‖_∞ → 0 as n → ∞. □
Theorem 0.6. Let 1 ≤ p ≤ ∞ and let {f_n} be a Cauchy sequence in L^p(µ) such that ‖f − f_n‖_p → 0. Then f_n has a subsequence which converges pointwise almost everywhere to f(x).
Proof. Since ‖f − f_n‖ → 0, f_n → f in measure. By the previous theorem, there exists a subsequence which converges a.e. □
Examples in R.
(1) A sequence in L^p can converge a.e. without converging in L^p. Let f_k = k² χ_{(0,1/k)}. Then

‖f_k‖_p = ( ∫_{(0,1/k)} k^{2p} )^{1/p} = k² (1/k)^{1/p} = k^{2−1/p} < ∞ .

Thus f_k ∈ L^p and f_k → 0 on R, but ‖f_k‖_p → ∞.
(2) A sequence can converge in L^p without converging a.e. (HW problem).
(3) A sequence can belong to L^{p₁} ∩ L^{p₂} and converge in L^{p₁} without converging in L^{p₂}. Let f_k = k^{−1} χ_{(k,2k)}. Then f_k → 0 pointwise and ‖f_k‖_p = k^{−1} k^{1/p} = k^{1/p−1}. If p > 1, then ‖f_k‖_p → 0 as k → ∞, so f_k → 0 in L^p norm. But ‖f_k‖_1 = 1 for every k, so f_k does not converge to 0 in L^1.
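The norms quoted in examples (1) and (3) are exact, since each f_k is constant on a single interval of known length; a small script (illustrative; the helper names are mine) confirms them:

```python
# Example (1): f_k = k^2 chi_(0,1/k)  =>  ||f_k||_p = k^{2 - 1/p}.
# Example (3): f_k = k^{-1} chi_(k,2k)  =>  ||f_k||_p = k^{1/p - 1}.
def norm_example1(k, p):
    # ( integral of k^{2p} over an interval of length 1/k )^{1/p}
    return (k ** (2 * p) * (1.0 / k)) ** (1.0 / p)

def norm_example3(k, p):
    # ( integral of k^{-p} over an interval of length k )^{1/p}
    return (k ** (-p) * k) ** (1.0 / p)

assert abs(norm_example1(10, 2) - 10 ** (2 - 1 / 2)) < 1e-9
assert abs(norm_example3(10, 2) - 10 ** (1 / 2 - 1)) < 1e-9
# Example (3) in L^1: the norm is identically 1, so f_k does not go to 0 in L^1.
assert abs(norm_example3(10 ** 6, 1) - 1.0) < 1e-9
```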
Lecture 5
8.251 Spring 2007
Lecture 5 - Topics
• Nonrelativistic strings
• Lagrangian mechanics
Reading: Zwiebach, Chapter 4
Non-Relativistic Strings
Study nonrelativistic strings first to develop intuition and math notation before
moving to the relativistic strings that we actually care about.
Non-relativistic string:
Characterized by:
Tension T_0: [T_0] = [Force] = [Energy/Length] = [µ_0][v²]
Mass per unit length: µ_0
T_0 ≈ µ_0 v², so the natural velocity is v = √(T_0/µ_0).
Transverse oscillation: mark a point P on the string and watch it move up and down: y(P, t), with x(P, t) = x(P) (x not dependent on t).
Small oscillation:

|∂y/∂x (t, x)| ≪ 1

Consider a small section of the string.
Approximate the tensions at the two endpoints of the section as equal (good for transverse waves, terrible for longitudinal). Then the transverse force is

dF_v = T_0 ∂y/∂x (t, x + dx) − T_0 ∂y/∂x (t, x) = T_0 ∂²y/∂x² (t, x) dx ≈ µ_0 dx ∂²y/∂t² ,

so

∂²y/∂x² − (1/(T_0/µ_0)) ∂²y/∂t² = 0 .
The wave equation! Here t, x are parameters and the motion is described by y(t, x). (If the motion has more than one transverse dimension, use y⃗(t, x).)
Stretching of the string:

Δl = √(dx² + dy²) − dx = dx ( √(1 + (dy/dx)²) − 1 ) ≈ (1/2) dx (dy/dx)²   (small oscillations).
General form of the wave equation, with wave velocity v = √(T_0/µ_0):

∂²f/∂x² − (1/v²) ∂²f/∂t² = 0
General solution:

y(x, t) = h₊(x − v_0 t) + h₋(x + v_0 t)

Note: the h's are functions of one variable (x ± v_0 t), not of the two variables x and t independently.
Boundary conditions: behavior of the endpoints at all times (special points, all times).
Open string:
y(t, x = 0) = 0   (Dirichlet condition, for a fixed endpoint)
∂y/∂x (t, x = 0) = 0   (free endpoint, Neumann condition)
For a free endpoint (a hoop free to slide), the string must be perpendicular there.
Initial conditions: all points on the string at some t_0 (all points, special time):

y(x, t = 0) ,  ∂y/∂t (x, t = 0)
Example: fixed endpoints at x = 0 and x = a.

y(t, 0) = h₊(−v_0 t) + h₋(v_0 t) = 0 .

Let u = v_0 t; then h₊(−u) + h₋(u) = 0, so h₋(u) = −h₊(−u). At the other end,

y(t, x = a) = 0 = h₊(a − v_0 t) + h₋(a + v_0 t) ,

so h₊(a − v_0 t) = −h₋(a + v_0 t) = h₊(−a − v_0 t). Let u = −a − v_0 t; then

h₊(u + 2a) = h₊(u) ,

i.e. h₊ is periodic with period 2a.
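The endpoint analysis can be illustrated concretely. Assuming the particular choice h₊(u) = sin(πu/a) (which has the required period 2a) and taking h₋(u) = −h₊(−u) as derived above, the resulting y vanishes at both fixed ends; all names below are mine:

```python
import math

a, v0 = 1.0, 2.0                               # example values for the string
h_plus = lambda u: math.sin(math.pi * u / a)   # has period 2a, as required
h_minus = lambda u: -h_plus(-u)                # forced by y(t, 0) = 0
y = lambda t, x: h_plus(x - v0 * t) + h_minus(x + v0 * t)

for t in (0.0, 0.3, 1.7):
    assert abs(y(t, 0.0)) < 1e-12              # Dirichlet condition at x = 0
    assert abs(y(t, a)) < 1e-12                # Dirichlet condition at x = a
assert abs(h_plus(0.4 + 2 * a) - h_plus(0.4)) < 1e-12   # period 2a
```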
Variational Principle
Consider a point mass m undergoing 1D motion x(t). Assume x(t_i) = x_i and x(t_f) = x_f, under the influence of a potential V(x).
(Figures: possible motions between the fixed endpoints, motions that are not possible, and a given path.)
Functional: S : x(t) ⇒ a number (not a function of time).
Hamilton's Principle: the physical path makes S stationary.
Call the true path x(t) and consider a new path x(t) + δx(t):

S[x(t) + δx(t)] = S[x] + O[(δx)²]

Assume δx(t_i) = 0, δx(t_f) = 0.
Lagrangian:
L(t) = Kinetic Energy − Potential Energy

S = ∫_{t_i}^{t_f} L(t) dt = ∫_{t_i}^{t_f} [ (1/2) m (ẋ(t))² − V(x(t)) ] dt
S[x + δx] = ∫_{t_i}^{t_f} [ (1/2) m (ẋ + δẋ)² − V(x + δx) ] dt
          = S[x] + ∫_{t_i}^{t_f} [ m ẋ δẋ − (∂V/∂x)(x(t)) δx(t) ] dt
            + ∫_{t_i}^{t_f} [ (1/2) m (δẋ(t))² − (1/2) V''(x(t)) (δx)² ] dt ,

where the last integral is the O(δx²) piece.
We need to eliminate the second (first-order) term:

∫_{t_i}^{t_f} [ m ẋ δẋ − (∂V/∂x)(x(t)) δx(t) ] dt

must go away for S[x + δx] = S[x] + O[(δx)²] to be true. Call this the variation δS.
δS = ∫_{t_i}^{t_f} dt [ d/dt (m ẋ δx) − m ẍ δx − V'(x(t)) δx(t) ]

Integrating by parts,

δS = m ẋ(t_f) δx(t_f) − m ẋ(t_i) δx(t_i) + ∫_{t_i}^{t_f} dt δx(t) [ −m ẍ − V'(x(t)) ] .

But δx(t_f) = δx(t_i) = 0 from before, so the integral ∫_{t_i}^{t_f} dt δx(t) [ −m ẍ − V'(x(t)) ] must be 0 too. Since δx(t) is arbitrary, this forces

m ẍ = −V'(x(t))
String Lagrangian

Kinetic energy: T = ∫₀^a (1/2) µ_0 dx (∂y/∂t)²

Potential energy: ∫_string Δl T_0 = ∫₀^a (1/2) dx (∂y/∂x)² T_0

L = ∫₀^a dx [ (1/2) µ_0 (∂y/∂t)² − (1/2) T_0 (∂y/∂x)² ]

S = ∫_{t_i}^{t_f} L(t) dt
Call 𝓛 the Lagrangian density. So:

𝓛 = (1/2) µ_0 (∂y/∂t)² − (1/2) T_0 (∂y/∂x)²

S = ∫_{t_i}^{t_f} dt ∫₀^a dx 𝓛(∂y/∂t, ∂y/∂x)

with

δy(t_i, x) = 0 ,  δy(t_f, x) = 0 .

We don't know δy(x = 0, t) or δy(x = a, t).
δS = ∫_{t_i}^{t_f} dt ∫₀^a dx [ (∂𝓛/∂ẏ) δẏ + (∂𝓛/∂y′) δy′ ]

Let:

𝒫^t = ∂𝓛/∂ẏ ,  𝒫^x = ∂𝓛/∂y′
Then

δS = ∫_{t_i}^{t_f} dt ∫₀^a dx [ 𝒫^t ∂(δy)/∂t + 𝒫^x ∂(δy)/∂x ]
   = ∫_{t_i}^{t_f} dt ∫₀^a dx ( −δy(x, t) [ ∂𝒫^t/∂t + ∂𝒫^x/∂x ] )
     + ∫₀^a dx 𝒫^t [δy]_{t_i}^{t_f} + ∫_{t_i}^{t_f} dt 𝒫^x [δy]_{x=0}^{x=a}
Since δy(t_i) = δy(t_f) = 0, we must have:

∂𝒫^t/∂t + ∂𝒫^x/∂x = 0 ,  i.e.  µ_0 ∂²y/∂t² − T_0 ∂²y/∂x² = 0 .

Some kind of conservation law, like ∂_µ J^µ = 0.
The remaining boundary term is

∫_{t_i}^{t_f} dt 𝒫^x [δy]_{x=0}^{x=a} = ∫_{t_i}^{t_f} dt [ 𝒫^x(t, x = a) δy(t, x = a) − 𝒫^x(t, x = 0) δy(t, x = 0) ] .

For each endpoint x* ∈ {0, a}, the product 𝒫^x(t, x*) δy(t, x*) must vanish. There are two ways:

Dirichlet condition: y(t, x*) fixed, so δy(t, x*) = 0.
Free boundary condition: 𝒫^x(t, x*) = 0, i.e. ∂y/∂x = 0 (Neumann condition).
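The claim that ∂𝒫^t/∂t + ∂𝒫^x/∂x = 0 reproduces the wave equation, and that traveling waves solve it, can be verified symbolically (a sketch using sympy; the variable names are mine):

```python
import sympy as sp

t, x, mu0, T0 = sp.symbols('t x mu0 T0', positive=True)
y = sp.Function('y')(t, x)

Pt = mu0 * sp.diff(y, t)        # P^t = dL/d(y_t)
Px = -T0 * sp.diff(y, x)        # P^x = dL/d(y')
eom = sp.diff(Pt, t) + sp.diff(Px, x)
wave = mu0 * sp.diff(y, t, 2) - T0 * sp.diff(y, x, 2)
assert sp.simplify(eom - wave) == 0       # same equation of motion

f = sp.Function('f')
v = sp.sqrt(T0 / mu0)
trav = f(x - v * t)                       # traveling wave h_+(x - v t)
assert sp.simplify(mu0 * sp.diff(trav, t, 2) - T0 * sp.diff(trav, x, 2)) == 0
```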
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Physics Department
Physics 8.07: Electromagnetism II
Prof. Alan Guth
October 17, 2012
LECTURE NOTES 9
TRACELESS SYMMETRIC TENSOR APPROACH
TO LEGENDRE POLYNOMIALS
AND SPHERICAL HARMONICS
In these notes I will describe the separation of variable technique for solving Laplace’s
equation, using spherical polar coordinates. The solutions will involve Legendre polyno-
mials for cases with azimuthal symmetry, and more generally they will involve spherical
harmonics.
I will construct these solutions using traceless symmetric tensors, but in Lecture Notes 8 I describe how the solutions in this form relate to the more standard expressions in terms of Legendre polynomials and spherical harmonics. (Logically Lecture Notes 8 should come after these notes, although they were posted first.) If you are starting from scratch, I think that the traceless symmetric tensor method is the simplest way
to understand this mathematical formalism. If you already know spherical harmonics, I
think that you will find the traceless symmetric tensor approach to be a useful addition
to your arsenal of mathematical methods. The symmetric traceless tensor approach is
particularly useful if one needs to extend the formalism beyond what we will be doing —
for example, there are analogues of spherical harmonics in higher dimensions, and there
are also vector spherical harmonics that are useful for expanding vector functions of angle.
Vector spherical harmonics are used in the most general treatments of electromagnetic
radiation, although we will not be introducing them in this course.
I don't know a reference for the traceless symmetric tensor method, which is the main reason I am writing these notes. For the standard method, limited to the case of azimuthal symmetry, our textbook by Griffiths should be sufficient. If you would like to see an additional reference on spherical harmonics, which are needed when there is no azimuthal symmetry, then I would recommend J.D. Jackson, Classical Electrodynamics, 3rd Edition (John Wiley & Sons, 1999), Sections 3.1, 3.2, 3.5, and 3.6.
1. LAPLACE'S EQUATION IN SPHERICAL COORDINATES:

In spherical coordinates, Laplace's equation can be written as

∇²φ(r, θ, φ) = (1/r²) ∂/∂r ( r² ∂φ/∂r ) + (1/r²) ∇²_θ φ = 0 ,   (9.1)

where the angular part is given by

∇²_θ φ ≡ (1/sin θ) ∂/∂θ ( sin θ ∂φ/∂θ ) + (1/sin²θ) ∂²φ/∂φ² .   (9.2)
8.07 LECTURE NOTES 9, FALL 2012
It would be more logical to write ∇²_θ as ∇²_{θ,φ}, but it would be tiresome to write it that way. It is sometimes useful to rewrite the first term in Eq. (9.1) using

(1/r²) ∂/∂r ( r² ∂φ/∂r ) = (1/r) ∂²/∂r² (rφ) .   (9.3)
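Identity (9.3) is easy to verify symbolically (an illustrative sympy check):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
phi = sp.Function('phi')(r)

lhs = sp.diff(r**2 * sp.diff(phi, r), r) / r**2   # (1/r^2) d/dr ( r^2 phi' )
rhs = sp.diff(r * phi, r, 2) / r                  # (1/r) d^2/dr^2 ( r phi )
assert sp.simplify(lhs - rhs) == 0
```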
To use the method of separation of variables, we can seek a solution of the form

φ(r, θ, φ) = R(r) F(θ, φ) .   (9.4)

Then Laplace's equation can be written as

0 = (r²/(RF)) ∇²φ = (1/R) d/dr ( r² dR/dr ) + (1/F) ∇²_θ F .   (9.5)

Since the first term on the right-hand side depends only on r, and the second term depends only on θ and φ, the only way that the equation can be satisfied is if each term is a constant. Thus we can write

(1/F) ∇²_θ F = C_θ ,   (9.6)

(1/R) d/dr ( r² dR/dr ) = −C_θ .   (9.7)

2. THE EXPANSION OF F(θ, φ):

We now wish to find the most general solution to the equation

∇²_θ F = C_θ F ,   (9.8)
which is a rewriting of Eq. (9.6). If such a function F can be found, we say that F is an eigenfunction of the operator ∇²_θ, with eigenvalue C_θ.

A function of angles (θ, φ) can equivalently be thought of as a function of the unit vector n̂ that points in the direction of θ and φ, which can be written explicitly as

n̂ = sin θ cos φ ê₁ + sin θ sin φ ê₂ + cos θ ê₃ ,   (9.9)

where ê₁, ê₂, and ê₃ can also be written as ê_x, ê_y, and ê_z. I am labeling the unit vectors using the numbers 1, 2, and 3 when I am thinking about summing over the indices, and otherwise I use x, y, and z.

I now claim that the most general function of (θ, φ) can be written as a power series in n̂, or more precisely as a power series in the components of n̂. I will not prove this,
but it is true at least for square-integrable piecewise continuous functions F(θ, φ). Such a power series can be written as

F(n̂) = C^(0) + C^(1)_i n̂_i + C^(2)_{ij} n̂_i n̂_j + . . . + C^(ℓ)_{i₁i₂...i_ℓ} n̂_{i₁} n̂_{i₂} . . . n̂_{i_ℓ} + . . . ,   (9.10)

where repeated indices are summed from 1 to 3 (as Cartesian coordinates). Note that C^(ℓ)_{i₁i₂...i_ℓ} n̂_{i₁} n̂_{i₂} . . . n̂_{i_ℓ} represents the general term in the series, where the first three terms correspond to ℓ = 0, ℓ = 1, and ℓ = 2. The indices i₁, i₂, . . . , i_ℓ represent ℓ different indices, like i and j; since they are repeated, they are each summed from 1 to 3.

The coefficients C^(ℓ)_{i₁i₂...i_ℓ} are called tensors, and the number of indices is called the rank of the tensor. Note that C^(0), C^(1)_i, and C^(2)_{ij} are special cases of tensors, although they can also be considered a scalar, a vector, and a matrix.
It is possible to impose restrictions on the coefficients of Eq. (9.10) without actually restricting what can appear on the right-hand side of the equations. In particular, we will insist that

1) The tensors C^(ℓ)_{i₁i₂...i_ℓ} are symmetric under any reordering of the indices:

C^(ℓ)_{i₁i₂...i_ℓ} = C^(ℓ)_{j₁j₂...j_ℓ} ,   (9.11)

where {j₁, j₂, . . . , j_ℓ} is any permutation of {i₁, i₂, . . . , i_ℓ}.

2) The tensors C^(ℓ)_{i₁i₂...i_ℓ} are traceless, in the sense that if any two indices are set equal to each other and summed, the result is equal to zero. Since the tensors are already assumed to be symmetric, it does not matter which indices are summed, so we can choose the last two:

C^(ℓ)_{i₁i₂...i_{ℓ−2} j j} = 0 .   (9.12)
To explain why these restrictions on the C^(ℓ)'s do not impose any restriction on the right-hand side of Eq. (9.10), I will use the example of C^(2)_{ij}, but I think you will be able to see that the argument applies to all ℓ. The insistence that C^(2)_{ij} is symmetric can be seen to make no difference to the right-hand side of Eq. (9.10), because C^(2)_{ij} multiplies the symmetric tensor n̂_i n̂_j. Thus, if C^(2)_{ij} had an antisymmetric part, it would not contribute to the right-hand side of Eq. (9.10). The requirement of tracelessness is less obvious, but suppose that C^(2)_{ij} were not traceless. Then we could write

C^(2)_{ii} = λ ≠ 0 .   (9.13)
We could then define a new quantity,

C̃^(2)_{ij} = C^(2)_{ij} − (1/3) λ δ_ij .   (9.14)

It follows that C̃^(2)_{ij} is traceless:

C̃^(2)_{ii} = C^(2)_{ii} − (1/3) λ δ_ii = λ − λ = 0 ,   (9.15)

since δ_ii = 3. The original term C^(2)_{ij} n̂_i n̂_j can then be expressed in terms of C̃^(2)_{ij}:

C^(2)_{ij} n̂_i n̂_j = ( C̃^(2)_{ij} + (1/3) λ δ_ij ) n̂_i n̂_j = C̃^(2)_{ij} n̂_i n̂_j + (1/3) λ ,   (9.16)

where we used the fact that δ_ij n̂_i n̂_j = 1, since n̂ is a unit vector. The extra term, (1/3)λ, can then be absorbed into a redefinition of C^(0):

C̃^(0) = C^(0) + (1/3) λ .   (9.17)
Finally, we can write

C^(0) + C^(1)_i n̂_i + C^(2)_{ij} n̂_i n̂_j = C̃^(0) + C^(1)_i n̂_i + C̃^(2)_{ij} n̂_i n̂_j ,   (9.18)

so we can insist that the tensor that multiplies n̂_i n̂_j be traceless with no restriction on what functions can be expressed in this form.

I will call the ℓ'th term of this expansion F_ℓ(n̂), so

F_ℓ(n̂) = C^(ℓ)_{i₁i₂...i_ℓ} n̂_{i₁} n̂_{i₂} . . . n̂_{i_ℓ} .   (9.19)
3. EVALUATION OF ∇²_θ F_ℓ(n̂):

To evaluate ∇²_θ F_ℓ(n̂), we are going to take advantage of a convenient trick. Instead of dealing directly with F_ℓ(n̂), we will instead introduce a radial variable r, using it to define a coordinate vector

r⃗ = r n̂ .   (9.20)

Following the notation of Griffiths (see his Eq. (1.19)), I will denote the coordinates of r⃗ by x, y, and z, or in index notation I will call them x_i. So

r⃗ = x_i ê_i = x₁ ê₁ + x₂ ê₂ + x₃ ê₃ .   (9.21)
Then, given any F_ℓ(n̂) of the form given in Eq. (9.19), we can define a function F̃_ℓ(r⃗) by

F̃_ℓ(r⃗) = C^(ℓ)_{i₁i₂...i_ℓ} x_{i₁} x_{i₂} . . . x_{i_ℓ} = r^ℓ F_ℓ(n̂) .   (9.22)

Note that C^(ℓ)_{i₁i₂...i_ℓ} is the same rank-ℓ traceless symmetric tensor used to define F_ℓ(n̂), but we are defining F̃_ℓ(r⃗) by multiplying C^(ℓ)_{i₁i₂...i_ℓ} by x_{i₁} x_{i₂} . . . x_{i_ℓ} and then summing over indices, instead of multiplying by n̂_{i₁} n̂_{i₂} . . . n̂_{i_ℓ} and then summing.

Now we can make use of Eq. (9.1), which relates the full Laplacian ∇² to the angular Laplacian, ∇²_θ. We will find that in this case the full Laplacian and the radial derivative piece of Eq. (9.1) will both be simple, so we will be able to determine the angular Laplacian by evaluating the other terms in Eq. (9.1).
To evaluate ∇²F̃_ℓ(r⃗), we start with ℓ = 0. Clearly

∇²F̃₀(r⃗) = ∇²C^(0) = 0 ,   (9.23)

since the derivative of a constant vanishes. Similarly for ℓ = 1,

∇²F̃₁(r⃗) = ∇² C^(1)_i x_i = 0 ,   (9.24)

since the first derivative produces a constant, so the second derivative vanishes. The first nontrivial case is ℓ = 2:

∇²F̃₂(r⃗) = ∇² ( C^(2)_{ij} x_i x_j )
         = C^(2)_{ij} ∂/∂x_m ∂/∂x_m (x_i x_j)
         = C^(2)_{ij} ∂/∂x_m [ δ_im x_j + δ_jm x_i ]
         = 2 C^(2)_{ij} δ_im δ_jm = 2 C^(2)_{ii} = 0 ,   (9.25)

where in the last step we used the all-important fact that C^(2)_{ij} is traceless. We will look at one more case, and then I hope it will be clear that it generalizes. For ℓ = 3,
∇²F̃₃(r⃗) = ∇² ( C^(3)_{ijk} x_i x_j x_k )
         = C^(3)_{ijk} ∂/∂x_m ∂/∂x_m (x_i x_j x_k)
         = C^(3)_{ijk} ∂/∂x_m [ δ_im x_j x_k + (terms that symmetrize in ijk) ]
         = C^(3)_{ijk} [ δ_im δ_jm x_k + (terms that symmetrize in ijk) ]
         = C^(3)_{ijk} [ δ_ij x_k + (terms that symmetrize in ijk) ]
         = C^(3)_{iik} x_k + (terms that symmetrize in ijk) = 0 ,   (9.26)
where again it is the tracelessness of C^(ℓ)_{i₁i₂...i_ℓ} that caused the term to vanish. Thinking about the general term, one can see that after the derivatives are calculated, there are ℓ − 2 factors of x_i that remain, but there are still ℓ indices on C^(ℓ)_{i₁i₂...i_ℓ}. Since all indices are summed, there are always two indices on C^(ℓ)_{i₁i₂...i_ℓ} which are contracted (i.e., set equal to each other) and summed, which causes the result to vanish by the tracelessness condition. The bottom line, then, is that

∇²F̃_ℓ(r⃗) = 0   for all ℓ.   (9.27)
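The rank-2 case of Eq. (9.27) can be checked directly: for a traceless symmetric C_ij the polynomial C_ij x_i x_j is harmonic, and a trace part spoils this. (A sketch; the particular matrix is an arbitrary example of mine.)

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]

def laplacian(expr):
    return sum(sp.diff(expr, xi, 2) for xi in X)

C = sp.Matrix([[1, 2, 0], [2, 1, 3], [0, 3, -2]])   # symmetric, trace = 0
F2 = sum(C[i, j] * X[i] * X[j] for i in range(3) for j in range(3))
assert laplacian(F2) == 0                            # harmonic, as Eq. (9.25) says

C_bad = C + sp.eye(3)                                # trace now 3: not traceless
F2_bad = sum(C_bad[i, j] * X[i] * X[j] for i in range(3) for j in range(3))
assert laplacian(F2_bad) == 6                        # laplacian = 2 * trace
```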
To see what this says about ∇²_θ F_ℓ(n̂), recall that F̃_ℓ(r⃗) = r^ℓ F_ℓ(n̂). Using Eq. (9.1), we can write

0 = ∇²F̃_ℓ(r⃗) = [ (1/r²) ∂/∂r ( r² ∂/∂r ) + (1/r²) ∇²_θ ] F̃_ℓ(r⃗)
  = [ (1/r²) d/dr ( r² d(r^ℓ)/dr ) ] F_ℓ(n̂) + r^ℓ (1/r²) ∇²_θ F_ℓ(n̂)
  = r^{ℓ−2} [ ℓ(ℓ + 1) F_ℓ(n̂) + ∇²_θ F_ℓ(n̂) ] ,

and therefore

∇²_θ F_ℓ(n̂) = −ℓ(ℓ + 1) F_ℓ(n̂) .   (9.28)

Thus, we have found the eigenfunctions F_ℓ(n̂) = C^(ℓ)_{i₁i₂...i_ℓ} n̂_{i₁} n̂_{i₂} . . . n̂_{i_ℓ} and eigenvalues −ℓ(ℓ + 1) of the differential operator ∇²_θ. This is a very useful result!
4. GENERAL SOLUTION TO LAPLACE'S EQUATION IN SPHERICAL COORDINATES:

Now that we know the eigenfunctions of ∇²_θ, we can return to the solution to Laplace's equation by the separation of variables in spherical coordinates. We now know that in Eq. (9.6), the only allowed values of C_θ are −ℓ(ℓ + 1), where ℓ is an integer. Thus, Eq. (9.7) becomes

d/dr ( r² dR/dr ) = ℓ(ℓ + 1) R ,   (9.29)

and we can look for solutions by trying R(r) = r^p. We find consistency provided that

p(p + 1) = ℓ(ℓ + 1) ,   (9.30)
which is a quadratic equation with two roots, p = ℓ and p = −(ℓ + 1). Since we found two solutions to a second order linear differential equation, we know that any solution can be written as a linear sum of these two. Thus we can write

R_ℓ(r) = A_ℓ r^ℓ + B_ℓ / r^{ℓ+1} ,   (9.31)

allowing for different values of A_ℓ and B_ℓ for each ℓ. The most general solution to Laplace's equation, in spherical coordinates, can then be written as

Φ(r⃗) = Σ_{ℓ=0}^{∞} ( A_ℓ r^ℓ + B_ℓ / r^{ℓ+1} ) C^(ℓ)_{i₁i₂...i_ℓ} n̂_{i₁} n̂_{i₂} . . . n̂_{i_ℓ} ,   (9.32)

where the A_ℓ's and B_ℓ's are arbitrary constants, and each C^(ℓ)_{i₁i₂...i_ℓ} is an arbitrary traceless symmetric tensor.
In Lecture Notes 8 it is shown explicitly that the ℓ'th term here, when compared with the standard expansion in the spherical harmonic functions Y_{ℓm}(θ, φ), corresponds to the sum of all terms with the same ℓ, but for all m. The Y_{ℓm}'s are defined for integer values of m from −ℓ to ℓ, so there are 2ℓ + 1 terms for each value of ℓ.

5. COUNTING THE NUMBER OF LINEARLY INDEPENDENT TRACELESS SYMMETRIC TENSORS:

Using the fact that C^(ℓ)_{i₁i₂...i_ℓ} is traceless and symmetric, we can determine how many linearly independent such tensors exist, for any given ℓ. In other words, how many real constants are needed to parameterize the most general traceless symmetric tensor of rank ℓ?
We begin by calculating N_sym(ℓ), the number of linearly independent symmetric tensors of rank ℓ. Note that we are postponing the consideration of tracelessness. For grounding, we can start by saying that it takes one number to specify S^(0), a rank-0 symmetric tensor, because it is just a number. (I am using S for symmetric tensors, while reserving C for traceless symmetric tensors.) It takes 3 numbers to specify S^(1)_i, since the 3 values S^(1)_1, S^(1)_2, and S^(1)_3 can each be specified independently. For S^(2)_{ij}, however, we see the constraints of symmetry: S^(2)_{ij} has to equal S^(2)_{ji}, so there are fewer than 9 independent values; there are 6, which can be taken to be S^(2)_{11}, S^(2)_{12}, S^(2)_{13}, S^(2)_{22}, S^(2)_{23}, and S^(2)_{33}, with S^(2)_{21}, S^(2)_{31}, and S^(2)_{32} determined by symmetry. Since the order of the indices does not matter, we can always list the indices in ascending order (as I did for S^(2)_{ij}) and then each independent entry will occur once. When the indices are so written
in ascending order, then the index values are completely determined if we just specify
how many indices are equal to 1, how many are equal to 2, and how many are equal to 3.
If it helps, we can imagine three hats, labeled 1, 2, and 3, and (cid:16) indistinguishable balls,
representing the (cid:16) indices of a rank (cid:16) tensor. There is then a 1:1 correspondence between
independent tensor elements and the different ways that the balls can be put into the
hats. For example, if there are 9 balls, with 3 in the first hat, 2 in the second, and 4 in
the third, then this arrangement of balls corresponds to S^(9)_{111223333}. So now we just have to figure out how many different ways we can put ℓ indistinguishable balls into 3 hats.
One way of counting the balls-in-hats problem is to imagine first labeling each ball with a number, so they are no longer indistinguishable. We also introduce 2 dividers, where 2 is one less than the number of hats. Initially we will also assign numbers to the 2 dividers. Thinking of the balls and dividers together, we have ℓ + 2 distinguishable objects. We can imagine listing them in all possible orderings, and with ℓ + 2 distinguishable objects there are (ℓ + 2)! orderings. For each ordering there is an equivalent balls-in-hats assignment. The balls to the left of the left-most divider are assigned to hat 1, the balls between the two dividers are assigned to hat 2, and the balls to the right of the right-most divider are assigned to hat 3. We have of course overcounted, since many different orderings of our ℓ + 2 objects will lead to the same number of balls in each hat. However, we can see exactly by how much we have overcounted. We can re-order the ℓ balls without changing the number of balls in each hat, and we can interchange the two dividers. So we have overcounted by a factor of 2 · ℓ!. Thus,
N_sym(ℓ) = (ℓ + 2)! / (2 · ℓ!) = (1/2)(ℓ + 1)(ℓ + 2) .   (9.33)
It is easily checked that this gives N_sym(0) = 1, N_sym(1) = 3, and N_sym(2) = 6, consistent with the examples we started with.

To impose tracelessness, we require that our symmetric tensors also satisfy

S^(ℓ)_{i₁...i_{ℓ−2} j j} = 0 .   (9.34)

How many conditions is this? One can see that the tensor on the left is a symmetric tensor of rank ℓ − 2, with free indices i₁ . . . i_{ℓ−2}. The number of conditions is then N_sym(ℓ − 2).
The number of linearly independent traceless symmetric tensors is then given by

N_traceless sym(ℓ) = N_sym(ℓ) − N_sym(ℓ − 2) = (1/2)(ℓ + 1)(ℓ + 2) − (1/2)(ℓ − 1)ℓ = 2ℓ + 1 .   (9.35)
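The counting formulas (9.33) and (9.35) can be confirmed by brute force, enumerating the multisets of ℓ indices directly (an illustrative check; the helper names are mine):

```python
from itertools import combinations_with_replacement
from math import factorial

def n_sym(l):
    """Symmetric rank-l tensors in 3D = multisets of l indices from {1,2,3}."""
    return len(list(combinations_with_replacement(range(3), l)))

for l in range(2, 9):
    assert n_sym(l) == factorial(l + 2) // (2 * factorial(l))   # Eq. (9.33)
    assert n_sym(l) == (l + 1) * (l + 2) // 2
    assert n_sym(l) - n_sym(l - 2) == 2 * l + 1                 # Eq. (9.35)
```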
So the correspondence with the standard Y_{ℓm}'s is consistent, as it would have to be. The spherical harmonic expansion is just a rewriting of Eq. (9.32), with a particular choice of basis for the 2ℓ + 1 independent traceless symmetric tensors of rank ℓ.
6. SPECIAL CASE: AZIMUTHAL SYMMETRY:

Azimuthal symmetry means symmetry under rotation about an axis, which we will take to be the z-axis. Equivalently, we can say that a problem is azimuthally symmetric if nothing depends on the coordinate φ. To specialize the general expansion (9.10) for a function of (θ, φ) to the azimuthally symmetric case, we need to construct traceless symmetric tensors which are invariant under rotations about the z-axis. One straightforward way to do this is to build the traceless symmetric tensor from the vector ẑ, the unit vector in the z direction. Note that the x-, y-, and z-components of ẑ are 0, 0, and 1, respectively, so ẑ_i = δ_{i3}.
[You may have noticed an inconsistency in my notation, as earlier (e.g., Eq. (9.9)) I used ê₃ or ê_z for the unit vector in the z direction. In this case my inconsistency was intentional, with two motivations. First, previously we never needed a notation for the components of a unit basis vector, but here we will. It would be a real pain to write (ê_z)_i. Second, in Lecture Notes 8 I describe a convenient way to construct a basis for the traceless symmetric tensors, which involves the use of a basis for vectors consisting of ẑ and two complex vectors û₊ and û₋. So, the use of ẑ rather than ê_z will remind us that we are thinking about the (û₊, û₋, ẑ) basis, rather than the (ê_x, ê_y, ê_z) basis.]
A rank-ℓ tensor can be constructed from ẑ simply by taking the product ẑ_{i₁} ẑ_{i₂} . . . ẑ_{i_ℓ}. This is symmetric, and can be made traceless by extracting the traceless part. Extracting the traceless part means subtracting terms proportional to one or more Kronecker δ-functions in such a way that the result is traceless. It gets rather complicated to describe how this can be done for a general symmetric tensor of arbitrary rank, so I will just illustrate it by example. Tensors of rank 0 and 1 (i.e., scalars and vectors) are by definition traceless. I will use curly brackets { . . . } to denote the traceless symmetric part of . . . . Thus,

{ 1 } = 1 ,   { ẑ_i } = ẑ_i .   (9.36)
For rank 2, the trace of ẑ_i ẑ_j is equal to ẑ_i ẑ_i = ẑ · ẑ = 1, but we can subtract a constant times δ_ij so that the result is traceless:

{ ẑ_i ẑ_j } = ẑ_i ẑ_j − (1/3) δ_ij .   (9.37)

The coefficient is 1/3 because the trace of δ_ij is

δ_ii = 3 .   (9.38)
For rank 3, ẑ_i ẑ_j ẑ_k has trace ẑ_i ẑ_i ẑ_k = ẑ_k, but we can make it traceless with a subtraction:

{ ẑ_i ẑ_j ẑ_k } = ẑ_i ẑ_j ẑ_k − (1/5) ( ẑ_i δ_jk + ẑ_j δ_ik + ẑ_k δ_ij ) .   (9.39)

The subtraction must of course be symmetrized, as shown, since we are trying to construct a traceless symmetric tensor. To verify that 1/5 is the right coefficient to make the expression traceless, we can take its trace. Since it is symmetric we can sum over any pair of indices. I will choose to sum over i and j:

δ_ij { ẑ_i ẑ_j ẑ_k } = ẑ_i ẑ_i ẑ_k − (1/5) ( ẑ_i δ_ik + ẑ_i δ_ik + ẑ_k δ_ii ) = ẑ_k − (1/5) ( ẑ_k + ẑ_k + 3 ẑ_k ) = 0 .   (9.40)
For rank 4 there is the option of subtracting terms with either one or two Kronecker $\delta$-functions, and both are needed to give a traceless result. We can start with arbitrary coefficients, and see what they have to be:
$$\{\,\hat z_i\hat z_j\hat z_k\hat z_m\,\} = \hat z_i\hat z_j\hat z_k\hat z_m + c_1\left(\hat z_i\hat z_j\,\delta_{km} + \hat z_i\hat z_k\,\delta_{mj} + \hat z_i\hat z_m\,\delta_{jk} + \hat z_j\hat z_k\,\delta_{im} + \hat z_j\hat z_m\,\delta_{ik} + \hat z_k\hat z_m\,\delta_{ij}\right) + c_2\left(\delta_{ij}\delta_{km} + \delta_{ik}\delta_{jm} + \delta_{im}\delta_{jk}\right)\,. \tag{9.41}$$
Within each set of parentheses, the terms are chosen to make the expression symmetric in $i$, $j$, $k$, and $m$. If we calculate the trace over $i$ and $j$, we find
$$\delta_{ij}\{\,\hat z_i\hat z_j\hat z_k\hat z_m\,\} = \hat z_k\hat z_m + c_1\left(\delta_{km} + \hat z_k\hat z_m + \hat z_k\hat z_m + \hat z_k\hat z_m + \hat z_k\hat z_m + 3\hat z_k\hat z_m\right) + c_2\left(3\delta_{km} + \delta_{km} + \delta_{km}\right) = \left(1 + 7c_1\right)\hat z_k\hat z_m + \left(c_1 + 5c_2\right)\delta_{km}\,. \tag{9.42}$$
For the expression to vanish for all $k$ and $m$, the two terms must vanish separately, so $c_1 = -1/7$ and $c_2 = 1/35$. Thus,
$$\{\,\hat z_i\hat z_j\hat z_k\hat z_m\,\} = \hat z_i\hat z_j\hat z_k\hat z_m - \tfrac{1}{7}\left(\hat z_i\hat z_j\,\delta_{km} + \hat z_i\hat z_k\,\delta_{mj} + \hat z_i\hat z_m\,\delta_{jk} + \hat z_j\hat z_k\,\delta_{im} + \hat z_j\hat z_m\,\delta_{ik} + \hat z_k\hat z_m\,\delta_{ij}\right) + \tfrac{1}{35}\left(\delta_{ij}\delta_{km} + \delta_{ik}\delta_{jm} + \delta_{im}\delta_{jk}\right)\,. \tag{9.43}$$
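As a quick numerical sanity check on the coefficients $-1/7$ and $1/35$ (my own illustration, not part of the original notes; the function name is invented), one can build the rank-4 tensor of Eq. (9.43) for an arbitrary unit vector and confirm that every trace vanishes:

```python
import numpy as np

def traceless_rank4(z):
    """Return { z_i z_j z_k z_m } with c1 = -1/7, c2 = 1/35 (Eq. 9.43)."""
    z = np.asarray(z, dtype=float)
    z = z / np.linalg.norm(z)           # ensure a unit vector
    d = np.eye(3)
    zz = np.einsum('i,j->ij', z, z)
    T = np.einsum('i,j,k,m->ijkm', z, z, z, z)
    # the six symmetrized one-delta terms of Eq. (9.41)
    one_delta = (np.einsum('ij,km->ijkm', zz, d) + np.einsum('ik,mj->ijkm', zz, d)
                 + np.einsum('im,jk->ijkm', zz, d) + np.einsum('jk,im->ijkm', zz, d)
                 + np.einsum('jm,ik->ijkm', zz, d) + np.einsum('km,ij->ijkm', zz, d))
    # the three two-delta terms
    two_delta = (np.einsum('ij,km->ijkm', d, d) + np.einsum('ik,jm->ijkm', d, d)
                 + np.einsum('im,jk->ijkm', d, d))
    return T - one_delta / 7.0 + two_delta / 35.0

T = traceless_rank4([0.3, -1.2, 0.5])
trace_km = np.einsum('ijkk->ij', T)     # trace over the last index pair
trace_ij = np.einsum('iikm->km', T)     # trace over the first index pair
```

Since the tensor is symmetric, tracing over any index pair gives the same (vanishing) result.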
It can be shown that any traceless symmetric tensor of rank $\ell$ that is invariant under rotations about the $z$-axis is proportional to $\{\,\hat z_{i_1}\cdots\hat z_{i_\ell}\,\}$. (To see this, note that in Lecture Notes 8 we construct a complete $(2\ell+1)$-dimensional basis for the traceless symmetric tensors of rank $\ell$. They depend on the azimuthal angle $\varphi$ as $z_m(\varphi) \equiv e^{im\varphi}$, with $m$ taking integer values from $-\ell$ to $\ell$. Since these functions of $\varphi$ are orthogonal in the sense that $\int_0^{2\pi} z_{m'}^*(\varphi)\,z_m(\varphi)\,d\varphi = 2\pi\,\delta_{m'm}$, any traceless symmetric tensor of rank $\ell$ that is independent of $\varphi$ must be proportional to the $m = 0$ basis tensor.) Since Eq. (9.10)
tells us how to expand an arbitrary function of θ and φ in terms of traceless symmetric
tensors, we can now say that functions of θ alone (i.e., azimuthally symmetric functions
of nˆ) can be expanded as
$$F(\theta) = c_0 + c_1\{\hat z_i\}\,\hat n_i + c_2\{\hat z_i\hat z_j\}\,\hat n_i\hat n_j + \ldots + c_\ell\{\hat z_{i_1}\cdots\hat z_{i_\ell}\}\,\hat n_{i_1}\cdots\hat n_{i_\ell} + \ldots\,, \tag{9.44}$$
where the $c_\ell$'s are constants. This corresponds to what is standardly called an expansion in Legendre polynomials. In Lecture Notes 8 I show exactly how to relate these terms to the standard conventions for normalizing the Legendre polynomials, but we can see here exactly what these functions are. Using $\hat z\cdot\hat n = \cos\theta$, we have
$$\begin{aligned}
\{\,1\,\} &= 1\\
\{\hat z_i\}\,\hat n_i &= \cos\theta\\
\{\hat z_i\hat z_j\}\,\hat n_i\hat n_j &= \cos^2\theta - \tfrac{1}{3}\\
\{\hat z_i\hat z_j\hat z_k\}\,\hat n_i\hat n_j\hat n_k &= \cos^3\theta - \tfrac{3}{5}\cos\theta\\
\{\hat z_i\hat z_j\hat z_k\hat z_m\}\,\hat n_i\hat n_j\hat n_k\hat n_m &= \cos^4\theta - \tfrac{6}{7}\cos^2\theta + \tfrac{3}{35}\,.
\end{aligned} \tag{9.45}$$
Up to a normalization convention described in Lecture Notes 8, these are the Legendre polynomials $P_\ell(\cos\theta)$.
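The proportionality can be checked numerically: the contractions in Eq. (9.45) equal $\ell!/(2\ell-1)!!$ times $P_\ell(\cos\theta)$. The sketch below (my own check, not from the notes) verifies this with NumPy's Legendre routines:

```python
import math
import numpy as np
from numpy.polynomial.legendre import legval

def double_factorial(n):
    """(2l-1)!! with (-1)!! = 1."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

# Contractions { z...z } n...n from Eq. (9.45), as functions of c = cos(theta)
contractions = [
    lambda c: np.ones_like(c),
    lambda c: c,
    lambda c: c**2 - 1/3,
    lambda c: c**3 - (3/5)*c,
    lambda c: c**4 - (6/7)*c**2 + 3/35,
]

c = np.linspace(-1.0, 1.0, 201)
mismatch = 0.0
for ell, f in enumerate(contractions):
    P_ell = legval(c, [0.0]*ell + [1.0])     # Legendre polynomial P_ell(c)
    norm = math.factorial(ell) / double_factorial(2*ell - 1)
    mismatch = max(mismatch, float(np.max(np.abs(f(c) - norm*P_ell))))
```

For example, $\ell = 2$ gives $\tfrac{2}{3}P_2(\cos\theta) = \cos^2\theta - \tfrac13$, matching Eq. (9.45).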
7. THE MULTIPOLE EXPANSION:
The most general solution to Laplace's equation, in spherical coordinates, was given as Eq. (9.32). We now wish to apply that result to a common situation: suppose we have
a charged object, and we wish to describe the potential outside of the object. Let’s say for
definiteness that the charge of the object is entirely contained within a sphere of radius
R, centered at the origin. In that case Laplace’s equation will hold for all r > R, so there
should be a solution of the form of Eq. (9.32) that is valid throughout this region. Since at infinity the potential of a localized charge distribution will always approach a constant, which we can take to be zero, we can see that the $A_\ell$ coefficients that appear in Eq. (9.32) must all vanish. The $B_\ell$ factors can be absorbed into the definition of $C^{(\ell)}_{i_1\ldots i_\ell}$, so we can write the expansion as
$$\Phi(\vec r\,) = \sum_{\ell=0}^{\infty}\frac{1}{r^{\ell+1}}\,C^{(\ell)}_{i_1\ldots i_\ell}\,\hat n_{i_1}\cdots\hat n_{i_\ell}\,. \tag{9.46}$$
Since each successive term comes with an extra factor of $1/r$, at large distances the sum is dominated by the first term or maybe the first few terms. All the information about the charge distribution of the object is contained in the $C^{(\ell)}_{i_1\ldots i_\ell}$, so knowledge of the first few $C^{(\ell)}_{i_1\ldots i_\ell}$ is enough to describe the field at large distances, no matter how complicated the object.
The first few terms of this series have special names: the $\ell = 0$ term is the monopole term, the $\ell = 1$ term is the dipole, the $\ell = 2$ term is the quadrupole, and the $\ell = 3$ term is the octupole.
If we want to calculate the $C^{(\ell)}_{i_1\ldots i_\ell}$ in terms of the charge distribution, we can start with the general equation for the potential of an arbitrary charge distribution:
$$V(\vec r\,) = \frac{1}{4\pi\epsilon_0}\int\frac{\rho(\vec r\,')}{|\vec r - \vec r\,'|}\,d^3x'\,. \tag{9.47}$$
The multipole expansion can then be derived by expanding $1/|\vec r - \vec r\,'|$ in a power series in $\vec r\,'$.
I’ll begin by doing it as Griffiths does, which gives the simplest — but not the most
useful — form of the multipole expansion. Griffiths rewrote the denominator as
$$\frac{1}{|\vec r - \vec r\,'|} = \frac{1}{\sqrt{|\vec r\,|^2 + |\vec r\,'|^2 - 2\,\vec r\cdot\vec r\,'}} = \frac{1}{\sqrt{r^2 + r'^2 - 2rr'\cos\theta'}}\,, \tag{9.48}$$
where $r$ and $r'$ are the lengths of the vectors $\vec r$ and $\vec r\,'$, respectively, and $\theta'$ is the angle between these vectors. Next he used the fact that the Legendre polynomials can be defined by the generating function
$$g(x, \lambda) = \frac{1}{\sqrt{1 + \lambda^2 - 2\lambda x}}\,, \tag{9.49}$$
which means that the Legendre polynomials $P_\ell(x)$ can be obtained by expanding $g(x,\lambda)$ in a power series in $\lambda$:
$$g(x, \lambda) = \frac{1}{\sqrt{1 + \lambda^2 - 2\lambda x}} = \sum_{\ell=0}^{\infty}\lambda^\ell P_\ell(x)\,. \tag{9.50}$$
Eq. (9.50) is sometimes taken as the definition of the Legendre polynomials, and some-
times it is derived from another definition. In any case, if we accept Eq. (9.50) as valid,
then
$$\frac{1}{|\vec r - \vec r\,'|} = \frac{1}{r\sqrt{1 + \left(\frac{r'}{r}\right)^2 - 2\,\frac{r'}{r}\cos\theta'}} = \frac{1}{r}\sum_{\ell=0}^{\infty}\left(\frac{r'}{r}\right)^{\!\ell} P_\ell(\cos\theta')\,. \tag{9.51}$$
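As an aside (my own numerical check, not part of the notes), the generating-function identity (9.50) is easy to verify by truncating the sum at a modest $\ell_{\max}$ for $|\lambda| < 1$:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def gen_fn(x, lam):
    """Left-hand side of Eq. (9.50)."""
    return 1.0 / np.sqrt(1.0 + lam**2 - 2.0*lam*x)

def legendre_sum(x, lam, lmax=60):
    """Truncated right-hand side: sum over l of lam^l * P_l(x)."""
    return sum(lam**l * legval(x, [0.0]*l + [1.0]) for l in range(lmax + 1))

x = np.linspace(-1.0, 1.0, 101)
err = float(np.max(np.abs(gen_fn(x, 0.4) - legendre_sum(x, 0.4))))
```

Since $|P_\ell(x)| \le 1$ on $[-1,1]$, the truncation error is bounded by $\lambda^{\ell_{\max}+1}/(1-\lambda)$, which is negligible here.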
Inserting this relation into Eq. (9.47), we find
$$V(\vec r\,) = \frac{1}{4\pi\epsilon_0}\sum_{\ell=0}^{\infty}\frac{1}{r^{\ell+1}}\int r'^{\,\ell}\,\rho(\vec r\,')\,P_\ell(\cos\theta')\,d^3x'\,. \tag{9.52}$$
This is the easiest way that I know to show that there is an expansion of $V(\vec r\,)$ in powers of $1/r$, but the complication is that $\cos\theta'$ appears inside the integral. If we could implement Eq. (9.46), we would be able to calculate (or maybe measure) a small number of the quantities $C^{(\ell)}_{i_1\ldots i_\ell}$, and then we would be able to evaluate $V(\vec r\,)$ at large distances in any direction. To use Eq. (9.52) directly, however, one would have to repeat the integration for every direction of $\vec r$. Griffiths works around this problem by massaging the formula to extract the monopole and dipole terms, and in Problem 3 of Problem Set 5 you had the opportunity to carry this out for the quadrupole and octupole terms.
The standard method of "improving" Eq. (9.52) is to use spherical harmonics, but here I will derive the equivalent relations using the traceless symmetric tensor approach. Instead of expanding $1/|\vec r - \vec r\,'|$ in powers of $r'$, we will think of it as a function of three variables — the components $x'_i$ of $\vec r\,'$ — and we will expand it as a Taylor series in 3 variables. To make the formalism clear, I will define the function
$$f(\vec r\,') \equiv \frac{1}{|\vec r - \vec r\,'|}\,. \tag{9.53}$$
The function can then be expanded in a power series using the standard multi-variable Taylor expansion:
$$f(\vec r\,') = f(\vec 0\,) + \left.\frac{\partial f}{\partial x'_i}\right|_{\vec r\,'=\vec 0} x'_i + \frac{1}{2!}\left.\frac{\partial^2 f}{\partial x'_i\,\partial x'_j}\right|_{\vec r\,'=\vec 0} x'_i x'_j + \ldots\,, \tag{9.54}$$
where the repeated indices are summed. To separate the angular behavior, we write
$$x'_i = r'\,\hat n'_i\,, \tag{9.55}$$
so Eq. (9.54) becomes
$$f(\vec r\,') = f(\vec 0\,) + r'\,\hat n'_i\left.\frac{\partial f}{\partial x'_i}\right|_{\vec r\,'=\vec 0} + \frac{r'^2}{2!}\left.\frac{\partial^2 f}{\partial x'_i\,\partial x'_j}\right|_{\vec r\,'=\vec 0}\hat n'_i\hat n'_j + \ldots\,. \tag{9.56}$$
The notation can now be simplified by noting that since $f$ is a function of $\vec r - \vec r\,'$, the derivatives with respect to $x'_i$ can be replaced by derivatives with respect to $x_i$ with a change of sign:
$$\left.\frac{\partial f}{\partial x'_i}\right|_{\vec r\,'=\vec 0} = \left.\frac{\partial}{\partial x'_i}\left(\frac{1}{|\vec r - \vec r\,'|}\right)\right|_{\vec r\,'=\vec 0} = -\left.\frac{\partial}{\partial x_i}\left(\frac{1}{|\vec r - \vec r\,'|}\right)\right|_{\vec r\,'=\vec 0} = -\frac{\partial}{\partial x_i}\left(\frac{1}{|\vec r\,|}\right)\,. \tag{9.57}$$
This allows us to write the derivatives in the expansion (9.56) much more simply. The $\ell$'th derivative is found by repeating the above operation $\ell$ times:
$$\left.\frac{\partial^\ell f}{\partial x'_{i_1}\cdots\partial x'_{i_\ell}}\right|_{\vec r\,'=\vec 0} = (-1)^\ell\,\frac{\partial^\ell}{\partial x_{i_1}\cdots\partial x_{i_\ell}}\,\frac{1}{|\vec r\,|}\,. \tag{9.58}$$
Combining Eqs. (9.58) with (9.56), we can write
$$\frac{1}{|\vec r - \vec r\,'|} = \sum_{\ell=0}^{\infty}\frac{(-1)^\ell r'^{\,\ell}}{\ell!}\left(\frac{\partial^\ell}{\partial x_{i_1}\cdots\partial x_{i_\ell}}\,\frac{1}{|\vec r\,|}\right)\hat n'_{i_1}\cdots\hat n'_{i_\ell}\,. \tag{9.59}$$
Note that the quantity in parentheses in the equation above is traceless, because
$$\frac{\partial^\ell}{\partial x_i\,\partial x_i\,\partial x_{i_3}\cdots\partial x_{i_\ell}}\,\frac{1}{|\vec r\,|} = \frac{\partial^{\ell-2}}{\partial x_{i_3}\cdots\partial x_{i_\ell}}\,\nabla^2\frac{1}{|\vec r\,|} = 0\,, \tag{9.60}$$
because $\nabla^2(1/|\vec r\,|) = 0$ except at $\vec r = 0$. So we can see the traceless symmetric tensor formalism emerging.
To evaluate this quantity, we will work out the first several terms until we recognize the pattern. We write
$$\vec r \equiv r\,\hat n\,,$$
and adopt the abbreviation
$$\partial_i \equiv \frac{\partial}{\partial x_i}\,.$$
It is useful to start by evaluating the derivatives of the basic quantities $r$ and $\hat n_i$:
$$\partial_i r = \partial_i\left(x_j x_j\right)^{1/2} = \tfrac{1}{2}\left(x_k x_k\right)^{-1/2}\partial_i\left(x_j x_j\right) = \frac{1}{2r}\,2x_j\delta_{ij} = \frac{x_i}{r} = \hat n_i\,,$$
$$\partial_i\hat n_j = \partial_i\,\frac{x_j}{r} = \frac{\delta_{ij}}{r} - \frac{1}{r^2}\,x_j\,\partial_i r = \frac{1}{r}\left(\delta_{ij} - \hat n_i\hat n_j\right)\,. \tag{9.61}$$
It is then straightforward to show that
$$\partial_i\left(\frac{1}{r}\right) = -\frac{1}{r^2}\,\hat n_i\,, \tag{9.62}$$
$$\partial_i\partial_j\left(\frac{1}{r}\right) = \frac{3}{r^3}\,\{\hat n_i\hat n_j\}\,, \tag{9.63}$$
$$\partial_i\partial_j\partial_k\left(\frac{1}{r}\right) = -\frac{3\cdot 5}{r^4}\,\{\hat n_i\hat n_j\hat n_k\}\,, \tag{9.64}$$
where $\{\ \}$ denotes the traceless symmetric part, and the relevant cases are shown explicitly in Eqs. (9.36) – (9.39). It becomes clear that the general formula, which can be proven by induction, is
$$\frac{\partial^\ell}{\partial x_{i_1}\cdots\partial x_{i_\ell}}\,\frac{1}{|\vec r\,|} = \frac{(-1)^\ell\,(2\ell-1)!!}{r^{\ell+1}}\,\{\hat n_{i_1}\cdots\hat n_{i_\ell}\}\,, \tag{9.65}$$
where
$$(2\ell-1)!! \equiv (2\ell-1)(2\ell-3)(2\ell-5)\cdots 1 = \frac{(2\ell)!}{2^\ell\,\ell!}\,, \quad\text{with } (-1)!! \equiv 1\,. \tag{9.66}$$
Inserting this result into Eq. (9.59), we find
$$\frac{1}{|\vec r - \vec r\,'|} = \sum_{\ell=0}^{\infty}\frac{(2\ell-1)!!}{\ell!}\,\frac{r'^{\,\ell}}{r^{\ell+1}}\,\{\hat n_{i_1}\cdots\hat n_{i_\ell}\}\,\hat n'_{i_1}\cdots\hat n'_{i_\ell}\,. \tag{9.67}$$
One can write this more symmetrically by writing
$$\frac{1}{|\vec r - \vec r\,'|} = \sum_{\ell=0}^{\infty}\frac{(2\ell-1)!!}{\ell!}\,\frac{r'^{\,\ell}}{r^{\ell+1}}\,\{\hat n_{i_1}\cdots\hat n_{i_\ell}\}\,\{\hat n'_{i_1}\cdots\hat n'_{i_\ell}\}\,, \tag{9.67}$$
since $\{\hat n'_{i_1}\cdots\hat n'_{i_\ell}\}$ differs from $\hat n'_{i_1}\cdots\hat n'_{i_\ell}$ by terms proportional to Kronecker $\delta$-functions, which vanish when summed with the traceless tensor $\{\hat n_{i_1}\cdots\hat n_{i_\ell}\}$. Starting with Eq. (9.67), one can if one wishes drop the curly brackets around either factor (but not both!).
Inserting this expression for $1/|\vec r - \vec r\,'|$ into Eq. (9.47), we have the final result
$$V(\vec r\,) = \frac{1}{4\pi\epsilon_0}\sum_{\ell=0}^{\infty}\frac{1}{r^{\ell+1}}\,C^{(\ell)}_{i_1\ldots i_\ell}\,\hat n_{i_1}\cdots\hat n_{i_\ell}\,, \tag{9.68}$$
where
$$C^{(\ell)}_{i_1\ldots i_\ell} = \frac{(2\ell-1)!!}{\ell!}\int\rho(\vec r\,')\,\{x'_{i_1}\cdots x'_{i_\ell}\}\,d^3x'\,. \tag{9.69}$$
Note that the coefficient in the above expression can also be written as
$$\frac{(2\ell-1)!!}{\ell!} = \frac{(2\ell)!}{2^\ell\,(\ell!)^2}\,. \tag{9.70}$$
For purposes of illustration, I will write out the first two terms — the monopole and dipole terms — in a bit more detail. The monopole term can be written as
$$V_{\rm mono}(\vec r\,) = \frac{1}{4\pi\epsilon_0}\,\frac{Q}{r}\,, \tag{9.71}$$
where
$$Q = C^{(0)} = \int\rho(\vec r\,')\,d^3x'\,. \tag{9.72}$$
The dipole term is
$$V_{\rm dip}(\vec r\,) = \frac{1}{4\pi\epsilon_0}\,\frac{\vec p\cdot\hat n}{r^2}\,, \tag{9.73}$$
where
$$p_i = C^{(1)}_i = \int\rho(\vec r\,')\,x'_i\,d^3x'\,. \tag{9.74}$$
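As a numerical illustration (my own sketch, with invented charge values), one can place a few point charges near the origin, compute $Q$ and $\vec p$ from Eqs. (9.72) and (9.74), and check that $V_{\rm mono} + V_{\rm dip}$ approaches the exact potential at large $r$:

```python
import numpy as np

k = 1.0  # work in units where 1/(4 pi eps0) = 1
charges = [( 1.5, np.array([0.1, 0.0, 0.2])),   # (q, position) -- illustrative values
           (-0.5, np.array([-0.2, 0.1, 0.0])),
           (-0.3, np.array([0.0, -0.1, 0.1]))]

Q = sum(q for q, _ in charges)                  # Eq. (9.72)
p = sum(q * pos for q, pos in charges)          # Eq. (9.74)

def V_exact(r_vec):
    return sum(k*q / np.linalg.norm(r_vec - pos) for q, pos in charges)

def V_multipole(r_vec):
    r = np.linalg.norm(r_vec)
    n = r_vec / r
    return k*Q/r + k*np.dot(p, n)/r**2          # Eqs. (9.71) and (9.73)

direction = np.array([0.3, -0.5, 0.8])
r_vec = 50.0 * direction / np.linalg.norm(direction)
rel_err = abs(V_exact(r_vec) - V_multipole(r_vec)) / abs(V_exact(r_vec))
```

The leftover discrepancy is the quadrupole and higher terms, suppressed by further powers of (source size)/$r$.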
MIT OpenCourseWare
http://ocw.mit.edu
8.07 Electromagnetism II
Fall 2012
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
LECTURE 10
• Readings: Section 3.6
Lecture outline
• More on continuous r.v.s
• Derived distributions
Review
Conditioning “slices” the joint PDF
• Recall the stick-breaking example:
• Pictorially:
Buffon’s Needle (1)
• Parallel lines at distance
Needle of length (assume )
• Find P(needle intersects one of the lines).
• Midpoint-nearest line distance:
• Needle-lines acute angle:
Buffon’s Needle (2)
• Model: uniform and independent.
• When does the needle intersect a line?
Buffon’s Needle (3)
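The slides leave the line spacing $d$ and needle length $\ell$ symbolic; the classic result for $\ell \le d$ is $P(\text{intersect}) = 2\ell/(\pi d)$. A Monte Carlo sketch of the model on the previous slides (my own illustration, with placeholder values $d = 1$, $\ell = 0.7$):

```python
import math
import random

def buffon_estimate(n, d=1.0, ell=0.7, seed=0):
    """Monte Carlo estimate of the needle-line intersection probability.

    X = distance from needle midpoint to nearest line, uniform on [0, d/2];
    Theta = acute angle between needle and lines, uniform on [0, pi/2];
    the needle intersects a line iff X <= (ell/2) * sin(Theta).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(0.0, d/2.0)
        theta = rng.uniform(0.0, math.pi/2.0)
        if x <= (ell/2.0) * math.sin(theta):
            hits += 1
    return hits / n

est = buffon_estimate(200_000)
exact = 2*0.7 / (math.pi*1.0)   # 2*ell/(pi*d)
```

The analytic probability comes from integrating the intersection event over the uniform joint density of (X, Θ).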
What is a derived distribution?
• It is a PMF or PDF of a function of random
variables with known probability law.
• Example:
• Let: . Note: is a r.v.
• Obtaining the PDF for
involves deriving a distribution.
Why do we derive distributions?
• Sometimes we don’t need to. Example:
– Computing expected values.
• But often they’re useful. Examples:
– Maximum of several r.v.s. (delay models)
– Minimum of several r.v.s (failure models).
– Sum of several r.v.s. (multiple arrivals)
How to find them: Discrete Case
• Consider:
- a single discrete r.v.:
- and a function:
• Obtain probability mass for each
possible value of :
How to find them: Continuous Case
• Consider:
- a single continuous r.v.:
- and a function:
• Two step procedure:
1. Get CDF of :
2. Differentiate to get:
• Why go to the CDF?
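A concrete sketch of the two-step procedure (my own example, not from the slides): let X be uniform on [0, 1] and Y = X². Step 1 gives F_Y(y) = P(X² ≤ y) = √y, and step 2 gives f_Y(y) = 1/(2√y) on (0, 1]. A quick empirical check of the CDF:

```python
import math
import random

def F_Y_analytic(y):
    """CDF of Y = X^2 with X ~ Uniform[0,1]: F_Y(y) = P(X <= sqrt(y)) = sqrt(y)."""
    return math.sqrt(y)

def F_Y_empirical(y, n=100_000, seed=1):
    """Fraction of simulated Y = X^2 samples that fall at or below y."""
    rng = random.Random(seed)
    count = sum(1 for _ in range(n) if rng.random()**2 <= y)
    return count / n

gap = max(abs(F_Y_empirical(y) - F_Y_analytic(y)) for y in [0.1, 0.25, 0.5, 0.9])
```

Going through the CDF first is what makes the differentiation in step 2 legitimate.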
Example 1
: uniform on
•
• Find PDF of
• Solution:
1. Get the CDF:
2. Differentiate:
Example 2
• Joan is driving from Boston to New York.
Her speed is uniformly distributed
between 30 and 60 mph. What is the
distribution of the duration of the trip?
• PDF of the velocity :
• Let:
• Find .
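The slide leaves the trip distance symbolic; assuming a distance d (say d = 200 miles, my own placeholder), T = d/V with V uniform on [30, 60] gives F_T(t) = P(V ≥ d/t) = (60 − d/t)/30, so f_T(t) = d/(30 t²) for d/60 ≤ t ≤ d/30. A quick check that this PDF integrates to 1:

```python
def f_T(t, d=200.0):
    """PDF of trip duration T = d/V, V ~ Uniform[30, 60]; d is a placeholder distance."""
    lo, hi = d/60.0, d/30.0
    return d / (30.0 * t**2) if lo <= t <= hi else 0.0

def integrate(f, a, b, n=100_000):
    """Simple midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5)*h) for i in range(n)) * h

total = integrate(f_T, 200.0/60.0, 200.0/30.0)
```

Note that T is not uniform even though V is: the nonlinear map t = d/v reshapes the density.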
The PDF of .
• Use this to check that if is normal,
then is also normal. | https://ocw.mit.edu/courses/6-041-probabilistic-systems-analysis-and-applied-probability-spring-2006/0cced51cb2e7e6062d04055e7e4c1684_lec10.pdf |
R. STANLEY, HYPERPLANE ARRANGEMENTS
LECTURE 1
Basic definitions, the intersection poset and the
characteristic polynomial
1.1. Basic definitions
The following notation is used throughout for certain sets of numbers:
N nonnegative integers
P positive integers
Z integers
Q rational numbers
R real numbers
R+ positive real numbers
C complex numbers
[m] the set {1, 2, . . . , m} when m ∈ N
We also write [tk]ψ(t) for the coefficient of tk in the polynomial or power series ψ(t).
For instance, [t2](1 + t)4 = 6.
A finite hyperplane arrangement A is a finite set of affine hyperplanes in some vector space $V \cong K^n$, where K is a field. We will not consider infinite hyperplane arrangements or arrangements of general subspaces or other objects (though they have many interesting properties), so we will simply use the term arrangement for a finite hyperplane arrangement. Most often we will take $K = \mathbb R$, but as we will see even if we're only interested in this case it is useful to consider other fields as well. To make sure that the definition of a hyperplane arrangement is clear, we define a linear hyperplane to be an $(n-1)$-dimensional subspace H of V, i.e.,
$$H = \{v \in V : \kappa\cdot v = 0\},$$
where κ is a fixed nonzero vector in V and $\kappa\cdot v$ is the usual dot product:
$$\kappa\cdot v = (\kappa_1,\ldots,\kappa_n)\cdot(v_1,\ldots,v_n) = \sum \kappa_i v_i.$$
An affine hyperplane is a translate J of a linear hyperplane, i.e.,
$$J = \{v \in V : \kappa\cdot v = a\},$$
where κ is a fixed nonzero vector in V and $a \in K$.
If the equations of the hyperplanes of A are given by $L_1(x) = a_1, \ldots, L_m(x) = a_m$, where $x = (x_1,\ldots,x_n)$ and each $L_i(x)$ is a homogeneous linear form, then we call the polynomial
$$Q_A(x) = (L_1(x) - a_1)\cdots(L_m(x) - a_m)$$
the defining polynomial of A. It is often convenient to specify an arrangement by its defining polynomial. For instance, the arrangement A consisting of the n coordinate hyperplanes has $Q_A(x) = x_1 x_2 \cdots x_n$.
Let A be an arrangement in the vector space V. The dimension dim(A) of A is defined to be dim(V) (= n), while the rank rank(A) of A is the dimension of the space spanned by the normals to the hyperplanes in A. We say that A is essential if rank(A) = dim(A). Suppose that rank(A) = r, and take $V = K^n$. Let
LECTURE 1. BASIC DEFINITIONS
Y be a complementary space in $K^n$ to the subspace X spanned by the normals to hyperplanes in A. Define
$$W = \{v \in V : v\cdot y = 0\ \forall y \in Y\}. \tag{1}$$
If char(K) = 0 then we can simply take W = X. By elementary linear algebra we have
$$\operatorname{codim}_W(H \cap W) = 1$$
for all $H \in A$. In other words, $H \cap W$ is a hyperplane of W, so the set $A_W := \{H \cap W : H \in A\}$ is an essential arrangement in W. Moreover, the arrangements A and $A_W$ are "essentially the same," meaning in particular that they have the same intersection poset (as defined in Definition 1.1). Let us call $A_W$ the essentialization of A, denoted ess(A). When $K = \mathbb R$ and we take W = X, then the arrangement A is obtained from $A_W$ by "stretching" the hyperplane $H \in A_W$ orthogonally to W. Thus if $W^\perp$ denotes the orthogonal complement to W in V, then $H \in A_W$ if and only if $H \oplus W^\perp \in A$. Note that in characteristic p this type of reasoning fails since the orthogonal complement of a subspace W can intersect W in a subspace of dimension greater than 0.
Example 1.1. Let A consist of the lines $x = a_1, \ldots, x = a_k$ in $K^2$ (with coordinates x and y). Then we can take W to be the x-axis, and ess(A) consists of the points $x = a_1, \ldots, x = a_k$ in K.
Now let $K = \mathbb R$. A region of an arrangement A is a connected component of the complement X of the hyperplanes:
$$X = \mathbb R^n - \bigcup_{H \in A} H.$$
Let R(A) denote the set of regions of A, and let
$$r(A) = \#R(A),$$
the number of regions. For instance, the arrangement A shown below has r(A) = 14.
It is a simple exercise to show that every region $R \in R(A)$ is open and convex (continuing to assume $K = \mathbb R$), and hence homeomorphic to the interior of an n-dimensional ball $B^n$ (Exercise 1). Note that if W is the subspace of V spanned by the normals to the hyperplanes in A, then $R \in R(A)$ if and only if $R \cap W \in R(A_W)$. We say that a region $R \in R(A)$ is relatively bounded if $R \cap W$ is bounded. If A is essential, then relatively bounded is the same as bounded. We write b(A) for
the number of relatively bounded regions of A. For instance, in Example 1.1 take $K = \mathbb R$ and $a_1 < a_2 < \cdots < a_k$. Then the relatively bounded regions are the regions $a_i < x < a_{i+1}$, $1 \le i \le k-1$. In ess(A) they become the (bounded) open intervals $(a_i, a_{i+1})$. There are also two regions of A that are not relatively bounded, viz., $x < a_1$ and $x > a_k$.
A (closed) half-space is a set $\{x \in \mathbb R^n : x\cdot\kappa \le c\}$ for some $\kappa \in \mathbb R^n$, $c \in \mathbb R$. If H is a hyperplane in $\mathbb R^n$, then the complement $\mathbb R^n - H$ has two (open) components whose closures are half-spaces. It follows that the closure $\bar R$ of a region R of A is a finite intersection of half-spaces, i.e., a (convex) polyhedron (of dimension n). A bounded polyhedron is called a (convex) polytope. Thus if R (or $\bar R$) is bounded, then $\bar R$ is a polytope (of dimension n).
An arrangement A is in general position if
$$\{H_1,\ldots,H_p\} \subseteq A,\ p \le n \ \Longrightarrow\ \dim(H_1 \cap \cdots \cap H_p) = n - p$$
$$\{H_1,\ldots,H_p\} \subseteq A,\ p > n \ \Longrightarrow\ H_1 \cap \cdots \cap H_p = \varnothing.$$
For instance, if n = 2 then a set of lines is in general position if no two are parallel and no three meet at a point.
Let us consider some interesting examples of arrangements that will anticipate some later material.
Example 1.2. Let $A_m$ consist of m lines in general position in $\mathbb R^2$. We can compute $r(A_m)$ using the sweep hyperplane method. Add a line L to $A_k$ (with $A_k \cup \{L\}$ in general position). When we travel along L from one end (at infinity) to the other, every time we intersect a line in $A_k$ we create a new region, and we create one new region at the end. Before we add any lines we have one region (all of $\mathbb R^2$). Hence
$$r(A_m) = \#\text{intersections} + \#\text{lines} + 1 = \binom{m}{2} + m + 1.$$
Example 1.3. The braid arrangement $B_n$ in $K^n$ consists of the hyperplanes
$$B_n:\ x_i - x_j = 0,\quad 1 \le i < j \le n.$$
Thus $B_n$ has $\binom{n}{2}$ hyperplanes. To count the number of regions when $K = \mathbb R$, note that specifying which side of the hyperplane $x_i - x_j = 0$ a point $(a_1,\ldots,a_n)$ lies on is equivalent to specifying whether $a_i < a_j$ or $a_i > a_j$. Hence the number of regions is the number of ways that we can specify whether $a_i < a_j$ or $a_i > a_j$ for $1 \le i < j \le n$. Such a specification is given by imposing a linear order on the $a_i$'s. In other words, for each permutation $w \in S_n$ (the symmetric group of all permutations of $1, 2, \ldots, n$), there corresponds a region $R_w$ of $B_n$ given by
$$R_w = \{(a_1,\ldots,a_n) \in \mathbb R^n : a_{w(1)} > a_{w(2)} > \cdots > a_{w(n)}\}.$$
Hence $r(B_n) = n!$. Rarely is it so easy to compute the number of regions!
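A small computational sketch (mine, not from the notes): sample random points of $\mathbb R^n$, record which region of $B_n$ each lies in via the induced ordering of its coordinates, and confirm that exactly $n!$ regions appear:

```python
import math
import random

def braid_region(point):
    """The region of the braid arrangement containing `point`, encoded as
    the permutation sorting its coordinates into decreasing order."""
    return tuple(sorted(range(len(point)), key=lambda i: -point[i]))

def count_regions(n, samples=20_000, seed=0):
    """Count distinct regions hit by random points of the unit cube."""
    rng = random.Random(seed)
    seen = {braid_region([rng.random() for _ in range(n)]) for _ in range(samples)}
    return len(seen)

counts = {n: count_regions(n) for n in (2, 3, 4)}
expected = {n: math.factorial(n) for n in (2, 3, 4)}
```

With this many samples every one of the n! regions is hit with overwhelming probability, so the counts agree with the n! formula.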
Note that the braid arrangement $B_n$ is not essential; indeed, $\operatorname{rank}(B_n) = n - 1$. When char(K) does not divide n, the space $W \subseteq K^n$ of equation (1) can be taken to be
$$W = \{(a_1,\ldots,a_n) \in K^n : a_1 + \cdots + a_n = 0\}.$$
The braid arrangement has a number of "deformations" of considerable interest. We will just define some of them now and discuss them further later. All these arrangements lie in $K^n$, and in all of them we take $1 \le i < j \le n$. The reader who likes a challenge can try to compute their number of regions when $K = \mathbb R$. (Some are much easier than others.)
• generic braid arrangement: $x_i - x_j = a_{ij}$, where the $a_{ij}$'s are "generic" (e.g., linearly independent over the prime field, so K has to be "sufficiently large"). The precise definition of "generic" will be given later. (The prime field of K is its smallest subfield, isomorphic to either $\mathbb Q$ or $\mathbb Z/p\mathbb Z$ for some prime p.)
• semigeneric braid arrangement: $x_i - x_j = a_i$, where the $a_i$'s are "generic."
• Shi arrangement: $x_i - x_j = 0, 1$ (so $n(n-1)$ hyperplanes in all).
• Linial arrangement: $x_i - x_j = 1$.
• Catalan arrangement: $x_i - x_j = -1, 0, 1$.
• semiorder arrangement: $x_i - x_j = -1, 1$.
• threshold arrangement: $x_i + x_j = 0$ (not really a deformation of the braid arrangement, but closely related).
An arrangement A is central if $\bigcap_{H \in A} H \neq \varnothing$. Equivalently, A is a translate of a linear arrangement (an arrangement of linear hyperplanes, i.e., hyperplanes passing through the origin). Many other writers call an arrangement central, rather than linear, if $0 \in \bigcap_{H \in A} H$. If A is central with $X = \bigcap_{H \in A} H$, then $\operatorname{rank}(A) = \operatorname{codim}(X)$. If A is central, then note also that b(A) = 0 [why?].
There are two useful arrangements closely related to a given arrangement A. If A is a linear arrangement in $K^n$, then projectivize A by choosing some $H \in A$ to be the hyperplane at infinity in projective space $\mathbb P^{n-1}_K$. Thus if we regard
$$\mathbb P^{n-1}_K = \{(x_1,\ldots,x_n) : x_i \in K,\ \text{not all } x_i = 0\}/\sim,$$
where $u \sim v$ if $u = \kappa v$ for some $0 \neq \kappa \in K$, then
$$H = \{(x_1,\ldots,x_{n-1},0) : x_i \in K,\ \text{not all } x_i = 0\}/\sim\ \cong\ \mathbb P^{n-2}_K.$$
The remaining hyperplanes in A then correspond to "finite" (i.e., not at infinity) projective hyperplanes in $\mathbb P^{n-1}_K$. This gives an arrangement proj(A) of hyperplanes in $\mathbb P^{n-1}_K$. When $K = \mathbb R$, the two regions R and $-R$ of A become identified in proj(A). Hence $r(\operatorname{proj}(A)) = \tfrac{1}{2}\,r(A)$. When n = 3, we can draw $\mathbb P^2$ as a disk with antipodal boundary points identified. The circumference of the disk represents the hyperplane at infinity. This provides a good way to visualize three-dimensional real linear arrangements. For instance, if A consists of the three coordinate hyperplanes $x_1 = 0$, $x_2 = 0$, and $x_3 = 0$, then a projective drawing is given by
[Figure: the projective disk cut by three lines labelled 1, 2, 3.]
The line labelled i is the projectivization of the hyperplane $x_i = 0$. The hyperplane at infinity is $x_3 = 0$. There are four regions, so r(A) = 8. To draw the incidences
among all eight regions of A, simply "reflect" the interior of the disk to the exterior:
[Figure: the same disk with its four interior regions reflected to the exterior, lines labelled 1, 2, 3.]
Regarding this diagram as a planar graph, the dual graph is the 3-cube (i.e., the vertices and edges of a three-dimensional cube) [why?].
For a more complicated example of projectivization, Figure 1 shows proj($B_4$) (where we regard $B_4$ as a three-dimensional arrangement contained in the hyperplane $x_1 + x_2 + x_3 + x_4 = 0$ of $\mathbb R^4$), with the hyperplane $x_i = x_j$ labelled ij, and with $x_1 = x_4$ as the hyperplane at infinity.
[Figure 1. A projectivization of the braid arrangement $B_4$, with hyperplanes labelled 12, 13, 14, 23, 24, 34.]
We now define an operation which is “inverse” to projectivization. Let A be
an (affine) arrangement in K n, given by the equations
L1(x) = a1,
. . . , Lm(x) = am.
Introduce a new coordinate y, and define a central arrangement cA (the cone over
A) in K n
K = K n+1 by the equations
×
L | https://ocw.mit.edu/courses/18-315-combinatorial-theory-hyperplane-arrangements-fall-2004/0d0d20fa9004b352a20c85b22d8d6a17_lec1.pdf |
��ne a central arrangement cA (the cone over
A) in K n
K = K n+1 by the equations
×
L1(x) = a1y,
. . . , Lm(x) = amy, y = 0.
For instance, let A be the arrangement in R^1 given by x = −1, x = 2, and x = 3.
The following figure should explain why cA is called a cone. [figure omitted]
It is easy to see that when K = R, we have r(cA) = 2 r(A). In general, cA has
the “same” combinatorics as A, “times 2.” See Exercise 1.
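The relation r(cA) = 2 r(A) is easy to check numerically for the example above. The sketch below is our own illustration (the sampling grids and tiny offsets are ad hoc choices to keep sample points off the hyperplanes): regions are counted as distinct sign vectors of the defining forms, using the fact that the regions of a central arrangement in R^2 are sectors around the origin.

```python
import numpy as np

# A: the arrangement x = -1, x = 2, x = 3 in R^1.
a = np.array([-1.0, 2.0, 3.0])

# Regions of A = distinct sign vectors of (x - a_i); the small offset keeps
# the samples off the hyperplanes themselves.
xs = np.linspace(-10, 10, 20001) + 1e-4
rA = len({tuple(np.sign(x - a)) for x in xs})

# cA: lines x - a_i*y = 0 plus y = 0 in R^2; its regions are sectors around
# the origin, so sampling directions on the unit circle finds all of them.
th = np.linspace(0, 2*np.pi, 5000, endpoint=False) + 1e-4
rcA = len({tuple(np.sign(np.append(np.cos(t) - a*np.sin(t), np.sin(t))))
           for t in th})

assert (rA, rcA) == (4, 8)           # r(cA) = 2 r(A)
```

The three points cut R^1 into 4 intervals, while the cone consists of 4 distinct lines through the origin of R^2, cutting it into 8 sectors.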
1.2. The intersection poset
Recall that a poset (short for partially ordered set) is a set P and a relation ≤
satisfying the following axioms (for all x, y, z ∈ P):

(P1) (reflexivity) x ≤ x
(P2) (antisymmetry) If x ≤ y and y ≤ x, then x = y.
(P3) (transitivity) If x ≤ y and y ≤ z, then x ≤ z.

Obvious notation such as x < y for x ≤ y and x ≠ y, and y ≥ x for x ≤ y, will be
used throughout. If x ≤ y in P, then the (closed) interval [x, y] is defined by

[x, y] = {z ∈ P : x ≤ z ≤ y}.

Note that the empty set ∅ is not a closed interval. For basic information on posets
not covered here, see [18].
Definition 1.1. Let A be an arrangement in V, and let L(A) be the set of all
nonempty intersections of hyperplanes in A, including V itself as the intersection
over the empty set. Define x ≤ y in L(A) if x ⊇ y (as subsets of V). In other words,
L(A) is partially ordered by reverse inclusion. We call L(A) the intersection poset
of A.
Note. The primary reason for ordering intersections by reverse inclusion rather
than ordinary inclusion is Proposition 3.8. We don’t want to alter the well-established
definition of a geometric lattice or to refer constantly to “dual geometric lattices.”
The element V ∈ L(A) satisfies V ≤ x for all x ∈ L(A). In general, if P is a
poset then we denote by 0̂ an element (necessarily unique) such that 0̂ ≤ x for all
x ∈ P.
Figure 2. Examples of intersection posets
We say that y covers x in a poset P, denoted x ⋖ y, if x < y and no z ∈ P
satisfies x < z < y. Every finite poset is determined by its cover relations. The
(Hasse) diagram of a finite poset is obtained by drawing the elements of P as dots,
with x drawn lower than y if x < y, and with an edge between x and y if x ⋖ y.
Figure 2 illustrates four arrangements A in R^2, with (the diagram of) L(A) drawn
below A.
A chain of length k in a poset P is a set x0 < x1 < · · · < xk of elements of
P. The chain is saturated if x0 ⋖ x1 ⋖ · · · ⋖ xk. We say that P is graded of rank
n if every maximal chain of P has length n. In this case P has a rank function
rk : P → N defined by:

• rk(x) = 0 if x is a minimal element of P.
• rk(y) = rk(x) + 1 if x ⋖ y in P.

If x < y in a graded poset P then we write rk(x, y) = rk(y) − rk(x), the length
of the interval [x, y]. Note that we use the notation rank(A) for the rank of an
arrangement A but rk for the rank function of a graded poset.
Proposition 1.1. Let A be an arrangement in a vector space V = K^n. Then the
intersection poset L(A) is graded of rank equal to rank(A). The rank function of
L(A) is given by

rk(x) = codim(x) = n − dim(x),

where dim(x) is the dimension of x as an affine subspace of V.
Proof. Since L(A) has a unique minimal element 0̂ = V, it suffices to show that
(a) if x ⋖ y in L(A) then dim(x) − dim(y) = 1, and (b) all maximal elements of L(A)
have dimension n − rank(A). By linear algebra, if H is a hyperplane and x an affine
subspace, then H ∩ x = x or dim(x) − dim(H ∩ x) = 1, so (a) follows. Now suppose
that x has the largest codimension of any element of L(A), say codim(x) = d. Thus
x is an intersection of d linearly independent hyperplanes (i.e., their normals are
linearly independent) H1, . . . , Hd in A. Let y ∈ L(A) with e = codim(y) < d. Thus
y is an intersection of e hyperplanes, so some Hi (1 ≤ i ≤ d) is linearly independent
from them. Then y ∩ Hi ≠ ∅ and codim(y ∩ Hi) > codim(y). Hence y is not a
maximal element of L(A), proving (b). □
Figure 3. An intersection poset and Möbius function values
1.3. The characteristic polynomial
A poset P is locally finite if every interval [x, y] is finite. Let Int(P) denote the
set of all closed intervals of P. For a function f : Int(P) → Z, write f(x, y) for
f([x, y]). We now come to a fundamental invariant of locally finite posets.
Definition 1.2. Let P be a locally finite poset. Define a function µ = µP :
Int(P) → Z, called the Möbius function of P, by the conditions:

µ(x, x) = 1, for all x ∈ P

(2) µ(x, y) = − Σ_{x ≤ z < y} µ(x, z), for all x < y in P.

This second condition can also be written

Σ_{x ≤ z ≤ y} µ(x, z) = 0, for all x < y in P.

If P has a 0̂, then we write µ(x) = µ(0̂, x). Figure 3 shows the intersection poset
L of the arrangement A in K^3 (for any field K) defined by Q_A(x) = xyz(x + y),
together with the value µ(x) for all x ∈ L.
An important application of the Möbius function is the Möbius inversion formula.
The best way to understand this result (though it does have a simple direct
proof) requires the machinery of incidence algebras. Let I(P) = I(P, K) denote
the vector space of all functions f : Int(P) → K. Write f(x, y) for f([x, y]). For
f, g ∈ I(P), define the product fg ∈ I(P) by

fg(x, y) = Σ_{x ≤ z ≤ y} f(x, z) g(z, y).

It is easy to see that this product makes I(P) an associative K-algebra, with
multiplicative identity δ given by

δ(x, y) = 1 if x = y, and δ(x, y) = 0 if x < y.

Define the zeta function ζ ∈ I(P) of P by ζ(x, y) = 1 for all x ≤ y in P. Note that
the Möbius function µ is an element of I(P). The definition of µ (Definition 1.2) is
equivalent to the relation µζ = δ in I(P). In any finite-dimensional algebra over a
field, one-sided inverses are two-sided inverses, so µ = ζ^{−1} in I(P).
Theorem 1.1. Let P be a finite poset with Möbius function µ, and let f, g : P → K.
Then the following two conditions are equivalent:

f(x) = Σ_{y ≥ x} g(y), for all x ∈ P

g(x) = Σ_{y ≥ x} µ(x, y) f(y), for all x ∈ P.
Proof. The set K^P of all functions P → K forms a vector space on which I(P)
acts (on the left) as an algebra of linear transformations by

(ξf)(x) = Σ_{y ≥ x} ξ(x, y) f(y),

where f ∈ K^P and ξ ∈ I(P). The Möbius inversion formula is then nothing but
the statement

ζf = g ⟺ f = µg. □
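Möbius inversion is easy to test numerically. The sketch below (our own illustration; the helper name `mobius` is not from the text) computes µ directly from the recursion (2) on the boolean algebra B_3 and verifies Theorem 1.1 for a randomly chosen g.

```python
from itertools import combinations
import random

# P = the boolean algebra B_3: subsets of {0, 1, 2} ordered by inclusion.
P = [frozenset(S) for r in range(4) for S in combinations(range(3), r)]

def mobius(x, y):
    """mu(x, y) computed directly from the recursion (2)."""
    if x == y:
        return 1
    return -sum(mobius(x, z) for z in P if x <= z and z < y)

random.seed(0)
g = {x: random.randint(-9, 9) for x in P}
f = {x: sum(g[y] for y in P if y >= x) for x in P}       # f(x) = sum_{y>=x} g(y)
g_back = {x: sum(mobius(x, y)*f[y] for y in P if y >= x) for x in P}
assert g_back == g      # Mobius inversion (Theorem 1.1) recovers g from f
```

The recursive `mobius` is exponentially slow on large posets but transparent; it is only meant to mirror Definition 1.2 literally.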
We now come to the main concept of this section.
Definition 1.3. The characteristic polynomial χ_A(t) of the arrangement A is
defined by

(3) χ_A(t) = Σ_{x ∈ L(A)} µ(x) t^{dim(x)}.

For instance, if A is the arrangement of Figure 3, then

χ_A(t) = t^3 − 4t^2 + 5t − 2 = (t − 1)^2 (t − 2).

Note that we have immediately from the definition of χ_A(t), where A is in K^n,
that

χ_A(t) = t^n − (#A) t^{n−1} + · · · .
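Definitions 1.1–1.3 translate directly into a small computation. Below is a minimal sketch (the helper name `char_poly` is ours, and it assumes a central arrangement specified by hyperplane normals): flats are enumerated as intersections of subsets of hyperplanes and deduplicated by row space, µ is computed by the recursion (2), and the coefficients of the characteristic polynomial are collected.

```python
import numpy as np
from itertools import combinations

def char_poly(normals, n):
    """Characteristic polynomial of a central arrangement in K^n, computed
    from Definitions 1.1-1.3. Returns coefficients of t^n, ..., t^0."""
    normals = [np.asarray(v, float) for v in normals]
    rk = lambda M: np.linalg.matrix_rank(np.array(M)) if M else 0

    flats = []                           # distinct flats as (codim, defining normals)
    for r in range(len(normals) + 1):
        for S in combinations(range(len(normals)), r):
            M = [normals[i] for i in S]
            c = rk(M)
            if c < len(M):               # dependent subset: this flat already
                continue                 # arises from an independent sub-subset
            for (c2, M2) in flats:       # dedupe: same flat iff same row space
                if c2 == c and rk(M + M2) == c:
                    break
            else:
                flats.append((c, M))

    flats.sort(key=lambda f: f[0])       # by codimension; 0-hat = V comes first
    leq = lambda x, y: rk(x[1] + y[1]) == y[0]   # x <= y iff flat(x) contains flat(y)
    mu = []
    for i, x in enumerate(flats):
        below = sum(mu[j] for j in range(i) if leq(flats[j], x))
        mu.append(1 if i == 0 else -below)        # recursion (2) from 0-hat

    coeffs = [0]*(n + 1)                 # mu(x) contributes to t^(n - codim(x))
    for (c, _), m in zip(flats, mu):
        coeffs[c] += m
    return coeffs

# The arrangement of Figure 3: Q_A(x) = x y z (x + y) in K^3
print(char_poly([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], 3))  # -> [1, -4, 5, -2]
```

The output reproduces t^3 − 4t^2 + 5t − 2 for the arrangement of Figure 3, including the Möbius values 2 and −2 carried by the triple line x = y = 0 and by the origin.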
Example 1.4. Consider the coordinate hyperplane arrangement A with defining
polynomial Q_A(x) = x1 x2 · · · xn. Every subset of the hyperplanes in A has a
different nonempty intersection, so L(A) is isomorphic to the boolean algebra Bn of
all subsets of [n] = {1, 2, . . . , n}, ordered by inclusion.
Proposition 1.2. Let A be as in Example 1.4. Then χ_A(t) = (t − 1)^n.

Proof. The computation of the Möbius function of a boolean algebra is a standard
result in enumerative combinatorics with many proofs. We will give here a naive
proof from first principles. Let y ∈ L(A), rk(y) = k. We claim that

(4) µ(y) = (−1)^k.

The assertion is clearly true for rk(y) = 0, when y = 0̂. Now let y > 0̂. We need to
show that

(5) Σ_{x ≤ y} (−1)^{rk(x)} = 0.

The number of x such that x ≤ y and rk(x) = i is (k choose i), so (5) is equivalent to the
well-known identity (easily proved by substituting q = −1 in the binomial expansion
of (q + 1)^k)

Σ_{i=0}^{k} (−1)^i (k choose i) = 0 for k > 0.

Granting the claim, χ_A(t) = Σ_{k=0}^{n} (−1)^k (n choose k) t^{n−k} = (t − 1)^n. □
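Both the claim (4) and the conclusion of Proposition 1.2 can be checked mechanically on the boolean algebra, again using only the recursion (2); this is our own sanity check, not part of the proof.

```python
from itertools import combinations
from math import comb

n = 5
# L(A) for the coordinate arrangement is the boolean algebra B_n:
# subsets of [n] ordered by inclusion, with rk(S) = |S|.
B = [frozenset(S) for r in range(n + 1) for S in combinations(range(n), r)]

mu = {}
for S in B:                               # listed in order of increasing size
    mu[S] = 1 if not S else -sum(mu[T] for T in B if T < S)
assert all(mu[S] == (-1)**len(S) for S in B)          # claim (4)

coeffs = [0]*(n + 1)                      # coefficients of t^n, ..., t^0
for S in B:
    coeffs[len(S)] += mu[S]               # mu(S) t^(n - |S|)
assert coeffs == [(-1)**k * comb(n, k) for k in range(n + 1)]   # (t - 1)^n
```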
MIT OpenCourseWare
http://ocw.mit.edu
8.512 Theory of Solids II
Spring 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Lecture 1: Linear Response Theory
Last semester in 8.511, we discussed linear response theory in the context of charge screening and
the free-fermion polarization function. This theory can be extended to a much wider range of
areas, however, and is a very useful tool in solid state physics. We’ll begin this semester by going
back and studying linear response theory again with a more formal approach, and then returning
to topics like superconductivity a bit later.
1.1 Response Functions and the Interaction Representation
In solid state physics, we ordinarily think about many-body systems, with something on the order
of 10^23 particles. With so many particles, it is usually impossible to even think about a wave
function for the whole system. As a result, it is often more useful for us to think in terms of the
macroscopic observable behaviors of systems rather than their particular microscopic states.
One example of such a macroscopic property is the magnetic susceptibility χ ≡ ∂M/∂H, which
is a measure of the response of the net magnetization M of a system to an applied magnetic field
H(r, t). This is the type of behavior we will be thinking about: we can mathematically probe
the system with some perturbing external probe or field (e.g. H(r, t)), and try to predict what
the system’s response will be in terms of the expectation values of some observable quantities.
Let Ĥ be the full many-body Hamiltonian for some isolated system that we are interested in.
We spent most of 8.511 thinking about how to solve for the behavior of a system governed by Ĥ.
As interesting as that behavior may be, we will now consider that to be a solved problem. That
is, we will assume the existence of a set of eigenkets {|n⟩} that diagonalize Ĥ with associated
eigenvalues (energies) En.
In addition to Ĥ, we now turn on an external probe potential V̂, such that the total Hamiltonian Ĥ_Tot satisfies:

(1.1) Ĥ_Tot = Ĥ + V̂
In particular, we are interested in probe potentials that arise from the coupling of some
external scalar or vector field to some sort of “density” in the sample. For example, the external
field can be an electric potential U(r, t), which couples to the electronic charge density ρ̂(r) such
that

(1.2) V̂ = ∫_V dr ρ̂(r) U(r, t)

where the electron density operator ρ̂(r) is given by

(1.3) ρ̂(r) = Σ_{i=1}^{N} δ(r − r_i)
Equation (1.3) is written in first quantized language, with r_i the position of electron i in the
N-electron system. In second quantized notation, recall

(1.4) ρ̂(r) = Ψ†(r) Ψ(r)

where Ψ†(r) and Ψ(r) are the electron field creation and annihilation operators, respectively.
The momentum space version of the electron density operator, ρ̂(q), is related to ρ̂(r)
through the Fourier transforms:
(1.5) ρ̂(r) = (1/V) Σ_q e^{i q·r} ρ̂(q)

(1.6) Ψ(r) = (1/√V) Σ_k e^{i k·r} c_k

such that

(1.7) ρ̂(q) = Σ_i e^{−i q·r_i}

(1.8) ρ̂(q) = Σ_k c†_{k−q} c_k

Equation (1.7) is the first quantized form of ρ̂(q), and equation (1.8) is the second quantized
form, with c†_{k−q} the creation operator for an electron with momentum¹ k − q and c_k the destruction
operator for an electron with momentum k.
Returning to equation (1.1), we’d like to think about V̂ as a perturbation on the external-field-free system Hamiltonian Ĥ. This leads us naturally to consider Ĥ as the unperturbed Hamiltonian
within the interaction picture representation. Recall that this Ĥ is a very complicated
beast with all of the electron-electron repulsions included, but for our purposes we just take as a
given that there are a set of eigenstates and energies that diagonalize this Hamiltonian.
Recall the formulation of the interaction representation:

(1.9) i ℏ ∂/∂t |φ(t)⟩ = (Ĥ + V̂) |φ(t)⟩
We can “unwind” the natural time dependence due to Ĥ from the state ket |φ(t)⟩ to form an
interaction representation state ket |φ̃(t)⟩ by

(1.10) |φ̃(t)⟩ = e^{i Ĥ t} |φ(t)⟩

(1.11) |φ(t)⟩ = e^{−i Ĥ t} |φ̃(t)⟩
(1.11)
Note that in the absence of Vˆ , these interaction picture state kets are actually the Heisenberg
picture state kets of the system. Also, we have now officially set ¯h = 1. After substituting (1.11)
into (1.9), we obtain
h
i¯
∂
∂t
= e i ˆ V e−i ˆ φ (t)�
Ht ˜
|
Ht ˆ
ˆ= VI φ˜ (t)�
|
(1.12)
(1.13)
1 �k and q� are actually wavevectors, which differ from momenta by a factor of ¯
h. When in doubt, assume ¯
h = 1.
where we have set

(1.14) V̂_I = e^{i Ĥ t} V̂ e^{−i Ĥ t}

Thus the interaction picture state ket evolves simply according to the dynamics governed solely
by the interaction picture perturbing potential V̂_I.
More generally, we can write any observable (operator) in the interaction picture as

(1.15) Â_I = e^{i Ĥ t} Â e^{−i Ĥ t}

We can integrate equation (1.12) with respect to t to get

(1.16) |φ̃(t)⟩ = |φ0⟩ − i ∫_{−∞}^{t} dt′ V̂_I(t′) |φ̃(t′)⟩
At first it seems like we have not done much to benefit ourselves, since all we have done is
to convert the ordinary Schrödinger equation, a PDE, into an integral equation. However, if V̂_I
is small, then we can iterate equation (1.16):

(1.17) |φ̃(t)⟩ ≈ |φ0⟩ − i ∫_{−∞}^{t} dt′ V̂_I(t′) |φ0⟩ + · · ·
The essence of linear response theory is that we focus on cases where V̂_I is sufficiently
weak that the perturbation series represented by equation (1.17) has essentially converged
after including just the first nontrivial term listed above. This term is linear in V̂_I.

Throughout this discussion, we will be working at T = 0, so |φ0⟩ is simply the ground state
of the unperturbed total system Hamiltonian Ĥ. Note that we have taken our initial time,
i.e. the lower limit of integration in equation (1.16), to be −∞. This is because we want to
imagine turning on the probing potential V̂ adiabatically, that is, so slowly that the system tracks
the ground state for all finite times. If we were to turn on the probe sharply, the system would
exhibit complicated ringing behavior that we are not interested in.
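The “first term suffices” claim can be seen concretely by integrating the Schrödinger equation for a toy two-level system with an adiabatically switched probe and checking that the measured response is linear in the probe strength. The model (H0 = diag(0, ω0), V(t) = λ e^{ηt} σx) and all numerical parameters below are our own illustrative choices, not from the lecture.

```python
import numpy as np

w0, eta = 1.0, 0.1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H0 = np.diag([0.0, w0]).astype(complex)

def evolve(lam, t0=-60.0, t1=0.0, dt=5e-3):
    """RK4 integration of i d/dt |psi> = (H0 + lam e^{eta t} sx)|psi>,
    starting from the ground state at t0 (the adiabatic switch-on)."""
    psi = np.array([1.0, 0.0], dtype=complex)
    rhs = lambda t, y: -1j*((H0 + lam*np.exp(eta*t)*sx) @ y)
    t = t0
    for _ in range(int(round((t1 - t0)/dt))):
        k1 = rhs(t, psi)
        k2 = rhs(t + dt/2, psi + dt/2*k1)
        k3 = rhs(t + dt/2, psi + dt/2*k2)
        k4 = rhs(t + dt, psi + dt*k3)
        psi = psi + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        t += dt
    return psi

def dA(lam):
    """Response <sx> at t = 0; it vanishes without the probe."""
    psi = evolve(lam)
    return (psi.conj() @ sx @ psi).real

ratio = dA(1e-3) / dA(5e-4)
assert abs(ratio - 2) < 0.05      # doubling the probe doubles the response
```

For probe strengths this small the quadratic corrections to (1.17) are of order λ², so the ratio sits at 2 to high accuracy.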
We now return to our model experiment for studying the properties of our system. After
applying some probe via the external potential V̂, we want to measure the response of some
observable of the system Â. We characterize this response through the expectation value of Â,
⟨Â⟩:

(1.18) ⟨Â⟩ = ⟨φ(t)| Â |φ(t)⟩

(1.19) = ⟨φ̃(t)| Â_I |φ̃(t)⟩

The key now is to substitute the approximation for |φ̃(t)⟩ given by equation (1.17) into
equation (1.19). Since we have only kept terms up to linear order in V̂_I, we must be careful only
to keep terms to this order. After performing this substitution, we arrive at

(1.20) ⟨Â⟩ ≈ ⟨φ0| Â |φ0⟩ − i ∫_{−∞}^{t} dt′ e^{η t′} ⟨φ0| [Â_I(r, t), V̂_I(t′)] |φ0⟩
The mysterious factor e^{η t′} comes from our “adiabatic switching-on” of the potential. This
ensures that the system evolves smoothly from t = −∞ to t. Eventually, we will send η → 0.
Since we are interested in positive times t close to 0 when compared with −∞, we don’t need to
worry about the e^{η t′} messing anything up.
The other mysterious piece of equation (1.20) is the appearance of the commutator [Â_I(r, t), V̂_I(t′)].
These two terms simply come from the two possible terms linear in V̂_I arising from the substitutions

|φ̃(t)⟩ ≈ |φ0⟩ − i ∫_{−∞}^{t} dt′ V̂_I(t′) |φ0⟩

and

⟨φ̃(t)| ≈ ⟨φ0| + i ∫_{−∞}^{t} dt′ ⟨φ0| V̂_I(t′)

Note that the integration is with respect to t′, since it comes from the expression for |φ̃(t)⟩,
which involves an integration of V̂_I with respect to t′. The observable Â is also a function of space
and time, but there is no reason to integrate over it at this point. This is one way to remember
what to integrate over if you forget some day.
1.2 Response Functions
What we’re really interested in, however, is not ⟨Â⟩ itself, but the change in ⟨Â⟩ relative to the
unperturbed state:

(1.21) ⟨δÂ(r, t)⟩ = ⟨Â(r, t)⟩ − ⟨φ0| Â |φ0⟩

(1.22) = lim_{η→0} −i ∫_{−∞}^{t} dt′ e^{η(t′ − t)} ⟨φ0| [Â_I(r, t), V̂_I(t′)] |φ0⟩
Now is when we will specialize to the specific type of probe potential described in the previous
example. For concreteness, we consider the potential of equation (1.2):

V̂ = ∫_V dr ρ̂(r) U(r, t)

U(r, t) is a c-number that commutes with the Hamiltonian, so the interaction picture representation of V̂ is
given by

(1.23) V̂_I = e^{i Ĥ t} ∫_V dr′ ρ̂(r′) U(r′, t) e^{−i Ĥ t}

(1.24) = ∫_V dr′ e^{i Ĥ t} ρ̂(r′) e^{−i Ĥ t} U(r′, t)

(1.25) = ∫_V dr′ ρ̂_I(r′) U(r′, t)
Substituting this expression for V̂_I back into equation (1.22), we obtain:

(1.26) ⟨δÂ(r, t)⟩ = lim_{η→0} −i ∫_{−∞}^{t} dt′ ∫_V dr′ e^{η(t′ − t)} ⟨φ0| [Â_I(r, t), ρ̂_I(r′, t′)] |φ0⟩ U(r′, t′)
We define the response function χ as the kernel of this expression for ⟨δÂ(r, t)⟩:

(1.27) ⟨δÂ(r, t)⟩ = ∫_{−∞}^{∞} dt′ ∫ dr′ χ(r, r′, t − t′) U(r′, t′)
χ is a function of (t − t′) only, since Ĥ is independent of time. The interpretation of equation
(1.27) is that if we “shake” the system with an external potential U(r′, t′), then the response of
the system in terms of some observable Â at the point r and time t is modulated by the response
function χ(r, r′, t − t′).
Thus from comparing this definition with equation (1.26), we see that

(1.28) χ(r, r′, t − t′) ≡ −i ⟨φ0| [Â_I(r, t), ρ̂_I(r′, t′)] |φ0⟩ e^{η(t′ − t)} θ(t − t′)

Note that in equation (1.27) we extended the limits of integration from −∞ to ∞ for convenience,
and thus have added the Heaviside step function θ(t − t′) to our definition of χ(r, r′, t − t′).
Recall that θ(t) = 0 for t < 0 and θ(t) = 1 for t > 0. This ensures causality in our definition of
χ, since the system should not be able to respond to the perturbation before it happens.
Notice also that based on this definition, the response function is purely a property of the
system’s unperturbed Hamiltonian Ĥ; U does not appear anywhere in the expression. Thus
investigations of χ can reveal information about the system’s Hamiltonian.

In this definition, the electron density ρ̂_I(r′, t′) appears because we specialized to the case
of an applied external electric potential that couples to the system’s charge density. For a probe
that couples to some other density, such as the magnetization density m̂(r′, t′), we can simply replace
ρ̂_I(r′, t′) by m̂_I(r′, t′) in definition (1.28).
1.3 Electron Density Response to an Applied Electric Potential
In this section, we will specialize further to the case where we observe the response of the electron
density to an applied potential that couples to the density. Thus we are picking Â = ρ̂.
We begin by taking the Fourier transform of equation (1.28) with respect to time, substituting
t″ = t′ − t:

(1.29) χ(r, r′, ω) = −i ∫_{−∞}^{0} dt″ e^{−(iω − η) t″} ⟨φ0| [ρ̂_I(r, 0), ρ̂_I(r′, t″)] |φ0⟩
Recall that we have a complete set of eigenstates of Ĥ:

(1.30) Ĥ |n⟩ = En |n⟩

(1.31) Σ_n |n⟩⟨n| = 1̂

Inserting this complete set of states into the commutator

(1.32) [ρ̂_I(r, 0), ρ̂_I(r′, t″)] = [ρ̂_I(r, 0), Σ_n |n⟩⟨n| ρ̂_I(r′, t″)]

and noting that

ρ̂_I = e^{i Ĥ t} ρ̂ e^{−i Ĥ t}  and  e^{−i Ĥ t} |n⟩ = e^{−i En t} |n⟩
we obtain

(1.33) χ(r, r′, ω) = −i ∫_{−∞}^{0} dt″ Σ_n { ⟨φ0| ρ̂(r) |n⟩⟨n| ρ̂(r′) |φ0⟩ e^{i(En − E0) t″ − (iω − η) t″}

(1.34) − ⟨φ0| ρ̂(r′) |n⟩⟨n| ρ̂(r) |φ0⟩ e^{−i(En − E0) t″ − (iω − η) t″} }
All of the time dependence has now been brought up into the exponentials, so it is trivial to
perform the integration over time. This yields the spectral representation of χ(r, r′, ω):

(1.35) χ(r, r′, ω) = Σ_n [ ⟨φ0| ρ̂(r) |n⟩⟨n| ρ̂(r′) |φ0⟩ / (ω − (En − E0) + iη)
− ⟨φ0| ρ̂(r′) |n⟩⟨n| ρ̂(r) |φ0⟩ / (ω + (En − E0) + iη) ]
If there is translational invariance in the sample, then the response function χ(r, r′, ω) should
be simply a function of the difference r − r′. In this case, the spatial Fourier transform is simple:

(1.36) χ(q, ω) = (1/V) ∫ dr dr′ e^{−i q·(r − r′)} χ(r − r′, ω)

(1.37) = (1/V) Σ_n [ ⟨φ0| ρ̂(q) |n⟩⟨n| ρ̂(−q) |φ0⟩ / (ω − (En − E0) + iη)
− ⟨φ0| ρ̂(−q) |n⟩⟨n| ρ̂(q) |φ0⟩ / (ω + (En − E0) + iη) ]

where ρ̂(q) ≡ ρ̂_q is given by equations (1.7) and (1.8) in first quantized or second quantized
notation, respectively.
Since the electron density ρ̂(r) is a real function, we have the important relation

(1.38) ρ̂_{−q} = ρ̂†_q

which is a simple consequence of the nature of the Fourier transform. This implies that

(1.39) ⟨φ0| ρ̂(q) |n⟩⟨n| ρ̂(−q) |φ0⟩ = |⟨n| ρ̂†(q) |φ0⟩|²
Using this along with the relation (as η → 0⁺)

(1.40) Im{ 1/(x + iη) } = −π δ(x)

we arrive at the next important result:

(1.41) Im{χ(q, ω)} = −π Σ_n { |⟨n| ρ̂†(q) |φ0⟩|² δ(ω − (En − E0))

(1.42) − |⟨n| ρ̂(q) |φ0⟩|² δ(ω + (En − E0)) }
Why are we interested in the imaginary part of χ? The imaginary part of χ gives us
information about dissipation, i.e. the absorption and loss of energy as a result of the interaction
with the probe. We will often use the notation

χ″(q, ω) = Im{χ(q, ω)}

We can plot χ″(q, ω) as a function of ω for fixed q (see plot). The location of the peaks
tells us about the types of excitations being produced. As we will see shortly, it actually turns
out that knowledge of χ″(q, ω) is all we need; the real part of χ(q, ω), denoted χ′(q, ω), can be
reconstructed from χ″(q, ω) alone.
1.4 Sanity Check: Free Fermions

To convince ourselves that this formalism is really working, we will try it out on the case of
free fermions, which we studied last semester in 8.511. Here χ(q, ω) is simply Π0(q, ω). The
ground state for free fermions is just the simple spherical Fermi sea, filled up exactly to the
Fermi energy. The excited states are of the form

(1.43) |n⟩ = |hole at k, e⁻ at k + q⟩

These single particle-hole excitations are the only types of excitations possible in this case,
since the external field U couples to the density ρ̂_q = Σ_k c†_{k−q} c_k. The matrix elements we need
are simple to calculate as well, since all that is required is a filled initial state below the Fermi
sea (with wave vector k) and an open state above the Fermi sea (with wave vector k + q) to jump
into. Thus

(1.44) ⟨n| ρ̂†_q |φ0⟩ = (1 − f_{k+q}) f_k

where f_k is 1 if the state with momentum k is occupied in the ground state, and 0 if it is empty.
Substituting this in, we get

(1.45) Π0(q, ω) = Σ_k [ (1 − f_{k+q}) f_k / (ω − (ε_{k+q} − ε_k) + iη)
− (1 − f_{k−q}) f_k / (ω + (ε_{k−q} − ε_k) + iη) ]

Letting

(1.46) k − q = k′,  k = k′ + q

we can switch the dummy summation variable in the second term and combine both terms into
one:

(1.47) Π0(q, ω) = Σ_k [ (1 − f_{k+q}) f_k − (1 − f_k) f_{k+q} ] / (ω − (ε_{k+q} − ε_k) + iη)

(1.48) = Σ_k ( f_k − f_{k+q} ) / (ω − (ε_{k+q} − ε_k) + iη)

This is exactly the Lindhard formula that we derived in 8.511.
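The Lindhard formula above is easy to evaluate numerically. The sketch below uses our own toy model (a 1D tight-binding band on a ring at fixed filling, with a finite broadening η, rather than the lecture’s continuum gas) and checks that χ″ is odd in ω for this inversion-symmetric band at T = 0, a property we will meet again as detailed balance.

```python
import numpy as np

N, t_hop = 400, 1.0
k = 2*np.pi*np.arange(N)/N - np.pi          # Brillouin-zone grid
eps = -2*t_hop*np.cos(k)                    # tight-binding band
mu_F = -1.0                                 # Fermi level between grid energies
f = (eps < mu_F).astype(float)              # T = 0 occupations f_k

def Pi0(q_idx, w, eta=0.05):
    """Lindhard sum, per site; q = 2*pi*q_idx/N stays on the k grid."""
    q = 2*np.pi*q_idx/N
    eps_kq = -2*t_hop*np.cos(k + q)
    f_kq = (eps_kq < mu_F).astype(float)
    return np.sum((f - f_kq)/(w[:, None] - (eps_kq - eps) + 1j*eta), axis=1)/N

w = np.linspace(-6, 6, 601)
P = Pi0(60, w)
# chi''(q, w) is odd in w for this inversion-symmetric band at T = 0:
assert np.allclose(P.imag, -Pi0(60, -w).imag)
```

Im Π0 is nonzero only inside the particle-hole continuum, where ω matches an energy difference ε_{k+q} − ε_k of an occupied/empty pair.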
1.5 The Correlation Function S(r, t)

Let’s switch gears now and talk about another object that we will see is related to the response
function. We define the correlation function

(1.49) S(r, t) = ⟨φ0| ρ̂_H(r, t) ρ̂_H(0, 0) |φ0⟩

S(r, t) describes fluctuations of the electron density across the sample in space and time. Due
to the translational invariance of the sample, we arbitrarily set one of the arguments to (r′, t′) = (0, 0)
and observe the density correlation with another point (r, t). What we want to show next is that
there is a relationship between dissipation and fluctuations.
Fourier transforming S(r, t) in space yields

(1.50) S(q, t) = ⟨φ0| ρ̂_H(q, t) ρ̂_H(−q, 0) |φ0⟩

(1.51) = Σ_n |⟨n| ρ̂†_q |φ0⟩|² e^{−i(En − E0) t}

where the second line follows by inserting a complete set of states between the density operators
and acting the e^{±iĤt} operators on the eigenstates to the left and the right. Notice that this is
very similar to what we did earlier on our way to deriving the form of the response function.
Now we take the Fourier transform in time:

(1.52) S(q, ω) = ∫ dt e^{iωt} S(q, t)

(1.53) = 2π Σ_n |⟨n| ρ̂†_q |φ0⟩|² δ(ω − (En − E0))
This expression for S(q, ω) is identical to the first (absorptive) term in the expression for the
imaginary part of the response function χ″(q, ω). This can be restated as the Zero-Temperature
Fluctuation-Dissipation Theorem:

(1.54) χ″(q, ω) = −(1/2) ( S(q, ω) − S(−q, −ω) )

This shows that the energy absorbed in a probing experiment of the type described in this
lecture is directly related to density fluctuations across the system. Although so far we have
derived everything at T = 0, the Fluctuation-Dissipation Theorem can be extended to finite
temperatures as well.
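Relation (1.54) can be checked exactly on a toy model, since both sides are the same spectral sum: diagonalize a small random Hamiltonian, pick a generic “density” operator, and compare Lorentzian-broadened spectra. All choices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, eta = 6, 0.05
H = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
H = (H + H.conj().T)/2                      # random Hermitian Hamiltonian
E, U = np.linalg.eigh(H)
E = E - E[0]                                # excitation energies E_n - E_0
rho_q = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
A = U.conj().T @ rho_q @ U                  # A[m, n] = <m|rho_q|n>
a = np.abs(A[0, :])**2                      # |<n|rho_q^dag|phi0>|^2
b = np.abs(A[:, 0])**2                      # |<n|rho_q|phi0>|^2

w = np.linspace(-8, 8, 2001)
L = lambda x: (eta/np.pi)/(x**2 + eta**2)   # Lorentzian-broadened delta
S_fwd = 2*np.pi*sum(a[n]*L(w - E[n]) for n in range(d))   # S(q, w), eq. (1.53)
S_rev = 2*np.pi*sum(b[n]*L(w + E[n]) for n in range(d))   # S(-q, -w)
# chi(q, w) from the spectral representation (1.37)-style sum:
chi = sum(a[n]/(w - E[n] + 1j*eta) - b[n]/(w + E[n] + 1j*eta) for n in range(d))
assert np.allclose(chi.imag, -0.5*(S_fwd - S_rev))        # eq. (1.54)
```

The identity holds exactly even at finite η because Im 1/(x + iη) is the same Lorentzian that broadens the delta functions in S.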
Using the thermal average

(1.55) ⟨Â⟩_T = Tr( e^{−βĤ} Â ) / Tr( e^{−βĤ} )
the derivation can be redone to arrive at the finite temperature Fluctuation-Dissipation Theorem:

(1.56) χ″(q, ω) = (1/2) ( e^{−βω} − 1 ) S(q, ω)

(1.57) S(q, ω) = −2 ( nB(ω) + 1 ) χ″(q, ω)

where nB(ω) is the Bose statistical factor

(1.58) nB(ω) = 1 / ( e^{βω} − 1 )
By playing with these relations, we can further derive the following two results:

(1.59) S(−q, −ω) = e^{−βω} S(q, ω)

(1.60) χ″(−q, −ω) = −χ″(q, ω)

Equation (1.59) is simply a statement of the law of detailed balance.
1.6 Measuring S(q, ω)
It is possible to measure S(q, ω) directly through scattering experiments. Depending on the
particle density of interest, the scattering can be performed using electrons, X-rays, neutrons,
etc. This process is governed by the interaction Hamiltonian

(1.61) Ĥ_int = Σ_i v(r_i − R)

(1.62) = Σ_q v_q Σ_i e^{i q·(r_i − R)}

(1.63) = Σ_q v_q ρ̂†_q e^{−i q·R}

where R is the position of the scattering electron and {r_i} are the sample electrons’ coordinates.
For now, we imagine probing the electron density by sending in high energy (10–100
keV) electrons. These electrons interact with the electrons in the sample through the Coulomb
interaction. Thus

(1.64) v_q = 4πe² / q²
If we wanted to perform neutron scattering, then the {r_i} would be the sample’s nuclear
coordinates, and the interaction would be the contact potential

(1.65) v(r) = (2πb / Mn) δ(r)

To ensure single scattering, we need to work in the regime of weak coupling. Thus we can
apply the first order Born Approximation and Fermi’s Golden Rule to obtain the scattering rate

(1.66) W_{i→f} = (2π/ℏ) |⟨f| Ĥ_int |i⟩|² δ(Ei − Ef)
We take the initial and final states of our scattering probe to be plane waves |k_i⟩ and |k_f⟩,
respectively. Then the initial and final states of the combined system are

(1.67) |i⟩ = |φ0⟩ ⊗ |k_i⟩,  |f⟩ = |n⟩ ⊗ |k_f⟩

Let

(1.68) Q = k_i − k_f,  ω = E_{k_i} − E_{k_f}

so that ω > 0 when energy is lost to the system and Q is the momentum transfer to the system.
Then²

(1.69) W_{i→f} = 2π | Σ_q v_q ⟨n| ρ̂†_q |φ0⟩ ⟨k_f| e^{−i q·R} |k_i⟩ |² δ(ω − (En − E0))

The plane-wave matrix element ⟨k_f| e^{−i q·R} |k_i⟩ = (1/V) ∫ dR e^{i(k_i − k_f − q)·R} = δ_{q,Q} picks out
q = Q, so summing over the final states |n⟩ of the sample,

(1.70) W_{i→f} = |v_Q|² 2π Σ_n |⟨n| ρ̂†_Q |φ0⟩|² δ(ω − (En − E0))

(1.71) = |v_Q|² S(Q, ω)

² ℏ = 1 again.

Thus the scattering rate for scattering with a momentum transfer Q and energy loss ℏω is
related to the correlation function S(Q, ω) very simply through a scaling by the square of the
Q-th Fourier component of the interaction potential.